diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X7 Crack Serial Number How to Unlock the Full Potential of Corel Draw X7 for Free.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X7 Crack Serial Number How to Unlock the Full Potential of Corel Draw X7 for Free.md
deleted file mode 100644
index 361d821cffe75855deb47464e9a7322cf160d2f0..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X7 Crack Serial Number How to Unlock the Full Potential of Corel Draw X7 for Free.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-# Corel Draw X7 Crack Serial Number Free Download
-
-
If you are looking for a powerful and versatile graphic design software, you might want to try Corel Draw X7. Corel Draw X7 is a professional-grade vector graphics editor that can help you create stunning logos, illustrations, flyers, brochures, posters, and more. Corel Draw X7 also comes with a suite of tools and features that can enhance your creativity and productivity, such as advanced typography, color management, layout, photo editing, web graphics, and animation.
-
-
However, Corel Draw X7 is not a cheap software. The official price of Corel Draw X7 is $499 for the full version and $199 for the upgrade version. If you want to use Corel Draw X7 without paying for it, you might be tempted to download Corel Draw X7 crack serial number for free from the internet. Corel Draw X7 crack serial number is a code that can activate the software and unlock all its features without requiring a license key or registration.
-
-## How to Download Corel Draw X7 Crack Serial Number for Free?
-
-
To download Corel Draw X7 crack serial number for free, you need to follow these steps:
-
-
-
Go to the website that provides the link for the Corel Draw X7 crack serial number download. There are many websites that claim to offer the crack serial number, but some of them might be fake or malicious. To avoid getting scammed or infected by viruses, you should only use trusted and verified sources that have positive reviews and feedback from other users.
-
Click on the download button and wait for the file to be downloaded. The file size might vary depending on the website, but it should be around 500 MB. You might need to complete some surveys or offers before you can access the download link, but they are usually easy and quick to do.
-
Extract the file using a program like WinRAR or 7-Zip. You will get a folder that contains the crack serial number and the installation instructions. Make sure you have enough space on your hard drive to store the extracted files.
-
Follow the installation instructions carefully and enter the crack serial number when prompted. This will install Corel Draw X7 on your computer and activate it with the crack serial number.
-
Launch Corel Draw X7 by double-clicking on the icon on your desktop. You might need to run it as administrator or disable your antivirus software if you encounter any errors or problems.
-
Enjoy using Corel Draw X7 for free!
-
-
-## Is Corel Draw X7 Crack Serial Number Safe and Legal?
-
-
Downloading and using Corel Draw X7 crack serial number is not safe or legal. The crack serial number might contain viruses, malware, or spyware that can harm your computer or steal your personal information. The crack serial number might also cause errors, crashes, or performance issues that can ruin your graphic design projects. Moreover, downloading and using Corel Draw X7 crack serial number is illegal and violates the copyright laws and terms of service of Corel Corporation. You might face legal consequences or penalties if you are caught using the crack serial number.
-
-
Therefore, we do not recommend downloading or using Corel Draw X7 crack serial number for free. The best way to use Corel Draw X7 is to buy a legitimate copy of the software from an authorized retailer or online platform. This way, you can support the developers and enjoy the software without any risks or troubles.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Descargar-Tecaudio-Bat-Para-Vice-City.md b/spaces/1gistliPinn/ChatGPT4/Descargar-Tecaudio-Bat-Para-Vice-City.md
deleted file mode 100644
index 0016d516aa7832922d020e1e249da777059f0b15..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Descargar-Tecaudio-Bat-Para-Vice-City.md
+++ /dev/null
@@ -1,34 +0,0 @@
-Descargar Tecaudio Bat Para Vice City
-
-
-
-Click Here ::: [https://gohhs.com/2tvpaa](https://gohhs.com/2tvpaa)
-
-
-
-
-
-
-
-
-
-
-How to Fix Audio Problems in GTA Vice City with Tecaudio
-GTA Vice City is one of the most popular games in the Grand Theft Auto series, but it also has some common audio issues that can affect the gameplay experience. Some players may encounter a message saying "no audio hardware" or have no sound at all in the game. Fortunately, there is a simple solution that involves using a file called Tecaudio.
-Tecaudio is an MS-DOS batch file that can decode the audio files in GTA Vice City and make them compatible with your system. It is usually located in the GTA Vice City folder, but if you don't have it, you can download it from various websites. Here are the steps to use Tecaudio to fix audio problems in GTA Vice City:
-
-Download Tecaudio from a reliable source and save it in your GTA Vice City folder.
-Right-click on Tecaudio and select "Edit".
-Delete the line that says "startw wav2raw" and save the file.
-Run Tecaudio as administrator and wait for it to process the audio files.
-Launch GTA Vice City and enjoy the sound.
-
If you still have audio issues after using Tecaudio, you may need to update your sound drivers or check your sound settings. You can also try other solutions such as changing the compatibility mode of GTA Vice City or installing patches or mods that improve the audio quality. However, Tecaudio is usually the easiest and most effective way to fix audio problems in GTA Vice City.
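The edit in step 3 above (deleting the `startw wav2raw` line) can also be scripted. Below is a minimal Python sketch, assuming the batch file is named `Tecaudio.bat` as described; the helper name `fix_tecaudio` is ours, for illustration only:

```python
# Remove the "startw wav2raw" line from the Tecaudio batch file, as in step 3.
from pathlib import Path

def fix_tecaudio(path: str) -> None:
    bat = Path(path)
    kept = [line for line in bat.read_text().splitlines()
            if "startw wav2raw" not in line]
    bat.write_text("\n".join(kept) + "\n")
```

With the file in your GTA Vice City folder, `fix_tecaudio("Tecaudio.bat")` performs the same edit as opening the file and deleting the line by hand.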
-
-GTA Vice City is set in the fictional city of Vice City, which is based on Miami in the 1980s. The game follows the story of Tommy Vercetti, a former mobster who is sent to Vice City by his boss to establish a criminal empire. The game features a large open world that can be explored by foot or by various vehicles, such as cars, motorcycles, boats, and helicopters. The game also has many side missions and activities that can be completed for money, weapons, or other rewards.
-The game has received critical acclaim for its graphics, gameplay, music, and voice acting. It has also been praised for its depiction of the 1980s culture and atmosphere, as well as its satire of American society and politics. The game has sold over 17.5 million copies worldwide and is considered one of the best games of all time. It has also spawned several sequels and spin-offs, such as GTA San Andreas and GTA Vice City Stories.
-However, the game has also faced some controversy and criticism for its violence, sexual content, drug use, and portrayal of certain groups and issues. The game has been banned or censored in some countries and has been the subject of several lawsuits and investigations. Some critics have also argued that the game glorifies crime and violence and has a negative influence on young players. Despite these controversies, the game remains popular and influential among fans and developers alike.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download ATH Swift Shader DX9 SM3 Build 3383x86 Rar TOP.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download ATH Swift Shader DX9 SM3 Build 3383x86 Rar TOP.md
deleted file mode 100644
index 365621188629824c5f7099e95f1be39ab1a8dd63..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download ATH Swift Shader DX9 SM3 Build 3383x86 Rar TOP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Download ATH Swift Shader DX9 SM3 Build 3383x86 Rar
If you are a fan of action-adventure games, you must have heard of GTA 5, one of the most popular and successful games of all time. GTA 5 is the fifth installment in the Grand Theft Auto series, developed by Rockstar Games and released in 2013 for PlayStation 3 and Xbox 360, with later releases for PlayStation 4, Xbox One, and PC. But did you know that you can also play GTA 5 on your Android device?
GTA 5 is a game that lets you experience the life of a criminal in the fictional city of Los Santos, based on Los Angeles. You can explore the city, engage in various activities, such as robbing banks, stealing cars, shooting enemies, and more. You can also follow the story mode, which involves three main characters: Michael, a retired bank robber; Franklin, a street hustler; and Trevor, a psychopathic drug dealer. You can switch between these characters at any time and see how their lives intertwine.
-
Why download GTA 5 for Android?
-
Downloading GTA 5 for Android has many advantages. First of all, you can enjoy the game on the go, without needing a console or a PC. You can play it anytime, anywhere, as long as you have an internet connection. Second, you can save a lot of storage space on your device, as the game is highly compressed to only 670MB. Third, you can experience the same features and quality as the original version, with no compromise on graphics or gameplay.
-
How to download and install GTA 5 for Android
-
Requirements
-
Before you download GTA 5 for Android, you need to make sure that your device meets the following requirements:
-
-
Android version: 4.0 or higher
-
RAM: 2GB or more
-
Storage: At least 4GB of free space
-
Internet: A stable connection for downloading and playing online
-
-
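As a rough sketch, the storage requirement in the list above can be checked before downloading; the 4 GB figure comes from that list, and `enough_space` is a hypothetical helper for illustration, not part of any app:

```python
# Report whether `path` has at least `need_gib` GiB free, matching the
# "at least 4GB of free space" requirement listed above.
import shutil

def enough_space(path: str, need_gib: float = 4.0) -> bool:
    free_gib = shutil.disk_usage(path).free / 2**30
    return free_gib >= need_gib
```

For example, `enough_space("/sdcard")` would tell you whether the download is worth starting at all.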
Steps
-
Here are the steps to download and install GTA 5 for Android:
-
-
-
Click on this link to download the GTA 5 APK file.
-
After downloading the APK file, go to your device settings and enable the installation of unknown sources.
-
Install the APK file on your device.
-
Download the OBB data file from this link and extract it using any file manager app.
-
Copy the extracted folder named "com.rockstargames.gtav" to your device's internal storage in this path: Android/obb/.
-
Launch the game from your app drawer and enjoy!
-
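If the phone's storage is mounted on a PC rather than browsed with a file manager app, the copy in step 5 is an ordinary folder copy. A sketch, where `mount_point` is a placeholder for wherever the device appears on your PC and the folder name is the one given in the steps:

```python
# Copy the extracted OBB folder into Android/obb on the device, as in step 5.
import shutil
from pathlib import Path

def install_obb(extracted_dir: str, mount_point: str) -> Path:
    dest = Path(mount_point) / "Android" / "obb" / Path(extracted_dir).name
    shutil.copytree(extracted_dir, dest)  # also creates Android/obb if missing
    return dest
```

For example, `install_obb("com.rockstargames.gtav", "/mnt/phone")` would place the folder at `/mnt/phone/Android/obb/com.rockstargames.gtav`.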
-
Features of GTA 5 for Android
-
Stunning graphics and gameplay
-
GTA 5 for Android boasts amazing graphics and gameplay that will make you feel like you are playing on a console or a PC. The game uses advanced technologies, such as dynamic lighting, shadows, reflections, textures, and animations, to create a realistic and immersive environment. The game also runs smoothly and seamlessly on most devices, thanks to its optimization and compression.
-
Open-world freedom and exploration
-
GTA 5 for Android offers you a huge open-world map that you can explore at your own pace. You can visit different locations, such as the city, the countryside, the desert, the mountains, and the ocean. You can also interact with various characters, objects, and vehicles that populate the world. You can drive, fly, swim, bike, or walk around and discover new places and secrets.
-
Multiple characters and missions
-
GTA 5 for Android lets you play as three different characters: Michael, Franklin, and Trevor. Each character has their own personality, skills, and story. You can switch between them at any time and see how their lives affect each other. You can also complete various missions that involve action, stealth, racing, shooting, and more. You can choose to follow the main storyline or do some side quests and activities for fun and rewards.
-
Online multiplayer mode
-
GTA 5 for Android also features an online multiplayer mode called GTA Online. In this mode, you can create your own character and join other players from around the world. You can cooperate or compete with them in various modes, such as deathmatches, races, heists, and more. You can also customize your character, weapons, vehicles, and properties. GTA Online is constantly updated with new content and events to keep you entertained.
-
Tips and tricks for GTA 5 for Android
-
Use the map and radar
-
GTA 5 for Android has a large map that can be overwhelming at first. To help you navigate and find your way around, you should use the map and radar on your screen. The map shows you the locations of missions, shops, safe houses, and more. The radar shows you the direction of your objective, enemies, allies, and police. You can also zoom in and out of the map and set waypoints to guide you.
-
Switch between characters wisely
-
GTA 5 for Android allows you to switch between three characters at any time. However, you should not do it randomly or too often. You should switch between characters when it is necessary or beneficial for your situation. For example, you can switch to Trevor when you need to use his special ability of rage mode, which makes him deal more damage and take less damage. You can also switch to another character when you are in trouble or want to avoid the police.
-
Customize your weapons and vehicles
-
GTA 5 for Android gives you a lot of options to customize your weapons and vehicles. You can buy new weapons or upgrade your existing ones with attachments, such as scopes, silencers, extended magazines, and more. You can also buy new vehicles or modify your current ones with enhancements, such as armor, turbo, spoilers, paint jobs, and more. Customizing your weapons and vehicles can improve their performance and appearance.
-
Save your game often
-
GTA 5 for Android is a game that can be unpredictable and challenging at times. You may encounter enemies, accidents, glitches, or bugs that can ruin your progress or experience. To avoid losing your data or having to repeat a mission or activity, you should save your game often. You can save your game manually by using your phone or automatically by entering a safe house or completing a mission.
-
Conclusion
-
GTA 5 for Android is a game that you should not miss if you love action-adventure games. It is a game that offers you a lot of features, such as stunning graphics and gameplay, open-world freedom and exploration, multiple characters and missions, online multiplayer mode, and more. You can download GTA 5 for Android easily by following the steps above. You can also enjoy GTA 5 for Android better by using the tips and tricks above. GTA 5 for Android is a game that will keep you entertained for hours and hours.
-
FAQs
-
-
Is GTA 5 for Android free?
-
No, GTA 5 for Android is not free. You need to pay a small fee to download the APK file from the link above.
-
Is GTA 5 for Android safe?
-
Yes, GTA 5 for Android is safe. The APK file and the OBB data file are scanned and verified by antivirus software. However, you should always download them from trusted sources.
-
Is GTA 5 for Android compatible with my device?
-
GTA 5 for Android is compatible with most devices that run on Android 4.0 or higher. However, you should check the requirements above before downloading the game.
-
How much storage space does GTA 5 for Android need?
-
GTA 5 for Android needs about 4GB of storage space on your device. You need to have at least 2GB of free space to download the APK file and the OBB data file, and another 2GB of free space to install and run the game.
-
Can I play GTA 5 for Android offline?
-
No, you cannot play GTA 5 for Android offline. You need to have an internet connection to download and install the game, and to play the online multiplayer mode. However, you can play the story mode and some activities offline after installing the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download NFS Heat Studio App and Get Ready for NFS Heat Launch.md b/spaces/1phancelerku/anime-remove-background/Download NFS Heat Studio App and Get Ready for NFS Heat Launch.md
deleted file mode 100644
index cffd27c7d89be37d54199c95688c9931de2dbe1f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download NFS Heat Studio App and Get Ready for NFS Heat Launch.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-
NFS Heat Studio: How to Download and Customize Your Dream Cars
-
If you are a fan of racing games, you probably have heard of Need for Speed Heat, the latest installment in the popular franchise by Electronic Arts. But did you know that you can also enjoy the thrill of collecting and customizing the world's most amazing cars on your mobile device? That's right, with NFS Heat Studio, you can create your own unique designs and sync them with the game. In this article, we will tell you everything you need to know about NFS Heat Studio, including how to download it, what features it offers, and how to customize your cars like a pro.
A mobile app that lets you collect and customize cars for Need for Speed Heat
-
NFS Heat Studio is a mobile app that connects you directly to Need for Speed Heat, the upcoming racing game set in the neon-lit streets of Palm City. With NFS Heat Studio, you can expand your collection with weekly drops and complete challenges to unlock the most iconic cars on the streets today. You can also use the app to perfect your ride with a huge selection of body parts, wheels, exhausts, and more. You can even use the wrap editor to create truly custom designs. Once you have unlocked the car in Need for Speed Heat, simply press the studio button in the garage to import your creations. They'll then be ready and waiting for you to play with in NFS Heat when the game releases on November 5, 2019.
-
Features of NFS Heat Studio
-
Weekly car drops and challenges
-
Each week, a new container filled with the hottest cars will be yours to customize. Some are so special they'll need to be unlocked with progression points, earned by customizing your current rides, creating new wraps, or just spending time with your collection. You can also complete challenges to earn rewards and unlock more cars.
-
Wrap editor and color selector
-
This is your space to make each car your own. Go crazy with wild body-kits, wheels, exhausts, and more. Use the color selector to put your style on everything from the perfect finish to window tints. Even use the wrap editor to create custom designs. You can choose from a variety of decals, logos, patterns, shapes, and colors. You can also adjust the size, rotation, position, and opacity of each element.
-
Capture lab and AR mode
-
Once you are happy with your design, you can show it off to the world using the capture lab. You can set up still shots and videos in different locations and angles. You can also use the AR mode to instantly add your favorites anywhere - from your driveway to the highway. Then snap to share them on Facebook, Instagram, Twitter, or add them to your gallery.
-
Sync with Need for Speed Heat
-
The best part of NFS Heat Studio is that it connects you directly to Need for Speed Heat. All you need is an EA account (or create one) and you'll be able to sync your custom designs with the game. They'll then be ready and waiting for you to play with in NFS Heat when the game releases on November 5, 2019.
How to download NFS Heat Studio?
-
Available for free on iOS and Android devices
-
NFS Heat Studio is a free app that you can download on your iOS or Android device. You don't need to own Need for Speed Heat to use the app, but you will need an EA account to sync your designs with the game. You can create an EA account for free if you don't have one already.
-
QR code in-game or links to app stores
-
There are two ways to download NFS Heat Studio. One is to scan the QR code that appears in the garage of Need for Speed Heat. This will take you directly to the app store of your device. The other way is to use the links below to access the app store of your choice.
NFS Heat Studio supports the following languages: English, French, Italian, German, Spanish, Russian, Portuguese, Polish, Japanese, Korean, Traditional Chinese, and Simplified Chinese. On iOS, the app requires iOS 11.0 or later and is compatible with iPhone, iPad, and iPod touch; on Android, it requires Android 8.0 or later and is compatible with most devices.
-
How to customize your cars in NFS Heat Studio?
-
Access the Cars tab and the Workshop
-
To start customizing your cars, you need to access the Cars tab in the app. Here you will see your current collection of cars and the containers that contain new cars. You can tap on any car to enter the Workshop, where you can modify its appearance and performance.
-
Choose from a variety of body parts, wheels, exhausts, and more
-
In the Workshop, you can choose from a variety of body parts, wheels, exhausts, and more to make your car stand out. You can swipe left or right to switch between different categories of customization options. You can also tap on the icons at the bottom of the screen to access the color selector, the wrap editor, or the capture lab.
-
Use progression points to unlock special cars
-
Some cars are so special that they require progression points to unlock. You can earn progression points by customizing your current cars, creating new wraps, or just spending time with your collection. You can also complete challenges to earn more points and rewards. Once you have enough points, you can tap on the lock icon on the car to unlock it.
-
Share your designs on social media or in your Gallery
-
Once you are happy with your design, you can share it with the world using the capture lab. You can set up still shots and videos in different locations and angles. You can also use the AR mode to instantly add your favorites anywhere - from your driveway to the highway. Then snap to share them on Facebook, Instagram, Twitter, or add them to your gallery.
-
Conclusion
-
NFS Heat Studio is a fun and creative way to enhance your Need for Speed Heat experience
-
NFS Heat Studio is a mobile app that lets you collect and customize cars for Need for Speed Heat. You can enjoy weekly car drops and challenges, use the wrap editor and color selector to create custom designs, use the capture lab and AR mode to show off your creations, and sync them with Need for Speed Heat when the game releases on November 5, 2019.
-
Download the app today and start building your dream garage
-
If you are ready to unleash your creativity and passion for cars, download NFS Heat Studio today and start building your dream garage. You can download the app for free on iOS or Android devices using the links below or by scanning the QR code in-game. Have fun and see you on the streets of Palm City!
FAQs
Do I need Need for Speed Heat to use NFS Heat Studio?
-
No, you don't need Need for Speed Heat to use NFS Heat Studio. However, you will need an EA account to sync your designs with the game when it releases on November 5, 2019.
-
How do I sync my designs with Need for Speed Heat?
-
To sync your designs with Need for Speed Heat, you need to have an EA account and be logged in on both NFS Heat Studio and the game. Once you have unlocked the car in Need for Speed Heat, simply press the studio button in the garage to import your creations. They'll then be ready and waiting for you to play with in NFS Heat.
-
How do I use the wrap editor and color selector?
-
To use the wrap editor and color selector, you need to enter the Workshop and tap on the icons at the bottom of the screen. You can choose from a variety of decals, logos, patterns, shapes, and colors. You can also adjust the size, rotation, position, and opacity of each element. You can use the color selector to put your style on everything from the perfect finish to window tints.
-
How do I use the capture lab and AR mode?
-
To use the capture lab and AR mode, you need to enter the Workshop and tap on the icons at the bottom of the screen. You can set up still shots and videos in different locations and angles. You can also use the AR mode to instantly add your favorites anywhere - from your driveway to the highway. Then snap to share them on Facebook, Instagram, Twitter, or add them to your gallery.
-
How do I complete challenges and earn progression points?
-
To complete challenges and earn progression points, you need to access the Cars tab in the app. Here you will see your current collection of cars and the containers that contain new cars. You can tap on any car to see its challenges and rewards. You can also earn progression points by customizing your current cars, creating new wraps, or just spending time with your collection. You can use progression points to unlock special cars.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy These Offline RPG Games APK on Your Android Device [No Internet Required].md b/spaces/1phancelerku/anime-remove-background/Enjoy These Offline RPG Games APK on Your Android Device [No Internet Required].md
deleted file mode 100644
index 03ae9eea5f7a48df4e8cd07a823875116d5fb66f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy These Offline RPG Games APK on Your Android Device [No Internet Required].md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
Game Offline RPG APK: What are they and why you should play them
-
If you are a fan of role-playing games (RPGs), you might have noticed that most of them require an internet connection to play. Whether it is to access online features, download additional content, or watch ads, online RPGs can be frustrating for gamers who have limited or no access to WiFi or cellular data.
-
Fortunately, there is a solution for this problem: game offline RPG APKs. These are games that you can download and install on your Android device as APK files, which are application packages that contain all the necessary data and code to run an app. Game offline RPG APKs do not require an internet connection to play, so you can enjoy them anytime and anywhere.
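As background for the paragraph above: an APK is a ZIP archive with a defined internal layout, so its contents can be inspected with Python's standard library alone. The helper and filename below are illustrative:

```python
# List the entries inside an APK; under the hood an APK is a ZIP archive.
import zipfile

def list_apk(path: str) -> list:
    with zipfile.ZipFile(path) as apk:
        return apk.namelist()
```

Calling `list_apk("game.apk")` on a real package would show entries such as `AndroidManifest.xml` and `classes.dex`.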
But what are the benefits of playing game offline RPG APKs? And what are some of the best game offline RPG APKs for Android? In this article, we will answer these questions and show you how to download and install game offline RPG APKs on your device.
-
Game offline RPG APKs offer a variety of benefits for gamers who want to enjoy immersive role-playing games without internet connection or data usage.
-
The benefits of game offline RPG APKs
-
No internet connection required
-
One of the main advantages of game offline RPG APKs is that they do not require an internet connection to play. This means that you can play them whenever and wherever you want, without worrying about WiFi or cellular data availability or costs.
-
This is especially useful for gamers who live in areas with poor internet coverage, travel frequently, or have limited data plans. You can also save battery life by playing game offline RPG APKs, as they do not consume as much power as online games.
-
No ads or in-app purchases
-
Another benefit of game offline RPG APKs is that they are free from annoying ads or costly in-app purchases that can ruin the gaming experience. Unlike online games that rely on ads or microtransactions to generate revenue, game offline RPG APKs are usually paid or free apps that do not have any hidden fees or charges. You can enjoy the full game without being interrupted by ads or tempted by in-app purchases that can affect the game balance or progression.
-
No updates or downloads
-
A third benefit of game offline RPG APKs is that they do not require constant updates or additional downloads that can take up storage space or consume data. Once you download and install a game offline RPG APK, you can play it right away and do not have to worry about updating it to the latest version or downloading extra content that might not be compatible with your device.
-
-
This also means that you can play game offline RPG APKs even if the developer stops supporting them or removes them from the app store. You do not have to depend on the availability or quality of the online service or server to enjoy your game.
-
The best game offline RPG APKs for Android
-
Now that you know the benefits of game offline RPG APKs, you might be wondering which ones are the best for Android. There are many game offline RPG APKs available for download, but we have selected five of the most popular and highly rated ones for you to try out. Here they are:
-
Star Wars: Knights of the Old Republic
-
If you are a fan of Star Wars, you will love this game offline RPG APK. Star Wars: Knights of the Old Republic is a classic RPG that lets you create your own character and choose your path in the galaxy far, far away. Set thousands of years before the films, it lets you follow the light side or fall to the dark side, fight with lightsabers or blasters, and meet memorable characters of its own era, such as Bastila Shan, HK-47, and Darth Malak.
-
The game has an epic story, rich graphics, and immersive sound effects. It also has a lot of customization options, such as different classes, skills, feats, and items. You can play this game offline RPG APK for hours and hours without getting bored.
-
Star Wars: Knights of the Old Republic has a rating of 4.5 out of 5 stars on Google Play and costs $9.99 to download. It requires Android 4.1 or higher and 2.4 GB of storage space.
-
Baldur's Gate
-
If you are a fan of Dungeons & Dragons, you will love this game offline RPG APK. Baldur's Gate is a legendary RPG that is based on the famous tabletop game and set in the Forgotten Realms fantasy world. You can create your own hero and explore a vast and detailed world full of adventure, magic, and danger.
-
The game has a complex and engaging story, with multiple endings and choices that matter. It also has a lot of gameplay features, such as real-time-with-pause combat, party management, dialogue options, and quests. You can play this game offline RPG APK solo or with friends via local multiplayer.
-
Baldur's Gate has a rating of 4.4 out of 5 stars on Google Play and costs $9.99 to download. It requires Android 3.0 or higher and 2.6 GB of storage space.
-
Dungeon Quest
-
If you are a fan of hack and slash games, you will love this game offline RPG APK. Dungeon Quest is a fast-paced RPG that lets you loot and fight your way through endless dungeons full of monsters, traps, and bosses. You can choose from four different classes: Warrior, Rogue, Mage, or Shaman, and customize your character with various skills, weapons, and armor.
-
The game has simple and addictive gameplay, with randomly generated levels and loot. It also has a lot of challenge modes, such as Arena, Survival, and Daily Dungeon. You can play this game offline RPG APK alone or with friends via local co-op.
-
Dungeon Quest has a rating of 4.5 out of 5 stars on Google Play and is free to download. It requires Android 4.0 or higher and 46 MB of storage space.
-
Eternium
-
If you are a fan of Diablo, you will love this game offline RPG APK. Eternium is a stylish RPG that lets you battle against evil forces in a dark fantasy world. You can choose from three different classes: Mage, Warrior, or Bounty Hunter, and use gestures to cast spells or attack enemies.
-
The game has a rich and immersive story, with over 90 levels and 10 worlds to explore. It also has a lot of gameplay features, such as crafting, fishing, mining, companions, and pets. You can play this game offline RPG APK without any restrictions or penalties.
-
Eternium has a rating of 4.8 out of 5 stars on Google Play and is free to download. It requires Android 4.0 or higher and 116 MB of storage space.
-
Soul Knight
-
If you are a fan of roguelike games, you will love this game offline RPG APK. Soul Knight is a fun and quirky RPG that lets you shoot your way through randomly generated dungeons full of aliens, robots, and magic. You can choose from a large roster of playable characters, each with their own unique abilities, and collect hundreds of different weapons.
-
The game has simple and smooth gameplay, with easy controls and colorful graphics. It also has a lot of content to unlock, such as new characters, weapons, pets, and items. You can play this game offline RPG APK solo or with friends via local multiplayer.
-
Soul Knight has a rating of 4.6 out of 5 stars on Google Play and is free to download. It requires Android 4.1 or higher and 100 MB of storage space.
-
How to download and install game offline RPG APKs
-
Find a reliable source
-
Before you can download and install game offline RPG APKs on your device, you need to find a reliable source that offers them for download. There are many websites and app stores that claim to provide game offline RPG APKs, but not all of them are trustworthy or safe.
-
Some sources might offer fake or modified game offline RPG APKs that contain malware or viruses that can harm your device or steal your personal information. Some sources might also offer outdated or incompatible game offline RPG APKs that do not work properly on your device.
-
To avoid these risks, you should only download game offline RPG APKs from reputable sources that have positive reviews and ratings from other users. You can also check the authenticity and quality of the game offline RPG APKs by looking at their file size, permissions, developer name, and version number.
-
Download the APK file
-
Once you have found a reliable source for the game offline RPG APK that you want to play, you need to download the APK file to your device. The APK file is the application package that contains all the data and code needed to run the app on your device.
-
To download the APK file, you need to click on the download link or button provided by the source and wait for the file to be downloaded to your device. Depending on the size of the file and the speed of your internet connection, this might take a few minutes or longer.
-
After the file is downloaded, you need to check its size and permissions before installing it on your device. The size of the file should match the size indicated by the source. The permissions of the file should match the functions of the app. If the file size or permissions are different from what you expected, you should delete the file and find another source.
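The size-and-checksum check described above can be sketched in Python. The file name and expected values here are placeholders: in practice, use your real downloaded APK and the size and SHA-256 hash published by the download source (when the source provides them).

```python
# Hypothetical sketch: verify a downloaded file's size and SHA-256 checksum.
import hashlib
from pathlib import Path

def verify_download(path: str, expected_size: int, expected_sha256: str) -> bool:
    """Return True when the file matches both the expected size and hash."""
    data = Path(path).read_bytes()
    actual_sha = hashlib.sha256(data).hexdigest()
    return len(data) == expected_size and actual_sha == expected_sha256

# Stand-in file so the example is self-contained; replace with your APK.
Path("sample.apk").write_bytes(b"not a real apk")
ok = verify_download(
    "sample.apk",
    expected_size=14,
    expected_sha256=hashlib.sha256(b"not a real apk").hexdigest(),
)
print(ok)  # True
```

If either value differs from what the source advertises, treat the file as suspect and delete it, as recommended above.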
-
Enable unknown sources
-
By default, Android devices do not allow users to install apps from unknown sources that are not verified by Google Play. This is a security measure that prevents users from installing malicious or harmful apps on their devices.
-
However, if you want to install game offline RPG APKs on your device, you need to enable the option to install apps from unknown sources in your device settings. This will allow you to install game offline RPG APKs that are not available on Google Play or that are not compatible with your device.
-
To enable unknown sources, you need to follow these steps:
-
-
Go to your device settings and tap on Security or Privacy.
-
Find the option that says Unknown sources or Install unknown apps and toggle it on.
-
A warning message will appear, telling you the risks of installing apps from unknown sources. Tap on OK or Allow to confirm.
-
-
Once you have enabled unknown sources, you can proceed to install the game offline RPG APK on your device.
-
Install the APK file
-
After you have downloaded and checked the game offline RPG APK file and enabled unknown sources, you can install the game offline RPG APK on your device. To install the game offline RPG APK, you need to follow these steps:
-
-
Locate the game offline RPG APK file on your device using a file manager app or your device's Downloads folder.
-
Tap on the game offline RPG APK file and a prompt will appear, asking you if you want to install the app.
-
Tap on Install and wait for the installation process to complete.
-
Once the installation is done, you can tap on Open to launch the game or Done to exit the prompt.
-
-
Congratulations! You have successfully installed a game offline RPG APK on your device. You can now enjoy playing your game offline RPG without any internet connection or data usage.
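As an alternative to tapping through the on-device installer, an APK can also be sideloaded from a computer with adb. This sketch only builds the command and runs it when adb happens to be available; the file name is hypothetical, and a connected device with USB debugging enabled is an assumption.

```python
# Sketch: construct (and optionally run) the adb sideload command.
import shutil
import subprocess

def adb_install_command(apk_path: str, replace: bool = True) -> list[str]:
    """Return the adb invocation that installs the given APK."""
    cmd = ["adb", "install"]
    if replace:
        cmd.append("-r")  # replace an existing install, keeping app data
    cmd.append(apk_path)
    return cmd

cmd = adb_install_command("game-offline-rpg.apk")  # hypothetical file name
print(" ".join(cmd))  # adb install -r game-offline-rpg.apk
if shutil.which("adb"):
    subprocess.run(cmd, check=False)
```

Sideloading with adb skips the on-device prompts, but the unknown-sources considerations above still apply to where the APK came from.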
-
Conclusion
-
Game offline RPG APKs are games that you can download and install on your Android device as APK files, which are application packages that contain all the data and code to run an app. Game offline RPG APKs do not require an internet connection to play, so you can enjoy them anytime and anywhere.
-
Game offline RPG APKs offer a variety of benefits for gamers who want to enjoy immersive role-playing games without internet connection or data usage. Some of these benefits are:
-
-
No internet connection required. You can play game offline RPG APKs whenever and wherever you want, without worrying about WiFi or cellular data availability or costs.
-
No ads or in-app purchases. You can enjoy game offline RPG APKs without being interrupted by ads or tempted by in-app purchases that can ruin the gaming experience.
-
No updates or downloads. You can play game offline RPG APKs without having to update them or download extra content that can take up storage space or consume data.
-
-
Some of the best game offline RPG APKs for Android are:
-
-
Star Wars: Knights of the Old Republic. A classic RPG that lets you create your own character and choose your path in the Star Wars universe.
-
Baldur's Gate. A legendary RPG that is based on the Dungeons & Dragons tabletop game and set in the Forgotten Realms fantasy world.
-
Dungeon Quest. A fast-paced RPG that lets you loot and fight your way through endless dungeons full of monsters, traps, and bosses.
-
Eternium. A stylish RPG that lets you battle against evil forces in a dark fantasy world.
-
Soul Knight. A fun and quirky RPG that lets you shoot your way through randomly generated dungeons full of aliens, robots, and magic.
-
-
To download and install game offline RPG APKs on your device, you need to find a reliable source that offers them for download, download the APK file to your device and check its size and permissions, enable unknown sources in your device settings, and install the APK file on your device.
-
If you are looking for some great game offline RPG APKs to play on your Android device, we hope this article has helped you find some of them. Why not give them a try and see for yourself how much fun they are? You might be surprised by how much you enjoy playing game offline RPG APKs without any internet connection or data usage.
-
Frequently Asked Questions
-
What is an APK file?
-
An APK file is an application package that contains all the data and code needed to run an app on an Android device. It is similar to an EXE file for Windows or a DMG file for Mac. You can download and install APK files from various sources, such as websites or app stores, as long as they are compatible with your device and trustworthy.
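Because an APK is a ZIP archive underneath, you can peek inside one with Python's standard zipfile module. The tiny archive built here is a stand-in; a real APK would list entries such as AndroidManifest.xml, classes.dex, and a resources table.

```python
# Sketch: an APK is a ZIP archive, so zipfile can list its contents.
import zipfile

# Build a minimal stand-in "APK" so the example is self-contained.
with zipfile.ZipFile("demo.apk", "w") as zf:
    zf.writestr("AndroidManifest.xml", "<manifest/>")
    zf.writestr("classes.dex", "dex bytecode placeholder")

with zipfile.ZipFile("demo.apk") as zf:
    print(zf.namelist())  # ['AndroidManifest.xml', 'classes.dex']
```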
-
What is an RPG?
-
An RPG is a role-playing game, which is a type of video game that lets you create your own character and immerse yourself in a fictional world where you can interact with other characters, complete quests, and fight enemies. RPGs are usually divided into two categories: online RPGs, which require an internet connection to play and offer online features, such as multiplayer, chat, or leaderboards; and offline RPGs, which do not require an internet connection to play and offer offline features, such as single-player, customization, or replayability.
-
What are some of the disadvantages of game offline RPG APKs?
-
Game offline RPG APKs are not perfect and have some disadvantages that you should be aware of before playing them. Some of these disadvantages are:
-
-
Limited content. Game offline RPG APKs usually have less content than online RPGs, as they do not have access to online features or updates that can add new content or fix bugs.
-
Potential security risks. Game offline RPG APKs can pose security risks for your device or personal information if they are downloaded from untrustworthy sources or contain malware or viruses.
-
Possible compatibility issues. Game offline RPG APKs might not be compatible with your device or operating system if they are outdated or modified.
-
-
To avoid these disadvantages, you should only download game offline RPG APKs from reliable sources, check their file size and permissions before installing them, and update your device and operating system regularly.
-
How can I delete game offline RPG APKs from my device?
-
If you want to delete game offline RPG APKs from your device, you can follow these steps:
-
-
Go to your device settings and tap on Apps or Applications.
-
Find the game offline RPG APK that you want to delete and tap on it.
-
Tap on Uninstall and confirm your choice.
-
The game offline RPG APK will be deleted from your device and its storage space will be freed up.
-
-
Can I play game offline RPG APKs on other devices?
-
Yes, you can play game offline RPG APKs on other devices as long as they are Android devices that support the game offline RPG APK that you want to play. You can transfer the game offline RPG APK file from one device to another using a USB cable, Bluetooth, Wi-Fi, or cloud storage. You can also download the game offline RPG APK file again from the same source that you used before. However, you might lose your game progress or data if you switch devices or delete the game offline RPG APK from your device.
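However the file is moved (USB cable, Bluetooth, Wi-Fi, or cloud storage), comparing checksums before and after the transfer confirms the APK arrived intact. File names here are placeholders, and a plain local copy stands in for the transfer.

```python
# Sketch: confirm a transferred APK is byte-identical to the original.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

Path("source.apk").write_bytes(b"apk payload")  # stand-in source file
shutil.copyfile("source.apk", "copied.apk")     # stands in for the transfer
print(sha256_of("source.apk") == sha256_of("copied.apk"))  # True
```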
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_ancestral_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_ancestral_discrete.py
deleted file mode 100644
index 99e5d13abc40762a11171c4e7e1ee6d18f8ea7ac..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_ancestral_discrete.py
+++ /dev/null
@@ -1,233 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, logging
-from .scheduling_utils import SchedulerMixin
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-@dataclass
-# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerAncestralDiscrete
-class EulerAncestralDiscreteSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
- `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: paddle.Tensor
- pred_original_sample: Optional[paddle.Tensor] = None
-
-
-class EulerAncestralDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
- Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
-        process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = self.sigmas.max()
-
- # setable values
- self.num_inference_steps = None
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
- self.is_scale_input_called = False
-
- def scale_model_input(self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor]) -> paddle.Tensor:
- """
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.
-
- Args:
- sample (`paddle.Tensor`): input sample
- timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
- sample = sample / ((sigma**2 + 1) ** 0.5)
- self.is_scale_input_called = True
- return sample
-
- def set_timesteps(self, num_inference_steps: int):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- self.num_inference_steps = num_inference_steps
-
- timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy()
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- self.sigmas = paddle.to_tensor(sigmas)
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: Union[float, paddle.Tensor],
- sample: paddle.Tensor,
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- return_dict: bool = True,
- ) -> Union[EulerAncestralDiscreteSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`paddle.Tensor`): direct output from learned diffusion model.
- timestep (`float`): current timestep in the diffusion chain.
- sample (`paddle.Tensor`):
- current instance of sample being created by diffusion process.
- generator (`paddle.Generator`, optional): Random number generator.
- return_dict (`bool`): option for returning tuple rather than EulerAncestralDiscreteSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput`] if `return_dict` is True, otherwise
- a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- if not self.is_scale_input_called:
- logger.warning(
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
- "See `StableDiffusionPipeline` for a usage example."
- )
- step_index = (self.timesteps == timestep).nonzero().item()
- sigma = self.sigmas[step_index]
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- if self.config.prediction_type == "epsilon":
- pred_original_sample = sample - sigma * model_output
- elif self.config.prediction_type == "v_prediction":
- # * c_out + input * c_skip
- pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
- sigma_from = self.sigmas[step_index]
- sigma_to = self.sigmas[step_index + 1]
- sigma_up = (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5
- sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5
-
- # 2. Convert to an ODE derivative
- derivative = (sample - pred_original_sample) / sigma
-
- dt = sigma_down - sigma
-
- prev_sample = sample + derivative * dt
-
- noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator)
-
- prev_sample = prev_sample + noise * sigma_up
-
- if not return_dict:
- return (prev_sample,)
-
- return EulerAncestralDiscreteSchedulerOutput(
- prev_sample=prev_sample, pred_original_sample=pred_original_sample
- )
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- # Make sure sigmas and timesteps have the same dtype as original_samples
- self.sigmas = self.sigmas.cast(original_samples.dtype)
-
- schedule_timesteps = self.timesteps
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
-
- sigma = self.sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/7hao/bingo/src/pages/api/image.ts b/spaces/7hao/bingo/src/pages/api/image.ts
deleted file mode 100644
index 4b894bea86050c0f3888cc56f60c0cb7f8b57cfc..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/pages/api/image.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-import { createImage } from '@/lib/bots/bing/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const { prompt, id } = req.query
- if (!prompt) {
- return res.json({
- result: {
- value: 'Image',
- message: 'No Prompt'
- }
- })
- }
- try {
- const headers = createHeaders(req.cookies, {
- IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE
- })
-
- debug('headers', headers)
- const response = await createImage(String(prompt), String(id), {
- ...headers,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- })
- res.writeHead(200, {
- 'Content-Type': 'text/plain; charset=UTF-8',
- })
- return res.end(response)
- } catch (e) {
- return res.json({
- result: {
- value: 'Error',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/render_final.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/render_final.py
deleted file mode 100644
index 41b3bfdb2e6bff74aeaceb8f1a7ebac9dc1acaba..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/render_final.py
+++ /dev/null
@@ -1,194 +0,0 @@
-from models.rotation2xyz import Rotation2xyz
-import numpy as np
-from trimesh import Trimesh
-import os
-os.environ['PYOPENGL_PLATFORM'] = "osmesa"
-
-import torch
-from visualize.simplify_loc2rot import joints2smpl
-import pyrender
-import matplotlib.pyplot as plt
-
-import io
-import imageio
-from shapely import geometry
-import trimesh
-from pyrender.constants import RenderFlags
-import math
-# import ffmpeg
-from PIL import Image
-
-class WeakPerspectiveCamera(pyrender.Camera):
- def __init__(self,
- scale,
- translation,
- znear=pyrender.camera.DEFAULT_Z_NEAR,
- zfar=None,
- name=None):
- super(WeakPerspectiveCamera, self).__init__(
- znear=znear,
- zfar=zfar,
- name=name,
- )
- self.scale = scale
- self.translation = translation
-
- def get_projection_matrix(self, width=None, height=None):
- P = np.eye(4)
- P[0, 0] = self.scale[0]
- P[1, 1] = self.scale[1]
- P[0, 3] = self.translation[0] * self.scale[0]
- P[1, 3] = -self.translation[1] * self.scale[1]
- P[2, 2] = -1
- return P
-
-def render(motions, outdir='test_vis', device_id=0, name=None, pred=True):
- frames, njoints, nfeats = motions.shape
- MINS = motions.min(axis=0).min(axis=0)
- MAXS = motions.max(axis=0).max(axis=0)
-
- height_offset = MINS[1]
- motions[:, :, 1] -= height_offset
- trajec = motions[:, 0, [0, 2]]
-
- j2s = joints2smpl(num_frames=frames, device_id=0, cuda=True)
- rot2xyz = Rotation2xyz(device=torch.device("cuda:0"))
- faces = rot2xyz.smpl_model.faces
-
- if (not os.path.exists(outdir + name+'_pred.pt') and pred) or (not os.path.exists(outdir + name+'_gt.pt') and not pred):
- print(f'Running SMPLify, it may take a few minutes.')
- motion_tensor, opt_dict = j2s.joint2smpl(motions) # [nframes, njoints, 3]
-
- vertices = rot2xyz(torch.tensor(motion_tensor).clone(), mask=None,
- pose_rep='rot6d', translation=True, glob=True,
- jointstype='vertices',
- vertstrans=True)
-
- if pred:
- torch.save(vertices, outdir + name+'_pred.pt')
- else:
- torch.save(vertices, outdir + name+'_gt.pt')
- else:
- if pred:
- vertices = torch.load(outdir + name+'_pred.pt')
- else:
- vertices = torch.load(outdir + name+'_gt.pt')
- frames = vertices.shape[3] # shape: 1, nb_frames, 3, nb_joints
- print (vertices.shape)
- MINS = torch.min(torch.min(vertices[0], axis=0)[0], axis=1)[0]
- MAXS = torch.max(torch.max(vertices[0], axis=0)[0], axis=1)[0]
- # vertices[:,:,1,:] -= MINS[1] + 1e-5
-
-
- out_list = []
-
- minx = MINS[0] - 0.5
- maxx = MAXS[0] + 0.5
- minz = MINS[2] - 0.5
- maxz = MAXS[2] + 0.5
- polygon = geometry.Polygon([[minx, minz], [minx, maxz], [maxx, maxz], [maxx, minz]])
- polygon_mesh = trimesh.creation.extrude_polygon(polygon, 1e-5)
-
- vid = []
- for i in range(frames):
- if i % 10 == 0:
- print(i)
-
- mesh = Trimesh(vertices=vertices[0, :, :, i].squeeze().tolist(), faces=faces)
-
- base_color = (0.11, 0.53, 0.8, 0.5)
- ## OPAQUE rendering without alpha
- ## BLEND rendering consider alpha
- material = pyrender.MetallicRoughnessMaterial(
- metallicFactor=0.7,
- alphaMode='OPAQUE',
- baseColorFactor=base_color
- )
-
-
- mesh = pyrender.Mesh.from_trimesh(mesh, material=material)
-
- polygon_mesh.visual.face_colors = [0, 0, 0, 0.21]
- polygon_render = pyrender.Mesh.from_trimesh(polygon_mesh, smooth=False)
-
- bg_color = [1, 1, 1, 0.8]
- scene = pyrender.Scene(bg_color=bg_color, ambient_light=(0.4, 0.4, 0.4))
-
- sx, sy, tx, ty = [0.75, 0.75, 0, 0.10]
-
- camera = pyrender.PerspectiveCamera(yfov=(np.pi / 3.0))
-
- light = pyrender.DirectionalLight(color=[1,1,1], intensity=300)
-
- scene.add(mesh)
-
- c = np.pi / 2
-
- scene.add(polygon_render, pose=np.array([[ 1, 0, 0, 0],
-
- [ 0, np.cos(c), -np.sin(c), MINS[1].cpu().numpy()],
-
- [ 0, np.sin(c), np.cos(c), 0],
-
- [ 0, 0, 0, 1]]))
-
- light_pose = np.eye(4)
- light_pose[:3, 3] = [0, -1, 1]
- scene.add(light, pose=light_pose.copy())
-
- light_pose[:3, 3] = [0, 1, 1]
- scene.add(light, pose=light_pose.copy())
-
- light_pose[:3, 3] = [1, 1, 2]
- scene.add(light, pose=light_pose.copy())
-
-
- c = -np.pi / 6
-
- scene.add(camera, pose=[[ 1, 0, 0, (minx+maxx).cpu().numpy()/2],
-
- [ 0, np.cos(c), -np.sin(c), 1.5],
-
- [ 0, np.sin(c), np.cos(c), max(4, minz.cpu().numpy()+(1.5-MINS[1].cpu().numpy())*2, (maxx-minx).cpu().numpy())],
-
- [ 0, 0, 0, 1]
- ])
-
- # render scene
- r = pyrender.OffscreenRenderer(960, 960)
-
- color, _ = r.render(scene, flags=RenderFlags.RGBA)
- # Image.fromarray(color).save(outdir+'/'+name+'_'+str(i)+'.png')
-
- vid.append(color)
-
- r.delete()
-
- out = np.stack(vid, axis=0)
- if pred:
- imageio.mimsave(outdir + name+'_pred.gif', out, fps=20)
- else:
- imageio.mimsave(outdir + name+'_gt.gif', out, fps=20)
-
-
-
-
-
-if __name__ == "__main__":
- import argparse
- parser = argparse.ArgumentParser()
- parser.add_argument("--filedir", type=str, default=None, help='motion npy file dir')
- parser.add_argument('--motion-list', default=None, nargs="+", type=str, help="motion name list")
- args = parser.parse_args()
-
- filename_list = args.motion_list
- filedir = args.filedir
-
- for filename in filename_list:
- motions = np.load(filedir + filename+'_pred.npy')
- print('pred', motions.shape, filename)
- render(motions[0], outdir=filedir, device_id=0, name=filename, pred=True)
-
- motions = np.load(filedir + filename+'_gt.npy')
- print('gt', motions.shape, filename)
- render(motions[0], outdir=filedir, device_id=0, name=filename, pred=False)
diff --git a/spaces/AIFILMS/scene-edit-detection/app.py b/spaces/AIFILMS/scene-edit-detection/app.py
deleted file mode 100644
index 0c41facca6aa63cc2ab71d4e7cb00fbe42e4fcde..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/scene-edit-detection/app.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# Dependencies, see also requirement.txt ;)
-import gradio as gr
-import cv2
-import numpy as np
-import os
-
-from scenedetect import open_video, SceneManager
-from scenedetect.detectors import ContentDetector
-
-from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
-
-# —————————————————————————————————————————————————
-
-title = "Scene Edit Detection"
-description = "Gradio demo of PySceneDetect. Automatically finds every shot in a video sequence: 1. gives you timecode in/out for each shot; 2. saves each shot as a split mp4 video chunk for you to download; 3. displays a thumbnail for each shot as a gallery output."
- return content
-
- @classmethod
- def general_filter(cls, content, agent_name):
- return content
-
- @classmethod
- def filter(cls, content: str, agent_name: str, ui_name: str):
- """
- Description:
- Make certain modifications to the output content to enhance its aesthetics when content is showed in gradio.
- Input:
- content: output content
- agent_name: Whose output is it
- ui_name: What UI is currently launching
- Output:
- Modified content
- """
- mapping = {
- "SingleAgentUI": cls.singleagent_filter,
- "DebateUI": cls.debate_filter,
- "NovelUI": cls.novel_filter,
- "CodeUI": cls.code_filter,
- "GeneralUI": cls.general_filter
- }
- if ui_name in mapping:
- return mapping[ui_name](content, agent_name)
- else:
- return content
-
-class Client:
- """
- For inter-process communication, this is the client.
- `gradio_backend.py` serves as the backend, while `run_gradio` is the frontend.
- Communication between the frontend and backend is accomplished using Sockets.
- """
- # =======================Radio Const String======================
- SINGLE_MODE = "Single Mode"
- AUTO_MODE = "Auto Mode"
- MODE_LABEL = "Select the execution mode"
- MODE_INFO = "In single mode, execution pauses after each agent finishes its output until you click to continue. In auto mode, once you submit your input, all agents keep producing output until the task ends."
- # ===============================================================
- mode = AUTO_MODE
- FIRST_RUN:bool = True
- # If the last agent is the user, the next agent runs automatically instead of waiting for a button click.
- LAST_USER:bool = False
-
- receive_server = None
- send_server = None
- current_node = None
- cache = {}
-
- def __init__(self, host=HOST, port=PORT, bufsize=1024):
- assert Client.mode in [Client.SINGLE_MODE, Client.AUTO_MODE]
- self.SIGN = SPECIAL_SIGN
- self.bufsize = bufsize
- assert bufsize > 0
- self.client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
- self.client_socket.connect((host, port))
- while True:
- data = self.client_socket.recv(self.bufsize).decode('utf-8')
- if data == "hi":
- self.client_socket.send("hello agent".encode('utf-8'))
- time.sleep(1)
- elif data == "check":
- break
- print_log("Client: connecting successfully......")
-
- def start_server(self):
- while True:
- message = yield
- if message == 'exit':
- break
- self.send_message(message=message)
-
- def send_message(self, message):
- """Send the message to the server."""
- if isinstance(message, list) or isinstance(message, dict):
- message = str(message)
- assert isinstance(message, str)
- message = message + self.SIGN["SPLIT"]
- self.client_socket.send(message.encode('utf-8'))
-
- def receive_message(self, end_identifier: str = None, split_identifier: str = SPECIAL_SIGN["SPLIT"]) -> List:
- """Receive messages from the server, and it will block the process. Supports receiving long text."""
- remaining = ""
- while True:
- # receive message
- dataset = self.client_socket.recv(self.bufsize)
- try:
- # If decoding fails, it indicates that the current transmission is a long text.
- dataset = dataset.decode('utf-8')
- except UnicodeDecodeError:
- if not isinstance(remaining, bytes):
- remaining = remaining.encode('utf-8')
- assert isinstance(dataset, bytes)
- remaining += dataset
- try:
- dataset = remaining.decode('utf-8')
- remaining = ""
- except UnicodeDecodeError:
- continue
- assert isinstance(remaining, str)
- dataset = remaining + dataset
- list_dataset = dataset.split(split_identifier)
- if len(list_dataset) == 1:
- # If there is only one result from the split, it indicates that the current sequence itself has not yet ended.
- remaining = list_dataset[0]
- continue
- else:
- remaining = list_dataset[-1]
- # Receive successfully
- list_dataset = list_dataset[:-1]
- return_value = []
- for item in list_dataset:
- if end_identifier is not None and item == end_identifier:
- break
- return_value.append(item)
- identifier = yield return_value
- if identifier is not None:
- end_identifier, split_identifier = identifier
-
- def listening_for_start_(self):
- """
- When the server starts, the client is automatically launched.
- At this point, process synchronization is required,
- such as sending client data to the server for rendering,
- then the server sending the modified data back to the client,
- and simultaneously sending a startup command.
- Once the client receives the data, it will start running.
- """
- Client.receive_server = self.receive_message()
- # Waiting for information from the server.
- data: list = next(Client.receive_server)
- assert len(data) == 1
- data = eval(data[0])
- assert isinstance(data, dict)
- Client.cache.update(data)
- # Waiting for start command from the server.
- data:list = Client.receive_server.send(None)
- assert len(data) == 1
- assert data[0] == ""
-
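Both `send_message` and `receive_message` above rely on a split-identifier framing protocol: messages are delimited by `SPECIAL_SIGN["SPLIT"]`, and partially received UTF-8 bytes are buffered until they decode. A socket-free sketch of just that reassembly logic (the `<SEP>` delimiter is a stand-in for the real sign):

```python
def reassemble(chunks, sep="<SEP>"):
    """Rebuild delimiter-framed messages from arbitrary byte chunks."""
    remaining = b""
    messages = []
    for chunk in chunks:
        remaining += chunk
        try:
            text = remaining.decode("utf-8")
        except UnicodeDecodeError:
            continue                           # a multi-byte char was cut; wait for more
        parts = text.split(sep)
        messages.extend(parts[:-1])            # complete frames
        remaining = parts[-1].encode("utf-8")  # trailing partial frame
    return messages, remaining
```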
-class WebUI:
- """
- The base class for the frontend, which encapsulates some functions for process information synchronization.
- When a new frontend needs to be created, you should inherit from this class,
- then implement the `construct_ui()` method and set up event listeners.
- Finally, execute `run()` to load it.
- """
-
- def receive_message(
- self,
- end_identifier:str=None,
- split_identifier:str=SPECIAL_SIGN["SPLIT"]
- )->List:
- """This is the same as in Client class."""
- yield "hello"
- remaining = ""
- while True:
- dataset = self.client_socket.recv(self.bufsize)
- try:
- dataset = dataset.decode('utf-8')
- except UnicodeDecodeError:
- if not isinstance(remaining, bytes):
- remaining = remaining.encode('utf-8')
- assert isinstance(dataset, bytes)
- remaining += dataset
- try:
- dataset = remaining.decode('utf-8')
- remaining = ""
- except UnicodeDecodeError:
- continue
- assert isinstance(remaining, str)
- dataset = remaining + dataset
- list_dataset = dataset.split(split_identifier)
- if len(list_dataset) == 1:
- remaining = list_dataset[0]
- continue
- else:
- remaining = list_dataset[-1]
- list_dataset = list_dataset[:-1]
- return_value = []
- for item in list_dataset:
- if end_identifier is not None and item == end_identifier:
- break
- return_value.append(item)
- identifier = yield return_value
- if identifier is not None:
- end_identifier, split_identifier = identifier
-
- def send_message(self, message:str):
- """Send message to client."""
- SEP = self.SIGN["SPLIT"]
- self.client_socket.send(
- (message+SEP).encode("utf-8")
- )
-
- def _connect(self):
- # check
- if self.server_socket:
- self.server_socket.close()
- assert not os.path.isfile("PORT.txt")
- self.socket_port = check_port(PORT)
- # Step1. initialize
- self.server_socket = socket.socket(
- socket.AF_INET, socket.SOCK_STREAM
- )
- # Step2. binding ip and port
- self.server_socket.bind((self.socket_host, self.socket_port))
- # Step3. run client
- self._start_client()
-
- # Step4. listening for connect
- self.server_socket.listen(1)
-
- # Step5. test connection
- client_socket, client_address = self.server_socket.accept()
- print_log("server: establishing connection......")
- self.client_socket = client_socket
- while True:
- client_socket.send("hi".encode('utf-8'))
- time.sleep(1)
- data = client_socket.recv(self.bufsize).decode('utf-8')
- if data == "hello agent":
- client_socket.send("check".encode('utf-8'))
- print_log("server: connect successfully")
- break
- assert os.path.isfile("PORT.txt")
- os.remove("PORT.txt")
- if self.receive_server:
- del self.receive_server
- self.receive_server = self.receive_message()
- assert next(self.receive_server) == "hello"
-
- @abstractmethod
- def render_and_register_ui(self):
- # You need to implement this function.
- # The function's purpose is to bind the name of the agent with an image.
- # The name of the agent is stored in `self.cache[]`,
- # and the function for binding is in the method `add_agents` of the class `GradioConfig` in `Gradio_Config/gradio_config.py`.
- # This function will be executed in `self.first_recieve_from_client()`
- pass
-
- def first_recieve_from_client(self, reset_mode:bool=False):
- """
- This function is used to receive information from the client and is typically executed during the initialization of the class.
- If `reset_mode` is False, it will bind the name of the agent with an image.
- """
- self.FIRST_RECIEVE_FROM_CLIENT = True
- data_list:List = self.receive_server.send(None)
- assert len(data_list) == 1
- data = eval(data_list[0])
- assert isinstance(data, dict)
- self.cache.update(data)
- if not reset_mode:
- self.render_and_register_ui()
-
- def _second_send(self, message:dict):
- # Send the modified message.
- # It will be executed in `self.send_start_cmd()` automatically.
- self.send_message(str(message))
-
- def _third_send(self):
- # Send start command.
- # It will be executed in `self.send_start_cmd()` automatically.
- self.send_message(self.SIGN['START'])
-
- def send_start_cmd(self, message:dict={"hello":"hello"}):
- # If you have no message to send, you can ignore the args `message`.
- assert self.FIRST_RECIEVE_FROM_CLIENT, "Please make sure you have executed `self.first_recieve_from_client()` manually."
- self._second_send(message=message)
- time.sleep(1)
- self._third_send()
- self.FIRST_RECIEVE_FROM_CLIENT = False
-
- def __init__(
- self,
- client_cmd: list, # ['python','test.py','--a','b','--c','d']
- socket_host: str = HOST,
- socket_port: int = PORT,
- bufsize: int = 1024,
- ui_name: str = ""
- ):
- self.ui_name = ui_name
- self.server_socket = None
- self.SIGN = SPECIAL_SIGN
- self.socket_host = socket_host
- self.socket_port = socket_port
- self.bufsize = bufsize
- self.client_cmd = client_cmd
-
- self.receive_server = None
- self.cache = {}
- assert self.bufsize > 0
- self._connect()
-
- def _start_client(self):
- print(f"server: executing `{' '.join(self.client_cmd)}` ...")
- self.backend = subprocess.Popen(self.client_cmd)
-
- def _close_client(self):
- print(f"server: killing `{' '.join(self.client_cmd)}` ...")
- self.backend.terminate()
-
- def reset(self):
- print("server: restarting ...")
- self._close_client()
- time.sleep(1)
- self._connect()
-
- def render_bubble(self, rendered_data, agent_response, node_name, render_node_name:bool=True):
- # Rendered bubbles (HTML format) are used for gradio output.
- output = f"**{node_name}** " if render_node_name else ""
- for item in agent_response:
- for agent_name in item:
- content = item[agent_name].replace("\n", " ")
- content = UIHelper.filter(content, agent_name, self.ui_name)
- output = f"{output} {UIHelper.wrap_css(content, agent_name)}"
- rendered_data[-1] = [rendered_data[-1][0], output]
- return rendered_data
-
- def run(self, share: bool = True):
- self.demo.queue()
- self.demo.launch(share=share)
-
-
-if __name__ == '__main__':
- pass
\ No newline at end of file
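The `_connect` handshake above (server sends `hi`, the client answers `hello agent`, the server confirms with `check`) can be exercised without real networking by pairing the two sides over `socket.socketpair`. A hedged sketch of that round trip:

```python
import socket
import threading

def server_side(conn):
    """Mirror WebUI._connect: greet, wait for the client, then confirm."""
    conn.send(b"hi")
    if conn.recv(1024) == b"hello agent":
        conn.send(b"check")

def client_side(conn):
    """Mirror Client.__init__: answer greetings until 'check' arrives."""
    while True:
        data = conn.recv(1024).decode("utf-8")
        if data == "hi":
            conn.send(b"hello agent")
        elif data == "check":
            return True
```

On a real TCP connection successive `send` calls can coalesce into one `recv`, which is why the original code sleeps between rounds; here each side only sends after hearing from the other, so the ordering is safe.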
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_gem.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_gem.py
deleted file mode 100644
index 5c0e0d3e8dc5d7a0b259f1624ee2402af8a401cd..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet34_gem.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNet',
- depth=34,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GeneralizedMeanPooling'),
- head=dict(
- type='LinearClsHead',
- num_classes=1000,
- in_channels=512,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
- topk=(1, 5),
- ))
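The `GeneralizedMeanPooling` neck in this config reduces each channel's spatial map with a generalized mean, which interpolates between average pooling (p = 1) and max pooling (p → ∞). A hedged NumPy sketch of the idea (not mmpretrain's actual implementation; `p=3` is just a common default):

```python
import numpy as np

def gem_pool(feat, p=3.0, eps=1e-6):
    """Generalized-mean pool a (C, H, W) feature map down to a (C,) vector."""
    feat = np.clip(feat, eps, None)  # GeM assumes strictly positive inputs
    return np.mean(feat ** p, axis=(1, 2)) ** (1.0 / p)
```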
diff --git a/spaces/Accel/media-converter/app.py b/spaces/Accel/media-converter/app.py
deleted file mode 100644
index 9a11e585487333ec6f0f5b103685f39015a35f4d..0000000000000000000000000000000000000000
--- a/spaces/Accel/media-converter/app.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import logging
-import subprocess
-from pprint import pprint
-from tempfile import _TemporaryFileWrapper
-
-from ffmpy import FFmpeg
-
-import gradio as gr
-from functions import (Clear, CommandBuilder, audio_channels, audio_codecs,
- audio_quality, audio_sample_rates,
- change_clipbox, containers, customBitrate, mediaChange, presets, supported_codecs, supported_presets, video_codecs, video_containers,
- vf)
-
-logging.basicConfig(level=logging.INFO)
-
-
-logging.info(msg=f"{video_containers}")
-
-
-def convert(file: _TemporaryFileWrapper, options: str, state):
- stderr=""
- stdout=""
- output_file=""
- video=""
- ffmpeg=FFmpeg()
- try:
- logging.info(f"File name: {file.name}")
- new_name, _ = file.name.rsplit(".", 1)  # split on the last dot so dotted filenames keep their stem
- logging.info(f"New filename:{new_name}")
- output_file = f"{new_name}1.{options.lower()}"
- ffmpeg = FFmpeg(inputs={file.name: None}, outputs={
- output_file: ffmpeg_commands.commands.split()}, global_options=["-y", "-hide_banner"])
- print(ffmpeg)
- print(ffmpeg.cmd)
-
- ffmpeg.run(stderr=subprocess.PIPE)
- # pprint(f"{stdout} {stderr}")
- output=f"{ffmpeg.cmd}"
- # video=gr.update(label=output_file,value=output_file)
-
- except Exception as e:
- stderr=e
- output=f"{stderr}"
- return [None, None, None, output, state]
-
- state=output_file
-
- return [output_file,output_file,output_file,output,state]
-
-
-with gr.Blocks(css="./styles.css") as dm:
- with gr.Tabs():
- with gr.TabItem("Format"):
- # Input Buttons
- with gr.Row():
- with gr.Column() as inputs:
- file_input = gr.File()
- options = gr.Radio(
- label="options", choices=containers,value=containers[0])
- with gr.Row():
- with gr.Row() as inputs_clip:
- clip = gr.Dropdown(
- choices=["None", "Enabled"], label="Clip:", value="None")
- start_time = gr.Textbox(
- label="Start Time:", placeholder="00:00", visible=False,interactive=True)
- stop_time = gr.Textbox(
- label="Stop Time:", placeholder="00:00", visible=False)
- with gr.Row():
- clearBtn = gr.Button("Clear")
- convertBtn = gr.Button("Convert", variant="primary")
-
- # Output Buttons
- with gr.Column():
- # media_output = gr.Audio(label="Output")
- with gr.Row():
- video_button=gr.Button("Video")
- audio_button=gr.Button("Audio")
- file_button=gr.Button("File")
- media_output_audio = gr.Audio(type="filepath",label="Output",visible=True,interactive=False,source="filepath")
- media_output_video = gr.Video(label="Output",visible=False)
- media_output_file = gr.File(label="Output",visible=False)
- with gr.Row() as command_output:
- output_textbox = gr.Textbox(label="command",elem_id="outputtext")
-
- resetFormat=Clear(inputs,inputs_clip)
- print(inputs_clip.children)
- print(resetFormat)
- state=gr.Variable()
- clearBtn.click(resetFormat.clear, inputs=resetFormat(), outputs=resetFormat())
- convertBtn.click(convert, inputs=[file_input, options,state], outputs=[
- media_output_audio,media_output_file,media_output_video, output_textbox,state])
-
- with gr.TabItem("Video"):
- with gr.Row() as video_inputs:
- video_options = gr.Dropdown(
- label="video", choices=video_codecs,value=video_codecs[-1])
- preset_options = gr.Dropdown(choices=presets, label="presets",value=presets[-1])
-
-
- with gr.Row(elem_id="button"):
- with gr.Column():
- clearBtn = gr.Button("Clear")
- videoReset=Clear(video_inputs)
- clearBtn.click(videoReset.clear, videoReset(), videoReset())
-
- with gr.TabItem("Audio"):
- with gr.Row() as audio_inputs:
- # print(names[0])
- audio_options = gr.Dropdown(
- label="audio", choices=audio_codecs, value=audio_codecs[-1])
- audio_bitrate=gr.Dropdown(choices=audio_quality, label="Audio Qualities",
- value=audio_quality[0])
- custom_bitrate=gr.Number(label="Audio Qualities",visible=False)
- gr.Dropdown(choices=audio_channels,
- label="Audio Channels", value=audio_channels[0])
- gr.Dropdown(choices=audio_sample_rates,
- label="Sample Rates", value=audio_sample_rates[0])
-
-
- with gr.Column(elem_id="button"):
- clearBtn = gr.Button("Clear")
- audioReset=Clear(audio_inputs)
- clearBtn.click(audioReset.clear, audioReset(), audioReset())
-
- with gr.TabItem("Filters") as filter_inputs:
- gr.Markdown("## Video")
- with gr.Row().style(equal_height=True) as filter_inputs:
- for i in vf:
- # print(i.values())
- # values = list(i.values())
- values=list(i.values())[0]
- choices=[j for lst in values for j in [lst.get("name")]]
- a=gr.Dropdown(label=list(i.keys()),
- choices=choices, value=choices[0])
- gr.Markdown("## Audio")
- with gr.Row(elem_id="acontrast") as filter_inputs_1:
- acontrastSlider=gr.Slider(label="Acontrast", elem_id="acontrast")
-
- with gr.Column(elem_id="button"):
- clearBtn = gr.Button("Clear")
-
- filterReset=Clear(filter_inputs,filter_inputs_1)
- clearBtn.click(filterReset.clear, filterReset(), filterReset())
-
- """ Format Tab change functions"""
- ffmpeg_commands=CommandBuilder(inputs_clip,video_inputs,audio_inputs,filter_inputs,filter_inputs_1)
- # ffmpeg_commands.do()
- dm.load(fn=ffmpeg_commands.reset,inputs=[],outputs=[])
- pprint(ffmpeg_commands.commands)
- ffmpeg_commands.update(output_textbox)
- # file_input.change(fn=updateOutput,inputs=file_input,outputs=output_textbox)
- clip.change(fn=change_clipbox, inputs=clip,
- outputs=[start_time, stop_time])
-
- options.change(supported_codecs,[options],[video_options,audio_options])
- # options.change(mediaChange,[options],[media_output_audio,media_output_video])
- # video_button.click(fn=videoChange,inputs=media_output_file,outputs=media_output_video)
- audio_button.click(mediaChange,[audio_button,state],[media_output_audio,media_output_video,media_output_file])
- video_button.click(mediaChange,[video_button,state],[media_output_audio,media_output_video,media_output_file])
- # media_output_audio.change(lambda x:gr.update(value=x),[media_output_audio],[media_output_video])
- file_button.click(mediaChange,[file_button,state],[media_output_audio,media_output_video,media_output_file])
- """Video Tab change functions"""
- video_options.change(supported_presets,[video_options],[preset_options])
- """Audio Tab change functions"""
- audio_bitrate.change(customBitrate,[audio_bitrate],[custom_bitrate])
-
-if __name__=='__main__':
- dm.launch()
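`convert` above derives the output filename by swapping the extension of the uploaded temp file. A small standalone sketch of that step using `os.path.splitext`, which splits on the last dot so dotted names survive (the `1` suffix mirrors the app's `{new_name}1.{ext}` convention):

```python
import os

def output_path(input_path: str, target_ext: str) -> str:
    """Place the converted file next to the input, e.g. clip.mkv -> clip1.mp4."""
    stem, _ = os.path.splitext(input_path)  # splits on the LAST dot only
    return f"{stem}1.{target_ext.lower()}"
```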
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Lockchat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Lockchat.py
deleted file mode 100644
index c15eec8dd99f6a50b7eb02cf8ff14494380f4b9a..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Lockchat.py
+++ /dev/null
@@ -1,64 +0,0 @@
-from __future__ import annotations
-
-import json
-
-import requests
-
-from ..typing import Any, CreateResult
-from .base_provider import BaseProvider
-
-
-class Lockchat(BaseProvider):
- url: str = "http://supertest.lockchat.app"
- supports_stream = True
- supports_gpt_35_turbo = True
- supports_gpt_4 = True
-
- @staticmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool, **kwargs: Any) -> CreateResult:
-
- temperature = float(kwargs.get("temperature", 0.7))
- payload = {
- "temperature": temperature,
- "messages" : messages,
- "model" : model,
- "stream" : True,
- }
-
- headers = {
- "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0",
- }
- response = requests.post("http://supertest.lockchat.app/v1/chat/completions",
- json=payload, headers=headers, stream=True)
-
- response.raise_for_status()
- for token in response.iter_lines():
- if b"The model: `gpt-4` does not exist" in token:
- print("error, retrying...")
- yield from Lockchat.create_completion(
- model = model,
- messages = messages,
- stream = stream,
- temperature = temperature,
- **kwargs)
- return
-
- if b"content" in token:
- token = json.loads(token.decode("utf-8").split("data: ")[1])
- token = token["choices"][0]["delta"].get("content")
- if token:
- yield (token)
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/base.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/base.py
deleted file mode 100644
index 83028a911d812536d91e04656d2f8056ef942cc8..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/base.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from __future__ import annotations
-
-from abc import abstractmethod
-from typing import TYPE_CHECKING, Any, List, Optional
-
-from agentverse.environments.simulation_env.rules.describer import (
- BaseDescriber,
- describer_registry,
-)
-from agentverse.environments.simulation_env.rules.order import BaseOrder, order_registry
-from agentverse.environments.simulation_env.rules.selector import (
- BaseSelector,
- selector_registry,
-)
-from agentverse.environments.simulation_env.rules.updater import (
- BaseUpdater,
- updater_registry,
-)
-from agentverse.environments.simulation_env.rules.visibility import (
- BaseVisibility,
- visibility_registry,
-)
-from agentverse.environments import BaseRule
-
-if TYPE_CHECKING:
- from agentverse.environments.base import BaseEnvironment
-
-from agentverse.message import Message
-
-
-# class Rule(BaseModel):
-class SimulationRule(BaseRule):
- """
- Rule for the environment. It controls the speaking order of the agents
- and maintains the set of visible agents for each agent.
- """
-
- order: BaseOrder
- visibility: BaseVisibility
- selector: BaseSelector
- updater: BaseUpdater
- describer: BaseDescriber
-
- def __init__(
- self,
- order_config,
- visibility_config,
- selector_config,
- updater_config,
- describer_config,
- ):
- order = order_registry.build(**order_config)
- visibility = visibility_registry.build(**visibility_config)
- selector = selector_registry.build(**selector_config)
- updater = updater_registry.build(**updater_config)
- describer = describer_registry.build(**describer_config)
- super().__init__(
- order=order,
- visibility=visibility,
- selector=selector,
- updater=updater,
- describer=describer,
- )
-
- def get_next_agent_idx(
- self, environment: BaseEnvironment, *args, **kwargs
- ) -> List[int]:
- """Return the index of the next agent to speak"""
- return self.order.get_next_agent_idx(environment, *args, **kwargs)
-
- def update_visible_agents(
- self, environment: BaseEnvironment, *args, **kwargs
- ) -> None:
- """Update the set of visible agents for the agent"""
- self.visibility.update_visible_agents(environment, *args, **kwargs)
-
- def select_message(
- self, environment: BaseEnvironment, messages: List[Message], *args, **kwargs
- ) -> List[Message]:
- """Select a set of valid messages from all the generated messages"""
- return self.selector.select_message(environment, messages, *args, **kwargs)
-
- def update_memory(self, environment: BaseEnvironment, *args, **kwargs) -> None:
- """For each message, add it to the memory of the agent who is able to see that message"""
- self.updater.update_memory(environment, *args, **kwargs)
-
- def get_env_description(
- self, environment: BaseEnvironment, *args, **kwargs
- ) -> List[str]:
- """Return the description of the environment for each agent"""
- return self.describer.get_env_description(environment, *args, **kwargs)
-
- def reset(self) -> None:
- self.order.reset()
- self.visibility.reset()
- self.selector.reset()
- self.updater.reset()
- self.describer.reset()
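Each component above is instantiated through a `*_registry.build(**config)` call. The registries follow the common name-to-class pattern; a minimal sketch of that pattern (hypothetical, not agentverse's actual `Registry` class, and `SequentialOrder` is an illustrative stand-in):

```python
class Registry:
    """Map string names to classes so configs can select implementations."""

    def __init__(self):
        self._entries = {}

    def register(self, name):
        def decorator(cls):
            self._entries[name] = cls
            return cls
        return decorator

    def build(self, type, **kwargs):
        # `type` mirrors how configs name the implementation to construct
        return self._entries[type](**kwargs)

order_registry = Registry()

@order_registry.register("sequential")
class SequentialOrder:
    def __init__(self, n_agents=1):
        self.n_agents = n_agents
```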
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/code_api.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/code_api.py
deleted file mode 100644
index a134b649b3bf215bdf05dd847fdc755b1f0ab24e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/code_api.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import io
-import sys
-import ast
-import json
-import astunparse
-import concurrent.futures
-import traceback
-
-
-def get_call_str(assert_statement: str) -> str:
- call_str = ast.parse(assert_statement).body[0].test.left # type: ignore
- return astunparse.unparse(call_str).strip()
-
-def get_output(func: str, assert_statement: str) -> str:
- try:
- func_call = get_call_str(assert_statement)
- try:
- exec(func, globals())
- output = eval(func_call)
- return output
- except Exception as e:
- return str(e)
- except:
- return "get_call_str error"
-
-def worker(code, globals=None, locals=None):
- old_stdout = sys.stdout
- redirected_output = sys.stdout = io.StringIO()
- if locals is None:
- locals = {}
- try:
- # TODO: exec(code, globals, locals) could be buggy
- # In cases where both import statement and function exits in the code, if the locals are given,
- # the code will not find the imported package.
- # For example,
- # code = "import math\ndef f(x):\n\treturn math.pow(x, 2)\nassert f(2) == 4"
- # It will return "NameError: name 'math' is not defined"
- exec(code, locals, locals)
- stdout = redirected_output.getvalue()
- return stdout, globals, locals
- except Exception as e:
- trace_str = traceback.format_exc()
- return f"Error: {trace_str}", globals, locals
- finally:
- sys.stdout = old_stdout # restore the original stdout
-
-def execute_code(code: str) -> str:
- """Execute a snippet of python code and return the output or the error message.
- """
- timeout = 5
- try:
- with concurrent.futures.ThreadPoolExecutor() as executor:
- future = executor.submit(worker, code)
- result, _, _ = future.result(timeout)
- return result
- except concurrent.futures.TimeoutError:
- return "Timeout"
-
-def execute_unit_tests(func_impl: str, tests: list) -> str:
- """Run a Python function against a list of unit tests and return detailed feedback.
- """
- # tests = eval(tests)
- # assert type(tests) == list
-
- # Combine function code and assert statement
- func_test_list = [f'{func_impl}\n{test}' for test in tests]
-
- # Run the tests and collect the results
- success_tests = []
- failed_tests = []
- is_passing = True
- num_tests = len(func_test_list)
- for i in range(num_tests):
- output = execute_code(func_test_list[i])
- if output == "Timeout":
- failed_tests += [f"{tests[i]} # output: Timeout"]
- is_passing = False
- elif output.startswith("Error: "):
- # print(output)
- func_output = get_output(func_impl, tests[i])
- if func_output == "get_call_str error":
- func_output = output
- failed_tests += [f"{tests[i]} # output: {func_output}"]
- is_passing = False
- else:
- success_tests += [tests[i]]
-
- feedback = "Tested passed:\n\n"
- for test in success_tests:
- feedback += f"{test}\n\n"
- feedback += "Tests failed:\n\n"
- for test in failed_tests:
- feedback += f"{test}\n\n"
-
- return json.dumps({"is_passing": is_passing,
- "feedback": feedback})
-
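`execute_code` above offloads `worker` to a thread pool so a hung snippet only blocks up to the timeout (the thread itself is not killed; its result is simply abandoned). A condensed, self-contained sketch of that exec-with-timeout pattern:

```python
import concurrent.futures
import io
import sys
import traceback

def run_with_timeout(code: str, timeout: float = 5.0) -> str:
    """Exec a code snippet, capturing stdout; report 'Timeout' if it runs long."""
    def worker() -> str:
        buf, old_stdout = io.StringIO(), sys.stdout
        sys.stdout = buf
        try:
            exec(code, {})
            return buf.getvalue()
        except Exception:
            return f"Error: {traceback.format_exc()}"
        finally:
            sys.stdout = old_stdout  # always restore the real stdout
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return executor.submit(worker).result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        return "Timeout"
    finally:
        # Don't block on a possibly stuck worker thread
        executor.shutdown(wait=False)
```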
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/prisoner.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/prisoner.py
deleted file mode 100644
index c21217312bed0ffd447eeaa115e7eb28d53c680b..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/prisoner.py
+++ /dev/null
@@ -1,48 +0,0 @@
-from __future__ import annotations
-
-import random
-from typing import TYPE_CHECKING, Any, List, Union
-
-from . import visibility_registry as VisibilityRegistry
-from .base import BaseVisibility
-
-if TYPE_CHECKING:
- from agentverse.environments import BaseEnvironment
-
-
-@VisibilityRegistry.register("prisoner")
-class PrisonerVisibility(BaseVisibility):
- """
- Visibility function for classroom, supports group discussion.
-
- Args:
- student_per_group:
- The number of students per group.
- num_discussion_turn:
- The number of turns for group discussion.
- grouping:
- The grouping information. If it is a string, then it should be a
- grouping method, options are ["random", "sequential"]. If it is a
- list of list of int, then it should be the grouping information.
- """
-
- current_turn: int = 0
-
- def update_visible_agents(self, environment: BaseEnvironment):
- self.update_receiver(environment, reset=False)
-
- def update_receiver(self, environment: BaseEnvironment, reset=False):
- if reset:
- for agent in environment.agents:
- agent.set_receiver(["all"])
- else:
- # 0:police 1: prisoner1 2: prisoner2
- # environment.agents[0].set_receiver({"Police", "Suspect1", "Suspect2"})
- # environment.agents[1].set_receiver({"Police", "Suspect1"})
- # environment.agents[2].set_receiver({"Police", "Suspect2"})
-
- # we update receiver in environment
- pass
-
- def reset(self):
- self.current_turn = 0
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/localstorage-data.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/localstorage-data.js
deleted file mode 100644
index 898f8f9256f39b30d47106b3afdb158872108e31..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/localstorage-data.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import DataManager from './storage/localstorage/data/DataManager.js';
-export default DataManager;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toucheventstop-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toucheventstop-plugin.d.ts
deleted file mode 100644
index e97fd03909c54fa736961a3b3f55aafbd5bb6c3e..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/toucheventstop-plugin.d.ts
+++ /dev/null
@@ -1,9 +0,0 @@
-import TouchEventStop from './toucheventstop';
-
-export default class TouchEventStopPlugin extends Phaser.Plugins.BasePlugin {
- add(
- gameObject: Phaser.GameObjects.GameObject,
- config?: TouchEventStop.IConfig
- ): TouchEventStop;
-
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-plugin.js
deleted file mode 100644
index df5d2cc999cceb328457f68e14f36a0742c04eb3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-plugin.js
+++ /dev/null
@@ -1,35 +0,0 @@
-import ObjectFactory from './ObjectFactory.js';
-
-import AudioFactory from './audio/Factory.js';
-import BallFactory from './ball/Factory.js';
-import BarsFactory from './bars/Factory.js';
-import BoxFactory from './box/Factory.js';
-import ClockFactory from './clock/Factory.js';
-import CubeFactory from './cube/Factory.js';
-import CustomFactory from './custom/Factory.js';
-import DotsFactory from './dots/Factory.js';
-import FacebookFactory from './facebook/Factory.js';
-import GridFactory from './grid/Factory.js';
-import LosFactory from './los/Factory.js';
-import OrbitFactory from './orbit/Factory.js';
-import OvalFactory from './oval/Factory.js';
-import PieFactory from './pie/Factory.js';
-import PuffFactory from './puff/Factory.js';
-import RadioFactory from './radio/Factory.js';
-import RingsFactory from './rings/Factory.js';
-import SpinnerFactory from './spinner/Factory.js';
-
-
-class SpinnerPlugin extends Phaser.Plugins.ScenePlugin {
- constructor(scene, pluginManager) {
- super(scene, pluginManager);
-
- this.add = new ObjectFactory(scene);
- }
-
- start() {
- var eventEmitter = this.scene.events;
- eventEmitter.on('destroy', this.destroy, this);
- }
-}
-export default SpinnerPlugin;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/Factory.js
deleted file mode 100644
index d32aee99a5a66db03a0cd01d77f01e33ae568cc6..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthsizer/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import FixWidthSizer from './FixWidthSizer.js';
-import ObjectFactory from '../ObjectFactory.js';
-import SetValue from '../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('fixWidthSizer', function (x, y, minWidth, minHeight, config) {
- var gameObject = new FixWidthSizer(this.scene, x, y, minWidth, minHeight, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.FixWidthSizer', FixWidthSizer);
-
-export default FixWidthSizer;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/PerspectiveCard.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/PerspectiveCard.js
deleted file mode 100644
index a41a970f7d6db8f1cd0dd345e34300d9da1b7fd4..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspectivecard/PerspectiveCard.js
+++ /dev/null
@@ -1,161 +0,0 @@
-import OverlapSizer from '../overlapsizer/OverlapSizer.js';
-import CreatePerspectiveCardMesh from './CreatePerspectiveCardMesh.js';
-import PerspectiveMethods from './PerspectiveMethods.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-class PerspectiveCard extends OverlapSizer {
- constructor(scene, config) {
- super(scene, config);
- this.type = 'rexPerspectiveCard';
-
- // Layout faces
- var backFace = config.back;
- var backFaceExpand = GetValue(config, 'expand.back', true);
- this.add(
- backFace,
- { key: 'back', expand: backFaceExpand }
- );
-
- var frontFace = config.front;
- var frontFaceExpand = GetValue(config, 'expand.front', true);
- this.add(
- frontFace,
- { key: 'front', expand: frontFaceExpand }
- );
-
- // Add PerspectiveCardMesh
- this.perspectiveCard = CreatePerspectiveCardMesh.call(this, config);
- this.pin(this.perspectiveCard);
-
- this.exitPerspectiveMode(false);
- }
-
- get flip() {
- return this.perspectiveCard.flip;
- }
-
- get face() {
- return this.perspectiveCard.face;
- }
-
- set face(index) {
- // Can't set face during flipping
- if (this.flip && this.flip.isRunning) {
- return;
- }
- this.perspectiveCard.face = index;
-
- var isFrontFace = (index === 0);
- var frontFace = this.childrenMap.front;
- var backFace = this.childrenMap.back;
- this.setChildVisible(frontFace, isFrontFace);
- this.setChildVisible(backFace, !isFrontFace);
- }
-
- setFace(face) {
- this.face = face;
- return this;
- }
-
- toggleFace() {
- var newFace = (this.face === 0) ? 1 : 0;
- this.setFace(newFace);
- return this;
- }
-
- get isInPerspectiveMode() {
- return this.perspectiveCard.visible;
- }
-
- get rotationX() {
- return this.perspectiveCard.rotationX;
- }
-
- set rotationX(value) {
- this.enterPerspectiveMode();
- this.perspectiveCard.rotationX = value;
- }
-
- get angleX() {
- return this.perspectiveCard.angleX;
- }
-
- set angleX(value) {
- this.enterPerspectiveMode();
- this.perspectiveCard.angleX = value;
- }
-
- get rotationY() {
- return this.perspectiveCard.rotationY;
- }
-
- set rotationY(value) {
- this.enterPerspectiveMode();
- this.perspectiveCard.rotationY = value;
- }
-
- get angleY() {
- return this.perspectiveCard.angleY;
- }
-
- set angleY(value) {
- this.enterPerspectiveMode();
- this.perspectiveCard.angleY = value;
- }
-
- get rotationZ() {
- return this.perspectiveCard.rotationZ;
- }
-
- set rotationZ(value) {
- this.enterPerspectiveMode();
- this.perspectiveCard.rotationZ = value;
- }
-
- get angleZ() {
- return this.perspectiveCard.angleZ;
- }
-
- set angleZ(value) {
- this.enterPerspectiveMode();
- this.perspectiveCard.angleZ = value;
- }
-
- panX(v) {
- this.enterPerspectiveMode();
- this.perspectiveCard.panX(v);
- return this;
- }
-
- panY(v) {
- this.enterPerspectiveMode();
- this.perspectiveCard.panY(v);
- return this;
- }
-
- panZ(v) {
- this.enterPerspectiveMode();
- this.perspectiveCard.panZ(v);
- return this;
- }
-
- transformVerts(x, y, z, rotateX, rotateY, rotateZ) {
- this.enterPerspectiveMode();
- this.perspectiveCard.transformVerts(x, y, z, rotateX, rotateY, rotateZ);
- return this;
- }
-
- forEachFace(callback, scope, ignoreInvalid) {
- this.enterPerspectiveMode();
- this.perspectiveCard.forEachFace(callback, scope, ignoreInvalid);
- return this;
- }
-}
-
-Object.assign(
- PerspectiveCard.prototype,
- PerspectiveMethods
-)
-
-export default PerspectiveCard;
\ No newline at end of file
diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/vits/utils.py b/spaces/Akmyradov/TurkmenTTSweSTT/vits/utils.py
deleted file mode 100644
index b445fb65836a0b97e46426300eea9a820179797a..0000000000000000000000000000000000000000
--- a/spaces/Akmyradov/TurkmenTTSweSTT/vits/utils.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
-  new_state_dict = {}
-  for k, v in state_dict.items():
-    try:
-      new_state_dict[k] = saved_state_dict[k]
-    except KeyError:
-      logger.info("%s is not in the checkpoint" % k)
-      new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})" .format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
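The key-by-key fallback in `load_checkpoint` above — take a tensor from the checkpoint when the name matches, keep the freshly initialized value otherwise — can be sketched without torch at all (the `partial_load` helper and the key names below are illustrative, not part of this repo):

```python
def partial_load(state_dict, saved_state_dict, log=print):
    # Mirror of the loop in load_checkpoint: matching keys come from the
    # checkpoint, missing keys fall back to the model's fresh values.
    new_state_dict = {}
    for k, v in state_dict.items():
        if k in saved_state_dict:
            new_state_dict[k] = saved_state_dict[k]
        else:
            log("%s is not in the checkpoint" % k)
            new_state_dict[k] = v
    return new_state_dict

fresh = {"enc.weight": "fresh_w", "dec.bias": "fresh_b"}
ckpt = {"enc.weight": "saved_w"}  # dec.bias missing from the checkpoint
merged = partial_load(fresh, ckpt)
print(merged)  # {'enc.weight': 'saved_w', 'dec.bias': 'fresh_b'}
```

This is why the repo can resume from checkpoints saved by slightly different model configurations: unmatched parameters simply keep their initialization.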
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
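The sort key in `latest_checkpoint_path` above concatenates every digit found in the path (note that the `regex` argument is really a glob pattern), so `G_1000.pth` correctly sorts after `G_999.pth` where a plain lexicographic sort would not — provided no other path component contains digits. A small sketch with made-up file names:

```python
files = ["logs/m/G_999.pth", "logs/m/G_1000.pth", "logs/m/G_500.pth"]
# Same key as latest_checkpoint_path: all digits in the path, as one int.
files.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
print(files[-1])  # logs/m/G_1000.pth
```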
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-  data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
-  hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
-  hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-    logger.warning("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-      logger.warning("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
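The `HParams` wrapper above lets a nested JSON config be read both as attributes and as items. A minimal restatement showing the recursion (the config keys below are illustrative, not this repo's actual schema):

```python
class HParams:
    # Minimal restatement of the class above: nested dicts become nested
    # HParams instances, readable as attributes or as items.
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            if type(v) == dict:
                v = HParams(**v)
            setattr(self, k, v)

    def __getitem__(self, key):
        return getattr(self, key)

    def __contains__(self, key):
        return key in self.__dict__

hps = HParams(train={"batch_size": 16}, model={"hidden_channels": 192})
print(hps.train.batch_size)             # attribute-style access: 16
print(hps["model"]["hidden_channels"])  # item-style access: 192
```

The dual access style is what allows `hparams.model_dir = model_dir` to be bolted on after construction while the JSON-derived fields stay reachable by name.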
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/models_infer.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/models_infer.py
deleted file mode 100644
index 4b9bb82bf5831c5264f3e1e52b23e8e875f5fd9e..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/models_infer.py
+++ /dev/null
@@ -1,402 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-    filter_channels = in_channels  # this should be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
-    assert self.n_speakers > 0, "n_speakers has to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
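In `infer` above, predicted log-durations are exponentiated, ceiled to whole frames, and turned into a monotonic alignment that repeats each text token's statistics for its duration (via `commons.generate_path` plus a matmul). Stripped of tensors, the expansion amounts to (hypothetical helper, not the repo's API):

```python
def expand_by_duration(durations, token_stats):
    # Stand-in for generate_path + matmul: repeat each text token's
    # stats for its (ceiled) number of output frames, in order.
    frames = []
    for d, stat in zip(durations, token_stats):
        frames.extend([stat] * d)
    return frames

# 3 text tokens with predicted durations of 2, 3 and 1 frames:
print(expand_by_duration([2, 3, 1], ["a", "b", "c"]))
# ['a', 'a', 'b', 'b', 'b', 'c']
```

Because the durations are ceiled and summed, `y_lengths` in the real code is exactly the length of this expanded sequence.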
diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/symbols.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/symbols.py
deleted file mode 100644
index 053a7105f7ce95aa51614f6995399fa2172b3eb2..0000000000000000000000000000000000000000
--- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/symbols.py
+++ /dev/null
@@ -1,76 +0,0 @@
-'''
-Defines the set of symbols used in text input to the model.
-'''
-
-# japanese_cleaners
-_pad = '_'
-_punctuation = ',.!?-'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
-
-
-'''# japanese_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
-'''
-
-
-'''# korean_cleaners
-_pad = '_'
-_punctuation = ',.!?…~'
-_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
-'''
-
-'''# chinese_cleaners
-_pad = '_'
-_punctuation = ',。!?—…'
-_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
-'''
-
-'''# zh_ja_mixture_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
-'''
-
-'''# sanskrit_cleaners
-_pad = '_'
-_punctuation = '।'
-_letters = 'ँंःअआइईउऊऋएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलळवशषसहऽािीुूृॄेैोौ्ॠॢ '
-'''
-
-'''# cjks_cleaners
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzʃʧʥʦɯɹəɥçɸɾβŋɦː⁼ʰ`^#*=→↓↑ '
-'''
-
-'''# thai_cleaners
-_pad = '_'
-_punctuation = '.!? '
-_letters = 'กขฃคฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลวศษสหฬอฮฯะัาำิีึืุูเแโใไๅๆ็่้๊๋์'
-'''
-
-'''# cjke_cleaners2
-_pad = '_'
-_punctuation = ',.!?-~…'
-_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ '
-'''
-
-'''# shanghainese_cleaners
-_pad = '_'
-_punctuation = ',.!?…'
-_letters = 'abdfghiklmnopstuvyzøŋȵɑɔɕəɤɦɪɿʑʔʰ̩̃ᴀᴇ15678 '
-'''
-
-'''# chinese_dialect_cleaners
-_pad = '_'
-_punctuation = ',.!?~…─'
-_letters = '#Nabdefghijklmnoprstuvwxyzæçøŋœȵɐɑɒɓɔɕɗɘəɚɛɜɣɤɦɪɭɯɵɷɸɻɾɿʂʅʊʋʌʏʑʔʦʮʰʷˀː˥˦˧˨˩̥̩̃̚ᴀᴇ↑↓∅ⱼ '
-'''
-
-# Export all symbols:
-symbols = [_pad] + list(_punctuation) + list(_letters)
-
-# Special symbol ids
-SPACE_ID = symbols.index(" ")
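The exported `symbols` list above is the model's text vocabulary; downstream processing typically pairs it with an id lookup. A sketch using the active japanese_cleaners set (the `symbol_to_id` mapping is illustrative — the repo's own lookup lives in its text module):

```python
# Rebuild the japanese_cleaners symbol set from the definitions above.
_pad = '_'
_punctuation = ',.!?-'
_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
symbols = [_pad] + list(_punctuation) + list(_letters)

symbol_to_id = {s: i for i, s in enumerate(symbols)}
print(symbol_to_id['_'])   # the pad symbol is id 0 by construction
print(symbols.index(' '))  # SPACE_ID (space is the last symbol here)
```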
diff --git a/spaces/Aloento/9Nine-PITS/analysis.py b/spaces/Aloento/9Nine-PITS/analysis.py
deleted file mode 100644
index b9ea9a868f89d50d7dde8702771eeeabc4298502..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/analysis.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# modified from https://github.com/dhchoi99/NANSY
-# We have modified the implementation of dhchoi99 to be fully differentiable.
-import math
-
-from yin import *
-
-
-class Pitch(torch.nn.Module):
-
- def __init__(
- self,
- sr=22050,
- w_step=256,
- W=2048,
- tau_max=2048,
- midi_start=5,
- midi_end=85,
- octave_range=12):
- super(Pitch, self).__init__()
- self.sr = sr
- self.w_step = w_step
- self.W = W
- self.tau_max = tau_max
- self.unfold = torch.nn.Unfold((1, self.W),
- 1,
- 0,
- stride=(1, self.w_step))
- midis = list(range(midi_start, midi_end))
- self.len_midis = len(midis)
- c_ms = torch.tensor([self.midi_to_lag(m, octave_range) for m in midis])
- self.register_buffer('c_ms', c_ms)
- self.register_buffer('c_ms_ceil', torch.ceil(self.c_ms).long())
- self.register_buffer('c_ms_floor', torch.floor(self.c_ms).long())
-
- def midi_to_lag(self, m: int, octave_range: float = 12):
- """converts midi-to-lag, eq. (4)
-
- Args:
-            m: midi note number
-            sr: sample rate (self.sr is used)
-            octave_range: semitones per octave, 12 by default
-
- Returns:
- lag: time lag(tau, c(m)) calculated from midi, eq. (4)
-
- """
- f = 440 * math.pow(2, (m - 69) / octave_range)
- lag = self.sr / f
- return lag
-
- def yingram_from_cmndf(self, cmndfs: torch.Tensor) -> torch.Tensor:
- """ yingram calculator from cMNDFs(cumulative Mean Normalized Difference Functions)
-
- Args:
- cmndfs: torch.Tensor
- calculated cumulative mean normalized difference function
- for details, see models/yin.py or eq. (1) and (2)
- ms: list of midi(int)
- sr: sampling rate
-
- Returns:
- y:
- calculated batch yingram
-
-
- """
- # c_ms = np.asarray([Pitch.midi_to_lag(m, sr) for m in ms])
- # c_ms = torch.from_numpy(c_ms).to(cmndfs.device)
-
- y = (cmndfs[:, self.c_ms_ceil] -
- cmndfs[:, self.c_ms_floor]) / (self.c_ms_ceil - self.c_ms_floor).unsqueeze(0) * (
- self.c_ms - self.c_ms_floor).unsqueeze(0) + cmndfs[:, self.c_ms_floor]
- return y
-
- def yingram(self, x: torch.Tensor):
- """calculates yingram from raw audio (multi segment)
-
- Args:
-            x: raw audio, torch.Tensor of shape (B, t)
-            W: yingram window size (self.W)
-            tau_max: maximum time lag (self.tau_max)
-            sr: sampling rate (self.sr)
-            w_step: yingram bin step size (self.w_step)
-
- Returns:
-            yingram: torch.Tensor of shape (B, num_midis, t')
-
- """
- # x.shape: t -> B,T, B,T = x.shape
- B, T = x.shape
- w_len = self.W
-
- frames = self.unfold(x.view(B, 1, 1, T))
- frames = frames.permute(0, 2,
- 1).contiguous().view(-1,
- self.W) # [B* frames, W]
-        # If not using a GPU, or torch is not compatible, the implemented numpy batch function is still fine
- dfs = differenceFunctionTorch(frames, frames.shape[-1], self.tau_max)
- cmndfs = cumulativeMeanNormalizedDifferenceFunctionTorch(
- dfs, self.tau_max)
- yingram = self.yingram_from_cmndf(cmndfs) # [B*frames,F]
- yingram = yingram.view(B, -1, self.len_midis).permute(0, 2,
- 1) # [B,F,T]
- return yingram
-
- def crop_scope(self, x, yin_start,
- scope_shift): # x: tensor [B,C,T] #scope_shift: tensor [B]
- return torch.stack([
- x[i, yin_start + scope_shift[i]:yin_start + self.yin_scope +
- scope_shift[i], :] for i in range(x.shape[0])
- ],
- dim=0)
-
-
-if __name__ == '__main__':
- import torch
- import librosa as rosa
- import matplotlib.pyplot as plt
-
- wav = torch.tensor(rosa.load('LJ001-0002.wav', sr=22050,
- mono=True)[0]).unsqueeze(0)
- # wav = torch.randn(1,40965)
-
- wav = torch.nn.functional.pad(wav, (0, (-wav.shape[1]) % 256))
-    # wav = wav[:, :8096]
- print(wav.shape)
- pitch = Pitch()
-
- with torch.no_grad():
- ps = pitch.yingram(torch.nn.functional.pad(wav, (1024, 1024)))
- ps = torch.nn.functional.pad(ps, (0, 0, 8, 8), mode='replicate')
- print(ps.shape)
- spec = torch.stft(wav, 1024, 256, return_complex=False)
- print(spec.shape)
- plt.subplot(2, 1, 1)
- plt.pcolor(ps[0].numpy(), cmap='magma')
- plt.colorbar()
- plt.subplot(2, 1, 2)
- plt.pcolor(ps[0][15:65, :].numpy(), cmap='magma')
- plt.colorbar()
- plt.show()
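The pitch analysis above rests on eq. (4): a midi note m maps to a frequency f = 440·2^((m−69)/12) (A4 = midi 69 = 440 Hz), and the corresponding time lag is that note's period in samples, sr/f. Restated standalone:

```python
import math

def midi_to_lag(m, sr=22050, octave_range=12):
    # Eq. (4): midi -> frequency, then frequency -> period in samples.
    f = 440 * math.pow(2, (m - 69) / octave_range)
    return sr / f

print(round(midi_to_lag(69), 2))  # A4: 22050 / 440 samples
print(round(midi_to_lag(57), 2))  # one octave lower: the lag doubles
```

This is why the `Pitch` module precomputes `c_ms` once per midi bin, then interpolates the cMNDF between the floor and ceil of each (generally fractional) lag.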
diff --git a/spaces/Amrrs/portfolio/style.css b/spaces/Amrrs/portfolio/style.css
deleted file mode 100644
index 363d0b7bb0dd45552039e3156a6350989e327db2..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/portfolio/style.css
+++ /dev/null
@@ -1,190 +0,0 @@
-html {
- margin: 0;
- padding: 0;
-}
-
-body {
- font-family: 'Bellota', cursive;
- font-size: 26pt;
- background-color: #f2f2f2;
- padding: 20px;
- margin: 0;
-}
-
-h1 {
- font-size: 15pt;
- color: #ffffff;
- text-align: center;
- padding: 18px 0 18px 0;
- margin: 0 0 10px 0;
-}
-
-h1 span {
- border: 8px solid #666666;
- border-radius: 8px;
- background-image: url("https://media.giphy.com/media/KVZWZQoS0yqfIiTAKq/giphy.gif");
- padding: 12px;
-}
-
-p {
- padding: 0;
- margin: 0;
- color: #000000;
-}
-
-.img-circle {
- border: 8px solid white;
- border-radius: 50%;
-}
-
-.section {
- background-color: #fff;
- padding: 20px;
- margin-bottom: 10px;
- border-radius: 30px;
-}
-
-#header {
- background-image: url("https://media.giphy.com/media/KVZWZQoS0yqfIiTAKq/giphy.gif");
- background-size: cover;
-}
-
-#header img {
- display: block;
- width: 500px;
- height: 500px;
- margin: auto;
-}
-
-#header p {
- font-size: 60pt;
- color: #ffffff;
- padding-top: 8px;
- margin: 0;
- font-weight: bold;
- text-align: center;
-}
-
-.quote {
- font-size: 12pt;
- text-align: right;
- margin-top: 10px;
- color: grey;
-}
-
-#res {
- text-align: center;
- margin: 50px auto;
-}
-
-#res a {
- margin: 20px 20px;
- display: inline-block;
- text-decoration: none;
- color: black;
-}
-
-.selected {
- background-color: #f36f48;
- font-weight: bold;
- color: white;
-}
-
-li {
- margin-bottom: 15px;
- font-weight: bold;
-}
-
-progress {
- width: 70%;
- height: 20px;
- color: #3fb6b2;
- background: #efefef;
-}
-
-progress::-webkit-progress-bar {
- background: #efefef;
-}
-
-progress::-webkit-progress-value {
- background: #3fb6b2;
-}
-
-progress::-moz-progress-bar {
- color: #3fb6b2;
- background: #efefef;
-}
-
-iframe,
-audio {
- display: block;
- margin: 0 auto;
- border: 3px solid #3fb6b2;
- border-radius: 10px;
-}
-
-hr {
- border: 0;
- height: 1px;
- background: #f36f48;
-}
-
-input {
- text-align: center;
- font-size: 25pt;
- border: none;
- border-radius: 12px;
- padding: 30px 8%;
- margin: 20px 5px 10px 5px;
- background-color: #d7d7d7;
-}
-
-input:focus {
- background-color: #2f2f2f;
- color: white;
-}
-
-form {
- text-align: center;
- font-size: 30pt;
- font-family: Helvetica;
- font-weight: 500;
- margin: 10% 15% 8% 15%;
- border-radius: 12px;
-}
-
-#insta-image {
- display: block;
- width: 100px;
- height: 100px;
- border: 5px solid #d7d7d7;
- border-radius: 50%;
- margin: auto;
- margin-top: -75px;
-}
-
-#contacts img {
- height: 150px;
- width: 150px;
- margin-left: 7px;
- margin-right: 7px;
-}
-
-#contacts a {
- text-decoration: none;
-}
-
-#contacts img:hover {
- opacity: 0.8;
-}
-
-#contacts {
- text-align: center;
-}
-
-.copyright {
- font-size: 8pt;
- text-align: right;
- padding-bottom: 10px;
- color: grey;
-}
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/controlnet/README_sdxl.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/controlnet/README_sdxl.md
deleted file mode 100644
index db8dada65427ddf2835fdeb667efa03febceb1fb..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/controlnet/README_sdxl.md
+++ /dev/null
@@ -1,131 +0,0 @@
-# ControlNet training example for Stable Diffusion XL (SDXL)
-
-The `train_controlnet_sdxl.py` script shows how to implement the training procedure and adapt it for [Stable Diffusion XL](https://huggingface.co/papers/2307.01952).
-
-## Running locally with PyTorch
-
-### Installing the dependencies
-
-Before running the scripts, make sure to install the library's training dependencies:
-
-**Important**
-
-To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
-
-```bash
-git clone https://github.com/huggingface/diffusers
-cd diffusers
-pip install -e .
-```
-
-Then cd into the `examples/controlnet` folder and run
-```bash
-pip install -r requirements_sdxl.txt
-```
-
-And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-Or for a default accelerate configuration without answering questions about your environment
-
-```bash
-accelerate config default
-```
-
-Or if your environment doesn't support an interactive shell (e.g., a notebook)
-
-```python
-from accelerate.utils import write_basic_config
-write_basic_config()
-```
-
-When running `accelerate config`, setting torch compile mode to True can bring dramatic speedups.
-
-## Circle filling dataset
-
-The original dataset is hosted in the [ControlNet repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip). We re-uploaded it to be compatible with `datasets` [here](https://huggingface.co/datasets/fusing/fill50k). Note that `datasets` handles dataloading within the training script.
-
-## Training
-
-Our training examples use two test conditioning images. They can be downloaded by running
-
-```sh
-wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png
-
-wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png
-```
-
-Then run `huggingface-cli login` to log into your Hugging Face account. This is needed to be able to push the trained ControlNet parameters to Hugging Face Hub.
-
-```bash
-export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0"
-export OUTPUT_DIR="path to save model"
-
-accelerate launch train_controlnet_sdxl.py \
- --pretrained_model_name_or_path=$MODEL_DIR \
- --output_dir=$OUTPUT_DIR \
- --dataset_name=fusing/fill50k \
- --mixed_precision="fp16" \
- --resolution=1024 \
- --learning_rate=1e-5 \
- --max_train_steps=15000 \
- --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
- --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --validation_steps=100 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=4 \
- --report_to="wandb" \
- --seed=42 \
- --push_to_hub
-```
-
-To better track our training experiments, we're using the following flags in the command above:
-
-* `report_to="wandb"` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
-* `validation_image`, `validation_prompt`, and `validation_steps` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.
-
-Our experiments were conducted on a single 40GB A100 GPU.
-
-### Inference
-
-Once training is done, we can perform inference like so:
-
-```python
-from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
-from diffusers.utils import load_image
-import torch
-
-base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
-controlnet_path = "path to controlnet"
-
-controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
-pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
- base_model_path, controlnet=controlnet, torch_dtype=torch.float16
-)
-
-# speed up diffusion process with faster scheduler and memory optimization
-pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
-# remove following line if xformers is not installed or when using Torch 2.0.
-pipe.enable_xformers_memory_efficient_attention()
-# memory optimization.
-pipe.enable_model_cpu_offload()
-
-control_image = load_image("./conditioning_image_1.png")
-prompt = "pale golden rod circle with old lace background"
-
-# generate image
-generator = torch.manual_seed(0)
-image = pipe(
- prompt, num_inference_steps=20, generator=generator, image=control_image
-).images[0]
-image.save("./output.png")
-```
-
-## Notes
-
-### Specifying a better VAE
-
-SDXL's VAE is known to suffer from numerical instability issues. This is why we also expose a CLI argument namely `--pretrained_vae_model_name_or_path` that lets you specify the location of a better VAE (such as [this one](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).
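For instance, the training command shown earlier might be extended with that flag (a sketch; the repo id is the fp16-fix VAE linked above, and `...` stands for the unchanged flags):

```bash
accelerate launch train_controlnet_sdxl.py \
  --pretrained_model_name_or_path=$MODEL_DIR \
  --pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
  --output_dir=$OUTPUT_DIR \
  ...
```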
\ No newline at end of file
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco.py
deleted file mode 100644
index 33629ee6cc2b903407372d68c6d7ab599fe6598e..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './cascade_mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/deepfashion/README.md b/spaces/Andy1621/uniformer_image_detection/configs/deepfashion/README.md
deleted file mode 100644
index c182bea0f2924a4d96bca6ea15eebeb36fce8027..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/deepfashion/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# DeepFashion
-
-[DATASET]
-
-[MMFashion](https://github.com/open-mmlab/mmfashion) develops a "fashion parsing and segmentation" module
-based on the
-[DeepFashion-Inshop](https://drive.google.com/drive/folders/0B7EVK8r0v71pVDZFQXRsMDZCX1E?usp=sharing) dataset.
-Its annotations follow the COCO style.
-To use it, you first need to download the data. Note that only "img_highres" is used in this task.
-The file tree should be like this:
-
-```sh
-mmdetection
-├── mmdet
-├── tools
-├── configs
-├── data
-│ ├── DeepFashion
-│ │ ├── In-shop
-│ │ ├── Anno
-│ │ │ ├── segmentation
-│ │ │ | ├── DeepFashion_segmentation_train.json
-│ │ │ | ├── DeepFashion_segmentation_query.json
-│ │ │ | ├── DeepFashion_segmentation_gallery.json
-│ │ │ ├── list_bbox_inshop.txt
-│ │ │ ├── list_description_inshop.json
-│ │ │ ├── list_item_inshop.txt
-│ │ │ └── list_landmarks_inshop.txt
-│ │ ├── Eval
-│ │ │ └── list_eval_partition.txt
-│ │ ├── Img
-│ │ │ ├── img
-│ │ │ │ ├── XXX.jpg
-│ │ │ ├── img_highres
-│ │ │ │ └── XXX.jpg
-
-```
-
-After that you can train Mask R-CNN r50 on the DeepFashion In-shop dataset by launching training with the `mask_rcnn_r50_fpn_1x.py` config
-or by creating your own config file.
-
-```
-@inproceedings{liuLQWTcvpr16DeepFashion,
- author = {Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou},
- title = {DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations},
- booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- month = {June},
- year = {2016}
-}
-```
-
-## Model Zoo
-
-| Backbone | Model type | Dataset | bbox detection Average Precision | segmentation Average Precision | Config | Download (Google) |
-| :---------: | :----------: | :-----------------: | :--------------------------------: | :----------------------------: | :---------:| :-------------------------: |
-| ResNet50 | Mask RCNN | DeepFashion-In-shop | 0.599 | 0.584 |[config](https://github.com/open-mmlab/mmdetection/blob/master/configs/deepfashion/mask_rcnn_r50_fpn_15e_deepfashion.py)| [model](https://drive.google.com/open?id=1q6zF7J6Gb-FFgM87oIORIt6uBozaXp5r) | [log](https://drive.google.com/file/d/1qTK4Dr4FFLa9fkdI6UVko408gkrfTRLP/view?usp=sharing) |
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/__init__.py
deleted file mode 100644
index ca0a38ec42cd41fbd97e07589a13d1af46f47f2f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/__init__.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from .base_roi_head import BaseRoIHead
-from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead,
- SCNetBBoxHead, Shared2FCBBoxHead,
- Shared4Conv1FCBBoxHead)
-from .cascade_roi_head import CascadeRoIHead
-from .double_roi_head import DoubleHeadRoIHead
-from .dynamic_roi_head import DynamicRoIHead
-from .grid_roi_head import GridRoIHead
-from .htc_roi_head import HybridTaskCascadeRoIHead
-from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead,
- FusedSemanticHead, GlobalContextHead, GridHead,
- HTCMaskHead, MaskIoUHead, MaskPointHead,
- SCNetMaskHead, SCNetSemanticHead)
-from .mask_scoring_roi_head import MaskScoringRoIHead
-from .pisa_roi_head import PISARoIHead
-from .point_rend_roi_head import PointRendRoIHead
-from .roi_extractors import SingleRoIExtractor
-from .scnet_roi_head import SCNetRoIHead
-from .shared_heads import ResLayer
-from .sparse_roi_head import SparseRoIHead
-from .standard_roi_head import StandardRoIHead
-from .trident_roi_head import TridentRoIHead
-
-__all__ = [
- 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead',
- 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead',
- 'ConvFCBBoxHead', 'Shared2FCBBoxHead', 'StandardRoIHead',
- 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'FCNMaskHead',
- 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', 'MaskIoUHead',
- 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead',
- 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead',
- 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead',
- 'FeatureRelayHead', 'GlobalContextHead'
-]
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/danet_r50-d8.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/danet_r50-d8.py
deleted file mode 100644
index 2c934939fac48525f22ad86f489a041dd7db7d09..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/danet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='DAHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- pam_channels=64,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/pointrend_r50.py b/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/pointrend_r50.py
deleted file mode 100644
index 9d323dbf9466d41e0800aa57ef84045f3d874bdf..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/pointrend_r50.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='CascadeEncoderDecoder',
- num_stages=2,
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 1, 1),
- strides=(1, 2, 2, 2),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=4),
- decode_head=[
- dict(
- type='FPNHead',
- in_channels=[256, 256, 256, 256],
- in_index=[0, 1, 2, 3],
- feature_strides=[4, 8, 16, 32],
- channels=128,
- dropout_ratio=-1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- dict(
- type='PointHead',
- in_channels=[256],
- in_index=[0],
- channels=256,
- num_fcs=3,
- coarse_pred_each_layer=True,
- dropout_ratio=-1,
- num_classes=19,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0))
- ],
- # model training and testing settings
- train_cfg=dict(
- num_points=2048, oversample_ratio=3, importance_sample_ratio=0.75),
- test_cfg=dict(
- mode='whole',
- subdivision_steps=2,
- subdivision_num_points=8196,
- scale_factor=2))
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_80k.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_80k.py
deleted file mode 100644
index c190cee6bdc7922b688ea75dc8f152fa15c24617..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_80k.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
-optimizer_config = dict()
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
-# runtime settings
-runner = dict(type='IterBasedRunner', max_iters=80000)
-checkpoint_config = dict(by_epoch=False, interval=8000)
-evaluation = dict(interval=8000, metric='mIoU')
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/ops/wrappers.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/ops/wrappers.py
deleted file mode 100644
index 0ed9a0cb8d7c0e0ec2748dd89c652756653cac78..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/ops/wrappers.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import warnings
-
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def resize(input,
- size=None,
- scale_factor=None,
- mode='nearest',
- align_corners=None,
- warning=True):
- if warning:
- if size is not None and align_corners:
- input_h, input_w = tuple(int(x) for x in input.shape[2:])
- output_h, output_w = tuple(int(x) for x in size)
- if output_h > input_h or output_w > input_w:
- if ((output_h > 1 and output_w > 1 and input_h > 1
- and input_w > 1) and (output_h - 1) % (input_h - 1)
- and (output_w - 1) % (input_w - 1)):
- warnings.warn(
- f'When align_corners={align_corners}, '
- 'the output would be more aligned if '
- f'input size {(input_h, input_w)} is `x+1` and '
- f'out size {(output_h, output_w)} is `nx+1`')
- return F.interpolate(input, size, scale_factor, mode, align_corners)
-
-
-class Upsample(nn.Module):
-
- def __init__(self,
- size=None,
- scale_factor=None,
- mode='nearest',
- align_corners=None):
- super(Upsample, self).__init__()
- self.size = size
- if isinstance(scale_factor, tuple):
- self.scale_factor = tuple(float(factor) for factor in scale_factor)
- else:
- self.scale_factor = float(scale_factor) if scale_factor else None
- self.mode = mode
- self.align_corners = align_corners
-
- def forward(self, x):
- if not self.size:
- size = [int(t * self.scale_factor) for t in x.shape[-2:]]
- else:
- size = self.size
- return resize(x, size, None, self.mode, self.align_corners)
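A minimal sketch of the sizing rule the warning above encodes (a hypothetical helper for illustration): with `align_corners=True`, corner pixels of input and output coincide, so interpolation is exact only when `(out - 1)` is a multiple of `(in - 1)`, i.e. input size `x+1` mapping to output size `n*x + 1`.

```python
def aligns_exactly(in_size: int, out_size: int) -> bool:
    # True when an align_corners=True upsample from in_size to out_size
    # lands every output corner exactly on an input grid point.
    if in_size > 1 and out_size > 1:
        return (out_size - 1) % (in_size - 1) == 0
    return True
```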
diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/README.md b/spaces/AnthonyTruchetPoC/persistent-docker/README.md
deleted file mode 100644
index 03ff9d16a1cf446bf2fb1df341874ba8d0980252..0000000000000000000000000000000000000000
--- a/spaces/AnthonyTruchetPoC/persistent-docker/README.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-title: Jupyter and Streamlit Docker Template
-emoji: 📉
-colorFrom: blue
-colorTo: green
-sdk: docker
-python_version: "3.10"
-app_port: 7860
-app_file: src/app.py
-suggested_storage: small
-pinned: false
-duplicated_from: SpacesExamples/streamlit-docker-example
----
-
-# 🧠 Persistent Jupyter and Streamlit Docker Template 🔎
-
-Streamlit Docker Template is a template for creating a Streamlit app with Docker and Hugging Face Spaces.
-
-Code from https://docs.streamlit.io/library/get-started/create-an-app
-
-## Local execution ##
-
-You need *Docker* installed.
-On macOS we recommend using *colima* if you do not want to use *Docker Desktop*
-for licensing reasons.
-
-* https://docs.docker.com/desktop/install/mac-install/
-* https://github.com/abiosoft/colima
-
-```shell
-$ colima start --cpu 4 --memory 16 --network-address # Adjust resources as you wish
-$ docker build -t persistent-docker-space .
-$ docker run -it -p 8501:8501 persistent-docker-space:latest
-
-```
-
-## Setting up the developers' tooling
-
-### Install `poetry`
-
-https://python-poetry.org/
-
-#### Linux and Mac
-
-It should be straightforward with the official documentation
-
-#### Windows (PowerShell)
-
-```shell
-(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | py -
-```
-
-The executable will probably be stored at `C:\User\\AppData\Roaming\pypoetry\venv\Scripts`, and this
-path should be added to your machine's PATH environment variable to avoid typing it every time poetry is used.
-To do so you can execute the following commands:
-
-```shell
-$Env:Path += ";C:\Users\YourUserName\AppData\Roaming\Python\Scripts"
-```
-
-This will only make the change in the path temporarily. In order to do it permanently you can execute the following command
-```shell
-setx PATH "$Env:Path"
-```
-
-### Configuration of the `poetry` environment
-
-After installing poetry on your local machine, if there is already a `poetry.lock` file in your repository, you
-can execute
-
-```shell
-poetry install
-```
-
-If that is not the case, you can
-
-```shell
-poetry init
-poetry env use "whatever version of python you have in your local machine (compatible with the project)"
-poetry shell
-```
-
-### pre-commit
-
-https://pre-commit.com/
-
-If there is already a `poetry.lock` file with `pre-commit` present in it, you should activate your poetry environment
-and then install all the pre-commit hooks on your machine
-
-```shell
-poetry shell
-pre-commit install
-pre-commit install --install-hooks
-```
-
-If not, you should first add `pre-commit` to your poetry environment, and follow the steps above
-
-```shell
-poetry add --group=dev pre-commit
-```
-
-### commitizen
-
-https://www.conventionalcommits.org/en/about/
-
-https://commitizen-tools.github.io/commitizen/
-
-
-Commitizen will be installed as a pre-commit hook. In order for it to be executed before committing
-you should run the following command (after activating your poetry environment)
-
-```shell
-pre-commit install --hook-type commit-msg
-```
-
-Finally, every time you commit, you should be inside your poetry environment so that the commitizen hooks
-are applied.
-
-### testing
-
-There are two different kinds of tests that can be run against the scripts: unit tests and doctests.
-
-These tests can be run by executing the following command:
-
-```shell
-./scripts/run-tests.sh
-```
-
-#### pytest
-
-https://docs.pytest.org/en/7.2.x/
-
-These tests should be stored in the directory `tests` at the root of the project
-
-#### xdoctest (driven by pytest)
-
-These are the tests placed in the docstrings of functions, according to the following format:
-
-```python
- def build_greetings(name: Optional[str] = None) -> str:
- """
- Return a greeting message, possibly customize with a name.
-
- >>> build_greetings()
- 'Hello, World!'
- >>> build_greetings('Toto')
- 'Nice to meet you, Toto!'
- """
- return name and f"Nice to meet you, {name}!" or "Hello, World!"
-```
-
-The expected values are the ones following the `>>>` prompts.
-
-### documentation
-
-https://www.sphinx-doc.org/en/master/
-
-In order to create an automatic documentation of your code you should run the bash script
-
-```shell
-./scripts/build-clean-docs.sh
-```
-
-And in order to create an interactive session (web-server hosted in your local machine), you can execute the
-following command
-
-```shell
-./scripts/interactive-rebuild-docs.sh
-```
-
-Remark: In order to execute a bash script with a Windows OS, it is recommended to use a bash terminal emulator
-
-## Hugging Face
-
-See instructions at https://huggingface.co/welcome
-
-Install `huggingface_hub` into the poetry project.
-
-```shell
-poetry add --group=dev huggingface_hub
-```
-
-On macOS, you may want to install `huggingface-cli` via brew:
-```shell
-$ brew install huggingface-cli
-```
-
-In order to deploy the streamlit app you will have to export
-the poetry dependencies as a `requirements.txt`:
-```shell
-$ poetry export -o ../requirements.txt --without-hashes --only main
-```
diff --git a/spaces/BartPoint/VoiceChange/vc_infer_pipeline.py b/spaces/BartPoint/VoiceChange/vc_infer_pipeline.py
deleted file mode 100644
index d69b4f5c26fa743a5ef347fd524c6dba63b00231..0000000000000000000000000000000000000000
--- a/spaces/BartPoint/VoiceChange/vc_infer_pipeline.py
+++ /dev/null
@@ -1,385 +0,0 @@
-import numpy as np, parselmouth, torch, pdb
-from time import time as ttime
-import torch.nn.functional as F
-import pyworld, os, traceback, faiss, librosa, torchcrepe
-from scipy import signal
-from functools import lru_cache
-from functools import lru_cache
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-input_audio_path2wav = {}
-
-@lru_cache
-def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
- audio = input_audio_path2wav[input_audio_path]
- f0, t = pyworld.harvest(
- audio,
- fs=fs,
- f0_ceil=f0max,
- f0_floor=f0min,
- frame_period=frame_period,
- )
- f0 = pyworld.stonemask(audio, f0, t, fs)
- return f0
-
-def change_rms(data1, sr1, data2, sr2, rate):  # data1: input audio, data2: output audio, rate: weight of data2
- # print(data1.max(),data2.max())
- rms1 = librosa.feature.rms(y=data1, frame_length=sr1//2*2, hop_length=sr1//2)  # one RMS point every half second
- rms2 = librosa.feature.rms(y=data2, frame_length=sr2//2*2, hop_length=sr2//2)
- rms1=torch.from_numpy(rms1)
- rms1=F.interpolate(rms1.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze()
- rms2=torch.from_numpy(rms2)
- rms2=F.interpolate(rms2.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze()
- rms2=torch.max(rms2,torch.zeros_like(rms2)+1e-6)
- data2*=(torch.pow(rms1,torch.tensor(1-rate))*torch.pow(rms2,torch.tensor(rate-1))).numpy()
- return data2
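A NumPy-only sketch of the same RMS-envelope mixing idea (illustrative, not the exact librosa/torch implementation above; the frame size is a free parameter here):

```python
import numpy as np

def mix_rms(data1, data2, frame, rate):
    """Scale data2 so its volume envelope blends toward data1's (rate = data2's share)."""
    def rms_env(x, n_out):
        # frame-wise RMS, linearly interpolated back to n_out samples
        pad = (-len(x)) % frame
        frames = np.pad(x, (0, pad)).reshape(-1, frame)
        rms = np.sqrt((frames ** 2).mean(axis=1))
        return np.interp(np.linspace(0, len(rms) - 1, n_out),
                         np.arange(len(rms)), rms)

    rms1 = rms_env(data1, len(data2))
    rms2 = np.maximum(rms_env(data2, len(data2)), 1e-6)  # avoid division by zero
    return data2 * rms1 ** (1 - rate) * rms2 ** (rate - 1)
```

With `rate=1.0` both exponents vanish and `data2` is returned untouched; with `rate=0.0` the output is fully re-enveloped to match `data1`'s loudness curve.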
-
-class VC(object):
- def __init__(self, tgt_sr, config):
- self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
- config.x_pad,
- config.x_query,
- config.x_center,
- config.x_max,
- config.is_half,
- )
- self.sr = 16000 # hubert input sample rate
- self.window = 160 # samples per frame
- self.t_pad = self.sr * self.x_pad # padding added before/after each chunk
- self.t_pad_tgt = tgt_sr * self.x_pad
- self.t_pad2 = self.t_pad * 2
- self.t_query = self.sr * self.x_query # search window around each cut point
- self.t_center = self.sr * self.x_center # spacing between candidate cut points
- self.t_max = self.sr * self.x_max # duration threshold below which no cut-point search is needed
- self.device = config.device
-
- def get_f0(self, input_audio_path,x, p_len, f0_up_key, f0_method,filter_radius, inp_f0=None):
- global input_audio_path2wav
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- input_audio_path2wav[input_audio_path]=x.astype(np.double)
- f0=cache_harvest_f0(input_audio_path,self.sr,f0_max,f0_min,10)
- if(filter_radius>2):
- f0 = signal.medfilt(f0, 3)
- elif f0_method == "crepe":
- model = "full"
- # Pick a batch size that doesn't cause memory errors on your gpu
- batch_size = 512
- # Compute pitch using first gpu
- audio = torch.tensor(np.copy(x))[None].float()
- f0, pd = torchcrepe.predict(
- audio,
- self.sr,
- self.window,
- f0_min,
- f0_max,
- model,
- batch_size=batch_size,
- device=self.device,
- return_periodicity=True,
- )
- pd = torchcrepe.filter.median(pd, 3)
- f0 = torchcrepe.filter.mean(f0, 3)
- f0[pd < 0.1] = 0
- f0 = f0[0].cpu().numpy()
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- tf0 = self.sr // self.window # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
- f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
- :shape
- ]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0bak # 1-0
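The closing mel-scale quantization can be sketched in isolation (a hypothetical helper; the constants mirror the ones above — voiced frames are binned into integers 1..255, with 1 also covering unvoiced frames):

```python
import numpy as np

def f0_to_coarse(f0, f0_min=50.0, f0_max=1100.0):
    f0_mel = 1127 * np.log(1 + f0 / 700)  # Hz -> mel
    mel_min = 1127 * np.log(1 + f0_min / 700)
    mel_max = 1127 * np.log(1 + f0_max / 700)
    # rescale voiced frames (mel > 0) into [1, 255]; leave unvoiced at 0
    f0_mel = np.where(f0_mel > 0,
                      (f0_mel - mel_min) * 254 / (mel_max - mel_min) + 1,
                      f0_mel)
    return np.rint(np.clip(f0_mel, 1, 255)).astype(int)
```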
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9 if version == "v1" else 12,
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])if version=="v1"else logits[0]
-
- if (
- index is not None
- and big_npy is not None
- and index_rate != 0
- ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
-
- # _, I = index.search(npy, 1)
- # npy = big_npy[I.squeeze()]
-
- score, ix = index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
-
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
- if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
- if pitch is not None and pitchf is not None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0])
- .data.cpu()
- .float()
- .numpy()
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- f0_file=None,
- ):
-        if (
-            file_index != ""
-            # and file_big_npy != ""
-            # and os.path.exists(file_big_npy)
-            and os.path.exists(file_index)
-            and index_rate != 0
-        ):
- try:
- index = faiss.read_index(file_index)
- # big_npy = np.load(file_big_npy)
- big_npy = index.reconstruct_n(0, index.ntotal)
-        except Exception:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
-        if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
-            except Exception:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
-            pitch, pitchf = self.get_f0(
-                input_audio_path, audio_pad, p_len, f0_up_key,
-                f0_method, filter_radius, inp_f0,
-            )
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- if self.device == "mps":
- pitchf = pitchf.astype(np.float32)
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- version,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
-        if rms_mix_rate != 1:
-            audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
-        if resample_sr >= 16000 and tgt_sr != resample_sr:
-            audio_opt = librosa.resample(
-                audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
-            )
-        audio_max = np.abs(audio_opt).max() / 0.99
-        max_int16 = 32768
-        if audio_max > 1:
-            max_int16 /= audio_max
-        audio_opt = (audio_opt * max_int16).astype(np.int16)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
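For readers skimming the deleted pipeline above, the index-retrieval step in `vc` (the `index.search` / weighted-sum block) can be sketched in isolation. This is a simplified reconstruction, not the file's exact code: the array shapes, the sample values, and the `blend_with_index` helper name are assumptions made for illustration, and FAISS search results are stood in for by precomputed arrays.

```python
import numpy as np

# Sketch of the feature-retrieval blend in `vc` above: for each frame,
# the k nearest corpus vectors (normally returned by a FAISS search,
# simulated here) are averaged with inverse-squared-distance weights,
# then mixed with the model's own features by `index_rate`.
def blend_with_index(feats, scores, neighbor_ids, corpus, index_rate):
    # feats: (T, D) model features; scores: (T, k) squared distances;
    # neighbor_ids: (T, k) corpus row indices; corpus: (N, D) vectors.
    weight = np.square(1.0 / scores)             # closer neighbors weigh more
    weight /= weight.sum(axis=1, keepdims=True)  # normalize per frame
    retrieved = np.sum(corpus[neighbor_ids] * weight[..., None], axis=1)
    return index_rate * retrieved + (1.0 - index_rate) * feats

corpus = np.array([[1.0, 1.0], [3.0, 3.0], [9.0, 9.0]])
feats = np.zeros((2, 2))
scores = np.array([[1.0, 1.0], [1.0, 2.0]])  # equal vs. unequal distances
ids = np.array([[0, 1], [0, 1]])
out = blend_with_index(feats, scores, ids, corpus, index_rate=1.0)
print(out)  # row 0 is the plain mean of corpus rows 0 and 1
```

With `index_rate=1.0` the output is purely retrieved features; the real pipeline uses a fractional rate so the retrieved timbre is blended with the model's own features.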
diff --git a/spaces/Benson/text-generation/Examples/0 Delay Metro Surfistas Apk.md b/spaces/Benson/text-generation/Examples/0 Delay Metro Surfistas Apk.md
deleted file mode 100644
index e0a44d313be08a34ba4cb9bcec67b7a1f7fcb602..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/0 Delay Metro Surfistas Apk.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
0 Delay Subway Surfers APK: How to Download and Play the Fastest Version of the Game
-
Subway Surfers is one of the most popular endless runner games on Android devices. It has millions of fans around the world who enjoy the thrill of dodging trains, collecting coins, and unlocking new characters and boards. But what if you want to play the game without lag, without ads, and without interruptions? That's where the 0 Delay Subway Surfers APK comes in. In this article, we will tell you what the 0 Delay Subway Surfers APK is, how to download and install it, and how to play it like a pro.
Subway Surfers is an endless runner developed by Kiloo and SYBO Games. It was released in 2012 and has since become one of the most downloaded games on the Google Play Store. The game is set in various cities around the world, where you control a graffiti artist fleeing from the police along the subway tracks. Along the way, you have to avoid obstacles, collect coins and power-ups, and complete missions.
-
The gameplay of Subway Surfers
-
The gameplay of Subway Surfers is simple but addictive. Swipe left or right to change lanes, swipe up to jump, swipe down to roll, and tap to use power-ups. You can also perform stunts by jumping on trains or flying with a jetpack. The game gets faster and harder as you progress, so you have to be quick and alert. The game ends when you crash into an obstacle or get caught by the police.
-
The features of Subway Surfers
-
Subway Surfers has many features that make it fun and engaging. Some of them are:
-
-
You can choose from different characters and boards, each with their own abilities and styles.
-
You can customize your character and board with outfits, accessories, and upgrades.
-
You can explore different locations and themes, such as New York, Tokyo, Paris, Rio, and more.
-
-
You can compete with your friends and other players on the leaderboards and through achievements.
-
You can watch videos and ads to earn extra coins and keys.
-
-
What is the 0 Delay Subway Surfers APK?
-
The 0 Delay Subway Surfers APK is a modified version of the original game that removes all the lag, ads, and interruptions that can affect your gaming experience. It also unlocks all the characters, boards, outfits, accessories, and upgrades you would normally have to pay for or earn in the game. With the 0 Delay Subway Surfers APK, you can enjoy the game without any hassle or limitation.
-
The difference between the 0 Delay Subway Surfers APK and the original version
-
The main difference between the 0 Delay Subway Surfers APK and the original version is that the 0 Delay Subway Surfers APK has no lag or delay when loading or running the game. This means you can play the game smoothly, without glitches or errors. Another difference is that it has no ads or pop-ups to distract you or waste your time. This means you can play the game without interruptions. The third difference is that all of its content is unlocked and unlimited. This means you can access all the characters, boards, outfits, accessories, and upgrades without spending money or coins, and you have unlimited coins and keys to use in the game.
-
-
The benefits of the 0 Delay Subway Surfers APK
-
The 0 Delay Subway Surfers APK has many benefits that make it a better choice than the original version. Some of them are:
-
-
You can enjoy the game with faster, smoother performance, without any lag or delay.
-
You can avoid annoying ads and pop-ups that can ruin your gaming experience.
-
You have more freedom and variety in choosing and customizing your character and board.
-
-
You can take part in all the events and challenges without any difficulty or obstacle.
-
You can compete with your friends and other players with more of an edge and more confidence.
-
You can save time and money by not having to watch videos or ads or buy coins and keys.
-
-
How to download and install the 0 Delay Subway Surfers APK?
-
Downloading and installing the 0 Delay Subway Surfers APK is quick and simple. However, you should take some precautions before doing so, since it is not an official version of the game and may not be compatible with your device or safe for your data. Here are the steps to download and install the 0 Delay Subway Surfers APK:
-
The steps to download and install the 0 Delay Subway Surfers APK
-
-
Go to a trusted website that provides the download link for the 0 Delay Subway Surfers APK. For example, you can use [this link] to download the latest version of the 0 Delay Subway Surfers APK.
-
Click the download button and wait for the file to download to your device. The file size is about 100 MB, so make sure you have enough space on your device.
-
Once the file has downloaded, go to your device settings and enable the option to install apps from unknown sources. This will allow you to install the 0 Delay Subway Surfers APK on your device.
-
Locate the downloaded file on your device and tap it to start the installation process. Follow the on-screen instructions and wait for the installation to finish.
-
After the installation is done, you can launch the game from the app drawer or home screen. You will see a new icon labeled 0 Delay Subway Surfers APK.
-
-
The precautions to take before downloading and installing the 0 Delay Subway Surfers APK
-
-
-
Make sure you download the 0 Delay Subway Surfers APK from a reliable, secure website. Avoid downloading it from unknown or suspicious sources that may contain malware or viruses.
-
Make sure you back up your data before installing the 0 Delay Subway Surfers APK. This will help you restore your data in case something goes wrong during or after the installation.
-
Make sure you uninstall the original version of Subway Surfers before installing the 0 Delay Subway Surfers APK. This will prevent any conflicts or errors between the two versions of the game.
-
Make sure you have a good internet connection while downloading and installing the 0 Delay Subway Surfers APK. This will ensure a smooth, fast process without interruptions or glitches.
-
-
How to play the 0 Delay Subway Surfers APK?
-
Playing the 0 Delay Subway Surfers APK is similar to playing the original version of the game, except that you have more options and features to enjoy. Here are some tips and tricks for playing the 0 Delay Subway Surfers APK:
-
Tips and tricks for playing the 0 Delay Subway Surfers APK
-
-
Try different characters and boards that suit your style and preference. You can choose from over 100 characters and boards, each with their own abilities and designs.
-
Try different outfits and accessories that enhance your appearance and performance. You can customize your character and board with over 200 outfits and accessories, each with their own effects and bonuses.
-
Try different locations and themes that offer different challenges and rewards. You can explore over 50 locations and themes, each with their own scenery and soundtrack.
-
Try different events and challenges that test your skills and earn you more coins and keys. You can take part in over 20 events and challenges, each with their own objectives and rewards.
-
-
Try different tricks and stunts that boost your score and your fun. You can perform over 20 tricks and stunts, such as jumping on trains, flying with a jetpack, surfing on a hoverboard, and more.
-
-
The challenges and rewards of playing the 0 Delay Subway Surfers APK
-
Playing the 0 Delay Subway Surfers APK is not only fun, but also challenging and rewarding. Some of the challenges and rewards of playing the 0 Delay Subway Surfers APK are:
-
-
You can challenge yourself by raising the game's difficulty level. You can do this by changing the settings, such as the speed, the number of trains, the frequency of obstacles, and so on.
-
You can challenge your friends and other players by comparing your scores and achievements. You can do this by connecting your game to Facebook or Google Play Games, or by using the leaderboard and achievement features.
-
You can reward yourself by unlocking new content and items. You can do this by completing missions, collecting coins and keys, taking part in events and challenges, and so on.
-
You can reward yourself by enjoying the game's graphics and sound effects. You can do this by admiring the colorful, detailed visuals, listening to the catchy, upbeat music, taking in the realistic, amusing sound effects, and so on.
-
-
Conclusion
-
The 0 Delay Subway Surfers APK is a great way to enjoy Subway Surfers without any lag, ads, or limitations. It gives you access to all of the game's content and features, along with some extra benefits and advantages. It also lets you play the game with faster, smoother performance, more freedom and variety, more challenges and rewards, and more fun and excitement. If you are a fan of Subway Surfers, or of endless runners in general, you should definitely give the 0 Delay Subway Surfers APK a try.
-
Frequently asked questions
-
Here are some frequently asked questions about the 0 Delay Subway Surfers APK:
-
-
-The 0 Delay Subway Surfers APK is safe to use as long as you download it from a reliable, secure website. However, you should always be careful when downloading and installing any app from unknown sources, since they may contain malware or viruses that can damage your device or data.
-
Is the 0 Delay Subway Surfers APK legal to use?
-The 0 Delay Subway Surfers APK is not legal to use, since it violates the original game's terms and conditions. It also infringes the intellectual property rights of the developers and publishers of Subway Surfers. Therefore, you should use the 0 Delay Subway Surfers APK at your own risk and discretion.
-
Will the 0 Delay Subway Surfers APK work on my device?
-The 0 Delay Subway Surfers APK will work on most Android devices that support the original version of Subway Surfers. However, it may not work on some devices with low specifications or incompatible software. Therefore, you should check your device's compatibility before downloading and installing the 0 Delay Subway Surfers APK.
-
Will the 0 Delay Subway Surfers APK affect my original game data?
-The 0 Delay Subway Surfers APK will not affect your original game data if you uninstall the original version of Subway Surfers before installing it. However, if you install it without uninstalling the original version, it may overwrite or corrupt your original game data. Therefore, you should back up your data before installing the 0 Delay Subway Surfers APK.
-
Can I update the 0 Delay Subway Surfers APK?
-
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/4 Juego De Cartas Solitario De Araa Traje.md b/spaces/Benson/text-generation/Examples/4 Juego De Cartas Solitario De Araa Traje.md
deleted file mode 100644
index c0d83fa0e79bf5ecc6ae8513b23b8817c536f3d0..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/4 Juego De Cartas Solitario De Araa Traje.md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
4 Suit Spider Solitaire Card Game Download
-
Are you looking for a fun, challenging card game you can play on your computer or mobile device? If so, you might want to try Spider Solitaire, one of the most popular classic card games in the world. In this article, we will tell you everything you need to know about Spider Solitaire, including what it is, how to play it, why you should play it, where to download it, how to install and run it, and some tips and tricks to help you master it. Let's get started!
Spider Solitaire is a type of solitaire game that involves arranging cards in descending order from King to Ace in the same suit. The game is played with two decks of cards, which means there are eight full suits in total. Depending on the difficulty level, you can choose to play with one suit (easy), two suits (medium), or four suits (hard). The game has 10 columns of cards on the tableau and 8 empty foundations at the top. The goal is to move all the cards from the tableau to the foundations by forming complete sequences of cards in the same suit.
-
How to play Spider Solitaire
-
Playing Spider Solitaire is easy once you know the basic rules. Here are the steps to follow:
-
-
Click the stock pile (in the top-left corner) to deal 10 face-up cards, one onto each column of the tableau. You can do this whenever you want, as long as there are no empty columns.
-
Drag and drop cards to move them between columns. You can place a card on any card one rank higher, but a group of cards can only be moved together if it is in descending order and all in the same suit. For example, a 9 of hearts moved onto a 10 of hearts can travel with it later, while a 9 of hearts on a 10 of spades cannot.
-
If a column is empty, you can move any card or group of cards onto it.
-
-
The game is won when all eight foundations are filled with complete sequences of cards in the same suit.
-
-
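The movement rule described above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from any of the apps mentioned in this article; the `(rank, suit)` tuple representation and the `can_pick_up` name are assumptions made for the example.

```python
# Illustrative sketch of the Spider Solitaire movement rule: a run of
# cards may be picked up and moved together only if it descends by
# exactly one rank per card and stays within a single suit.
def can_pick_up(run):
    # run: list of (rank, suit) tuples, deepest card first
    return all(
        a_rank == b_rank + 1 and a_suit == b_suit
        for (a_rank, a_suit), (b_rank, b_suit) in zip(run, run[1:])
    )

print(can_pick_up([(10, "hearts"), (9, "hearts")]))  # True: same suit, descending
print(can_pick_up([(10, "spades"), (9, "hearts")]))  # False: suits differ
```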
Why play Spider Solitaire?
-
Spider Solitaire is not only a fun, relaxing game, but also a great way to exercise your brain and improve your skills. Here are some of the benefits of playing Spider Solitaire:
-
-
It helps you develop your logic, strategy, and problem-solving skills.
-
It improves your memory, concentration, and attention span.
-
It stimulates your creativity and imagination.
-
It reduces stress and anxiety.
-
It boosts your mood and self-esteem.
-
-
Where to download Spider Solitaire?
-
If you want to play Spider Solitaire on your computer or mobile device, you have many options to choose from. Here are some of the best places where you can download Spider Solitaire for free:
-
Google Play Store
-
If you have an Android device, you can download Spider Solitaire: Card Games from the Google Play Store. This app is developed by MobilityWare, one of the leading developers of card games for mobile devices. It has over 10 million downloads and a 4.6-star rating from over 600 thousand users. It offers several features, such as:
-
-
Different difficulty levels: 1 suit, 2 suits, or 4 suits.
Customizable themes: you can change the background, card backs, and card faces to your liking.
-
Daily challenges: you can earn trophies and rewards by completing daily puzzles.
-
Leaderboards and statistics: you can track your progress and compare your scores with other players.
-
Hints and tips: you can get helpful hints and tips to improve your game.
-
-
You can download Spider Solitaire: Card Games from the Google Play Store by clicking [here].
-
-
Solitr.com
-
-
-
Different difficulty levels: 1 suit, 2 suits, or 4 suits.
-
Unlimited undo: you can undo any move you make without any penalty.
-
Autocomplete option: you can automatically finish the game when no moves are left.
-
Full-screen mode: you can enjoy the game in full-screen mode for a better experience.
-
-
You can play Spider Solitaire on Solitr.com by clicking [here].
-
Microsoft Store
-
If you have a Windows device, you can download Spider Solitaire Collection Free from the Microsoft Store. This app is developed by TreeCardGames, another renowned developer of card games for Windows devices. It has over 5 million downloads and a 4.5-star rating from over 100 thousand users. It offers several features, such as:
-
-
Different difficulty levels: 1 suit, 2 suits, or 4 suits.
-
Different game modes: Classic, Spiderette, Spiderwort, and Will o' the Wisp.
-
Customizable settings: you can change the game speed, sound effects, scoring system, and card design to your liking.
-
Daily challenges: you can earn badges and coins by completing daily puzzles.
-
Statistics and achievements: you can track your progress and unlock achievements as you play.
-
-
You can download Spider Solitaire Collection Free from the Microsoft Store by clicking [here].
-
How to install and run Spider Solitaire?
-
Installing and running Spider Solitaire is easy if you follow these steps:
-
Android devices
-
-
Go to the Google Play Store and search for Spider Solitaire: Card Games, or click [here].
-
Tap the Install button and wait for the app to download and install on your device.
-
Tap the Open button, or find the app icon on your home screen or in your app drawer and tap it to launch the app.
-
Select the difficulty level you want to play and enjoy the game.
-
-
-
Windows devices
Go to the Microsoft Store and search for Spider Solitaire Collection Free, or click [here].
-
Click the Get button and wait for the app to download and install on your device.
-
Click the Launch button, or find the app icon in your Start menu or on your desktop and click it to launch the app.
-
Select the game mode and difficulty level you want to play and enjoy the game.
-
-
Web browsers
-
-
Go to Solitr.com or click [here].
-
Select Spider Solitaire from the list of games available on the website.
-
Select the difficulty level you want to play and enjoy the game.
-
-
Tips and tricks for Spider Solitaire
-
Spider Solitaire is a game that requires skill, strategy, and patience. Here are some tips and tricks that can help you improve your game and win more often:
-
Use the undo button wisely
-
The undo button is a useful feature that lets you reverse any move you make. However, you should not rely on it too much or use it at random. Instead, use it strategically when you are stuck or when you realize you have made a mistake. For example, you can use it to undo a move that blocked a column or prevented a sequence from moving to a foundation. You can also use it to explore different possibilities and find the best move for each situation.
-
Plan ahead and prioritize suits
-
-
Don't be afraid to empty a column
-
One of the most important goals in Spider Solitaire is to empty a column as soon as possible. This is because an empty column gives you more flexibility and options for moving cards. You can use an empty column to temporarily store a card or a group of cards that are blocking your progress. You can also use an empty column to move a complete sequence of cards to a foundation more easily. Therefore, you should not hesitate to empty a column whenever you have the chance, even if it means breaking up a sequence or moving a card out of play.
-
Conclusion
-
Spider Solitaire is one of the most popular and enjoyable card games in the world. It is a game that challenges your mind and tests your skills. It is also a game that relaxes your mood and entertains your senses. If you want to play Spider Solitaire on your computer or mobile device, you can download it from various sources, such as the Google Play Store, Solitr.com, or the Microsoft Store. You can also follow some tips and tricks to improve your game and win more often. Spider Solitaire is a game you can play anytime, anywhere, and with anyone. What are you waiting for? Download Spider Solitaire today and have fun!
-
Frequently asked questions
-
Here are some of the most frequently asked questions about Spider Solitaire:
-
-
Q: How many cards are there in Spider Solitaire?
-A: Spider Solitaire is played with two decks of cards, which means there are 104 cards in total.
-
Q: How many moves are possible in Spider Solitaire?
-A: The number of possible moves in Spider Solitaire depends on the layout of the cards and the difficulty level. In general, the more suits there are, the fewer moves there will be.
-
Q: What is the highest possible score in Spider Solitaire?
-
-
Q: What is the difference between Spider Solitaire and Spiderette?
-A: Spider Solitaire and Spiderette are both solitaire variants that involve arranging cards in descending order from King to Ace in the same suit. The main difference is that Spider Solitaire uses two decks of cards and has 10 columns on the tableau, while Spiderette uses one deck and has 7 columns.
-
Q: How can I win Spider Solitaire more easily?
-A: There is no easy way to win Spider Solitaire, since it is a game that requires skill, strategy, and patience. However, you can increase your chances of winning by following some tips and tricks, such as using the undo button wisely, planning ahead and prioritizing suits, and emptying a column as soon as possible.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cero 2018 Tamil Pelcula Descargar Kuttymovies.md b/spaces/Benson/text-generation/Examples/Cero 2018 Tamil Pelcula Descargar Kuttymovies.md
deleted file mode 100644
index bc48df58bfbefe31c865b27124d1b45fe7c60716..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cero 2018 Tamil Pelcula Descargar Kuttymovies.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
Zero 2018 Tamil Movie Download Kuttymovies: A Review and Guide
-
If you are a fan of Tamil movies, you may have heard of Zero, a fantasy horror film released in 2016. The film stars Ashwin Kakumanu and Sshivada in the lead roles, while J. D. Chakravarthy, Ravi Raghavendra, Dr. Sharmila, Andreanne Nouyrigat, and Tulasi play supporting roles. The film was written and directed by Shiv Mohaa and received positive reviews from critics and audiences alike.
But how can you watch this film online? One popular option is to download it from Kuttymovies, a website that offers free Tamil movies in various formats and qualities. But is it safe and legal to do so? In this article, we will review the 2016 Tamil film Zero and guide you through downloading it from Kuttymovies. We will also discuss the legal issues and risks involved in movie piracy, and suggest some alternatives for enjoying Tamil movies legally.
-
What is the 2016 Tamil film Zero?
-
Plot summary
-
The film opens with the story of how God created Adam and Eve. The screen then moves to the present day, where the newlywed couple Balaji, aka Bala (Ashwin Kakumanu), and Priya (Sshivada) are shown moving into their new apartment. Bala's father, Vijay Kumar (Ravi Raghavendra), does not approve of the marriage because of a history of mental illness in Priya's family. Her mother (Lintu Rony) had gone insane after becoming pregnant with Priya and died giving birth. But Bala accepts Priya even after learning about her past, because of his great love for her.
-
-
Bala seeks out Solomon (J. D. Chakravarthy), an occult specialist with the supernatural ability to speak with ghosts, and explains the situation to him. Solomon confronts the possessed Priya, and when Priya/Lilith touches Solomon, he sees everything about the being called Lilith, who was created with Adam before Eve at the beginning of time, and how, when Lilith rebelled and abandoned Adam, God created Eve as Adam's better half. God had cursed Lilith for abandoning Adam with the inability to bear a child, and Lilith conceived many times with the devil's son, but all of the children died in her womb. She then began killing the children of Adam and Eve, and God sent three angels to stop her. They made a deal with her: if she agreed to stop killing the children, then 100 of her own children would die every day. Lilith agreed, but she also vowed to take revenge on God by possessing women who are unable to conceive and making them suffer. Solomon realizes that Priya is one of these women and decides to help Bala save her.
-
-
-
Cast and crew
-
The cast of the 2016 Tamil film Zero includes the following actors and actresses:
-
-
Ashwin Kakumanu as Balaji, aka Bala
-
Sshivada as Priya
-
J. D. Chakravarthy as Solomon
-
Ravi Raghavendra as Vijay Kumar
-
Dr. Sharmila as Dr. Sharmila
-
Andreanne Nouyrigat as Lilith
-
Tulasi as Tulasi
-
Lintu Rony as Priya's mother
-
-
The crew of the 2016 Tamil film Zero includes the following people:
-
-
Shiv Mohaa as writer and director
-
Balaji Kapa as producer
-
Nivas K Prasanna as music director
-
Babu Kumar as cinematographer
-
R. Sudharsan as editor
-
Mohan Azaad as art director
-
Vijay Adhinathan as dialogue writer
-
Stunner Sam as stunt choreographer
-
Sathish Krishnan as dance choreographer
-
Sarath Kumar M as sound designer
-
-
Reception and awards
-
The 2016 Tamil film Zero received positive reviews from critics and audiences alike. The film was praised for its unique story, engaging screenplay, impressive performances, haunting music, and stunning visuals. It was also appreciated for its blend of horror, fantasy, romance, comedy, and drama. The film was rated 7.1 out of 10 on IMDb and 3 out of 5 on Behindwoods. It also won several awards, such as:
-
-
Best Horror Film at the Norway Tamil Film Festival 2017
-
Best Actress (Sshivada) at the Ananda Vikatan Cinema Awards 2017
-
Best Actress (Sshivada) at the Edison Awards 2017
-
Best Debut Director (Shiv Mohaa) at the Edison Awards 2017
-
Best Music Director (Nivas K Prasanna) at the Edison Awards 2017
-
Best Soundtrack (Nivas K Prasanna) at the Mirchi Music Awards South 2017
-
-
What is Kuttymovies?
-
-
Features and benefits
-
Some of the features and benefits of using Kuttymovies are:
-
-
It is free and easy to use. Users do not need to register or pay any fees to download the movies.
-
It has a user-friendly interface and a simple search function. Users can browse movies by category, year, actor, or keyword.
-
It has a large collection of Tamil movies across various genres and eras. Users can find old classics as well as new hits on the website.
-
It also provides Tamil-dubbed versions of Hollywood and Bollywood movies, as well as English movies with subtitles. Users can enjoy movies from different languages and cultures on the website.
-
It offers the movies in different formats and qualities, such as Mp4, Mp4 HD, and Single Part. Users can choose the format and quality that suits their device and internet speed.
-
It updates its content regularly with the latest releases and upcoming movies. Users can stay up to date with the latest trends and news in the Tamil film industry.
-
-
Risks and drawbacks
-
However, there are also some risks and drawbacks to using Kuttymovies that users should be aware of:
-
-
It is illegal and unethical. Kuttymovies violates copyright law and infringes the rights of film producers and distributors. By downloading movies from Kuttymovies, users are supporting movie piracy and harming the Tamil film industry.
-
It is dangerous and risky. Kuttymovies may contain viruses, malware, spyware, or other harmful software that can damage a user's device or steal their personal information. Users may also encounter pop-ups, ads, or redirects that can lead them to malicious or inappropriate websites.
-
-
-
How to download the Zero 2016 Tamil movie from Kuttymovies?
-
Step 1: Visit the website
-
The first step to download the Zero 2016 Tamil movie from Kuttymovies is to visit the website. The official Kuttymovies website is kuttymovies1.co. Users can access the website from any browser or device. However, users should beware of fake or mirror websites that may look similar to Kuttymovies but are actually phishing or scam sites.
-
Step 2: Search for the movie
-
The next step is to search for the movie on the website. Users can use the search function in the top-right corner of the home page to type the movie's name or any related keyword. Alternatively, users can browse for the movie by category, year, actor, or genre in the menu on the left side of the home page. For example, users can find the Zero 2016 Tamil movie under the "Tamil Movies 2016" category.
-
Step 3: Choose the quality and format
-
The third step is to choose the quality and format of the movie. After clicking on the movie's title or poster, users will be taken to a new page with more details about the movie, such as its genre, rating, runtime, synopsis, cast, crew, trailer, screenshots, and download links. Users can choose among different formats and qualities of the movie, such as Mp4, Mp4 HD, and Single Part. Users can also see the file size and download speed of each option, and should choose the option that suits their device and Internet speed.
-
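To weigh those format options, note that download time is roughly file size divided by link speed; sizes are usually listed in megabytes while connection speeds are quoted in megabits per second, so a factor of 8 applies. A minimal back-of-the-envelope sketch (the figures are illustrative, not taken from the site):

```python
def estimated_minutes(size_mb: float, speed_mbps: float) -> float:
    """Rough download-time estimate.

    size_mb    -- file size in megabytes (as listed next to each link)
    speed_mbps -- connection speed in megabits per second
    """
    megabits = size_mb * 8          # 8 bits per byte
    return megabits / speed_mbps / 60

# Illustrative example: a 400 MB Mp4 HD file on a 20 Mbps connection
print(round(estimated_minutes(400, 20), 1))  # ~2.7 minutes
```

Real-world throughput is usually lower than the advertised link speed, so treat the result as a lower bound.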
Step 4: Download the movie
-
-
Is it legal to download the Zero 2016 Tamil movie from Kuttymovies?
-
The legal status of movie piracy in India
-
The answer to this question is no, it is not legal to download the Zero 2016 Tamil movie from Kuttymovies. Movie piracy is a serious offense in India and is punishable by law. Under India's Cinematograph Act of 1952, anyone who infringes the rights of filmmakers or distributors by making unauthorized copies or downloads of a movie can face imprisonment of up to three years, a fine of up to Rs. 10 lakhs, or both. India's Information Technology Act of 2000 also prohibits anyone from accessing, transmitting, or publishing any pirated content online, and anyone who does so can face imprisonment of up to three years, a fine of up to Rs. 5 lakhs, or both.
-
The consequences of illegal movie downloads
-
By downloading the Zero 2016 Tamil movie from Kuttymovies, users are not only breaking the law but also harming themselves and others in several ways. Some of the consequences of illegal movie downloads are:
-
They are supporting movie piracy and hurting the Tamil film industry. By downloading the movie for free, users deprive producers and distributors of their rightful revenue. This affects their ability to produce more quality films and to pay their workers and artists. It also discourages new talent and innovation in the industry.
-
They are risking their device and data security. By visiting Kuttymovies or any other piracy website, users expose their device and data to threats such as viruses, malware, spyware, or hackers. These threats can damage their device, steal their personal information, or compromise their online privacy.
-
-
They are exposing themselves to legal action and penalties. By downloading pirated content, users are committing an offense that can get them into trouble with the law. They may face legal action and penalties from the authorities, such as raids, arrests, fines, or imprisonment.
-
-
Alternatives to movie piracy
-
Instead of downloading the Zero 2016 Tamil movie from Kuttymovies or any other piracy website, users should opt for legal and ethical ways to watch Tamil movies online. Some of the alternatives to movie piracy are:
-
-
They can watch the movie on official streaming platforms or websites that are licensed and permitted to stream the movie online. Some of these platforms are Amazon Prime Video, Netflix, Hotstar, Zee5, SonyLIV, etc.
-
They can rent or buy the movie on official digital platforms or websites that are licensed and permitted to sell or rent the movie online. Some of these platforms are YouTube Movies, Google Play Movies & TV, the iTunes Store, etc.
-
They can watch the movie on official TV channels or networks that are licensed and permitted to broadcast the movie on television. Some of these channels are Sun TV, Star Vijay, Zee Tamil, Colors Tamil, etc.
-
They can watch the movie in official theaters or cinemas that are licensed and permitted to screen the movie on the big screen. They can book their tickets online or offline and enjoy the movie with their friends and family.
-
-
Conclusion
-
-
Kuttymovies is a website that lets users download Tamil movies for free. The website has a large collection of Tamil movies from various genres and eras, as well as Tamil-dubbed versions of Hollywood and Bollywood movies and English movies with subtitles. Users can choose among different formats and qualities of the movies, such as Mp4, Mp4 HD, and Single Part. The website also updates its content regularly with the latest releases and upcoming movies.
-
However, downloading the Zero 2016 Tamil movie from Kuttymovies or any other piracy website is illegal and unethical. Movie piracy is a serious offense in India and is punishable by law. It also harms the Tamil film industry, users' device and data security, and users' online reputation and credibility. Users should avoid movie piracy and opt for legal and ethical ways to watch Tamil movies online, such as official streaming platforms, official digital stores, official TV channels, or official theaters and cinemas.
-
We hope this article has given you a review of and guide to the Zero 2016 Tamil movie download on Kuttymovies. If you have any questions or comments, please feel free to leave a comment below. Thank you for reading!
-
Frequently asked questions
-
Here are some frequently asked questions about the Zero 2016 Tamil movie download on Kuttymovies:
-
-
Q: When was the Zero 2016 Tamil movie released?
-
A: The Zero 2016 Tamil movie was released on March 25, 2016.
-
Q: Who composed the music for the Zero 2016 Tamil movie?
-
A: Nivas K Prasanna composed the music for the Zero 2016 Tamil movie.
-
Q: What is the genre of the Zero 2016 Tamil movie?
-
A: Zero 2016 Tamil is a fantasy horror movie.
-
Q: What is the rating of the Zero 2016 Tamil movie on IMDb?
-
A: The Zero 2016 Tamil movie has a rating of 7.1 out of 10 on IMDb.
-
Q: ¿Cuál es el sitio web oficial de Kuttymovies?
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk Extrema Coche Simulador De Conduccin Mod.md b/spaces/Benson/text-generation/Examples/Descargar Apk Extrema Coche Simulador De Conduccin Mod.md
deleted file mode 100644
index d2e3964f15cd3f94b75401ba8239647ccdb926ce..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apk Extrema Coche Simulador De Conduccin Mod.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
How to Download the Extreme Car Driving Simulator Mod APK for Android
-
If you are a fan of car driving simulator games, you may have heard of Extreme Car Driving Simulator. It is one of the most popular and realistic car driving games on Android, with over 100 million downloads on the Google Play Store. In this game, you can drive, drift, and feel a racing sports car for free. You can also choose from a variety of cars, customize them, and explore a huge open-world map with different locations and scenarios.
-
However, if you want to enjoy the game to the fullest, you may want to try the mod version of Extreme Car Driving Simulator. The mod version gives you unlimited money, unlocked cars, and no ads. With this, you can buy any car you want, upgrade it, and have more fun without interruptions.
-
In this article, we will show you how to download and install the Extreme Car Driving Simulator mod apk for Android. We will also share some tips and tricks for playing it better. So let's get started!
-
What is Extreme Car Driving Simulator?
-
Extreme Car Driving Simulator is a 3D car driving game developed by AxesInMotion Racing. It is one of the best car simulator games on Android, with realistic physics, graphics, and sounds. You can drive freely in a big city with no rules or traffic. You can also switch between different camera views, such as cockpit, third-person, or top-down.
-
Features of Extreme Car Driving Simulator
-
Some of the features of Extreme Car Driving Simulator are:
-
-
Over 20 different cars to choose from, including sports cars, SUVs, trucks, and police cars.
-
Customize your car with paint, wheels, spoilers, vinyls, and more.
-
Explore a huge open-world map with different environments, such as the airport, offroad, beach, and city.
-
Enjoy different game modes, such as free mode, checkpoint mode, traffic mode, and ghost mode.
-
-
Crash and damage your car with realistic physics and effects.
-
Control your car with tilt, buttons, or a steering wheel.
-
Record your gameplay and share it with your friends.
-
-
Benefits of using the mod version
-
The mod version of Extreme Car Driving Simulator gives you some extra benefits that are not available in the original version. These are:
-
-
Unlimited money: You can buy any car you want without worrying about the price. You can also upgrade your car to make it faster and more powerful.
-
Unlocked cars: You can access all the cars in the game without having to complete any missions or achievements. You can also use any car in any game mode.
-
No ads: You can play the game without annoying ads popping up on your screen. You can also save your data and battery life.
-
-
How to download and install the Extreme Car Driving Simulator mod apk
-
To download and install the Extreme Car Driving Simulator mod apk for Android, you need to follow these simple steps:
-
Step 1: Enable unknown sources on your device
-
Before you can install any apk file on your device, you need to enable unknown sources. This will allow you to install apps from sources other than the Google Play Store. To do this:
-
-
Go to your device settings and tap on security or privacy.
-
Find the option that says unknown sources or allow installation from unknown sources.
Toggle the switch to turn it on. You may see a warning message, but just tap on OK or confirm.
-
-
Step 2: Download the mod apk file from a trusted source
-
-
One of the sources we recommend is APKPure. APKPure is a popular and reliable website that provides original and mod apk files for various Android apps and games. You can download the Extreme Car Driving Simulator mod apk from APKPure by following these steps:
-
-
Go to the APKPure website and search for Extreme Car Driving Simulator in the search bar.
-
Find the game in the results and tap on it.
-
Scroll to the bottom of the page and look for the mod version. It should have a green label that says MOD.
-
Tap on the download button and wait for the file to download to your device.
-
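Whichever source you use, it is worth comparing the downloaded file against a checksum if the download page publishes one, so a corrupted or tampered file is caught before it is installed. A minimal, generic sketch in Python (the file name and expected digest below are placeholders, not real values):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders -- substitute the real file name and the digest published
# alongside the download, if one is provided.
apk_path = "extreme-car-driving-simulator-mod.apk"
expected = "<published-sha256-digest>"

# Uncomment once both values are real:
# if sha256_of(apk_path) != expected:
#     raise SystemExit("Checksum mismatch -- do not install this file")
```

A matching checksum only proves the file arrived intact; it says nothing about whether the source itself is trustworthy.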
-
Step 3: Locate and install the mod apk file
-
Once you have downloaded the mod apk file, you need to locate it and install it on your device. To do this:
-
-
-
Go to your device's file manager and find the folder where the mod apk file is saved. It is usually the downloads folder.
-
Tap on the mod apk file and select install. You may see a pop-up asking for your permission to install the app. Just tap on install again.
-
Wait for the installation process to finish. It may take a few seconds or minutes depending on your device's speed and performance.
-
-
Step 4: Launch and enjoy the game
-
Congratulations! You have successfully installed the Extreme Car Driving Simulator mod apk on your device. Now you can launch and enjoy the game with unlimited money, unlocked cars, and no ads. To launch the game:
-
-
Go to your device's app drawer and look for the Extreme Car Driving Simulator icon.
-
Tap on it and wait for the game to load.
-
Select your preferred language and accept the terms and conditions.
-
Choose your car and start driving!
-
-
Tips and tricks for playing Extreme Car Driving Simulator
-
-
Customize your car settings
-
One of the best things about Extreme Car Driving Simulator is that you can customize your car settings to your preference. You can change your car's color, wheels, spoilers, vinyls, and more. You can also adjust its steering sensitivity, brake strength, traction control, and suspension stiffness. To customize your car settings:
-
-
Tap on the menu button in the top-left corner of the screen.
-
Select garage from the options.
-
Tap on the car you want to customize.
-
Select paint, wheels, spoilers, vinyls, or settings from the tabs below.
-
Make your changes and tap on apply when you are done.
-
-
Explore different game modes
-
Extreme Car Driving Simulator offers different game modes that can challenge your driving skills and give you different experiences. You can choose from free mode, checkpoint mode, traffic mode, and ghost mode. To explore different game modes:
-
-
Tap on the menu button in the top-left corner of the screen.
-
Select game mode from the options.
-
Select the game mode you want to play.
-
Tap on start to begin.
-
-
Here is a brief description of each game mode:
-
-
Free mode: In this mode, you can drive freely in a big city with no rules or traffic. You can also perform stunts and drifts using ramps, loops, bridges, and more.
-
Checkpoint mode: In this mode, you have to reach different checkpoints within a given time limit. You can also collect coins along the way to boost your score.
-
Traffic mode: In this mode, you have to drive in a city with realistic traffic. You have to avoid crashing into other vehicles and obey the traffic rules.
-
Ghost mode: In this mode, you can race against your own ghost. You have to beat your previous best time and improve your performance.
-
-
-
Perform stunts and drifts
-
One of the fun things about Extreme Car Driving Simulator is that you can perform stunts and drifts with your car. This can make your driving more exciting and rewarding. You can also earn coins by doing stunts and drifts, which you can use to buy and upgrade cars. To perform stunts and drifts:
-
-
Use the ramps, loops, bridges, and other structures in the city to launch your car into the air and perform flips, spins, and rolls.
-
Use the handbrake and the nitro boost to slide and drift your car on the road and make sharp turns.
-
Try to land safely and smoothly after each stunt or drift to avoid damaging your car.
-
Collect the coins that appear on the screen after each stunt or drift.
-
-
Use the mini-map and the speedometer
-
To help you navigate the city and control your car better, you should use the mini-map and the speedometer on the screen. The mini-map shows you the layout of the city and the location of points of interest, such as airports, beaches, bridges, and more. You can also see the checkpoints and traffic lights on the mini-map. The speedometer shows you how fast you are driving and how much nitro boost you have left. To use the mini-map and the speedometer:
-
-
Tap on the mini-map to zoom in or out.
-
Tap on the speedometer to switch between mph and km/h.
-
Tap on the nitro button to activate the nitro boost when you have enough.
-
-
Conclusion
-
-
Frequently asked questions
-
Here are some frequently asked questions about the Extreme Car Driving Simulator mod apk:
-
-
Is it safe to download and install the Extreme Car Driving Simulator mod apk?
-
Yes, the Extreme Car Driving Simulator mod apk is safe to download and install if you get it from a trusted source like APKPure. However, you should always be careful when downloading any apk file from unknown sources, as it could contain harmful or malicious content. You should also scan your device with an antivirus app after installing any apk file.
-
Do I need to root my device to use the Extreme Car Driving Simulator mod apk?
-
No, you do not need to root your device to use the Extreme Car Driving Simulator mod apk. You only need to enable unknown sources in your device settings before installing the mod apk file.
-
Can I play Extreme Car Driving Simulator online with other players?
-
No, Extreme Car Driving Simulator is an offline game that does not require an Internet connection to play. However, you may need an Internet connection to download updates or access some features of the game.
-
How can I update the Extreme Car Driving Simulator mod apk?
-
To update the Extreme Car Driving Simulator mod apk, you need to download and install the latest version of the mod apk file from APKPure or another trusted source. You may also need to uninstall the previous version of the mod apk file before installing the new one.
-
How can I contact the developer of Extreme Car Driving Simulator?
-
If you have any questions, comments, or suggestions for Extreme Car Driving Simulator, you can contact the game's developer by emailing support@axesinmotion.com. You can also visit their website at https://www.axesinmotion.com/ or follow them on Facebook at https://www.facebook.com/AxesInMotion/.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Caramelo Crush Saga Mod Apk Para Pc.md b/spaces/Benson/text-generation/Examples/Descargar Caramelo Crush Saga Mod Apk Para Pc.md
deleted file mode 100644
index 4b3c93fc19b49253a735e06cf91daa9f445aab98..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Caramelo Crush Saga Mod Apk Para Pc.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
How to Download the Candy Crush Saga Mod APK for PC
-
If you like playing puzzle games on your mobile device, you may have heard of Candy Crush Saga. It is one of the most popular games in the world, with millions of players matching colorful candies and completing various levels. But did you know that you can also play Candy Crush Saga on your PC?
-
In this article, we will show you how to download the Candy Crush Saga mod apk for PC. A mod apk is a modified version of an app that has extra features or unlocked content. For example, you can get unlimited lives, boosters, moves, or gold bars in the Candy Crush Saga mod apk. Sounds tempting, right?
But why would you want to download the Candy Crush Saga mod apk for PC? Well, there are several reasons why playing this game on a bigger screen might be more enjoyable. For example, you can see the candies more clearly, get better performance, avoid draining your phone's battery, or play at any time without interruptions.
-
So how can you download the Candy Crush Saga mod apk for PC? There are different methods, depending on your operating system and preferences. In this article, we will cover two of them: using Bluestacks and using WSA PacMan.
-
What is Candy Crush Saga?
-
Candy Crush Saga is a puzzle game developed by King in 2012. It is available for Android, iOS, Windows Phone, Windows 10, and Facebook. The game involves matching three or more candies of the same color to clear them from the board and achieve various objectives.
-
The game has thousands of levels, each with different challenges and obstacles. Some levels require you to collect a certain number of candies; others require you to clear jelly or frosting from the board. Note that not every Android emulator can handle the game: some emulators may not support the mod apk file format, some may not have enough resources to run the game smoothly, and some may not be safe or reliable. Therefore, you should choose an emulator that is compatible, fast, and secure.
-
-
Method 1: Using Bluestacks
-
Bluestacks is one of the most popular Android emulators for PC. It has a user-friendly interface, a large app library, and a high compatibility rate. It also supports mod apk files, which means you can download the Candy Crush Saga mod apk for PC using Bluestacks. Here are the steps to do so:
-
-
Step 1: Download and install Bluestacks
-
The first step is to download and install Bluestacks on your PC. You can do this by following these steps:
-
-
Go to the official Bluestacks website at https://www.bluestacks.com/ and click on the "Download Bluestacks" button.
-
Wait for the download to finish and then run the installation file.
-
Follow the on-screen instructions to complete the installation process.
-
Launch Bluestacks on your PC and wait for it to load.
-
-
Step 2: Sign in to your Google account
-
The next step is to sign in to your Google account in Bluestacks. This will let you access the Google Play Store and download apps. You can do this by following these steps:
-
-
On the Bluestacks home screen, click on the "Google Play" icon.
-
On the Google Play Store page, click on the "Sign in" button.
-
Enter your Google email and password and click on "Next".
-
Follow the on-screen instructions to complete the sign-in process.
-
-
Step 3: Search for the Candy Crush Saga Mod APK
-
The third step is to search for the Candy Crush Saga mod apk in the Google Play Store. You can do this by following these steps:
-
-
On the Google Play Store page, type "Candy Crush Saga mod apk" in the search bar and press enter.
-
You will see a list of results matching your query. Look for the app named "Candy Crush Saga" with the logo of a candy wearing a crown.
-
Make sure the app is developed by "King" and has a high rating and positive reviews.
-
-
-
Step 4: Install and open the app
-
The final step is to install and open the Candy Crush Saga mod apk on your PC using Bluestacks. You can do this by following these steps:
-
-
On the app details page, click on the "Install" button.
-
Wait for the installation to finish and then click on the "Open" button.
-
You will see a pop-up asking you to grant some permissions to the app. Click on "Allow" or "Accept".
-
You will see another pop-up asking you to verify your age. Enter your age and click on "Submit".
-
You will see the main menu of the Candy Crush Saga mod apk. Click on "Play" to start playing the game.
-
-
Congratulations! You have successfully downloaded the Candy Crush Saga mod apk for PC using Bluestacks. Now you can enjoy unlimited lives, boosters, moves, gold bars, and more in this addictive puzzle game.
-
Method 2: Using WSA PacMan
-
If you have Windows 11 on your PC, you can also use WSA PacMan to download the Candy Crush Saga mod apk for PC. WSA PacMan is a simple graphical user interface that lets you sideload Android apps on Windows 11 without using command lines. It works with the Windows Subsystem for Android (WSA), a feature that lets you run Android apps natively on Windows 11. Here are the steps to use WSA PacMan:
-
Step 1: Install the Amazon Appstore and the Windows Subsystem for Android
-
The first step is to install the Amazon Appstore and the Windows Subsystem for Android on your PC. You can do this by following these steps:
-
-
Go to the Microsoft Store and search for "Amazon Appstore".
-
Click on the app and then click on the "Get" button.
-
Wait for the download and installation to finish.
-
Launch the Amazon Appstore on your PC and sign in with your Amazon account.
-
Go back to the Microsoft Store and search for "Windows Subsystem for Android".
-
-
Wait for the download and installation to finish.
-
Launch the Windows Subsystem for Android on your PC and follow the on-screen instructions.
-
-
Step 2: Download and launch WSA PacMan
-
The next step is to download and launch WSA PacMan on your PC. You can do this by following these steps:
-
-
Go to the official WSA PacMan website at https://wsapacman.com/ and click on the "Download" button.
-
Wait for the download to finish and then run the executable file.
-
Follow the on-screen instructions to complete the installation process.
-
Launch WSA PacMan on your PC and wait for it to load.
-
-
Step 3: Download the Candy Crush Saga Mod APK file
-
The third step is to download the Candy Crush Saga mod apk file to your PC. You can do this by following these steps:
Look for the latest version of the Candy Crush Saga mod apk and click on the "Download" button.
-
Wait for the download to finish and then locate the file on your PC.
-
-
Step 4: Install and run the app using WSA PacMan
-
The final step is to install and run the Candy Crush Saga mod apk on your PC using WSA PacMan. You can do this by following these steps:
-
-
In WSA PacMan, click on the "Install APK" button.
-
Select the Candy Crush Saga mod apk file you downloaded earlier and click on "Open".
-
Wait for the installation to finish and then click on the "Launch" button.
-
You will see a pop-up asking you to grant some permissions to the app. Click on "Allow" or "Accept".
-
You will see another pop-up asking you to verify your age. Enter your age and click on "Submit".
-
You will see the main menu of the Candy Crush Saga mod apk. Click on "Play" to start playing the game.
-
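For context, a GUI sideloader like WSA PacMan essentially automates two adb commands: connecting to WSA's local debugging address and installing the package. A hedged sketch of the equivalent manual route — it assumes adb is on your PATH and WSA's developer mode is enabled, and 127.0.0.1:58526 is the commonly reported default WSA port, which may differ on your machine:

```python
import subprocess

# Commonly reported WSA default; check WSA's developer settings to confirm.
WSA_ADDRESS = "127.0.0.1:58526"

def sideload_commands(apk_path: str, address: str = WSA_ADDRESS) -> list:
    """Build the adb invocations that sideload an APK into WSA."""
    return [
        ["adb", "connect", address],
        ["adb", "install", apk_path],
    ]

for cmd in sideload_commands("candy-crush-saga-mod.apk"):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
```

If `adb connect` is refused, WSA is usually not running or developer mode is off; launch any WSA app first and retry.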
-
-
Conclusion
-
In this article, we have shown you how to download the Candy Crush Saga mod apk for PC using two different methods: Bluestacks and WSA PacMan. Both methods are easy and effective, but they have different steps and requirements. You can choose the one that suits you best, depending on your operating system and preferences.
-
However, before downloading the Candy Crush Saga mod apk for PC, you should also be aware of the potential risks and drawbacks of doing so. Some of them are:
-
-
It may violate King's terms and conditions, which can result in legal consequences or account suspension.
-
It may expose your device or data to malware, viruses, or spyware that can damage your device or steal your data.
-
You may lose some features or functionality of the original app, such as updates, support, or social features.
-
You may experience bugs, glitches, or compatibility issues that can affect your gaming experience.
-
-
Therefore, if you decide to download the Candy Crush Saga mod apk for PC, you should do so at your own risk and discretion. You should also make sure that you have reliable antivirus software installed on your device and that you only download mod apk files from trusted sources.
-
We hope this article has been helpful and informative for you. If you have any questions or comments, feel free to leave a comment below. We would love to hear from you.
-
Happy gaming!
-
Frequently asked questions
-
Here are some frequently asked questions about the Candy Crush Saga mod apk for PC:
-
Q: Is the Candy Crush Saga mod apk for PC safe?
-
A: Not necessarily. Some mod apk files may contain malware, viruses, or spyware that can damage your device or steal your data. Therefore, you should only download mod apk files from trusted sources and keep reliable antivirus software installed on your device.
-
Q: Is the Candy Crush Saga mod apk for PC legal?
-
-
Q: Is it compatible with Windows 10?
-
A: Yes, you can use Bluestacks to download the Candy Crush Saga mod apk for PC on Windows 10. However, you may not be able to use WSA PacMan, which is only available for Windows 11.
-
Q: Is the Candy Crush Saga mod apk for PC kept up to date?
-
A: It depends on the source of the mod apk file. Some mod apk files are updated regularly, while others may be outdated or discontinued. Therefore, you should check the version and date of the mod apk file before downloading it.
-
Q: Is the Candy Crush Saga mod apk for PC fun?
-
A: Absolutely! The Candy Crush Saga mod apk for PC can give you unlimited lives, boosters, moves, gold bars, and more in this addictive puzzle game. You can also enjoy playing on a bigger screen, with better performance, more convenience, and more comfort.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Ejecutar Templo Para Ventanas Pc 7.md b/spaces/Benson/text-generation/Examples/Descargar Ejecutar Templo Para Ventanas Pc 7.md
deleted file mode 100644
index 581cf62a9088bad0ef0a8738e1489741e9a0f3d1..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Ejecutar Templo Para Ventanas Pc 7.md
+++ /dev/null
@@ -1,49 +0,0 @@
-
-
How to Download Temple Run for PC Windows 7
-
Temple Run is one of the most popular and addictive mobile games ever made. But did you know that you can also play it on your Windows 7 PC? In this article, we will show you how to download Temple Run for PC Windows 7 using two different Android emulators: BlueStacks and MEmu. But first, let's find out what Temple Run is and why you should play it on PC.
-
What is Temple Run?
-
Temple Run is a classic Android game released in 2012 by Imangi Studios. You take control of a temple runner who turns, jumps, and slides through an exotic maze from ancient times. All the while, you are being chased by a pack of killer apes! You have to collect coins and power-ups and unlock new characters as you run as far as you can. The game has simple but highly entertaining gameplay that will keep you hooked for hours.
While Temple Run is a great game on your mobile device or smartphone, playing it on PC has some advantages. Here are some of them:
-
-
You can enjoy the game on a bigger screen, which gives you a better view of obstacles and traps.
-
You can use a keyboard, mouse, or gamepad to control the runner, which can be more comfortable and precise than a touchscreen.
-
You can avoid draining your battery or using mobile data when playing online.
-
You can record your gameplay and share it with your friends or on social media.
-
-
Now that you know why playing Temple Run on PC is a good idea, let's see how to do it.
-
How to Download Temple Run for PC Windows 7 Using BlueStacks
-
BlueStacks is one of the best Android emulators, letting you run Android apps and games on your PC. Here are the steps to download Temple Run for PC Windows 7 using BlueStacks:
-
Step 1: Download and Install BlueStacks
-
-
Step 2: Search for Temple Run on Google Play
-
After installing BlueStacks, launch it and sign in with your Google account. Then open Google Play from the home screen and search for "Temple Run". You will see the game's icon in the search results.
-
Step 3: Install and Play Temple Run on PC
-
Click the "Install" button to download and install Temple Run on your PC. Once installation is complete, click the "Play" button to start playing Temple Run on PC. You can use your keyboard or mouse to control the runner, or customize your controls through the controls editor.
-
How to Download Temple Run for PC Windows 7 Using MEmu
-
MEmu is another Android emulator that lets you play Android games on your PC. Here are the steps to download Temple Run for PC Windows 7 using MEmu:
-
-
Step 1: Download and Install MEmu
-
To download MEmu, go to this link and click the "Download" button. Once the download finishes, open the installer and follow the instructions to install MEmu on your PC.
-
Step 2: Search for Temple Run on Google Play
-
After installing MEmu, sign in with your Google account. Then open Google Play from the home screen and search for "Temple Run". You will see the game's icon in the search results.
-
Step 3: Install and Play Temple Run on PC
-
Click the "Install" button to download and install Temple Run on your PC. Once installation is complete, click the "Play" button to start playing Temple Run on PC. You can use your keyboard or mouse to control the runner, or customize your controls through the Settings menu.
-
Conclusion
-
-
Frequently Asked Questions
-
-
Q: Is Temple Run free to play?
-
A: Yes, Temple Run is free to play and download on mobile devices and PC.
-
Q: Can I play Temple Run offline?
-
A: Yes, you can play Temple Run offline once you have installed it on your device or PC.
-
Q: How many characters are there in Temple Run?
-
A: There are 10 characters in Temple Run, each with their own abilities and outfits. You can unlock them by collecting coins or buying them with real money.
-
Q: What are the power-ups in Temple Run?
-
A: There are four power-ups in Temple Run: Coin Magnet, Invisibility, Boost, and Mega Coin. You can activate them by picking them up during your run or buying them with coins.
-
Q: How do I save my progress in Temple Run?
-
A: You can save your progress in Temple Run by signing in with your Google Play or Facebook account. This will also let you sync your progress across different devices or PCs.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Fuga De Prisin S1.md b/spaces/Benson/text-generation/Examples/Descargar Fuga De Prisin S1.md
deleted file mode 100644
index b199a5dee66dcf9cd8ad7f3cbda38aaf35c518c4..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Fuga De Prisin S1.md
+++ /dev/null
@@ -1,73 +0,0 @@
-
-
How to Download Prison Break Season 1
-
Prison Break is one of the most popular and acclaimed TV shows of the 2000s, with a loyal fan base and cult following. The first season, which aired in 2005-2006, introduced us to the thrilling story of two brothers who are willing to risk everything to escape from a maximum-security prison. If you are looking for a way to download Prison Break season 1, you have come to the right place. In this article, we will tell you what Prison Break season 1 is about, why you should watch it, and how to download it legally and safely.
The first season consists of 22 episodes, covering roughly six weeks in the characters' lives (April 11 to May 27), the full span of Michael's stay at Fox River State Penitentiary. The season begins with Michael's arrival at Fox River and ends with his escape alongside seven other inmates: Lincoln, Sucre, Abruzzi, Westmoreland, Benjamin Miles "C-Note" Franklin (Rockmond Dunbar), Theodore "T-Bag" Bagwell (Robert Knepper), and David "Tweener" Apolskis (Lane Garrison). Along the way, they face many obstacles and challenges, such as riots, lockdowns, betrayals, deaths, and discoveries. They also have to contend with pursuit by several enemies: Captain Brad Bellick (Wade Williams), who is in charge of the prison guards; Secret Service agent Paul Kellerman (Paul Adelstein), who works for Vice President Reynolds; and FBI Special Agent Alexander Mahone (William Fichtner), who is assigned to track down the fugitives.
-
Cast and Characters
-
The first season features a total of ten actors who received star billing, along with numerous supporting roles. The main cast includes:
-
-
Dominic Purcell as Lincoln Burrows: A death-row inmate accused of killing the Vice President's brother.
Wentworth Miller as Michael Scofield: A structural engineer who devises an elaborate plan to break his brother out of prison.
-
Robin Tunney as Veronica Donovan: A lawyer and Lincoln's ex-girlfriend, who tries to prove his innocence.
-
Amaury Nolasco as Fernando Sucre: Michael's cellmate and friend, who joins the escape team.
-
Marshall Allman as Lincoln "L. J." Burrows Jr.: Lincoln's teenage son, who is targeted by the conspirators.
-
Peter Stormare as John Abruzzi: A mob boss and prison kingpin, who offers Michael his resources in exchange for information.
-
Wade Williams as Brad Bellick: The captain of the prison guards, who is determined to stop the escapees.
-
-
Sarah Wayne Callies as Sara Tancredi: The prison doctor and the governor's daughter, who develops a relationship with Michael.
-
Paul Adelstein as Paul Kellerman: A Secret Service agent who is part of the conspiracy that framed Lincoln.
-
-
Ratings and Reviews
-
Prison Break season 1 received critical acclaim and was nominated for several awards, including the Golden Globe Award for Best Television Series - Drama and the Primetime Emmy Award for Outstanding Original Main Title Theme Music. The season also drew high ratings, averaging 9.2 million viewers per episode in the United States. The season finale, which aired on May 15, 2006, was watched by 10.8 million viewers, making it the most-watched episode of the series.
-
Prison Break season 1 was praised for its gripping story, suspenseful twists, and compelling characters. Critics also lauded the cast's performances, especially Purcell and Miller. Some of the positive reviews include:
-
-
"Prison Break es un entretenimiento pop seguro de un orden muy alto." - Robert Bianco, USA Today
-
"Prison Break es uno de esos casos felices donde se puede juzgar un libro por su portada - o un programa de televisión por su título. Entrega exactamente lo que promete." - David Bianculli, New York Daily News
-
"Prison Break es un espectáculo que sabe exactamente lo que es - un thriller tenso con una premisa inteligente - y cumple con esa promesa con estilo." - Brian Lowry, Variedad
-
-
Why You Should Watch Season 1
-
If you are looking for a TV show that will keep you on the edge of your seat, Prison Break season 1 is a great choice. Here are some of the reasons you should watch it:
-
Thrilling Plot and Twists
-
-
Engaging Characters and Performances
-
Prison Break season 1 has a diverse, dynamic cast of characters whose fates you will care about. The show features heroes and villains who are complex and flawed, with their own motivations and backstories. It also showcases the chemistry and conflicts between the characters, especially between the brothers Michael and Lincoln. The actors deliver outstanding performances that bring their characters to life. You will root for them, hate them, sympathize with them, and fear for them.
-
Clever and Creative Escape Plan
-
Prison Break season 1 has a unique, ingenious escape plan that will amaze you with its detail and execution. Michael Scofield is not only a brilliant engineer but also a mastermind who has planned every step of his breakout. He has tattooed his entire upper body with a blueprint of the prison and clues hidden in the designs. He has also studied the prison's layout, schedule, staff, inmates, and security systems; prepared for every possible scenario and contingency; and recruited allies inside and outside the prison who can help with his plan. His escape plan is not only realistic but also creative and daring.
-
-
How to Download Prison Break Season 1 Legally and Safely
-
If you want to download Prison Break season 1, you should do so legally and safely to avoid legal trouble or malware risks. There are several ways to download Prison Break season 1 legally and safely, depending on your preferences and budget. Here are some of the options:
-
Streaming Services
-
One of the easiest and most convenient ways to download Prison Break season 1 is to use a streaming service that offers offline viewing. That way, you can watch the episodes anytime, anywhere, without an internet connection. Some of the streaming services that offer offline viewing for Prison Break season 1 are:
-
Hulu
-
-
Amazon Prime Video
-
Amazon Prime Video is another popular streaming service with a huge library of TV shows and movies, including Prison Break. With an Amazon Prime Video subscription, you can download up to 25 titles at a time on up to four devices. You can also choose the download quality, from Good to Best. To download Prison Break season 1 on Amazon Prime Video, you need an Amazon Prime membership, which costs $12.99 per month or $119 per year. You can also get a free 30-day trial.
-
Digital Stores
-
Another way to download Prison Break season 1 is to buy or rent it from a digital store that offers downloads. That way, you can own the episodes or access them for a limited time, depending on your choice. Some of the digital stores that offer downloads for season 1 are:
-
Google Play Movies
-
Google Play Movies is a digital store offering TV shows and movies for purchase or rental. You can download the episodes on up to five devices with a Google account, and choose the download quality, from SD to HD. To download Prison Break season 1 on Google Play Movies, you need to pay $19.99 for the whole season or $1.99 per episode.
-
Apple TV
-
Apple TV is a digital store offering TV shows and movies for purchase or rental. You can download the episodes on up to five devices with an Apple ID, and choose the download quality, from SD to HD. To download Prison Break season 1 on Apple TV, you need to pay $19.99 for the whole season or $2.99 per episode.
-
Vudu
-
Vudu is a digital store offering TV shows and movies for purchase or rental. You can download the episodes on up to eight devices with a Vudu account, and choose the download quality, from SD to UHD. To download Prison Break season 1 on Vudu, you need to pay $19.99 for the whole season or $2.99 per episode.
-
Microsoft Store
-
-
Conclusion
-
Prison Break season 1 is one of the best TV shows of all time, with a captivating plot, engaging characters, and a clever escape plan. If you want to watch or rewatch this amazing show, you should do so legally and safely using one of the options we have listed above. Whether you prefer streaming services or digital stores, you can find a way to download Prison Break season 1 that suits your needs and budget.
-
Frequently Asked Questions
-
-
Q: How many episodes are there in Prison Break season 1?
-
A: There are 22 episodes in season 1.
-
Q: When did Prison Break season 1 air?
-
A: Prison Break season 1 aired from August 29, 2005 to May 15, 2006.
-
Q: Who created Prison Break?
-
A: Prison Break was created by Paul Scheuring, who also served as executive producer and showrunner.
-
Q: Is Prison Break based on a true story?
-
A: No, Prison Break is not based on a true story. However, some of the show's elements and inspirations came from real-life events and sources, such as the D.B. Cooper case, the escape from Alcatraz, and The Count of Monte Cristo.
-
Q: How many seasons are there in Prison Break?
-
A: There are five seasons of Prison Break, with 90 episodes in total. The first four seasons aired from 2005 to 2009, and the fifth season aired in 2017 as a revival.
-
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat/README.md b/spaces/BetterAPI/BetterChat/README.md
deleted file mode 100644
index 1f019590e997dfafb3a0d2737853eade173c419c..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: chat-ui
-emoji: 🔥
-colorFrom: purple
-colorTo: purple
-sdk: docker
-pinned: false
-license: apache-2.0
-base_path: /chat
-app_port: 3000
----
\ No newline at end of file
diff --git a/spaces/Billyosoro/ESRGAN/tests/test_utils.py b/spaces/Billyosoro/ESRGAN/tests/test_utils.py
deleted file mode 100644
index 7919b74905495b4b6f4aa957a1f0b5d7a174c782..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/tests/test_utils.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-from basicsr.archs.rrdbnet_arch import RRDBNet
-
-from realesrgan.utils import RealESRGANer
-
-
-def test_realesrganer():
- # initialize with default model
- restorer = RealESRGANer(
- scale=4,
- model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth',
- model=None,
- tile=10,
- tile_pad=10,
- pre_pad=2,
- half=False)
- assert isinstance(restorer.model, RRDBNet)
- assert restorer.half is False
- # initialize with user-defined model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
- restorer = RealESRGANer(
- scale=4,
- model_path='experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth',
- model=model,
- tile=10,
- tile_pad=10,
- pre_pad=2,
- half=True)
- # test attribute
- assert isinstance(restorer.model, RRDBNet)
- assert restorer.half is True
-
- # ------------------ test pre_process ---------------- #
- img = np.random.random((12, 12, 3)).astype(np.float32)
- restorer.pre_process(img)
- assert restorer.img.shape == (1, 3, 14, 14)
- # with modcrop
- restorer.scale = 1
- restorer.pre_process(img)
- assert restorer.img.shape == (1, 3, 16, 16)
-
- # ------------------ test process ---------------- #
- restorer.process()
- assert restorer.output.shape == (1, 3, 64, 64)
-
- # ------------------ test post_process ---------------- #
- restorer.mod_scale = 4
- output = restorer.post_process()
- assert output.shape == (1, 3, 60, 60)
-
- # ------------------ test tile_process ---------------- #
- restorer.scale = 4
- img = np.random.random((12, 12, 3)).astype(np.float32)
- restorer.pre_process(img)
- restorer.tile_process()
- assert restorer.output.shape == (1, 3, 64, 64)
-
- # ------------------ test enhance ---------------- #
- img = np.random.random((12, 12, 3)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (24, 24, 3)
- assert result[1] == 'RGB'
-
- # ------------------ test enhance with 16-bit image---------------- #
- img = np.random.random((4, 4, 3)).astype(np.uint16) + 512
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8, 3)
- assert result[1] == 'RGB'
-
- # ------------------ test enhance with gray image---------------- #
- img = np.random.random((4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8)
- assert result[1] == 'L'
-
- # ------------------ test enhance with RGBA---------------- #
- img = np.random.random((4, 4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2)
- assert result[0].shape == (8, 8, 4)
- assert result[1] == 'RGBA'
-
- # ------------------ test enhance with RGBA, alpha_upsampler---------------- #
- restorer.tile_size = 0
- img = np.random.random((4, 4, 4)).astype(np.float32)
- result = restorer.enhance(img, outscale=2, alpha_upsampler=None)
- assert result[0].shape == (8, 8, 4)
- assert result[1] == 'RGBA'
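The pre_process assertion at the top of this test (a 12x12 input with pre_pad=2 becoming 14x14) is plain padding arithmetic. As a hedged, numpy-only sketch of that step (not the library's actual torch implementation, which pads tensors with F.pad), reflection-padding only the bottom and right edges by pre_pad reproduces the asserted shape:

```python
import numpy as np

def pre_pad(img: np.ndarray, pad: int) -> np.ndarray:
    # Reflection-pad the bottom and right spatial edges of an HxWxC image,
    # mirroring what RealESRGANer.pre_process does with pre_pad (a sketch).
    return np.pad(img, ((0, pad), (0, pad), (0, 0)), mode="reflect")

img = np.random.random((12, 12, 3)).astype(np.float32)
padded = pre_pad(img, 2)
print(padded.shape)  # (14, 14, 3)
```

The same arithmetic explains the (1, 3, 14, 14) tensor shape asserted above once the batch and channel axes are added.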
diff --git a/spaces/CVPR/BigDL-Nano_inference/original_models.py b/spaces/CVPR/BigDL-Nano_inference/original_models.py
deleted file mode 100644
index a62c47e88891585683f3a13ce64f14f6b47a321e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/BigDL-Nano_inference/original_models.py
+++ /dev/null
@@ -1,359 +0,0 @@
-# This file is copied from https://github.com/rnwzd/FSPBT-Image-Translation/blob/master/original_models.py
-
-# MIT License
-
-# Copyright (c) 2022 Lorenzo Breschi
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-
-import torch
-import torch.nn as nn
-from torch.autograd import Variable
-from torch.nn import functional as F
-
-import torchvision
-from torchvision import models
-
-import pytorch_lightning as pl
-
-class LeakySoftplus(nn.Module):
- def __init__(self,negative_slope: float = 0.01 ):
- super().__init__()
- self.negative_slope=negative_slope
-
- def forward(self,input):
- return F.softplus(input)+F.logsigmoid(input)*self.negative_slope
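LeakySoftplus above computes softplus(input) + negative_slope * logsigmoid(input). A hedged numpy sketch (not the torch module itself) shows why this acts as a smooth leaky ReLU: for large positive x it approaches the identity, and for very negative x it approaches a line of slope negative_slope:

```python
import numpy as np

def leaky_softplus(x, slope=0.2):
    # softplus(x) = log(1 + e^x); logsigmoid(x) = -log(1 + e^-x).
    # logaddexp keeps both numerically stable for large |x|.
    softplus = np.logaddexp(0.0, x)
    logsigmoid = -np.logaddexp(0.0, -x)
    return softplus + slope * logsigmoid

print(float(leaky_softplus(50.0)))   # ~50.0  (identity for large x)
print(float(leaky_softplus(-50.0)))  # ~-10.0 (slope * x for very negative x)
```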
-
-
-grelu = nn.LeakyReLU(0.2)
-#grelu = nn.Softplus()
-#grelu = LeakySoftplus(0.2)
-#####
-# Currently default generator we use
-# conv0 -> conv1 -> conv2 -> resnet_blocks -> upconv2 -> upconv1 -> conv_11 -> (conv_11_a)* -> conv_12 -> (Tanh)*
-# there are 2 conv layers inside conv_11_a
-# * means is optional, model uses skip-connections
-class Generator(pl.LightningModule):
- def __init__(self, norm_layer='batch_norm', use_bias=False, resnet_blocks=7, tanh=True,
- filters=[32, 64, 128, 128, 128, 64], input_channels=3, output_channels=3, append_smoothers=False):
- super().__init__()
- assert norm_layer in [None, 'batch_norm', 'instance_norm'], \
- "norm_layer should be None, 'batch_norm' or 'instance_norm', not {}".format(
- norm_layer)
- self.norm_layer = None
- if norm_layer == 'batch_norm':
- self.norm_layer = nn.BatchNorm2d
- elif norm_layer == 'instance_norm':
- self.norm_layer = nn.InstanceNorm2d
-
- # filters = [f//3 for f in filters]
- self.use_bias = use_bias
- self.resnet_blocks = resnet_blocks
- self.append_smoothers = append_smoothers
-
- stride1 = 2
- stride2 = 2
- self.conv0 = self.relu_layer(in_filters=input_channels, out_filters=filters[0],
- kernel_size=7, stride=1, padding=3,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.conv1 = self.relu_layer(in_filters=filters[0],
- out_filters=filters[1],
- kernel_size=3, stride=stride1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.conv2 = self.relu_layer(in_filters=filters[1],
- out_filters=filters[2],
- kernel_size=3, stride=stride2, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.resnets = nn.ModuleList()
- for i in range(self.resnet_blocks):
- self.resnets.append(
- self.resnet_block(in_filters=filters[2],
- out_filters=filters[2],
- kernel_size=3, stride=1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu))
-
- self.upconv2 = self.upconv_layer_upsample_and_conv(in_filters=filters[3] + filters[2],
- # in_filters=filters[3], # disable skip-connections
- out_filters=filters[4],
- scale_factor=stride2,
- kernel_size=3, stride=1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.upconv1 = self.upconv_layer_upsample_and_conv(in_filters=filters[4] + filters[1],
- # in_filters=filters[4], # disable skip-connections
- out_filters=filters[4],
- scale_factor=stride1,
- kernel_size=3, stride=1, padding=1,
- bias=self.use_bias,
- norm_layer=self.norm_layer,
- nonlinearity=grelu)
-
- self.conv_11 = nn.Sequential(
- nn.Conv2d(in_channels=filters[0] + filters[4] + input_channels,
- # in_channels=filters[4], # disable skip-connections
- out_channels=filters[5],
- kernel_size=7, stride=1, padding=3, bias=self.use_bias, padding_mode='zeros'),
- grelu
- )
-
- if self.append_smoothers:
- self.conv_11_a = nn.Sequential(
- nn.Conv2d(filters[5], filters[5], kernel_size=3,
- bias=self.use_bias, padding=1, padding_mode='zeros'),
- grelu,
- # replace with variable
- nn.BatchNorm2d(num_features=filters[5]),
- nn.Conv2d(filters[5], filters[5], kernel_size=3,
- bias=self.use_bias, padding=1, padding_mode='zeros'),
- grelu
- )
-
- if tanh:
- self.conv_12 = nn.Sequential(nn.Conv2d(filters[5], output_channels,
- kernel_size=1, stride=1,
- padding=0, bias=True, padding_mode='zeros'),
- #torchvision.transforms.Grayscale(num_output_channels=3),
- nn.Sigmoid())
- else:
- self.conv_12 = nn.Conv2d(filters[5], output_channels, kernel_size=1, stride=1,
- padding=0, bias=True, padding_mode='zeros')
-
- def log_tensors(self, logger, tag, img_tensor):
- logger.experiment.add_images(tag, img_tensor)
-
- def forward(self, input, logger=None, **kwargs):
- # [1, 3, 534, 800]
- output_d0 = self.conv0(input)
- output_d1 = self.conv1(output_d0)
- # comment to disable skip-connections
- output_d2 = self.conv2(output_d1)
-
- output = output_d2
- for layer in self.resnets:
- output = layer(output) + output
-
- output_u2 = self.upconv2(torch.cat((output, output_d2), dim=1))
-
- output_u1 = self.upconv1(torch.cat((output_u2, output_d1), dim=1))
- output = torch.cat(
- (output_u1, output_d0, input), dim=1)
-
- output_11 = self.conv_11(output)
-
- if self.append_smoothers:
- output_11_a = self.conv_11_a(output_11)
- else:
- output_11_a = output_11
- output_12 = self.conv_12(output_11_a)
-
- output = output_12
-
- return output
-
- def relu_layer(self, in_filters, out_filters, kernel_size, stride, padding, bias,
- norm_layer, nonlinearity):
- out = nn.Sequential()
- out.add_module('conv', nn.Conv2d(in_channels=in_filters,
- out_channels=out_filters,
- kernel_size=kernel_size, stride=stride,
- padding=padding, bias=bias, padding_mode='zeros'))
-
- if norm_layer:
- out.add_module('normalization',
- norm_layer(num_features=out_filters))
- if nonlinearity:
- out.add_module('nonlinearity', nonlinearity)
- # out.add_module('dropout', nn.Dropout2d(0.25))
-
- return out
-
- def resnet_block(self, in_filters, out_filters, kernel_size, stride, padding, bias,
- norm_layer, nonlinearity):
- out = nn.Sequential()
- if nonlinearity:
- out.add_module('nonlinearity_0', nonlinearity)
- out.add_module('conv_0', nn.Conv2d(in_channels=in_filters,
- out_channels=out_filters,
- kernel_size=kernel_size, stride=stride,
- padding=padding, bias=bias, padding_mode='zeros'))
- if norm_layer:
- out.add_module('normalization',
- norm_layer(num_features=out_filters))
- if nonlinearity:
- out.add_module('nonlinearity_1', nonlinearity)
- out.add_module('conv_1', nn.Conv2d(in_channels=in_filters,
- out_channels=out_filters,
- kernel_size=kernel_size, stride=stride,
- padding=padding, bias=bias, padding_mode='zeros'))
- return out
-
- def upconv_layer_upsample_and_conv(self, in_filters, out_filters, scale_factor, kernel_size, stride, padding, bias,
- norm_layer, nonlinearity):
-
- parts = [nn.Upsample(scale_factor=scale_factor),
- nn.Conv2d(in_filters, out_filters, kernel_size,
- stride, padding=padding, bias=False, padding_mode='zeros')
- ]
-
- if norm_layer:
- parts.append(norm_layer(num_features=out_filters))
-
- if nonlinearity:
- parts.append(nonlinearity)
-
- return nn.Sequential(*parts)
-
-
-
-
-relu = grelu
-
-#####
-# Default discriminator
-#####
-
-relu = nn.LeakyReLU(0.2)
-
-class Discriminator(nn.Module):
- def __init__(self, num_filters=12, input_channels=3, n_layers=2,
- norm_layer='instance_norm', use_bias=True):
- super().__init__()
-
- self.num_filters = num_filters
-
- self.input_channels = input_channels
- self.use_bias = use_bias
-
- if norm_layer == 'batch_norm':
- self.norm_layer = nn.BatchNorm2d
- else:
- self.norm_layer = nn.InstanceNorm2d
- self.net = self.make_net(
- n_layers, self.input_channels, 1, 4, 2, self.use_bias)
-
- def make_net(self, n, flt_in, flt_out=1, k=4, stride=2, bias=True):
- padding = 1
- model = nn.Sequential()
-
- model.add_module('conv0', self.make_block(
- flt_in, self.num_filters, k, stride, padding, bias, None, relu))
-
- flt_mult, flt_mult_prev = 1, 1
- # n - 1 blocks
- for l in range(1, n):
- flt_mult_prev = flt_mult
- flt_mult = min(2**(l), 8)
- model.add_module('conv_%d' % (l), self.make_block(self.num_filters * flt_mult_prev, self.num_filters * flt_mult,
- k, stride, padding, bias, self.norm_layer, relu))
-
- flt_mult_prev = flt_mult
- flt_mult = min(2**n, 8)
- model.add_module('conv_%d' % (n), self.make_block(self.num_filters * flt_mult_prev, self.num_filters * flt_mult,
- k, 1, padding, bias, self.norm_layer, relu))
- model.add_module('conv_out', self.make_block(
- self.num_filters * flt_mult, 1, k, 1, padding, bias, None, None))
- return model
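make_net grows the channel count of each block with flt_mult = min(2**l, 8), doubling the width per layer but capping it at 8x num_filters. A small standalone sketch of that progression (just the arithmetic, not the layers):

```python
def filter_widths(num_filters=12, n=5):
    # Output channels for the blocks conv_1 .. conv_n in make_net:
    # width doubles each layer until it hits the 8x cap.
    return [num_filters * min(2 ** l, 8) for l in range(1, n + 1)]

print(filter_widths())  # [24, 48, 96, 96, 96]
```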
-
- def make_block(self, flt_in, flt_out, k, stride, padding, bias, norm, relu):
- m = nn.Sequential()
- m.add_module('conv', nn.Conv2d(flt_in, flt_out, k,
- stride=stride, padding=padding, bias=bias, padding_mode='zeros'))
- if norm is not None:
- m.add_module('norm', norm(flt_out))
- if relu is not None:
- m.add_module('relu', relu)
- return m
-
- def forward(self, x):
- output = self.net(x)
- # output = output.mean((2, 3), True)
- # output = output.squeeze(-1).squeeze(-1)
- # output = output.mean(dim=(-1,-2))
- return output
-
-
-#####
-# Perception VGG19 loss
-#####
-class PerceptualVGG19(nn.Module):
- def __init__(self, feature_layers=[0, 3, 5], use_normalization=False):
- super().__init__()
- # model = models.vgg19(pretrained=True)
- model = models.squeezenet1_1(pretrained=True)
- model.float()
- model.eval()
-
- self.model = model
- self.feature_layers = feature_layers
-
- self.mean = torch.FloatTensor([0.485, 0.456, 0.406])
- self.mean_tensor = None
-
- self.std = torch.FloatTensor([0.229, 0.224, 0.225])
- self.std_tensor = None
-
- self.use_normalization = use_normalization
-
- for param in self.parameters():
- param.requires_grad = False
-
- def normalize(self, x):
- if not self.use_normalization:
- return x
-
- if self.mean_tensor is None:
- self.mean_tensor = Variable(
- self.mean.view(1, 3, 1, 1).expand(x.shape),
- requires_grad=False)
- self.std_tensor = Variable(
- self.std.view(1, 3, 1, 1).expand(x.shape), requires_grad=False)
-
- x = (x + 1) / 2
- return (x - self.mean_tensor) / self.std_tensor
-
- def run(self, x):
- features = []
-
- h = x
-
- for f in range(max(self.feature_layers) + 1):
- h = self.model.features[f](h)
- if f in self.feature_layers:
- not_normed_features = h.clone().view(h.size(0), -1)
- features.append(not_normed_features)
-
- return torch.cat(features, dim=1)
-
- def forward(self, x):
- h = self.normalize(x)
- return self.run(h)
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqa.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqa.py
deleted file mode 100644
index ea29e5d1e2cdf24cfe3447148019e5cb98c3dbf4..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqa.py
+++ /dev/null
@@ -1,180 +0,0 @@
-__author__ = 'aagrawal'
-__version__ = '0.9'
-
-# Interface for accessing the VQA dataset.
-
-# This code is based on the code written by Tsung-Yi Lin for MSCOCO Python API available at the following link:
-# (https://github.com/pdollar/coco/blob/master/PythonAPI/pycocotools/coco.py).
-
-# The following functions are defined:
-# VQA - VQA class that loads VQA annotation file and prepares data structures.
-# getQuesIds - Get question ids that satisfy given filter conditions.
-# getImgIds - Get image ids that satisfy given filter conditions.
-# loadQA - Load questions and answers with the specified question ids.
-# showQA - Display the specified questions and answers.
-# loadRes - Load result file and create result object.
-
-# Help on each function can be accessed by: "help(COCO.function)"
-
-import json
-import datetime
-import copy
-
-
-class VQA:
- def __init__(self, annotation_file=None, question_file=None):
- """
- Constructor of VQA helper class for reading and visualizing questions and answers.
- :param annotation_file (str): location of VQA annotation file
- :return:
- """
- # load dataset
- self.dataset = {}
- self.questions = {}
- self.qa = {}
- self.qqa = {}
- self.imgToQA = {}
-        if annotation_file is not None and question_file is not None:
- print('loading VQA annotations and questions into memory...')
- time_t = datetime.datetime.utcnow()
- dataset = json.load(open(annotation_file, 'r'))
- questions = json.load(open(question_file, 'r'))
- print(datetime.datetime.utcnow() - time_t)
- self.dataset = dataset
- self.questions = questions
- self.createIndex()
-
- def createIndex(self):
- # create index
- print('creating index...')
- imgToQA = {ann['image_id']: [] for ann in self.dataset['annotations']}
- qa = {ann['question_id']: [] for ann in self.dataset['annotations']}
- qqa = {ann['question_id']: [] for ann in self.dataset['annotations']}
- for ann in self.dataset['annotations']:
- imgToQA[ann['image_id']] += [ann]
- qa[ann['question_id']] = ann
- for ques in self.questions['questions']:
- qqa[ques['question_id']] = ques
- print('index created!')
-
- # create class members
- self.qa = qa
- self.qqa = qqa
- self.imgToQA = imgToQA
-
- def info(self):
- """
- Print information about the VQA annotation file.
- :return:
- """
- for key, value in self.dataset['info'].items():
- print('%s: %s' % (key, value))
-
- def getQuesIds(self, imgIds=[], quesTypes=[], ansTypes=[]):
- """
- Get question ids that satisfy given filter conditions. default skips that filter
- :param imgIds (int array) : get question ids for given imgs
- quesTypes (str array) : get question ids for given question types
- ansTypes (str array) : get question ids for given answer types
- :return: ids (int array) : integer array of question ids
- """
-        imgIds = imgIds if isinstance(imgIds, list) else [imgIds]
-        quesTypes = quesTypes if isinstance(quesTypes, list) else [quesTypes]
-        ansTypes = ansTypes if isinstance(ansTypes, list) else [ansTypes]
-
- if len(imgIds) == len(quesTypes) == len(ansTypes) == 0:
- anns = self.dataset['annotations']
- else:
- if not len(imgIds) == 0:
- anns = sum([self.imgToQA[imgId] for imgId in imgIds if imgId in self.imgToQA], [])
- else:
- anns = self.dataset['annotations']
- anns = anns if len(quesTypes) == 0 else [ann for ann in anns if ann['question_type'] in quesTypes]
- anns = anns if len(ansTypes) == 0 else [ann for ann in anns if ann['answer_type'] in ansTypes]
- ids = [ann['question_id'] for ann in anns]
- return ids
-
- def getImgIds(self, quesIds=[], quesTypes=[], ansTypes=[]):
- """
- Get image ids that satisfy given filter conditions. default skips that filter
- :param quesIds (int array) : get image ids for given question ids
- quesTypes (str array) : get image ids for given question types
- ansTypes (str array) : get image ids for given answer types
- :return: ids (int array) : integer array of image ids
- """
-        quesIds = quesIds if isinstance(quesIds, list) else [quesIds]
-        quesTypes = quesTypes if isinstance(quesTypes, list) else [quesTypes]
-        ansTypes = ansTypes if isinstance(ansTypes, list) else [ansTypes]
-
- if len(quesIds) == len(quesTypes) == len(ansTypes) == 0:
- anns = self.dataset['annotations']
- else:
-            if len(quesIds) != 0:
-                # self.qa maps each question id to a single annotation dict,
-                # so collect them in a list rather than flattening with sum().
-                anns = [self.qa[quesId] for quesId in quesIds if quesId in self.qa]
- else:
- anns = self.dataset['annotations']
- anns = anns if len(quesTypes) == 0 else [ann for ann in anns if ann['question_type'] in quesTypes]
- anns = anns if len(ansTypes) == 0 else [ann for ann in anns if ann['answer_type'] in ansTypes]
- ids = [ann['image_id'] for ann in anns]
- return ids
-
- def loadQA(self, ids=[]):
- """
- Load questions and answers with the specified question ids.
- :param ids (int array) : integer ids specifying question ids
- :return: qa (object array) : loaded qa objects
- """
-        if isinstance(ids, list):
-            return [self.qa[id] for id in ids]
-        elif isinstance(ids, int):
-            return [self.qa[ids]]
-
- def showQA(self, anns):
- """
- Display the specified annotations.
- :param anns (array of object): annotations to display
- :return: None
- """
-        if len(anns) == 0:
-            return
- for ann in anns:
- quesId = ann['question_id']
- print("Question: %s" % (self.qqa[quesId]['question']))
- for ans in ann['answers']:
- print("Answer %d: %s" % (ans['answer_id'], ans['answer']))
-
- def loadRes(self, resFile, quesFile):
- """
- Load result file and return a result object.
- :param resFile (str) : file name of result file
- :return: res (obj) : result api object
- """
- res = VQA()
- res.questions = json.load(open(quesFile))
- res.dataset['info'] = copy.deepcopy(self.questions['info'])
- res.dataset['task_type'] = copy.deepcopy(self.questions['task_type'])
- res.dataset['data_type'] = copy.deepcopy(self.questions['data_type'])
- res.dataset['data_subtype'] = copy.deepcopy(self.questions['data_subtype'])
- res.dataset['license'] = copy.deepcopy(self.questions['license'])
-
- print('Loading and preparing results... ')
- time_t = datetime.datetime.utcnow()
- anns = json.load(open(resFile))
- assert type(anns) == list, 'results is not an array of objects'
- annsQuesIds = [ann['question_id'] for ann in anns]
- assert set(annsQuesIds) == set(self.getQuesIds()), \
-            'Results do not correspond to current VQA set. Either the results do not have predictions for all question ids in the annotation file, or there is at least one question id that does not belong to the question ids in the annotation file.'
- for ann in anns:
- quesId = ann['question_id']
- if res.dataset['task_type'] == 'Multiple Choice':
- assert ann['answer'] in self.qqa[quesId][
- 'multiple_choices'], 'predicted answer is not one of the multiple choices'
- qaAnn = self.qa[quesId]
- ann['image_id'] = qaAnn['image_id']
- ann['question_type'] = qaAnn['question_type']
- ann['answer_type'] = qaAnn['answer_type']
- print('DONE (t=%0.2fs)' % ((datetime.datetime.utcnow() - time_t).total_seconds()))
-
- res.dataset['annotations'] = anns
- res.createIndex()
- return res
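The filtering logic inside `getQuesIds` above (each filter is skipped when its argument is empty) can be sketched as a standalone snippet; the annotation dicts below are toy stand-ins for the real VQA annotation file, not actual VQA data.

```python
# Toy annotations mirroring the fields that VQA.getQuesIds filters on.
annotations = [
    {"question_id": 1, "image_id": 10, "question_type": "what color", "answer_type": "other"},
    {"question_id": 2, "image_id": 10, "question_type": "is this", "answer_type": "yes/no"},
    {"question_id": 3, "image_id": 11, "question_type": "is this", "answer_type": "yes/no"},
]

def get_ques_ids(anns, img_ids=(), ques_types=(), ans_types=()):
    # Each filter is applied only when its argument is non-empty,
    # exactly like the default-skips-that-filter behaviour in the class.
    if img_ids:
        anns = [a for a in anns if a["image_id"] in img_ids]
    if ques_types:
        anns = [a for a in anns if a["question_type"] in ques_types]
    if ans_types:
        anns = [a for a in anns if a["answer_type"] in ans_types]
    return [a["question_id"] for a in anns]

print(get_ques_ids(annotations, img_ids=[10], ans_types=["yes/no"]))  # [2]
```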
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_class.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_class.cpp
deleted file mode 100644
index 5369cb064cc9fee76546529398787980f9c4c76e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_class.cpp
+++ /dev/null
@@ -1,449 +0,0 @@
-/*
- tests/test_class.cpp -- test py::class_ definitions and basic functionality
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-#include "constructor_stats.h"
-#include "local_bindings.h"
-#include <utility>
-
-#if defined(_MSC_VER)
-# pragma warning(disable: 4324) // warning C4324: structure was padded due to alignment specifier
-#endif
-
-// test_brace_initialization
-struct NoBraceInitialization {
-    NoBraceInitialization(std::vector<int> v) : vec{std::move(v)} {}
-    template <typename T>
-    NoBraceInitialization(std::initializer_list<T> l) : vec(l) {}
-
-    std::vector<int> vec;
-};
-
-TEST_SUBMODULE(class_, m) {
- // test_instance
- struct NoConstructor {
- NoConstructor() = default;
- NoConstructor(const NoConstructor &) = default;
- NoConstructor(NoConstructor &&) = default;
- static NoConstructor *new_instance() {
- auto *ptr = new NoConstructor();
- print_created(ptr, "via new_instance");
- return ptr;
- }
- ~NoConstructor() { print_destroyed(this); }
- };
-
-    py::class_<NoConstructor>(m, "NoConstructor")
- .def_static("new_instance", &NoConstructor::new_instance, "Return an instance");
-
- // test_inheritance
- class Pet {
- public:
- Pet(const std::string &name, const std::string &species)
- : m_name(name), m_species(species) {}
- std::string name() const { return m_name; }
- std::string species() const { return m_species; }
- private:
- std::string m_name;
- std::string m_species;
- };
-
- class Dog : public Pet {
- public:
- Dog(const std::string &name) : Pet(name, "dog") {}
- std::string bark() const { return "Woof!"; }
- };
-
- class Rabbit : public Pet {
- public:
- Rabbit(const std::string &name) : Pet(name, "parrot") {}
- };
-
- class Hamster : public Pet {
- public:
- Hamster(const std::string &name) : Pet(name, "rodent") {}
- };
-
- class Chimera : public Pet {
- Chimera() : Pet("Kimmy", "chimera") {}
- };
-
-    py::class_<Pet> pet_class(m, "Pet");
-    pet_class
-        .def(py::init<std::string, std::string>())
-        .def("name", &Pet::name)
-        .def("species", &Pet::species);
-
-    /* One way of declaring a subclass relationship: reference parent's class_ object */
-    py::class_<Dog>(m, "Dog", pet_class)
-        .def(py::init<std::string>());
-
-    /* Another way of declaring a subclass relationship: reference parent's C++ type */
-    py::class_<Rabbit, Pet>(m, "Rabbit")
-        .def(py::init<std::string>());
-
-    /* And another: list parent in class template arguments */
-    py::class_<Hamster, Pet>(m, "Hamster")
-        .def(py::init<std::string>());
-
-    /* Constructors are not inherited by default */
-    py::class_<Chimera, Pet>(m, "Chimera");
-
- m.def("pet_name_species", [](const Pet &pet) { return pet.name() + " is a " + pet.species(); });
- m.def("dog_bark", [](const Dog &dog) { return dog.bark(); });
-
- // test_automatic_upcasting
- struct BaseClass {
- BaseClass() = default;
- BaseClass(const BaseClass &) = default;
- BaseClass(BaseClass &&) = default;
- virtual ~BaseClass() {}
- };
- struct DerivedClass1 : BaseClass { };
- struct DerivedClass2 : BaseClass { };
-
-    py::class_<BaseClass>(m, "BaseClass").def(py::init<>());
-    py::class_<DerivedClass1, BaseClass>(m, "DerivedClass1").def(py::init<>());
-    py::class_<DerivedClass2, BaseClass>(m, "DerivedClass2").def(py::init<>());
-
- m.def("return_class_1", []() -> BaseClass* { return new DerivedClass1(); });
- m.def("return_class_2", []() -> BaseClass* { return new DerivedClass2(); });
- m.def("return_class_n", [](int n) -> BaseClass* {
- if (n == 1) return new DerivedClass1();
- if (n == 2) return new DerivedClass2();
- return new BaseClass();
- });
- m.def("return_none", []() -> BaseClass* { return nullptr; });
-
- // test_isinstance
- m.def("check_instances", [](py::list l) {
- return py::make_tuple(
-            py::isinstance<py::tuple>(l[0]),
-            py::isinstance<py::dict>(l[1]),
-            py::isinstance<Pet>(l[2]),
-            py::isinstance<Pet>(l[3]),
-            py::isinstance<Dog>(l[4]),
-            py::isinstance<Rabbit>(l[5]),
-            py::isinstance<UnregisteredType>(l[6])
- );
- });
-
- // test_mismatched_holder
- struct MismatchBase1 { };
- struct MismatchDerived1 : MismatchBase1 { };
-
- struct MismatchBase2 { };
- struct MismatchDerived2 : MismatchBase2 { };
-
- m.def("mismatched_holder_1", []() {
- auto mod = py::module::import("__main__");
-        py::class_<MismatchBase1, std::shared_ptr<MismatchBase1>>(mod, "MismatchBase1");
-        py::class_<MismatchDerived1, MismatchBase1>(mod, "MismatchDerived1");
- });
- m.def("mismatched_holder_2", []() {
- auto mod = py::module::import("__main__");
-        py::class_<MismatchBase2>(mod, "MismatchBase2");
-        py::class_<MismatchDerived2, std::shared_ptr<MismatchDerived2>,
-                   MismatchBase2>(mod, "MismatchDerived2");
- });
-
- // test_override_static
- // #511: problem with inheritance + overwritten def_static
- struct MyBase {
-        static std::unique_ptr<MyBase> make() {
-            return std::unique_ptr<MyBase>(new MyBase());
- }
- };
-
- struct MyDerived : MyBase {
-        static std::unique_ptr<MyDerived> make() {
-            return std::unique_ptr<MyDerived>(new MyDerived());
- }
- };
-
-    py::class_<MyBase, std::unique_ptr<MyBase>>(m, "MyBase")
-        .def_static("make", &MyBase::make);
-
-    py::class_<MyDerived, MyBase, std::unique_ptr<MyDerived>>(m, "MyDerived")
-        .def_static("make", &MyDerived::make)
-        .def_static("make2", &MyDerived::make);
-
- // test_implicit_conversion_life_support
- struct ConvertibleFromUserType {
- int i;
-
- ConvertibleFromUserType(UserType u) : i(u.value()) { }
- };
-
-    py::class_<ConvertibleFromUserType>(m, "AcceptsUserType")
-        .def(py::init<UserType>());
-    py::implicitly_convertible<UserType, ConvertibleFromUserType>();
-
- m.def("implicitly_convert_argument", [](const ConvertibleFromUserType &r) { return r.i; });
- m.def("implicitly_convert_variable", [](py::object o) {
- // `o` is `UserType` and `r` is a reference to a temporary created by implicit
- // conversion. This is valid when called inside a bound function because the temp
- // object is attached to the same life support system as the arguments.
-        const auto &r = o.cast<const ConvertibleFromUserType &>();
- return r.i;
- });
- m.add_object("implicitly_convert_variable_fail", [&] {
- auto f = [](PyObject *, PyObject *args) -> PyObject * {
-            auto o = py::reinterpret_borrow<py::tuple>(args)[0];
- try { // It should fail here because there is no life support.
-                o.cast<const ConvertibleFromUserType &>();
- } catch (const py::cast_error &e) {
- return py::str(e.what()).release().ptr();
- }
- return py::str().release().ptr();
- };
-
- auto def = new PyMethodDef{"f", f, METH_VARARGS, nullptr};
-        return py::reinterpret_steal<py::object>(PyCFunction_NewEx(def, nullptr, m.ptr()));
- }());
-
- // test_operator_new_delete
- struct HasOpNewDel {
- std::uint64_t i;
- static void *operator new(size_t s) { py::print("A new", s); return ::operator new(s); }
- static void *operator new(size_t s, void *ptr) { py::print("A placement-new", s); return ptr; }
- static void operator delete(void *p) { py::print("A delete"); return ::operator delete(p); }
- };
- struct HasOpNewDelSize {
- std::uint32_t i;
- static void *operator new(size_t s) { py::print("B new", s); return ::operator new(s); }
- static void *operator new(size_t s, void *ptr) { py::print("B placement-new", s); return ptr; }
- static void operator delete(void *p, size_t s) { py::print("B delete", s); return ::operator delete(p); }
- };
- struct AliasedHasOpNewDelSize {
- std::uint64_t i;
- static void *operator new(size_t s) { py::print("C new", s); return ::operator new(s); }
- static void *operator new(size_t s, void *ptr) { py::print("C placement-new", s); return ptr; }
- static void operator delete(void *p, size_t s) { py::print("C delete", s); return ::operator delete(p); }
- virtual ~AliasedHasOpNewDelSize() = default;
- AliasedHasOpNewDelSize() = default;
- AliasedHasOpNewDelSize(const AliasedHasOpNewDelSize&) = delete;
- };
- struct PyAliasedHasOpNewDelSize : AliasedHasOpNewDelSize {
- PyAliasedHasOpNewDelSize() = default;
- PyAliasedHasOpNewDelSize(int) { }
- std::uint64_t j;
- };
- struct HasOpNewDelBoth {
- std::uint32_t i[8];
- static void *operator new(size_t s) { py::print("D new", s); return ::operator new(s); }
- static void *operator new(size_t s, void *ptr) { py::print("D placement-new", s); return ptr; }
- static void operator delete(void *p) { py::print("D delete"); return ::operator delete(p); }
- static void operator delete(void *p, size_t s) { py::print("D wrong delete", s); return ::operator delete(p); }
- };
-    py::class_<HasOpNewDel>(m, "HasOpNewDel").def(py::init<>());
-    py::class_<HasOpNewDelSize>(m, "HasOpNewDelSize").def(py::init<>());
-    py::class_<HasOpNewDelBoth>(m, "HasOpNewDelBoth").def(py::init<>());
-    py::class_<AliasedHasOpNewDelSize, PyAliasedHasOpNewDelSize> aliased(m, "AliasedHasOpNewDelSize");
- aliased.def(py::init<>());
- aliased.attr("size_noalias") = py::int_(sizeof(AliasedHasOpNewDelSize));
- aliased.attr("size_alias") = py::int_(sizeof(PyAliasedHasOpNewDelSize));
-
- // This test is actually part of test_local_bindings (test_duplicate_local), but we need a
- // definition in a different compilation unit within the same module:
-    bind_local<LocalExternal, 17>(m, "LocalExternal", py::module_local());
-
- // test_bind_protected_functions
- class ProtectedA {
- protected:
- int foo() const { return value; }
-
- private:
- int value = 42;
- };
-
- class PublicistA : public ProtectedA {
- public:
- using ProtectedA::foo;
- };
-
-    py::class_<ProtectedA>(m, "ProtectedA")
-        .def(py::init<>())
-#if !defined(_MSC_VER) || _MSC_VER >= 1910
-        .def("foo", &PublicistA::foo);
-#else
-        .def("foo", static_cast<int (ProtectedA::*)() const>(&PublicistA::foo));
-#endif
-
- class ProtectedB {
- public:
- virtual ~ProtectedB() = default;
- ProtectedB() = default;
- ProtectedB(const ProtectedB &) = delete;
-
- protected:
- virtual int foo() const { return value; }
-
- private:
- int value = 42;
- };
-
- class TrampolineB : public ProtectedB {
- public:
- int foo() const override { PYBIND11_OVERLOAD(int, ProtectedB, foo, ); }
- };
-
- class PublicistB : public ProtectedB {
- public:
- using ProtectedB::foo;
- };
-
-    py::class_<ProtectedB, TrampolineB>(m, "ProtectedB")
-        .def(py::init<>())
-#if !defined(_MSC_VER) || _MSC_VER >= 1910
-        .def("foo", &PublicistB::foo);
-#else
-        .def("foo", static_cast<int (ProtectedB::*)() const>(&PublicistB::foo));
-#endif
-
- // test_brace_initialization
- struct BraceInitialization {
- int field1;
- std::string field2;
- };
-
-    py::class_<BraceInitialization>(m, "BraceInitialization")
-        .def(py::init<int, const std::string &>())
-        .def_readwrite("field1", &BraceInitialization::field1)
-        .def_readwrite("field2", &BraceInitialization::field2);
-    // We *don't* want to construct using braces when the given constructor argument maps to a
-    // constructor, because brace initialization could go to the wrong place (in particular when
-    // there is also an `initializer_list`-accepting constructor):
-    py::class_<NoBraceInitialization>(m, "NoBraceInitialization")
-        .def(py::init<std::vector<int>>())
-        .def_readonly("vec", &NoBraceInitialization::vec);
-
- // test_reentrant_implicit_conversion_failure
- // #1035: issue with runaway reentrant implicit conversion
- struct BogusImplicitConversion {
- BogusImplicitConversion(const BogusImplicitConversion &) { }
- };
-
-    py::class_<BogusImplicitConversion>(m, "BogusImplicitConversion")
-        .def(py::init<const BogusImplicitConversion &>());
-
-    py::implicitly_convertible<int, BogusImplicitConversion>();
-
- // test_qualname
- // #1166: nested class docstring doesn't show nested name
- // Also related: tests that __qualname__ is set properly
- struct NestBase {};
- struct Nested {};
-    py::class_<NestBase> base(m, "NestBase");
-    base.def(py::init<>());
-    py::class_<Nested>(base, "Nested")
- .def(py::init<>())
- .def("fn", [](Nested &, int, NestBase &, Nested &) {})
- .def("fa", [](Nested &, int, NestBase &, Nested &) {},
- "a"_a, "b"_a, "c"_a);
- base.def("g", [](NestBase &, Nested &) {});
- base.def("h", []() { return NestBase(); });
-
- // test_error_after_conversion
- // The second-pass path through dispatcher() previously didn't
- // remember which overload was used, and would crash trying to
- // generate a useful error message
-
- struct NotRegistered {};
- struct StringWrapper { std::string str; };
- m.def("test_error_after_conversions", [](int) {});
- m.def("test_error_after_conversions",
- [](StringWrapper) -> NotRegistered { return {}; });
-    py::class_<StringWrapper>(m, "StringWrapper").def(py::init<std::string>());
-    py::implicitly_convertible<std::string, StringWrapper>();
-
- #if defined(PYBIND11_CPP17)
- struct alignas(1024) Aligned {
- std::uintptr_t ptr() const { return (uintptr_t) this; }
- };
-    py::class_<Aligned>(m, "Aligned")
- .def(py::init<>())
- .def("ptr", &Aligned::ptr);
- #endif
-
- // test_final
- struct IsFinal final {};
-    py::class_<IsFinal>(m, "IsFinal", py::is_final());
-
- // test_non_final_final
- struct IsNonFinalFinal {};
-    py::class_<IsNonFinalFinal>(m, "IsNonFinalFinal", py::is_final());
-
- struct PyPrintDestructor {
- PyPrintDestructor() {}
- ~PyPrintDestructor() {
- py::print("Print from destructor");
- }
- void throw_something() { throw std::runtime_error("error"); }
- };
-    py::class_<PyPrintDestructor>(m, "PyPrintDestructor")
- .def(py::init<>())
- .def("throw_something", &PyPrintDestructor::throw_something);
-}
-
-template <int N> class BreaksBase { public:
-    virtual ~BreaksBase() = default;
-    BreaksBase() = default;
-    BreaksBase(const BreaksBase&) = delete;
-};
-template <int N> class BreaksTramp : public BreaksBase<N> {};
-// These should all compile just fine:
-typedef py::class_<BreaksBase<1>, std::unique_ptr<BreaksBase<1>>, BreaksTramp<1>> DoesntBreak1;
-typedef py::class_<BreaksBase<2>, BreaksTramp<2>, std::unique_ptr<BreaksBase<2>>> DoesntBreak2;
-typedef py::class_<BreaksBase<3>, std::unique_ptr<BreaksBase<3>>> DoesntBreak3;
-typedef py::class_<BreaksBase<4>, BreaksTramp<4>> DoesntBreak4;
-typedef py::class_<BreaksBase<5>> DoesntBreak5;
-typedef py::class_<BreaksBase<6>, std::shared_ptr<BreaksBase<6>>, BreaksTramp<6>> DoesntBreak6;
-typedef py::class_<BreaksBase<7>, BreaksTramp<7>, std::shared_ptr<BreaksBase<7>>> DoesntBreak7;
-typedef py::class_<BreaksBase<8>, std::shared_ptr<BreaksBase<8>>> DoesntBreak8;
-#define CHECK_BASE(N) static_assert(std::is_same<typename DoesntBreak##N::type, BreaksBase<N>>::value, \
-        "DoesntBreak" #N " has wrong type!")
-CHECK_BASE(1); CHECK_BASE(2); CHECK_BASE(3); CHECK_BASE(4); CHECK_BASE(5); CHECK_BASE(6); CHECK_BASE(7); CHECK_BASE(8);
-#define CHECK_ALIAS(N) static_assert(DoesntBreak##N::has_alias && std::is_same<typename DoesntBreak##N::type_alias, BreaksTramp<N>>::value, \
-        "DoesntBreak" #N " has wrong type_alias!")
-#define CHECK_NOALIAS(N) static_assert(!DoesntBreak##N::has_alias && std::is_void<typename DoesntBreak##N::type_alias>::value, \
-        "DoesntBreak" #N " has type alias, but shouldn't!")
-CHECK_ALIAS(1); CHECK_ALIAS(2); CHECK_NOALIAS(3); CHECK_ALIAS(4); CHECK_NOALIAS(5); CHECK_ALIAS(6); CHECK_ALIAS(7); CHECK_NOALIAS(8);
-#define CHECK_HOLDER(N, TYPE) static_assert(std::is_same<typename DoesntBreak##N::holder_type, std::TYPE##_ptr<BreaksBase<N>>>::value, \
-        "DoesntBreak" #N " has wrong holder_type!")
-CHECK_HOLDER(1, unique); CHECK_HOLDER(2, unique); CHECK_HOLDER(3, unique); CHECK_HOLDER(4, unique); CHECK_HOLDER(5, unique);
-CHECK_HOLDER(6, shared); CHECK_HOLDER(7, shared); CHECK_HOLDER(8, shared);
-
-// There's no nice way to test that these fail because they fail to compile; leave them here,
-// though, so that they can be manually tested by uncommenting them (and seeing that compilation
-// failures occur).
-
-// We have to actually look into the type: the typedef alone isn't enough to instantiate the type:
-#define CHECK_BROKEN(N) static_assert(std::is_same<typename Breaks##N::type, BreaksBase<-N>>::value, \
-        "Breaks" #N " has wrong type!");
-
-//// Two holder classes:
-//typedef py::class_<BreaksBase<-1>, std::unique_ptr<BreaksBase<-1>>, std::unique_ptr<BreaksBase<-1>>> Breaks1;
-//CHECK_BROKEN(1);
-//// Two aliases:
-//typedef py::class_<BreaksBase<-2>, BreaksTramp<-2>, BreaksTramp<-2>> Breaks2;
-//CHECK_BROKEN(2);
-//// Holder + 2 aliases
-//typedef py::class_<BreaksBase<-3>, std::unique_ptr<BreaksBase<-3>>, BreaksTramp<-3>, BreaksTramp<-3>> Breaks3;
-//CHECK_BROKEN(3);
-//// Alias + 2 holders
-//typedef py::class_<BreaksBase<-4>, std::unique_ptr<BreaksBase<-4>>, BreaksTramp<-4>, std::shared_ptr<BreaksBase<-4>>> Breaks4;
-//CHECK_BROKEN(4);
-//// Invalid option (not a subclass or holder)
-//typedef py::class_<BreaksBase<-5>, BreaksTramp<-4>> Breaks5;
-//CHECK_BROKEN(5);
-//// Invalid option: multiple inheritance not supported:
-//template <> struct BreaksBase<-8> : BreaksBase<-6>, BreaksBase<-7> {};
-//typedef py::class_<BreaksBase<-8>, BreaksBase<-6>, BreaksBase<-7>> Breaks8;
-//CHECK_BROKEN(8);
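The `Pet`/`Dog` bindings above are exercised from Python in the companion test file. Since the compiled module isn't available here, a pure-Python stand-in (the class names mirror the bindings, but this is not the pybind11 module itself) sketches the inheritance behaviour the test expects:

```python
# Pure-Python stand-ins for the Pet/Dog bindings above; they only mirror
# the behaviour the compiled pybind11 module is expected to show.
class Pet:
    def __init__(self, name, species):
        self._name, self._species = name, species
    def name(self):
        return self._name
    def species(self):
        return self._species

class Dog(Pet):
    def __init__(self, name):
        # Matches the C++ side: Dog(name) delegates to Pet(name, "dog").
        super().__init__(name, "dog")
    def bark(self):
        return "Woof!"

molly = Dog("Molly")
# A Dog is usable wherever a Pet is expected (automatic upcasting).
assert isinstance(molly, Pet)
print(molly.name() + " is a " + molly.species())  # Molly is a dog
```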
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/sort.h
deleted file mode 100644
index 3e357fde691ad27f70058120653ea1bdc0b39e91..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/sort.h
+++ /dev/null
@@ -1,522 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-
-// TODO: Move into system::cuda
-
-#pragma once
-
-#include
-#include
-
-#if THRUST_CPP_DIALECT >= 2014
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-
-namespace thrust
-{
-
-namespace system { namespace cuda { namespace detail
-{
-
-// Non-ContiguousIterator input and output iterators
-template <
- typename DerivedPolicy
-, typename ForwardIt, typename Size, typename StrictWeakOrdering
->
-auto async_stable_sort_n(
-  execution_policy<DerivedPolicy>& policy,
- ForwardIt first,
- Size n,
- StrictWeakOrdering comp
-) ->
- typename std::enable_if<
-    negation<is_contiguous_iterator<ForwardIt>>::value
- , unique_eager_event
- >::type
-{
-  using T = typename iterator_traits<ForwardIt>::value_type;
-
- auto const device_alloc = get_async_device_allocator(policy);
-
- // Create device-side buffer.
-
- // FIXME: Combine this temporary allocation with the main one for CUB.
-  auto device_buffer = uninitialized_allocate_unique_n<T>(device_alloc, n);
-
- auto const device_buffer_ptr = device_buffer.get();
-
- // Synthesize a suitable new execution policy, because we don't want to
- // try and extract twice from the one we were passed.
-  typename remove_cvref_t<decltype(policy)>::tag_type tag_policy{};
-
- // Copy from the input into the buffer.
-
- auto new_policy0 = thrust::detail::derived_cast(policy).rebind_after(
- std::move(device_buffer)
- );
-
- THRUST_STATIC_ASSERT((
-    std::tuple_size<decltype(extract_dependencies(policy))>::value + 1
-    <=
-    std::tuple_size<decltype(extract_dependencies(new_policy0))>::value
- ));
-
- auto f0 = async_copy_n(
- new_policy0
- , tag_policy
- , first
- , n
- , device_buffer_ptr
- );
-
- // Sort the buffer.
-
- auto new_policy1 = thrust::detail::derived_cast(policy).rebind_after(
- std::move(f0)
- );
-
- THRUST_STATIC_ASSERT((
-    std::tuple_size<decltype(extract_dependencies(policy))>::value + 1
-    <=
-    std::tuple_size<decltype(extract_dependencies(new_policy1))>::value
- ));
-
- auto f1 = async_sort_n(
- new_policy1
- , tag_policy
- , device_buffer_ptr
- , n
- , comp
- );
-
- // Copy from the buffer into the input.
- // FIXME: Combine this with the potential memcpy at the end of the main sort
- // routine.
-
- auto new_policy2 = thrust::detail::derived_cast(policy).rebind_after(
- std::move(f1)
- );
-
- THRUST_STATIC_ASSERT((
-    std::tuple_size<decltype(extract_dependencies(policy))>::value + 1
-    <=
-    std::tuple_size<decltype(extract_dependencies(new_policy2))>::value
- ));
-
- return async_copy_n(
- new_policy2
- , tag_policy
- , device_buffer_ptr
- , n
- , first
- );
-}
-
-// ContiguousIterator iterators
-// Non-Scalar value type or user-defined StrictWeakOrdering
-template <
- typename DerivedPolicy
-, typename ForwardIt, typename Size, typename StrictWeakOrdering
->
-auto async_stable_sort_n(
-  execution_policy<DerivedPolicy>& policy,
- ForwardIt first,
- Size n,
- StrictWeakOrdering comp
-) ->
- typename std::enable_if<
- conjunction<
-      is_contiguous_iterator<ForwardIt>
- , disjunction<
- negation<
- std::is_scalar<
-            typename iterator_traits<ForwardIt>::value_type
- >
- >
- , negation<
-          is_operator_less_or_greater_function_object<StrictWeakOrdering>
- >
- >
- >::value
- , unique_eager_event
- >::type
-{
- auto const device_alloc = get_async_device_allocator(policy);
-
- unique_eager_event e;
-
- // Determine temporary device storage requirements.
-
- size_t tmp_size = 0;
- thrust::cuda_cub::throw_on_error(
- thrust::cuda_cub::__merge_sort::doit_step<
- /* Sort items? */ std::false_type, /* Stable? */ std::true_type
- >(
- nullptr
- , tmp_size
- , first
- , static_cast(nullptr) // Items.
- , n
- , comp
- , nullptr // Null stream, just for sizing.
- , THRUST_DEBUG_SYNC_FLAG
- )
- , "after merge sort sizing"
- );
-
- // Allocate temporary storage.
-
- auto content = uninitialized_allocate_unique_n(
- device_alloc, tmp_size
- );
-
- // The array was dynamically allocated, so we assume that it's suitably
- // aligned for any type of data. `malloc`/`cudaMalloc`/`new`/`std::allocator`
- // make this guarantee.
- auto const content_ptr = content.get();
-
-  void* const tmp_ptr = static_cast<void*>(
- raw_pointer_cast(content_ptr)
- );
-
- // Set up stream with dependencies.
-
- cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(policy);
-
- if (thrust::cuda_cub::default_stream() != user_raw_stream)
- {
- e = make_dependent_event(
- std::tuple_cat(
- std::make_tuple(
- std::move(content)
- , unique_stream(nonowning, user_raw_stream)
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(policy))
- )
- )
- );
- }
- else
- {
- e = make_dependent_event(
- std::tuple_cat(
- std::make_tuple(
- std::move(content)
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(policy))
- )
- )
- );
- }
-
- // Run merge sort.
-
- thrust::cuda_cub::throw_on_error(
- thrust::cuda_cub::__merge_sort::doit_step<
- /* Sort items? */ std::false_type, /* Stable? */ std::true_type
- >(
- tmp_ptr
- , tmp_size
- , first
- , static_cast(nullptr) // Items.
- , n
- , comp
- , e.stream().native_handle()
- , THRUST_DEBUG_SYNC_FLAG
- )
- , "after merge sort sizing"
- );
-
- return e;
-}
-
-template <typename T, typename Size, typename StrictWeakOrdering>
-typename std::enable_if<
-  is_operator_less_function_object<StrictWeakOrdering>::value
-, cudaError_t
->::type
-invoke_radix_sort(
- cudaStream_t stream
-, void* tmp_ptr
-, std::size_t& tmp_size
-, cub::DoubleBuffer<T>& keys
-, Size& n
-, StrictWeakOrdering
-)
-{
- return cub::DeviceRadixSort::SortKeys(
- tmp_ptr
- , tmp_size
- , keys
- , n
- , 0
- , sizeof(T) * 8
- , stream
- , THRUST_DEBUG_SYNC_FLAG
- );
-}
-
-template <typename T, typename Size, typename StrictWeakOrdering>
-typename std::enable_if<
-  is_operator_greater_function_object<StrictWeakOrdering>::value
-, cudaError_t
->::type
-invoke_radix_sort(
- cudaStream_t stream
-, void* tmp_ptr
-, std::size_t& tmp_size
-, cub::DoubleBuffer<T>& keys
-, Size& n
-, StrictWeakOrdering
-)
-{
- return cub::DeviceRadixSort::SortKeysDescending(
- tmp_ptr
- , tmp_size
- , keys
- , n
- , 0
- , sizeof(T) * 8
- , stream
- , THRUST_DEBUG_SYNC_FLAG
- );
-}
-
-// ContiguousIterator iterators
-// Scalar value type
-// operator< or operator>
-template <
- typename DerivedPolicy
-, typename ForwardIt, typename Size, typename StrictWeakOrdering
->
-auto async_stable_sort_n(
-  execution_policy<DerivedPolicy>& policy
-, ForwardIt first
-, Size n
-, StrictWeakOrdering comp
-) ->
- typename std::enable_if<
- conjunction<
-      is_contiguous_iterator<ForwardIt>
- , std::is_scalar<
-        typename iterator_traits<ForwardIt>::value_type
- >
-    , is_operator_less_or_greater_function_object<StrictWeakOrdering>
- >::value
- , unique_eager_event
- >::type
-{
-  using T = typename iterator_traits<ForwardIt>::value_type;
-
- auto const device_alloc = get_async_device_allocator(policy);
-
- unique_eager_event e;
-
-  cub::DoubleBuffer<T> keys(
- raw_pointer_cast(&*first), nullptr
- );
-
- // Determine temporary device storage requirements.
-
- size_t tmp_size = 0;
- thrust::cuda_cub::throw_on_error(
- invoke_radix_sort(
- nullptr // Null stream, just for sizing.
- , nullptr
- , tmp_size
- , keys
- , n
- , comp
- )
- , "after radix sort sizing"
- );
-
- // Allocate temporary storage.
-
- size_t keys_temp_storage = thrust::detail::aligned_storage_size(
- sizeof(T) * n, 128
- );
-
- auto content = uninitialized_allocate_unique_n(
- device_alloc, keys_temp_storage + tmp_size
- );
-
- // The array was dynamically allocated, so we assume that it's suitably
- // aligned for any type of data. `malloc`/`cudaMalloc`/`new`/`std::allocator`
- // make this guarantee.
- auto const content_ptr = content.get();
-
-  keys.d_buffers[1] = thrust::detail::aligned_reinterpret_cast<T*>(
- raw_pointer_cast(content_ptr)
- );
-
-  void* const tmp_ptr = static_cast<void*>(
- raw_pointer_cast(content_ptr + keys_temp_storage)
- );
-
- // Set up stream with dependencies.
-
- cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(policy);
-
- if (thrust::cuda_cub::default_stream() != user_raw_stream)
- {
- e = make_dependent_event(
- std::tuple_cat(
- std::make_tuple(
- std::move(content)
- , unique_stream(nonowning, user_raw_stream)
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(policy))
- )
- )
- );
- }
- else
- {
- e = make_dependent_event(
- std::tuple_cat(
- std::make_tuple(
- std::move(content)
- )
- , extract_dependencies(
- std::move(thrust::detail::derived_cast(policy))
- )
- )
- );
- }
-
- // Run radix sort.
-
- thrust::cuda_cub::throw_on_error(
- invoke_radix_sort(
- e.stream().native_handle()
- , tmp_ptr
- , tmp_size
- , keys
- , n
- , comp
- )
- , "after radix sort launch"
- );
-
- if (0 != keys.selector)
- {
- auto new_policy0 = thrust::detail::derived_cast(policy).rebind_after(
- std::move(e)
- );
-
- THRUST_STATIC_ASSERT((
-      std::tuple_size<decltype(extract_dependencies(policy))>::value + 1
-      <=
-      std::tuple_size<decltype(extract_dependencies(new_policy0))>::value
- ));
-
- // Synthesize a suitable new execution policy, because we don't want to
- // try and extract twice from the one we were passed.
-    typename remove_cvref_t<decltype(policy)>::tag_type tag_policy{};
-
- using return_future = decltype(e);
- return return_future(async_copy_n(
- new_policy0
- , tag_policy
- , keys.d_buffers[1]
- , n
- , keys.d_buffers[0]
- ));
- }
- else
- return e;
-}
-
-}}} // namespace system::cuda::detail
-
-namespace cuda_cub
-{
-
-// ADL entry point.
-template <
- typename DerivedPolicy
-, typename ForwardIt, typename Sentinel, typename StrictWeakOrdering
->
-auto async_stable_sort(
-  execution_policy<DerivedPolicy>& policy,
- ForwardIt first,
- Sentinel last,
- StrictWeakOrdering comp
-)
-// A GCC 5 bug requires an explicit trailing return type here, so stick with
-// THRUST_DECLTYPE_RETURNS for now.
-THRUST_DECLTYPE_RETURNS(
- thrust::system::cuda::detail::async_stable_sort_n(
- policy, first, distance(first, last), comp
- )
-)
-
-} // cuda_cub
-
-} // end namespace thrust
-
-#endif // THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-
-#endif
-
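The radix-sort path above sorts into a `cub::DoubleBuffer` and, when the sorted result ends up in the temporary buffer (`keys.selector != 0`), issues one extra `async_copy_n` back into the user's data. The bookkeeping can be sketched in plain Python; the toy in-memory sort stands in for the CUDA kernel, and the single ping-pong pass is an illustrative assumption, not what `cub::DeviceRadixSort` actually does internally.

```python
# Toy model of the cub::DoubleBuffer bookkeeping used above: the sort may
# finish with its output in either buffer, and `selector` records which one.
def double_buffer_sort(keys):
    buffers = [list(keys), [None] * len(keys)]
    selector = 0
    # Stand-in for the radix-sort kernel: pretend the data ping-pongs once
    # between the two buffers, so the output lands in buffer 1.
    buffers[1] = sorted(buffers[selector])
    selector = 1
    # Mirror the tail of async_stable_sort_n: if the result is not in the
    # user-visible buffer (selector != 0), copy it back.
    if selector != 0:
        buffers[0] = list(buffers[selector])
        selector = 0
    return buffers[selector]

print(double_buffer_sort([3, 1, 2]))  # [1, 2, 3]
```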
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/conversation/conversation_video.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/conversation/conversation_video.py
deleted file mode 100644
index cd96a7a275f691519cd86200d7ed178d7cd2b75f..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/conversation/conversation_video.py
+++ /dev/null
@@ -1,248 +0,0 @@
-"""
-Conversation prompt template of Video-LLaMA.
-Adapted from: https://github.com/Vision-CAIR/MiniGPT-4/blob/main/minigpt4/conversation/conversation.py
-"""
-import argparse
-import time
-from PIL import Image
-
-import torch
-from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaTokenizer
-from transformers import StoppingCriteria, StoppingCriteriaList
-
-import dataclasses
-from enum import auto, Enum
-from typing import List, Tuple, Any
-import os
-from video_llama.common.registry import registry
-from video_llama.processors.video_processor import ToTHWC,ToUint8,load_video
-from video_llama.processors import Blip2ImageEvalProcessor
-class SeparatorStyle(Enum):
- """Different separator style."""
- SINGLE = auto()
- TWO = auto()
-
-
-@dataclasses.dataclass
-class Conversation:
- """A class that keeps all conversation history."""
- system: str
- roles: List[str]
- messages: List[List[str]]
- offset: int
- # system_img: List[Image.Image] = []
- sep_style: SeparatorStyle = SeparatorStyle.SINGLE
- sep: str = "###"
- sep2: str = None
-
- skip_next: bool = False
- conv_id: Any = None
-
- def get_prompt(self):
- if self.sep_style == SeparatorStyle.SINGLE:
- ret = self.system + self.sep
- for role, message in self.messages:
- if message:
- ret += role + ": " + message + self.sep
- else:
- ret += role + ":"
- return ret
- elif self.sep_style == SeparatorStyle.TWO:
- seps = [self.sep, self.sep2]
- ret = self.system + seps[0]
- for i, (role, message) in enumerate(self.messages):
- if message:
- ret += role + ": " + message + seps[i % 2]
- else:
- ret += role + ":"
- return ret
- else:
- raise ValueError(f"Invalid style: {self.sep_style}")
-
- def append_message(self, role, message):
- self.messages.append([role, message])
-
- def to_gradio_chatbot(self):
- ret = []
- for i, (role, msg) in enumerate(self.messages[self.offset:]):
- if i % 2 == 0:
- ret.append([msg, None])
- else:
- ret[-1][-1] = msg
- return ret
-
- def copy(self):
- return Conversation(
- system=self.system,
- # system_img=self.system_img,
- roles=self.roles,
- messages=[[x, y] for x, y in self.messages],
- offset=self.offset,
- sep_style=self.sep_style,
- sep=self.sep,
- sep2=self.sep2,
- conv_id=self.conv_id)
-
- def dict(self):
- return {
- "system": self.system,
- # "system_img": self.system_img,
- "roles": self.roles,
- "messages": self.messages,
- "offset": self.offset,
- "sep": self.sep,
- "sep2": self.sep2,
- "conv_id": self.conv_id,
- }
-
-
-class StoppingCriteriaSub(StoppingCriteria):
-
- def __init__(self, stops=[], encounters=1):
- super().__init__()
- self.stops = stops
-
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
- for stop in self.stops:
- if torch.all((stop == input_ids[0][-len(stop):])).item():
- return True
-
- return False
-
-
-CONV_VISION = Conversation(
- system="Give the following image: <Img>ImageContent</Img>. "
- "You will be able to see the image once I provide it to you. Please answer my questions.",
- roles=("Human", "Assistant"),
- messages=[],
- offset=0,
- sep_style=SeparatorStyle.SINGLE,
- sep="###",
-)
-
-default_conversation = Conversation(
- system="",
- roles=("Human", "Assistant"),
- messages=[],
- offset=0,
- sep_style=SeparatorStyle.SINGLE,
- sep="###",
-)
-
-class Chat:
- def __init__(self, model, vis_processor, device='cuda:0'):
- self.device = device
- self.model = model
- self.vis_processor = vis_processor
- self.image_vis_processor = Blip2ImageEvalProcessor()
- stop_words_ids = [torch.tensor([835]).to(self.device),
- torch.tensor([2277, 29937]).to(self.device)] # '###' can be encoded in two different ways.
- self.stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])
-
- def ask(self, text, conv):
- if len(conv.messages) > 0 and conv.messages[-1][0] == conv.roles[0] \
- and ('</Video>' in conv.messages[-1][1] or '</Image>' in conv.messages[-1][1]): # last message is image.
- conv.messages[-1][1] = ' '.join([conv.messages[-1][1], text])
- else:
- conv.append_message(conv.roles[0], text)
-
- def answer(self, conv, img_list, max_new_tokens=300, num_beams=1, min_length=1, top_p=0.9,
- repetition_penalty=1.0, length_penalty=1, temperature=1.0, max_length=2000):
- conv.append_message(conv.roles[1], None)
- embs = self.get_context_emb(conv, img_list)
-
- current_max_len = embs.shape[1] + max_new_tokens
- if current_max_len - max_length > 0:
- print('Warning: The number of tokens in current conversation exceeds the max length. '
- 'The model will not see the contexts outside the range.')
- begin_idx = max(0, current_max_len - max_length)
-
- embs = embs[:, begin_idx:]
-
- outputs = self.model.llama_model.generate(
- inputs_embeds=embs,
- max_new_tokens=max_new_tokens,
- stopping_criteria=self.stopping_criteria,
- num_beams=num_beams,
- do_sample=True,
- min_length=min_length,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- length_penalty=length_penalty,
- temperature=temperature,
- )
- output_token = outputs[0]
- if output_token[0] == 0: # the model might output an unknown token at the beginning; remove it
- output_token = output_token[1:]
- if output_token[0] == 1: # some users find a start token at the beginning; remove it
- output_token = output_token[1:]
- output_text = self.model.llama_tokenizer.decode(output_token, add_special_tokens=False)
- output_text = output_text.split('###')[0] # remove the stop sign '###'
- output_text = output_text.split('Assistant:')[-1].strip()
- conv.messages[-1][1] = output_text
- return output_text, output_token.cpu().numpy()
-
- def upload_video(self, video, conv, img_list):
-
- msg = ""
- if isinstance(video, str): # is a video path
- ext = os.path.splitext(video)[-1].lower()
- print(video)
- # image = self.vis_processor(image).unsqueeze(0).to(self.device)
- video, msg = load_video(
- video_path=video,
- n_frms=8,
- height=224,
- width=224,
- sampling="uniform", return_msg=True
- )
- video = self.vis_processor.transform(video)
- video = video.unsqueeze(0).to(self.device)
- # print(image)
- else:
- raise NotImplementedError
-
- image_emb, _ = self.model.encode_img(video)
- img_list.append(image_emb)
- conv.append_message(conv.roles[0], "<Video><ImageHere></Video> " + msg)
- return "Received."
-
- def upload_img(self, image, conv, img_list):
-
- msg = ""
- if isinstance(image, str): # is a image path
- raw_image = Image.open(image).convert('RGB') # add a temporal dimension
- image = self.image_vis_processor(raw_image).unsqueeze(0).unsqueeze(2).to(self.device)
- elif isinstance(image, Image.Image):
- raw_image = image
- image = self.image_vis_processor(raw_image).unsqueeze(0).unsqueeze(2).to(self.device)
- elif isinstance(image, torch.Tensor):
- if len(image.shape) == 3:
- image = image.unsqueeze(0)
- image = image.to(self.device)
- else:
- raise NotImplementedError
-
- image_emb, _ = self.model.encode_img(image)
- img_list.append(image_emb)
- # Todo msg=""
- conv.append_message(conv.roles[0], "<Image><ImageHere></Image> " + msg)
-
- return "Received."
-
- def get_context_emb(self, conv, img_list):
- prompt = conv.get_prompt()
- prompt_segs = prompt.split('<ImageHere>')
- assert len(prompt_segs) == len(img_list) + 1, "Unmatched numbers of image placeholders and images."
- seg_tokens = [
- self.model.llama_tokenizer(
- seg, return_tensors="pt", add_special_tokens=i == 0).to(self.device).input_ids
- # only add bos to the first seg
- for i, seg in enumerate(prompt_segs)
- ]
- seg_embs = [self.model.llama_model.model.embed_tokens(seg_t) for seg_t in seg_tokens]
- mixed_embs = [emb for pair in zip(seg_embs[:-1], img_list) for emb in pair] + [seg_embs[-1]]
- mixed_embs = torch.cat(mixed_embs, dim=1)
- return mixed_embs
-
-
diff --git a/spaces/DHEIVER/endoscopy_multiClassification/app.py b/spaces/DHEIVER/endoscopy_multiClassification/app.py
deleted file mode 100644
index fd8c5972d961a83d3e3c146c46144c5f664b7c8f..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/endoscopy_multiClassification/app.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import gradio as gr
-import tensorflow as tf
-import numpy as np
-from PIL import Image
-import json
-
-# Load the previously trained model
-model = tf.keras.models.load_model("model_acc_0.7240.h5")
-
-# Load the JSON file with the indexed categories and diagnosis descriptions
-with open("categories.json", "r") as json_file:
- categories_data = json.load(json_file)
-
-categories = [entry["category"] for entry in categories_data]
-diagnoses = [entry["diagnosis"] for entry in categories_data]
-
-# Description of the model and its purpose, in Portuguese
-model_description = (
- "Este modelo foi treinado para classificar imagens médicas do trato gastrointestinal humano em várias categorias "
- "com diagnósticos associados. Ele pode ajudar a identificar condições médicas a partir de imagens."
-)
-
-# Create a function to perform the classification
-def classify_image(image):
- try:
- # Resize the image to 100x100 pixels
- image = Image.fromarray(image.astype('uint8'))
- image = image.resize((100, 100)) # Resize to 100x100
- image = np.array(image)
-
- # Run the classification
- prediction = model.predict(image[None, ...])
-
- # Decode the predicted class
- class_idx = np.argmax(prediction)
- class_label = categories[class_idx]
- diagnosis = diagnoses[class_idx]
-
- return f"Classe prevista: {class_label}\nDiagnóstico: {diagnosis}"
- except Exception as e:
- return str(e)
-
-# Create a Gradio interface with a full description and an informative title in Portuguese
-iface = gr.Interface(
- fn=classify_image,
- inputs=gr.inputs.Image(), # Image input
- outputs="text", # Text output with the predicted class and diagnosis
- title="Sistema de Classificação de Anomalias Gastrointestinais por Imagem",
- description=model_description
-)
-
-# Launch the Gradio interface
-iface.launch()
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/MpegImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/MpegImagePlugin.py
deleted file mode 100644
index d96d3a11c4966e94a53c67f13c3bf8f7987c0c83..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/MpegImagePlugin.py
+++ /dev/null
@@ -1,82 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# MPEG file handling
-#
-# History:
-# 95-09-09 fl Created
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1995.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-from . import Image, ImageFile
-from ._binary import i8
-
-#
-# Bitstream parser
-
-
-class BitStream:
- def __init__(self, fp):
- self.fp = fp
- self.bits = 0
- self.bitbuffer = 0
-
- def next(self):
- return i8(self.fp.read(1))
-
- def peek(self, bits):
- while self.bits < bits:
- c = self.next()
- if c < 0:
- self.bits = 0
- continue
- self.bitbuffer = (self.bitbuffer << 8) + c
- self.bits += 8
- return self.bitbuffer >> (self.bits - bits) & (1 << bits) - 1
-
- def skip(self, bits):
- while self.bits < bits:
- self.bitbuffer = (self.bitbuffer << 8) + i8(self.fp.read(1))
- self.bits += 8
- self.bits = self.bits - bits
-
- def read(self, bits):
- v = self.peek(bits)
- self.bits = self.bits - bits
- return v
-
-
-##
-# Image plugin for MPEG streams. This plugin can identify a stream,
-# but it cannot read it.
-
-
-class MpegImageFile(ImageFile.ImageFile):
- format = "MPEG"
- format_description = "MPEG"
-
- def _open(self):
- s = BitStream(self.fp)
-
- if s.read(32) != 0x1B3:
- msg = "not an MPEG file"
- raise SyntaxError(msg)
-
- self.mode = "RGB"
- self._size = s.read(12), s.read(12)
-
-
-# --------------------------------------------------------------------
-# Registry stuff
-
-Image.register_open(MpegImageFile.format, MpegImageFile)
-
-Image.register_extensions(MpegImageFile.format, [".mpg", ".mpeg"])
-
-Image.register_mime(MpegImageFile.format, "video/mpeg")
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/__init__.py
deleted file mode 100644
index 29fb3561e4f2dc9d3a764e756439c0dea2c9897a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/__init__.py
+++ /dev/null
@@ -1,169 +0,0 @@
-from __future__ import annotations
-
-__all__ = (
- "maybe_async",
- "maybe_async_cm",
- "run",
- "sleep",
- "sleep_forever",
- "sleep_until",
- "current_time",
- "get_all_backends",
- "get_cancelled_exc_class",
- "BrokenResourceError",
- "BrokenWorkerProcess",
- "BusyResourceError",
- "ClosedResourceError",
- "DelimiterNotFound",
- "EndOfStream",
- "ExceptionGroup",
- "IncompleteRead",
- "TypedAttributeLookupError",
- "WouldBlock",
- "AsyncFile",
- "Path",
- "open_file",
- "wrap_file",
- "aclose_forcefully",
- "open_signal_receiver",
- "connect_tcp",
- "connect_unix",
- "create_tcp_listener",
- "create_unix_listener",
- "create_udp_socket",
- "create_connected_udp_socket",
- "getaddrinfo",
- "getnameinfo",
- "wait_socket_readable",
- "wait_socket_writable",
- "create_memory_object_stream",
- "run_process",
- "open_process",
- "create_lock",
- "CapacityLimiter",
- "CapacityLimiterStatistics",
- "Condition",
- "ConditionStatistics",
- "Event",
- "EventStatistics",
- "Lock",
- "LockStatistics",
- "Semaphore",
- "SemaphoreStatistics",
- "create_condition",
- "create_event",
- "create_semaphore",
- "create_capacity_limiter",
- "open_cancel_scope",
- "fail_after",
- "move_on_after",
- "current_effective_deadline",
- "TASK_STATUS_IGNORED",
- "CancelScope",
- "create_task_group",
- "TaskInfo",
- "get_current_task",
- "get_running_tasks",
- "wait_all_tasks_blocked",
- "run_sync_in_worker_thread",
- "run_async_from_thread",
- "run_sync_from_thread",
- "current_default_worker_thread_limiter",
- "create_blocking_portal",
- "start_blocking_portal",
- "typed_attribute",
- "TypedAttributeSet",
- "TypedAttributeProvider",
-)
-
-from typing import Any
-
-from ._core._compat import maybe_async, maybe_async_cm
-from ._core._eventloop import (
- current_time,
- get_all_backends,
- get_cancelled_exc_class,
- run,
- sleep,
- sleep_forever,
- sleep_until,
-)
-from ._core._exceptions import (
- BrokenResourceError,
- BrokenWorkerProcess,
- BusyResourceError,
- ClosedResourceError,
- DelimiterNotFound,
- EndOfStream,
- ExceptionGroup,
- IncompleteRead,
- TypedAttributeLookupError,
- WouldBlock,
-)
-from ._core._fileio import AsyncFile, Path, open_file, wrap_file
-from ._core._resources import aclose_forcefully
-from ._core._signals import open_signal_receiver
-from ._core._sockets import (
- connect_tcp,
- connect_unix,
- create_connected_udp_socket,
- create_tcp_listener,
- create_udp_socket,
- create_unix_listener,
- getaddrinfo,
- getnameinfo,
- wait_socket_readable,
- wait_socket_writable,
-)
-from ._core._streams import create_memory_object_stream
-from ._core._subprocesses import open_process, run_process
-from ._core._synchronization import (
- CapacityLimiter,
- CapacityLimiterStatistics,
- Condition,
- ConditionStatistics,
- Event,
- EventStatistics,
- Lock,
- LockStatistics,
- Semaphore,
- SemaphoreStatistics,
- create_capacity_limiter,
- create_condition,
- create_event,
- create_lock,
- create_semaphore,
-)
-from ._core._tasks import (
- TASK_STATUS_IGNORED,
- CancelScope,
- create_task_group,
- current_effective_deadline,
- fail_after,
- move_on_after,
- open_cancel_scope,
-)
-from ._core._testing import (
- TaskInfo,
- get_current_task,
- get_running_tasks,
- wait_all_tasks_blocked,
-)
-from ._core._typedattr import TypedAttributeProvider, TypedAttributeSet, typed_attribute
-
-# Re-exported here, for backwards compatibility
-# isort: off
-from .to_thread import current_default_worker_thread_limiter, run_sync_in_worker_thread
-from .from_thread import (
- create_blocking_portal,
- run_async_from_thread,
- run_sync_from_thread,
- start_blocking_portal,
-)
-
-# Re-export imports so they look like they live directly in this package
-key: str
-value: Any
-for key, value in list(locals().items()):
- if getattr(value, "__module__", "").startswith("anyio."):
- value.__module__ = __name__
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/ast.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/ast.py
deleted file mode 100644
index 17c6cc3fbe494a076d2b59f4664ab9fe56ecd20f..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/feaLib/ast.py
+++ /dev/null
@@ -1,2134 +0,0 @@
-from fontTools.feaLib.error import FeatureLibError
-from fontTools.feaLib.location import FeatureLibLocation
-from fontTools.misc.encodingTools import getEncoding
-from fontTools.misc.textTools import byteord, tobytes
-from collections import OrderedDict
-import itertools
-
-SHIFT = " " * 4
-
-__all__ = [
- "Element",
- "FeatureFile",
- "Comment",
- "GlyphName",
- "GlyphClass",
- "GlyphClassName",
- "MarkClassName",
- "AnonymousBlock",
- "Block",
- "FeatureBlock",
- "NestedBlock",
- "LookupBlock",
- "GlyphClassDefinition",
- "GlyphClassDefStatement",
- "MarkClass",
- "MarkClassDefinition",
- "AlternateSubstStatement",
- "Anchor",
- "AnchorDefinition",
- "AttachStatement",
- "AxisValueLocationStatement",
- "BaseAxis",
- "CVParametersNameStatement",
- "ChainContextPosStatement",
- "ChainContextSubstStatement",
- "CharacterStatement",
- "ConditionsetStatement",
- "CursivePosStatement",
- "ElidedFallbackName",
- "ElidedFallbackNameID",
- "Expression",
- "FeatureNameStatement",
- "FeatureReferenceStatement",
- "FontRevisionStatement",
- "HheaField",
- "IgnorePosStatement",
- "IgnoreSubstStatement",
- "IncludeStatement",
- "LanguageStatement",
- "LanguageSystemStatement",
- "LigatureCaretByIndexStatement",
- "LigatureCaretByPosStatement",
- "LigatureSubstStatement",
- "LookupFlagStatement",
- "LookupReferenceStatement",
- "MarkBasePosStatement",
- "MarkLigPosStatement",
- "MarkMarkPosStatement",
- "MultipleSubstStatement",
- "NameRecord",
- "OS2Field",
- "PairPosStatement",
- "ReverseChainSingleSubstStatement",
- "ScriptStatement",
- "SinglePosStatement",
- "SingleSubstStatement",
- "SizeParameters",
- "Statement",
- "STATAxisValueStatement",
- "STATDesignAxisStatement",
- "STATNameStatement",
- "SubtableStatement",
- "TableBlock",
- "ValueRecord",
- "ValueRecordDefinition",
- "VheaField",
-]
-
-
-def deviceToString(device):
- if device is None:
- return ""
- else:
- return "<device %s>" % ", ".join("%d %d" % t for t in device)
-
-
-fea_keywords = set(
- [
- "anchor",
- "anchordef",
- "anon",
- "anonymous",
- "by",
- "contour",
- "cursive",
- "device",
- "enum",
- "enumerate",
- "excludedflt",
- "exclude_dflt",
- "feature",
- "from",
- "ignore",
- "ignorebaseglyphs",
- "ignoreligatures",
- "ignoremarks",
- "include",
- "includedflt",
- "include_dflt",
- "language",
- "languagesystem",
- "lookup",
- "lookupflag",
- "mark",
- "markattachmenttype",
- "markclass",
- "nameid",
- "null",
- "parameters",
- "pos",
- "position",
- "required",
- "righttoleft",
- "reversesub",
- "rsub",
- "script",
- "sub",
- "substitute",
- "subtable",
- "table",
- "usemarkfilteringset",
- "useextension",
- "valuerecorddef",
- "base",
- "gdef",
- "head",
- "hhea",
- "name",
- "vhea",
- "vmtx",
- ]
-)
-
-
-def asFea(g):
- if hasattr(g, "asFea"):
- return g.asFea()
- elif isinstance(g, tuple) and len(g) == 2:
- return asFea(g[0]) + " - " + asFea(g[1]) # a range
- elif g.lower() in fea_keywords:
- return "\\" + g
- else:
- return g
-
-
-class Element(object):
- """A base class representing "something" in a feature file."""
-
- def __init__(self, location=None):
- #: location of this element as a `FeatureLibLocation` object.
- if location and not isinstance(location, FeatureLibLocation):
- location = FeatureLibLocation(*location)
- self.location = location
-
- def build(self, builder):
- pass
-
- def asFea(self, indent=""):
- """Returns this element as a string of feature code. For block-type
- elements (such as :class:`FeatureBlock`), the `indent` string is
- added to the start of each line in the output."""
- raise NotImplementedError
-
- def __str__(self):
- return self.asFea()
-
-
-class Statement(Element):
- pass
-
-
-class Expression(Element):
- pass
-
-
-class Comment(Element):
- """A comment in a feature file."""
-
- def __init__(self, text, location=None):
- super(Comment, self).__init__(location)
- #: Text of the comment
- self.text = text
-
- def asFea(self, indent=""):
- return self.text
-
-
-class NullGlyph(Expression):
- """The NULL glyph, used in glyph deletion substitutions."""
-
- def __init__(self, location=None):
- Expression.__init__(self, location)
- #: The name itself as a string
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return ()
-
- def asFea(self, indent=""):
- return "NULL"
-
-
-class GlyphName(Expression):
- """A single glyph name, such as ``cedilla``."""
-
- def __init__(self, glyph, location=None):
- Expression.__init__(self, location)
- #: The name itself as a string
- self.glyph = glyph
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return (self.glyph,)
-
- def asFea(self, indent=""):
- return asFea(self.glyph)
-
-
-class GlyphClass(Expression):
- """A glyph class, such as ``[acute cedilla grave]``."""
-
- def __init__(self, glyphs=None, location=None):
- Expression.__init__(self, location)
- #: The list of glyphs in this class, as :class:`GlyphName` objects.
- self.glyphs = glyphs if glyphs is not None else []
- self.original = []
- self.curr = 0
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return tuple(self.glyphs)
-
- def asFea(self, indent=""):
- if len(self.original):
- if self.curr < len(self.glyphs):
- self.original.extend(self.glyphs[self.curr :])
- self.curr = len(self.glyphs)
- return "[" + " ".join(map(asFea, self.original)) + "]"
- else:
- return "[" + " ".join(map(asFea, self.glyphs)) + "]"
-
- def extend(self, glyphs):
- """Add a list of :class:`GlyphName` objects to the class."""
- self.glyphs.extend(glyphs)
-
- def append(self, glyph):
- """Add a single :class:`GlyphName` object to the class."""
- self.glyphs.append(glyph)
-
- def add_range(self, start, end, glyphs):
- """Add a range (e.g. ``A-Z``) to the class. ``start`` and ``end``
- are either :class:`GlyphName` objects or strings representing the
- start and end glyphs in the class, and ``glyphs`` is the full list of
- :class:`GlyphName` objects in the range."""
- if self.curr < len(self.glyphs):
- self.original.extend(self.glyphs[self.curr :])
- self.original.append((start, end))
- self.glyphs.extend(glyphs)
- self.curr = len(self.glyphs)
-
- def add_cid_range(self, start, end, glyphs):
- """Add a range to the class by glyph ID. ``start`` and ``end`` are the
- initial and final IDs, and ``glyphs`` is the full list of
- :class:`GlyphName` objects in the range."""
- if self.curr < len(self.glyphs):
- self.original.extend(self.glyphs[self.curr :])
- self.original.append(("\\{}".format(start), "\\{}".format(end)))
- self.glyphs.extend(glyphs)
- self.curr = len(self.glyphs)
-
- def add_class(self, gc):
- """Add glyphs from the given :class:`GlyphClassName` object to the
- class."""
- if self.curr < len(self.glyphs):
- self.original.extend(self.glyphs[self.curr :])
- self.original.append(gc)
- self.glyphs.extend(gc.glyphSet())
- self.curr = len(self.glyphs)
-
-
-class GlyphClassName(Expression):
- """A glyph class name, such as ``@FRENCH_MARKS``. This must be instantiated
- with a :class:`GlyphClassDefinition` object."""
-
- def __init__(self, glyphclass, location=None):
- Expression.__init__(self, location)
- assert isinstance(glyphclass, GlyphClassDefinition)
- self.glyphclass = glyphclass
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return tuple(self.glyphclass.glyphSet())
-
- def asFea(self, indent=""):
- return "@" + self.glyphclass.name
-
-
-class MarkClassName(Expression):
- """A mark class name, such as ``@FRENCH_MARKS`` defined with ``markClass``.
- This must be instantiated with a :class:`MarkClass` object."""
-
- def __init__(self, markClass, location=None):
- Expression.__init__(self, location)
- assert isinstance(markClass, MarkClass)
- self.markClass = markClass
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return self.markClass.glyphSet()
-
- def asFea(self, indent=""):
- return "@" + self.markClass.name
-
-
-class AnonymousBlock(Statement):
- """An anonymous data block."""
-
- def __init__(self, tag, content, location=None):
- Statement.__init__(self, location)
- self.tag = tag #: string containing the block's "tag"
- self.content = content #: block data as string
-
- def asFea(self, indent=""):
- res = "anon {} {{\n".format(self.tag)
- res += self.content
- res += "}} {};\n\n".format(self.tag)
- return res
-
-
-class Block(Statement):
- """A block of statements: feature, lookup, etc."""
-
- def __init__(self, location=None):
- Statement.__init__(self, location)
- self.statements = [] #: Statements contained in the block
-
- def build(self, builder):
- """When handed a 'builder' object of comparable interface to
- :class:`fontTools.feaLib.builder`, walks the statements in this
- block, calling the builder callbacks."""
- for s in self.statements:
- s.build(builder)
-
- def asFea(self, indent=""):
- indent += SHIFT
- return (
- indent
- + ("\n" + indent).join([s.asFea(indent=indent) for s in self.statements])
- + "\n"
- )
-
-
-class FeatureFile(Block):
- """The top-level element of the syntax tree, containing the whole feature
- file in its ``statements`` attribute."""
-
- def __init__(self):
- Block.__init__(self, location=None)
- self.markClasses = {} # name --> ast.MarkClass
-
- def asFea(self, indent=""):
- return "\n".join(s.asFea(indent=indent) for s in self.statements)
-
-
-class FeatureBlock(Block):
- """A named feature block."""
-
- def __init__(self, name, use_extension=False, location=None):
- Block.__init__(self, location)
- self.name, self.use_extension = name, use_extension
-
- def build(self, builder):
- """Call the ``start_feature`` callback on the builder object, visit
- all the statements in this feature, and then call ``end_feature``."""
- # TODO(sascha): Handle use_extension.
- builder.start_feature(self.location, self.name)
- # language exclude_dflt statements modify builder.features_
- # limit them to this block with temporary builder.features_
- features = builder.features_
- builder.features_ = {}
- Block.build(self, builder)
- for key, value in builder.features_.items():
- features.setdefault(key, []).extend(value)
- builder.features_ = features
- builder.end_feature()
-
- def asFea(self, indent=""):
- res = indent + "feature %s " % self.name.strip()
- if self.use_extension:
- res += "useExtension "
- res += "{\n"
- res += Block.asFea(self, indent=indent)
- res += indent + "} %s;\n" % self.name.strip()
- return res
-
-
-class NestedBlock(Block):
- """A block inside another block, for example when found inside a
- ``cvParameters`` block."""
-
- def __init__(self, tag, block_name, location=None):
- Block.__init__(self, location)
- self.tag = tag
- self.block_name = block_name
-
- def build(self, builder):
- Block.build(self, builder)
- if self.block_name == "ParamUILabelNameID":
- builder.add_to_cv_num_named_params(self.tag)
-
- def asFea(self, indent=""):
- res = "{}{} {{\n".format(indent, self.block_name)
- res += Block.asFea(self, indent=indent)
- res += "{}}};\n".format(indent)
- return res
-
-
-class LookupBlock(Block):
- """A named lookup, containing ``statements``."""
-
- def __init__(self, name, use_extension=False, location=None):
- Block.__init__(self, location)
- self.name, self.use_extension = name, use_extension
-
- def build(self, builder):
- # TODO(sascha): Handle use_extension.
- builder.start_lookup_block(self.location, self.name)
- Block.build(self, builder)
- builder.end_lookup_block()
-
- def asFea(self, indent=""):
- res = "lookup {} ".format(self.name)
- if self.use_extension:
- res += "useExtension "
- res += "{\n"
- res += Block.asFea(self, indent=indent)
- res += "{}}} {};\n".format(indent, self.name)
- return res
-
-
-class TableBlock(Block):
- """A ``table ... { }`` block."""
-
- def __init__(self, name, location=None):
- Block.__init__(self, location)
- self.name = name
-
- def asFea(self, indent=""):
- res = "table {} {{\n".format(self.name.strip())
- res += super(TableBlock, self).asFea(indent=indent)
- res += "}} {};\n".format(self.name.strip())
- return res
-
-
-class GlyphClassDefinition(Statement):
- """Example: ``@UPPERCASE = [A-Z];``."""
-
- def __init__(self, name, glyphs, location=None):
- Statement.__init__(self, location)
- self.name = name #: class name as a string, without initial ``@``
- self.glyphs = glyphs #: a :class:`GlyphClass` object
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return tuple(self.glyphs.glyphSet())
-
- def asFea(self, indent=""):
- return "@" + self.name + " = " + self.glyphs.asFea() + ";"
-
-
-class GlyphClassDefStatement(Statement):
- """Example: ``GlyphClassDef @UPPERCASE, [B], [C], [D];``. The parameters
- must be either :class:`GlyphClass` or :class:`GlyphClassName` objects, or
- ``None``."""
-
- def __init__(
- self, baseGlyphs, markGlyphs, ligatureGlyphs, componentGlyphs, location=None
- ):
- Statement.__init__(self, location)
- self.baseGlyphs, self.markGlyphs = (baseGlyphs, markGlyphs)
- self.ligatureGlyphs = ligatureGlyphs
- self.componentGlyphs = componentGlyphs
-
- def build(self, builder):
- """Calls the builder's ``add_glyphClassDef`` callback."""
- base = self.baseGlyphs.glyphSet() if self.baseGlyphs else tuple()
- liga = self.ligatureGlyphs.glyphSet() if self.ligatureGlyphs else tuple()
- mark = self.markGlyphs.glyphSet() if self.markGlyphs else tuple()
- comp = self.componentGlyphs.glyphSet() if self.componentGlyphs else tuple()
- builder.add_glyphClassDef(self.location, base, liga, mark, comp)
-
- def asFea(self, indent=""):
- return "GlyphClassDef {}, {}, {}, {};".format(
- self.baseGlyphs.asFea() if self.baseGlyphs else "",
- self.ligatureGlyphs.asFea() if self.ligatureGlyphs else "",
- self.markGlyphs.asFea() if self.markGlyphs else "",
- self.componentGlyphs.asFea() if self.componentGlyphs else "",
- )
-
-
-class MarkClass(object):
- """One `or more` ``markClass`` statements for the same mark class.
-
- While glyph classes can be defined only once, the feature file format
- allows expanding mark classes with multiple definitions, each using
- different glyphs and anchors. The following are two ``MarkClassDefinitions``
- for the same ``MarkClass``::
-
- markClass [acute grave] @FRENCH_ACCENTS;
- markClass [cedilla] @FRENCH_ACCENTS;
-
- The ``MarkClass`` object is therefore just a container for a list of
- :class:`MarkClassDefinition` statements.
- """
-
- def __init__(self, name):
- self.name = name
- self.definitions = []
- self.glyphs = OrderedDict() # glyph --> ast.MarkClassDefinitions
-
- def addDefinition(self, definition):
- """Add a :class:`MarkClassDefinition` statement to this mark class."""
- assert isinstance(definition, MarkClassDefinition)
- self.definitions.append(definition)
- for glyph in definition.glyphSet():
- if glyph in self.glyphs:
- otherLoc = self.glyphs[glyph].location
- if otherLoc is None:
- end = ""
- else:
- end = f" at {otherLoc}"
- raise FeatureLibError(
- "Glyph %s already defined%s" % (glyph, end), definition.location
- )
- self.glyphs[glyph] = definition
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return tuple(self.glyphs.keys())
-
- def asFea(self, indent=""):
- res = "\n".join(d.asFea() for d in self.definitions)
- return res
-
-
-class MarkClassDefinition(Statement):
- """A single ``markClass`` statement. The ``markClass`` should be a
- :class:`MarkClass` object, the ``anchor`` an :class:`Anchor` object,
- and the ``glyphs`` parameter should be a `glyph-containing object`_ .
-
- Example:
-
- .. code:: python
-
- mc = MarkClass("FRENCH_ACCENTS")
- mc.addDefinition( MarkClassDefinition(mc, Anchor(350, 800),
- GlyphClass([ GlyphName("acute"), GlyphName("grave") ])
- ) )
- mc.addDefinition( MarkClassDefinition(mc, Anchor(350, -200),
- GlyphClass([ GlyphName("cedilla") ])
- ) )
-
- mc.asFea()
- # markClass [acute grave] <anchor 350 800> @FRENCH_ACCENTS;
- # markClass [cedilla] <anchor 350 -200> @FRENCH_ACCENTS;
-
- """
-
- def __init__(self, markClass, anchor, glyphs, location=None):
- Statement.__init__(self, location)
- assert isinstance(markClass, MarkClass)
- assert isinstance(anchor, Anchor) and isinstance(glyphs, Expression)
- self.markClass, self.anchor, self.glyphs = markClass, anchor, glyphs
-
- def glyphSet(self):
- """The glyphs in this class as a tuple of :class:`GlyphName` objects."""
- return self.glyphs.glyphSet()
-
- def asFea(self, indent=""):
- return "markClass {} {} @{};".format(
- self.glyphs.asFea(), self.anchor.asFea(), self.markClass.name
- )
-
-
-class AlternateSubstStatement(Statement):
- """A ``sub ... from ...`` statement.
-
- ``prefix``, ``glyph``, ``suffix`` and ``replacement`` should be lists of
- `glyph-containing objects`_. ``glyph`` should be a `one element list`."""
-
- def __init__(self, prefix, glyph, suffix, replacement, location=None):
- Statement.__init__(self, location)
- self.prefix, self.glyph, self.suffix = (prefix, glyph, suffix)
- self.replacement = replacement
-
- def build(self, builder):
- """Calls the builder's ``add_alternate_subst`` callback."""
- glyph = self.glyph.glyphSet()
- assert len(glyph) == 1, glyph
- glyph = list(glyph)[0]
- prefix = [p.glyphSet() for p in self.prefix]
- suffix = [s.glyphSet() for s in self.suffix]
- replacement = self.replacement.glyphSet()
- builder.add_alternate_subst(self.location, prefix, glyph, suffix, replacement)
-
- def asFea(self, indent=""):
- res = "sub "
- if len(self.prefix) or len(self.suffix):
- if len(self.prefix):
- res += " ".join(map(asFea, self.prefix)) + " "
- res += asFea(self.glyph) + "'" # even though we really only use 1
- if len(self.suffix):
- res += " " + " ".join(map(asFea, self.suffix))
- else:
- res += asFea(self.glyph)
- res += " from "
- res += asFea(self.replacement)
- res += ";"
- return res
-
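A minimal usage sketch for the class above, assuming this module is importable as the public ``fontTools.feaLib.ast`` API (the import path is an assumption; the class signatures match the constructor shown here):

```python
# Sketch: build an alternate substitution and render it as FEA source.
# Assumes fontTools is installed and exposes these classes from feaLib.ast.
from fontTools.feaLib.ast import AlternateSubstStatement, GlyphClass, GlyphName

glyph = GlyphName("a")
alternates = GlyphClass([GlyphName("a.alt1"), GlyphName("a.alt2")])
# No prefix/suffix context: plain "sub ... from ..." form.
stmt = AlternateSubstStatement([], glyph, [], alternates)
print(stmt.asFea())  # sub a from [a.alt1 a.alt2];
```

With an empty context, `asFea` takes the else-branch and emits the unchained form.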
-
-class Anchor(Expression):
- """An ``Anchor`` element, used inside a ``pos`` rule.
-
- If a ``name`` is given, this will be used in preference to the coordinates.
- Other values should be integer.
- """
-
- def __init__(
- self,
- x,
- y,
- name=None,
- contourpoint=None,
- xDeviceTable=None,
- yDeviceTable=None,
- location=None,
- ):
- Expression.__init__(self, location)
- self.name = name
- self.x, self.y, self.contourpoint = x, y, contourpoint
- self.xDeviceTable, self.yDeviceTable = xDeviceTable, yDeviceTable
-
- def asFea(self, indent=""):
- if self.name is not None:
- return "<anchor {}>".format(self.name)
- res = "<anchor {} {}".format(self.x, self.y)
- if self.contourpoint:
- res += " contourpoint {}".format(self.contourpoint)
- if self.xDeviceTable or self.yDeviceTable:
- res += " <device %s>, <device %s>" % (
- deviceToString(self.xDeviceTable),
- deviceToString(self.yDeviceTable),
- )
- res += ">"
- return res
-
-
-class AnchorDefinition(Statement):
- """A named anchor definition. (2.e.viii). ``name`` should be a string."""
-
- def __init__(self, name, x, y, contourpoint=None, location=None):
- Statement.__init__(self, location)
- self.name, self.x, self.y, self.contourpoint = name, x, y, contourpoint
-
- def asFea(self, indent=""):
- res = "anchorDef {} {}".format(self.x, self.y)
- if self.contourpoint:
- res += " contourpoint {}".format(self.contourpoint)
- res += " {};".format(self.name)
- return res
-
-
-class AttachStatement(Statement):
- """A ``GDEF`` table ``Attach`` statement."""
-
- def __init__(self, glyphs, contourPoints, location=None):
- Statement.__init__(self, location)
- self.glyphs = glyphs #: A `glyph-containing object`_
- self.contourPoints = contourPoints #: A list of integer contour points
-
- def build(self, builder):
- """Calls the builder's ``add_attach_points`` callback."""
- glyphs = self.glyphs.glyphSet()
- builder.add_attach_points(self.location, glyphs, self.contourPoints)
-
- def asFea(self, indent=""):
- return "Attach {} {};".format(
- self.glyphs.asFea(), " ".join(str(c) for c in self.contourPoints)
- )
-
-
-class ChainContextPosStatement(Statement):
- r"""A chained contextual positioning statement.
-
- ``prefix``, ``glyphs``, and ``suffix`` should be lists of
- `glyph-containing objects`_ .
-
- ``lookups`` should be a list of elements representing what lookups
- to apply at each glyph position. Each element should be a
- :class:`LookupBlock` to apply a single chaining lookup at the given
- position, a list of :class:`LookupBlock`\ s to apply multiple
- lookups, or ``None`` to apply no lookup. The length of the outer
- list should equal the length of ``glyphs``; the inner lists can be
- of variable length."""
-
- def __init__(self, prefix, glyphs, suffix, lookups, location=None):
- Statement.__init__(self, location)
- self.prefix, self.glyphs, self.suffix = prefix, glyphs, suffix
- self.lookups = list(lookups)
- for i, lookup in enumerate(lookups):
- if lookup:
- try:
- (_ for _ in lookup)
- except TypeError:
- self.lookups[i] = [lookup]
-
- def build(self, builder):
- """Calls the builder's ``add_chain_context_pos`` callback."""
- prefix = [p.glyphSet() for p in self.prefix]
- glyphs = [g.glyphSet() for g in self.glyphs]
- suffix = [s.glyphSet() for s in self.suffix]
- builder.add_chain_context_pos(
- self.location, prefix, glyphs, suffix, self.lookups
- )
-
- def asFea(self, indent=""):
- res = "pos "
- if (
- len(self.prefix)
- or len(self.suffix)
- or any([x is not None for x in self.lookups])
- ):
- if len(self.prefix):
- res += " ".join(g.asFea() for g in self.prefix) + " "
- for i, g in enumerate(self.glyphs):
- res += g.asFea() + "'"
- if self.lookups[i]:
- for lu in self.lookups[i]:
- res += " lookup " + lu.name
- if i < len(self.glyphs) - 1:
- res += " "
- if len(self.suffix):
- res += " " + " ".join(map(asFea, self.suffix))
- else:
- res += " ".join(map(asFea, self.glyph))
- res += ";"
- return res
-
-
-class ChainContextSubstStatement(Statement):
- r"""A chained contextual substitution statement.
-
- ``prefix``, ``glyphs``, and ``suffix`` should be lists of
- `glyph-containing objects`_ .
-
- ``lookups`` should be a list of elements representing what lookups
- to apply at each glyph position. Each element should be a
- :class:`LookupBlock` to apply a single chaining lookup at the given
- position, a list of :class:`LookupBlock`\ s to apply multiple
- lookups, or ``None`` to apply no lookup. The length of the outer
- list should equal the length of ``glyphs``; the inner lists can be
- of variable length."""
-
- def __init__(self, prefix, glyphs, suffix, lookups, location=None):
- Statement.__init__(self, location)
- self.prefix, self.glyphs, self.suffix = prefix, glyphs, suffix
- self.lookups = list(lookups)
- for i, lookup in enumerate(lookups):
- if lookup:
- try:
- (_ for _ in lookup)
- except TypeError:
- self.lookups[i] = [lookup]
-
- def build(self, builder):
- """Calls the builder's ``add_chain_context_subst`` callback."""
- prefix = [p.glyphSet() for p in self.prefix]
- glyphs = [g.glyphSet() for g in self.glyphs]
- suffix = [s.glyphSet() for s in self.suffix]
- builder.add_chain_context_subst(
- self.location, prefix, glyphs, suffix, self.lookups
- )
-
- def asFea(self, indent=""):
- res = "sub "
- if (
- len(self.prefix)
- or len(self.suffix)
- or any([x is not None for x in self.lookups])
- ):
- if len(self.prefix):
- res += " ".join(g.asFea() for g in self.prefix) + " "
- for i, g in enumerate(self.glyphs):
- res += g.asFea() + "'"
- if self.lookups[i]:
- for lu in self.lookups[i]:
- res += " lookup " + lu.name
- if i < len(self.glyphs) - 1:
- res += " "
- if len(self.suffix):
- res += " " + " ".join(map(asFea, self.suffix))
- else:
- res += " ".join(map(asFea, self.glyph))
- res += ";"
- return res
-
-
-class CursivePosStatement(Statement):
- """A cursive positioning statement. Entry and exit anchors can either
- be :class:`Anchor` objects or ``None``."""
-
- def __init__(self, glyphclass, entryAnchor, exitAnchor, location=None):
- Statement.__init__(self, location)
- self.glyphclass = glyphclass
- self.entryAnchor, self.exitAnchor = entryAnchor, exitAnchor
-
- def build(self, builder):
- """Calls the builder object's ``add_cursive_pos`` callback."""
- builder.add_cursive_pos(
- self.location, self.glyphclass.glyphSet(), self.entryAnchor, self.exitAnchor
- )
-
- def asFea(self, indent=""):
- entry = self.entryAnchor.asFea() if self.entryAnchor else ""
- exit = self.exitAnchor.asFea() if self.exitAnchor else ""
- return "pos cursive {} {} {};".format(self.glyphclass.asFea(), entry, exit)
-
-
-class FeatureReferenceStatement(Statement):
- """Example: ``feature salt;``"""
-
- def __init__(self, featureName, location=None):
- Statement.__init__(self, location)
- self.location, self.featureName = (location, featureName)
-
- def build(self, builder):
- """Calls the builder object's ``add_feature_reference`` callback."""
- builder.add_feature_reference(self.location, self.featureName)
-
- def asFea(self, indent=""):
- return "feature {};".format(self.featureName)
-
-
-class IgnorePosStatement(Statement):
- """An ``ignore pos`` statement, containing `one or more` contexts to ignore.
-
- ``chainContexts`` should be a list of ``(prefix, glyphs, suffix)`` tuples,
- with each of ``prefix``, ``glyphs`` and ``suffix`` being
- `glyph-containing objects`_ ."""
-
- def __init__(self, chainContexts, location=None):
- Statement.__init__(self, location)
- self.chainContexts = chainContexts
-
- def build(self, builder):
- """Calls the builder object's ``add_chain_context_pos`` callback on each
- rule context."""
- for prefix, glyphs, suffix in self.chainContexts:
- prefix = [p.glyphSet() for p in prefix]
- glyphs = [g.glyphSet() for g in glyphs]
- suffix = [s.glyphSet() for s in suffix]
- builder.add_chain_context_pos(self.location, prefix, glyphs, suffix, [])
-
- def asFea(self, indent=""):
- contexts = []
- for prefix, glyphs, suffix in self.chainContexts:
- res = ""
- if len(prefix) or len(suffix):
- if len(prefix):
- res += " ".join(map(asFea, prefix)) + " "
- res += " ".join(g.asFea() + "'" for g in glyphs)
- if len(suffix):
- res += " " + " ".join(map(asFea, suffix))
- else:
- res += " ".join(map(asFea, glyphs))
- contexts.append(res)
- return "ignore pos " + ", ".join(contexts) + ";"
-
-
-class IgnoreSubstStatement(Statement):
- """An ``ignore sub`` statement, containing `one or more` contexts to ignore.
-
- ``chainContexts`` should be a list of ``(prefix, glyphs, suffix)`` tuples,
- with each of ``prefix``, ``glyphs`` and ``suffix`` being
- `glyph-containing objects`_ ."""
-
- def __init__(self, chainContexts, location=None):
- Statement.__init__(self, location)
- self.chainContexts = chainContexts
-
- def build(self, builder):
- """Calls the builder object's ``add_chain_context_subst`` callback on
- each rule context."""
- for prefix, glyphs, suffix in self.chainContexts:
- prefix = [p.glyphSet() for p in prefix]
- glyphs = [g.glyphSet() for g in glyphs]
- suffix = [s.glyphSet() for s in suffix]
- builder.add_chain_context_subst(self.location, prefix, glyphs, suffix, [])
-
- def asFea(self, indent=""):
- contexts = []
- for prefix, glyphs, suffix in self.chainContexts:
- res = ""
- if len(prefix):
- res += " ".join(map(asFea, prefix)) + " "
- res += " ".join(g.asFea() + "'" for g in glyphs)
- if len(suffix):
- res += " " + " ".join(map(asFea, suffix))
- contexts.append(res)
- return "ignore sub " + ", ".join(contexts) + ";"
-
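A short sketch of the ``ignore sub`` statement above, again assuming the ``fontTools.feaLib.ast`` import path (an assumption; the tuple shape mirrors the ``(prefix, glyphs, suffix)`` contexts documented in the docstring):

```python
# Sketch: one ignore-sub context with prefix "f" and marked glyph "i".
from fontTools.feaLib.ast import IgnoreSubstStatement, GlyphName

context = ([GlyphName("f")], [GlyphName("i")], [])  # (prefix, glyphs, suffix)
stmt = IgnoreSubstStatement([context])
print(stmt.asFea())  # ignore sub f i';
```

The marked (input) glyphs get a trailing apostrophe, as in any chaining context rule.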
-
-class IncludeStatement(Statement):
- """An ``include()`` statement."""
-
- def __init__(self, filename, location=None):
- super(IncludeStatement, self).__init__(location)
- self.filename = filename #: String containing name of file to include
-
- def build(self):
- # TODO: consider lazy-loading the including parser/lexer?
- raise FeatureLibError(
- "Building an include statement is not implemented yet. "
- "Instead, use Parser(..., followIncludes=True) for building.",
- self.location,
- )
-
- def asFea(self, indent=""):
- return indent + "include(%s);" % self.filename
-
-
-class LanguageStatement(Statement):
- """A ``language`` statement within a feature."""
-
- def __init__(self, language, include_default=True, required=False, location=None):
- Statement.__init__(self, location)
- assert len(language) == 4
- self.language = language #: A four-character language tag
- self.include_default = include_default #: If false, "exclude_dflt"
- self.required = required
-
- def build(self, builder):
- """Call the builder object's ``set_language`` callback."""
- builder.set_language(
- location=self.location,
- language=self.language,
- include_default=self.include_default,
- required=self.required,
- )
-
- def asFea(self, indent=""):
- res = "language {}".format(self.language.strip())
- if not self.include_default:
- res += " exclude_dflt"
- if self.required:
- res += " required"
- res += ";"
- return res
-
-
-class LanguageSystemStatement(Statement):
- """A top-level ``languagesystem`` statement."""
-
- def __init__(self, script, language, location=None):
- Statement.__init__(self, location)
- self.script, self.language = (script, language)
-
- def build(self, builder):
- """Calls the builder object's ``add_language_system`` callback."""
- builder.add_language_system(self.location, self.script, self.language)
-
- def asFea(self, indent=""):
- return "languagesystem {} {};".format(self.script, self.language.strip())
-
-
-class FontRevisionStatement(Statement):
- """A ``head`` table ``FontRevision`` statement. ``revision`` should be a
- number, and will be formatted to three significant decimal places."""
-
- def __init__(self, revision, location=None):
- Statement.__init__(self, location)
- self.revision = revision
-
- def build(self, builder):
- builder.set_font_revision(self.location, self.revision)
-
- def asFea(self, indent=""):
- return "FontRevision {:.3f};".format(self.revision)
-
-
-class LigatureCaretByIndexStatement(Statement):
- """A ``GDEF`` table ``LigatureCaretByIndex`` statement. ``glyphs`` should be
- a `glyph-containing object`_, and ``carets`` should be a list of integers."""
-
- def __init__(self, glyphs, carets, location=None):
- Statement.__init__(self, location)
- self.glyphs, self.carets = (glyphs, carets)
-
- def build(self, builder):
- """Calls the builder object's ``add_ligatureCaretByIndex_`` callback."""
- glyphs = self.glyphs.glyphSet()
- builder.add_ligatureCaretByIndex_(self.location, glyphs, set(self.carets))
-
- def asFea(self, indent=""):
- return "LigatureCaretByIndex {} {};".format(
- self.glyphs.asFea(), " ".join(str(x) for x in self.carets)
- )
-
-
-class LigatureCaretByPosStatement(Statement):
- """A ``GDEF`` table ``LigatureCaretByPos`` statement. ``glyphs`` should be
- a `glyph-containing object`_, and ``carets`` should be a list of integers."""
-
- def __init__(self, glyphs, carets, location=None):
- Statement.__init__(self, location)
- self.glyphs, self.carets = (glyphs, carets)
-
- def build(self, builder):
- """Calls the builder object's ``add_ligatureCaretByPos_`` callback."""
- glyphs = self.glyphs.glyphSet()
- builder.add_ligatureCaretByPos_(self.location, glyphs, set(self.carets))
-
- def asFea(self, indent=""):
- return "LigatureCaretByPos {} {};".format(
- self.glyphs.asFea(), " ".join(str(x) for x in self.carets)
- )
-
-
-class LigatureSubstStatement(Statement):
- """A chained contextual substitution statement.
-
- ``prefix``, ``glyphs``, and ``suffix`` should be lists of
- `glyph-containing objects`_; ``replacement`` should be a single
- `glyph-containing object`_.
-
- If ``forceChain`` is True, this is expressed as a chaining rule
- (e.g. ``sub f' i' by f_i``) even when no context is given."""
-
- def __init__(self, prefix, glyphs, suffix, replacement, forceChain, location=None):
- Statement.__init__(self, location)
- self.prefix, self.glyphs, self.suffix = (prefix, glyphs, suffix)
- self.replacement, self.forceChain = replacement, forceChain
-
- def build(self, builder):
- prefix = [p.glyphSet() for p in self.prefix]
- glyphs = [g.glyphSet() for g in self.glyphs]
- suffix = [s.glyphSet() for s in self.suffix]
- builder.add_ligature_subst(
- self.location, prefix, glyphs, suffix, self.replacement, self.forceChain
- )
-
- def asFea(self, indent=""):
- res = "sub "
- if len(self.prefix) or len(self.suffix) or self.forceChain:
- if len(self.prefix):
- res += " ".join(g.asFea() for g in self.prefix) + " "
- res += " ".join(g.asFea() + "'" for g in self.glyphs)
- if len(self.suffix):
- res += " " + " ".join(g.asFea() for g in self.suffix)
- else:
- res += " ".join(g.asFea() for g in self.glyphs)
- res += " by "
- res += asFea(self.replacement)
- res += ";"
- return res
-
-
-class LookupFlagStatement(Statement):
- """A ``lookupflag`` statement. The ``value`` should be an integer value
- representing the flags in use, but not including the ``markAttachment``
- class and ``markFilteringSet`` values, which must be specified as
- glyph-containing objects."""
-
- def __init__(
- self, value=0, markAttachment=None, markFilteringSet=None, location=None
- ):
- Statement.__init__(self, location)
- self.value = value
- self.markAttachment = markAttachment
- self.markFilteringSet = markFilteringSet
-
- def build(self, builder):
- """Calls the builder object's ``set_lookup_flag`` callback."""
- markAttach = None
- if self.markAttachment is not None:
- markAttach = self.markAttachment.glyphSet()
- markFilter = None
- if self.markFilteringSet is not None:
- markFilter = self.markFilteringSet.glyphSet()
- builder.set_lookup_flag(self.location, self.value, markAttach, markFilter)
-
- def asFea(self, indent=""):
- res = []
- flags = ["RightToLeft", "IgnoreBaseGlyphs", "IgnoreLigatures", "IgnoreMarks"]
- curr = 1
- for i in range(len(flags)):
- if self.value & curr != 0:
- res.append(flags[i])
- curr = curr << 1
- if self.markAttachment is not None:
- res.append("MarkAttachmentType {}".format(self.markAttachment.asFea()))
- if self.markFilteringSet is not None:
- res.append("UseMarkFilteringSet {}".format(self.markFilteringSet.asFea()))
- if not res:
- res = ["0"]
- return "lookupflag {};".format(" ".join(res))
-
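The bit-to-keyword mapping in ``asFea`` above can be exercised directly; this sketch assumes the ``fontTools.feaLib.ast`` import path (an assumption), and the bit values follow the loop order in the code (``RightToLeft`` = 1, ``IgnoreBaseGlyphs`` = 2, ``IgnoreLigatures`` = 4, ``IgnoreMarks`` = 8):

```python
# Sketch: render a lookupflag value as its FEA keyword form.
from fontTools.feaLib.ast import LookupFlagStatement

# 0x02 | 0x08 = IgnoreBaseGlyphs | IgnoreMarks per the flags list in asFea.
stmt = LookupFlagStatement(value=0x02 | 0x08)
print(stmt.asFea())  # lookupflag IgnoreBaseGlyphs IgnoreMarks;
```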
-
-class LookupReferenceStatement(Statement):
- """Represents a ``lookup ...;`` statement to include a lookup in a feature.
-
- The ``lookup`` should be a :class:`LookupBlock` object."""
-
- def __init__(self, lookup, location=None):
- Statement.__init__(self, location)
- self.location, self.lookup = (location, lookup)
-
- def build(self, builder):
- """Calls the builder object's ``add_lookup_call`` callback."""
- builder.add_lookup_call(self.lookup.name)
-
- def asFea(self, indent=""):
- return "lookup {};".format(self.lookup.name)
-
-
-class MarkBasePosStatement(Statement):
- """A mark-to-base positioning rule. The ``base`` should be a
- `glyph-containing object`_. The ``marks`` should be a list of
- (:class:`Anchor`, :class:`MarkClass`) tuples."""
-
- def __init__(self, base, marks, location=None):
- Statement.__init__(self, location)
- self.base, self.marks = base, marks
-
- def build(self, builder):
- """Calls the builder object's ``add_mark_base_pos`` callback."""
- builder.add_mark_base_pos(self.location, self.base.glyphSet(), self.marks)
-
- def asFea(self, indent=""):
- res = "pos base {}".format(self.base.asFea())
- for a, m in self.marks:
- res += "\n" + indent + SHIFT + "{} mark @{}".format(a.asFea(), m.name)
- res += ";"
- return res
-
-
-class MarkLigPosStatement(Statement):
- """A mark-to-ligature positioning rule. The ``ligatures`` must be a
- `glyph-containing object`_. The ``marks`` should be a list of lists: each
- element in the top-level list represents a component glyph, and is made
- up of a list of (:class:`Anchor`, :class:`MarkClass`) tuples representing
- mark attachment points for that position.
-
- Example::
-
- m1 = MarkClass("TOP_MARKS")
- m2 = MarkClass("BOTTOM_MARKS")
- # ... add definitions to mark classes...
-
- glyph = GlyphName("lam_meem_jeem")
- marks = [
- [ (Anchor(625,1800), m1) ], # Attachments on 1st component (lam)
- [ (Anchor(376,-378), m2) ], # Attachments on 2nd component (meem)
- [ ] # No attachments on the jeem
- ]
- mlp = MarkLigPosStatement(glyph, marks)
-
- mlp.asFea()
- # pos ligature lam_meem_jeem <anchor 625 1800> mark @TOP_MARKS
- # ligComponent <anchor 376 -378> mark @BOTTOM_MARKS;
-
- """
-
- def __init__(self, ligatures, marks, location=None):
- Statement.__init__(self, location)
- self.ligatures, self.marks = ligatures, marks
-
- def build(self, builder):
- """Calls the builder object's ``add_mark_lig_pos`` callback."""
- builder.add_mark_lig_pos(self.location, self.ligatures.glyphSet(), self.marks)
-
- def asFea(self, indent=""):
- res = "pos ligature {}".format(self.ligatures.asFea())
- ligs = []
- for l in self.marks:
- temp = ""
- if l is None or not len(l):
- temp = "\n" + indent + SHIFT * 2 + ""
- else:
- for a, m in l:
- temp += (
- "\n"
- + indent
- + SHIFT * 2
- + "{} mark @{}".format(a.asFea(), m.name)
- )
- ligs.append(temp)
- res += ("\n" + indent + SHIFT + "ligComponent").join(ligs)
- res += ";"
- return res
-
-
-class MarkMarkPosStatement(Statement):
- """A mark-to-mark positioning rule. The ``baseMarks`` must be a
- `glyph-containing object`_. The ``marks`` should be a list of
- (:class:`Anchor`, :class:`MarkClass`) tuples."""
-
- def __init__(self, baseMarks, marks, location=None):
- Statement.__init__(self, location)
- self.baseMarks, self.marks = baseMarks, marks
-
- def build(self, builder):
- """Calls the builder object's ``add_mark_mark_pos`` callback."""
- builder.add_mark_mark_pos(self.location, self.baseMarks.glyphSet(), self.marks)
-
- def asFea(self, indent=""):
- res = "pos mark {}".format(self.baseMarks.asFea())
- for a, m in self.marks:
- res += "\n" + indent + SHIFT + "{} mark @{}".format(a.asFea(), m.name)
- res += ";"
- return res
-
-
-class MultipleSubstStatement(Statement):
- """A multiple substitution statement.
-
- Args:
- prefix: a list of `glyph-containing objects`_.
- glyph: a single glyph-containing object.
- suffix: a list of glyph-containing objects.
- replacement: a list of glyph-containing objects.
- forceChain: If true, the statement is expressed as a chaining rule
- (e.g. ``sub f' i' by f_i``) even when no context is given.
- """
-
- def __init__(
- self, prefix, glyph, suffix, replacement, forceChain=False, location=None
- ):
- Statement.__init__(self, location)
- self.prefix, self.glyph, self.suffix = prefix, glyph, suffix
- self.replacement = replacement
- self.forceChain = forceChain
-
- def build(self, builder):
- """Calls the builder object's ``add_multiple_subst`` callback."""
- prefix = [p.glyphSet() for p in self.prefix]
- suffix = [s.glyphSet() for s in self.suffix]
- if hasattr(self.glyph, "glyphSet"):
- originals = self.glyph.glyphSet()
- else:
- originals = [self.glyph]
- count = len(originals)
- replaces = []
- for r in self.replacement:
- if hasattr(r, "glyphSet"):
- replace = r.glyphSet()
- else:
- replace = [r]
- if len(replace) == 1 and len(replace) != count:
- replace = replace * count
- replaces.append(replace)
- replaces = list(zip(*replaces))
-
- seen_originals = set()
- for i, original in enumerate(originals):
- if original not in seen_originals:
- seen_originals.add(original)
- builder.add_multiple_subst(
- self.location,
- prefix,
- original,
- suffix,
- replaces and replaces[i] or (),
- self.forceChain,
- )
-
- def asFea(self, indent=""):
- res = "sub "
- if len(self.prefix) or len(self.suffix) or self.forceChain:
- if len(self.prefix):
- res += " ".join(map(asFea, self.prefix)) + " "
- res += asFea(self.glyph) + "'"
- if len(self.suffix):
- res += " " + " ".join(map(asFea, self.suffix))
- else:
- res += asFea(self.glyph)
- replacement = self.replacement or [NullGlyph()]
- res += " by "
- res += " ".join(map(asFea, replacement))
- res += ";"
- return res
-
-
-class PairPosStatement(Statement):
- """A pair positioning statement.
-
- ``glyphs1`` and ``glyphs2`` should be `glyph-containing objects`_.
- ``valuerecord1`` should be a :class:`ValueRecord` object;
- ``valuerecord2`` should be either a :class:`ValueRecord` object or ``None``.
- If ``enumerated`` is true, then this is expressed as an
- enumerated pair.
- """
-
- def __init__(
- self,
- glyphs1,
- valuerecord1,
- glyphs2,
- valuerecord2,
- enumerated=False,
- location=None,
- ):
- Statement.__init__(self, location)
- self.enumerated = enumerated
- self.glyphs1, self.valuerecord1 = glyphs1, valuerecord1
- self.glyphs2, self.valuerecord2 = glyphs2, valuerecord2
-
- def build(self, builder):
- """Calls a callback on the builder object:
-
- * If the rule is enumerated, calls ``add_specific_pair_pos`` on each
- combination of first and second glyphs.
- * If the glyphs are both single :class:`GlyphName` objects, calls
- ``add_specific_pair_pos``.
- * Else, calls ``add_class_pair_pos``.
- """
- if self.enumerated:
- g = [self.glyphs1.glyphSet(), self.glyphs2.glyphSet()]
- seen_pair = False
- for glyph1, glyph2 in itertools.product(*g):
- seen_pair = True
- builder.add_specific_pair_pos(
- self.location, glyph1, self.valuerecord1, glyph2, self.valuerecord2
- )
- if not seen_pair:
- raise FeatureLibError(
- "Empty glyph class in positioning rule", self.location
- )
- return
-
- is_specific = isinstance(self.glyphs1, GlyphName) and isinstance(
- self.glyphs2, GlyphName
- )
- if is_specific:
- builder.add_specific_pair_pos(
- self.location,
- self.glyphs1.glyph,
- self.valuerecord1,
- self.glyphs2.glyph,
- self.valuerecord2,
- )
- else:
- builder.add_class_pair_pos(
- self.location,
- self.glyphs1.glyphSet(),
- self.valuerecord1,
- self.glyphs2.glyphSet(),
- self.valuerecord2,
- )
-
- def asFea(self, indent=""):
- res = "enum " if self.enumerated else ""
- if self.valuerecord2:
- res += "pos {} {} {} {};".format(
- self.glyphs1.asFea(),
- self.valuerecord1.asFea(),
- self.glyphs2.asFea(),
- self.valuerecord2.asFea(),
- )
- else:
- res += "pos {} {} {};".format(
- self.glyphs1.asFea(), self.glyphs2.asFea(), self.valuerecord1.asFea()
- )
- return res
-
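A sketch of the non-enumerated, single-value form handled by ``asFea`` above, assuming the ``fontTools.feaLib.ast`` import path (an assumption; ``ValueRecord`` with only ``xAdvance`` renders as a bare number, "format A"):

```python
# Sketch: a simple kerning pair with a single advance adjustment.
from fontTools.feaLib.ast import GlyphName, PairPosStatement, ValueRecord

stmt = PairPosStatement(
    GlyphName("A"), ValueRecord(xAdvance=-40), GlyphName("V"), None
)
print(stmt.asFea())  # pos A V -40;
```

With ``valuerecord2`` set to ``None``, the value record is printed after both glyphs, matching the compact pair-positioning syntax.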
-
-class ReverseChainSingleSubstStatement(Statement):
- """A reverse chaining substitution statement. You don't see those every day.
-
- Note the unusual argument order: ``suffix`` comes `before` ``glyphs``.
- ``old_prefix``, ``old_suffix``, ``glyphs`` and ``replacements`` should be
- lists of `glyph-containing objects`_. ``glyphs`` and ``replacements`` should
- be one-item lists.
- """
-
- def __init__(self, old_prefix, old_suffix, glyphs, replacements, location=None):
- Statement.__init__(self, location)
- self.old_prefix, self.old_suffix = old_prefix, old_suffix
- self.glyphs = glyphs
- self.replacements = replacements
-
- def build(self, builder):
- prefix = [p.glyphSet() for p in self.old_prefix]
- suffix = [s.glyphSet() for s in self.old_suffix]
- originals = self.glyphs[0].glyphSet()
- replaces = self.replacements[0].glyphSet()
- if len(replaces) == 1:
- replaces = replaces * len(originals)
- builder.add_reverse_chain_single_subst(
- self.location, prefix, suffix, dict(zip(originals, replaces))
- )
-
- def asFea(self, indent=""):
- res = "rsub "
- if len(self.old_prefix) or len(self.old_suffix):
- if len(self.old_prefix):
- res += " ".join(asFea(g) for g in self.old_prefix) + " "
- res += " ".join(asFea(g) + "'" for g in self.glyphs)
- if len(self.old_suffix):
- res += " " + " ".join(asFea(g) for g in self.old_suffix)
- else:
- res += " ".join(map(asFea, self.glyphs))
- res += " by {};".format(" ".join(asFea(g) for g in self.replacements))
- return res
-
-
-class SingleSubstStatement(Statement):
- """A single substitution statement.
-
- Note the unusual argument order: ``prefix`` and suffix come `after`
- the replacement ``glyphs``. ``prefix``, ``suffix``, ``glyphs`` and
- ``replace`` should be lists of `glyph-containing objects`_. ``glyphs`` and
- ``replace`` should be one-item lists.
- """
-
- def __init__(self, glyphs, replace, prefix, suffix, forceChain, location=None):
- Statement.__init__(self, location)
- self.prefix, self.suffix = prefix, suffix
- self.forceChain = forceChain
- self.glyphs = glyphs
- self.replacements = replace
-
- def build(self, builder):
- """Calls the builder object's ``add_single_subst`` callback."""
- prefix = [p.glyphSet() for p in self.prefix]
- suffix = [s.glyphSet() for s in self.suffix]
- originals = self.glyphs[0].glyphSet()
- replaces = self.replacements[0].glyphSet()
- if len(replaces) == 1:
- replaces = replaces * len(originals)
- builder.add_single_subst(
- self.location,
- prefix,
- suffix,
- OrderedDict(zip(originals, replaces)),
- self.forceChain,
- )
-
- def asFea(self, indent=""):
- res = "sub "
- if len(self.prefix) or len(self.suffix) or self.forceChain:
- if len(self.prefix):
- res += " ".join(asFea(g) for g in self.prefix) + " "
- res += " ".join(asFea(g) + "'" for g in self.glyphs)
- if len(self.suffix):
- res += " " + " ".join(asFea(g) for g in self.suffix)
- else:
- res += " ".join(asFea(g) for g in self.glyphs)
- res += " by {};".format(" ".join(asFea(g) for g in self.replacements))
- return res
-
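The one-item-list convention described in the docstring above can be sketched as follows, assuming the ``fontTools.feaLib.ast`` import path (an assumption; note ``forceChain`` is a required positional argument in the constructor shown here):

```python
# Sketch: a single substitution with no context.
from fontTools.feaLib.ast import GlyphName, SingleSubstStatement

stmt = SingleSubstStatement(
    [GlyphName("a")],     # glyphs: one-item list
    [GlyphName("a.sc")],  # replace: one-item list
    [], [],               # prefix, suffix
    False,                # forceChain
)
print(stmt.asFea())  # sub a by a.sc;
```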
-
-class ScriptStatement(Statement):
- """A ``script`` statement."""
-
- def __init__(self, script, location=None):
- Statement.__init__(self, location)
- self.script = script #: the script code
-
- def build(self, builder):
- """Calls the builder's ``set_script`` callback."""
- builder.set_script(self.location, self.script)
-
- def asFea(self, indent=""):
- return "script {};".format(self.script.strip())
-
-
-class SinglePosStatement(Statement):
- """A single position statement. ``prefix`` and ``suffix`` should be
- lists of `glyph-containing objects`_.
-
- ``pos`` should be a one-element list containing a (`glyph-containing object`_,
- :class:`ValueRecord`) tuple."""
-
- def __init__(self, pos, prefix, suffix, forceChain, location=None):
- Statement.__init__(self, location)
- self.pos, self.prefix, self.suffix = pos, prefix, suffix
- self.forceChain = forceChain
-
- def build(self, builder):
- """Calls the builder object's ``add_single_pos`` callback."""
- prefix = [p.glyphSet() for p in self.prefix]
- suffix = [s.glyphSet() for s in self.suffix]
- pos = [(g.glyphSet(), value) for g, value in self.pos]
- builder.add_single_pos(self.location, prefix, suffix, pos, self.forceChain)
-
- def asFea(self, indent=""):
- res = "pos "
- if len(self.prefix) or len(self.suffix) or self.forceChain:
- if len(self.prefix):
- res += " ".join(map(asFea, self.prefix)) + " "
- res += " ".join(
- [
- asFea(x[0]) + "'" + ((" " + x[1].asFea()) if x[1] else "")
- for x in self.pos
- ]
- )
- if len(self.suffix):
- res += " " + " ".join(map(asFea, self.suffix))
- else:
- res += " ".join(
- [asFea(x[0]) + " " + (x[1].asFea() if x[1] else "") for x in self.pos]
- )
- res += ";"
- return res
-
-
-class SubtableStatement(Statement):
- """Represents a subtable break."""
-
- def __init__(self, location=None):
- Statement.__init__(self, location)
-
- def build(self, builder):
- """Calls the builder objects's ``add_subtable_break`` callback."""
- builder.add_subtable_break(self.location)
-
- def asFea(self, indent=""):
- return "subtable;"
-
-
-class ValueRecord(Expression):
- """Represents a value record."""
-
- def __init__(
- self,
- xPlacement=None,
- yPlacement=None,
- xAdvance=None,
- yAdvance=None,
- xPlaDevice=None,
- yPlaDevice=None,
- xAdvDevice=None,
- yAdvDevice=None,
- vertical=False,
- location=None,
- ):
- Expression.__init__(self, location)
- self.xPlacement, self.yPlacement = (xPlacement, yPlacement)
- self.xAdvance, self.yAdvance = (xAdvance, yAdvance)
- self.xPlaDevice, self.yPlaDevice = (xPlaDevice, yPlaDevice)
- self.xAdvDevice, self.yAdvDevice = (xAdvDevice, yAdvDevice)
- self.vertical = vertical
-
- def __eq__(self, other):
- return (
- self.xPlacement == other.xPlacement
- and self.yPlacement == other.yPlacement
- and self.xAdvance == other.xAdvance
- and self.yAdvance == other.yAdvance
- and self.xPlaDevice == other.xPlaDevice
- and self.yPlaDevice == other.yPlaDevice
- and self.xAdvDevice == other.xAdvDevice
- and self.yAdvDevice == other.yAdvDevice
- )
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __hash__(self):
- return (
- hash(self.xPlacement)
- ^ hash(self.yPlacement)
- ^ hash(self.xAdvance)
- ^ hash(self.yAdvance)
- ^ hash(self.xPlaDevice)
- ^ hash(self.yPlaDevice)
- ^ hash(self.xAdvDevice)
- ^ hash(self.yAdvDevice)
- )
-
- def asFea(self, indent=""):
- if not self:
- return ""
-
- x, y = self.xPlacement, self.yPlacement
- xAdvance, yAdvance = self.xAdvance, self.yAdvance
- xPlaDevice, yPlaDevice = self.xPlaDevice, self.yPlaDevice
- xAdvDevice, yAdvDevice = self.xAdvDevice, self.yAdvDevice
- vertical = self.vertical
-
- # Try format A, if possible.
- if x is None and y is None:
- if xAdvance is None and vertical:
- return str(yAdvance)
- elif yAdvance is None and not vertical:
- return str(xAdvance)
-
- # Make any remaining None value 0 to avoid generating invalid records.
- x = x or 0
- y = y or 0
- xAdvance = xAdvance or 0
- yAdvance = yAdvance or 0
-
- # Try format B, if possible.
- if (
- xPlaDevice is None
- and yPlaDevice is None
- and xAdvDevice is None
- and yAdvDevice is None
- ):
- return "<%s %s %s %s>" % (x, y, xAdvance, yAdvance)
-
- # Last resort is format C.
- return "<%s %s %s %s %s %s %s %s>" % (
- x,
- y,
- xAdvance,
- yAdvance,
- deviceToString(xPlaDevice),
- deviceToString(yPlaDevice),
- deviceToString(xAdvDevice),
- deviceToString(yAdvDevice),
- )
-
- def __bool__(self):
- return any(
- getattr(self, v) is not None
- for v in [
- "xPlacement",
- "yPlacement",
- "xAdvance",
- "yAdvance",
- "xPlaDevice",
- "yPlaDevice",
- "xAdvDevice",
- "yAdvDevice",
- ]
- )
-
- __nonzero__ = __bool__
-
-
-class ValueRecordDefinition(Statement):
- """Represents a named value record definition."""
-
- def __init__(self, name, value, location=None):
- Statement.__init__(self, location)
- self.name = name #: Value record name as string
- self.value = value #: :class:`ValueRecord` object
-
- def asFea(self, indent=""):
- return "valueRecordDef {} {};".format(self.value.asFea(), self.name)
-
-
-def simplify_name_attributes(pid, eid, lid):
- if pid == 3 and eid == 1 and lid == 1033:
- return ""
- elif pid == 1 and eid == 0 and lid == 0:
- return "1"
- else:
- return "{} {} {}".format(pid, eid, lid)
-
-
-class NameRecord(Statement):
- """Represents a name record. (`Section 9.e. <https://adobe-type-tools.github.io/afdko/OpenTypeFeatureFileSpecification.html#9.e>`_)"""
-
- def __init__(self, nameID, platformID, platEncID, langID, string, location=None):
- Statement.__init__(self, location)
- self.nameID = nameID #: Name ID as integer (e.g. 9 for designer's name)
- self.platformID = platformID #: Platform ID as integer
- self.platEncID = platEncID #: Platform encoding ID as integer
- self.langID = langID #: Language ID as integer
- self.string = string #: Name record value
-
- def build(self, builder):
- """Calls the builder object's ``add_name_record`` callback."""
- builder.add_name_record(
- self.location,
- self.nameID,
- self.platformID,
- self.platEncID,
- self.langID,
- self.string,
- )
-
- def asFea(self, indent=""):
- def escape(c, escape_pattern):
- # Also escape U+0022 QUOTATION MARK and U+005C REVERSE SOLIDUS
- if c >= 0x20 and c <= 0x7E and c not in (0x22, 0x5C):
- return chr(c)
- else:
- return escape_pattern % c
-
- encoding = getEncoding(self.platformID, self.platEncID, self.langID)
- if encoding is None:
- raise FeatureLibError("Unsupported encoding", self.location)
- s = tobytes(self.string, encoding=encoding)
- if encoding == "utf_16_be":
- escaped_string = "".join(
- [
- escape(byteord(s[i]) * 256 + byteord(s[i + 1]), r"\%04x")
- for i in range(0, len(s), 2)
- ]
- )
- else:
- escaped_string = "".join([escape(byteord(b), r"\%02x") for b in s])
- plat = simplify_name_attributes(self.platformID, self.platEncID, self.langID)
- if plat != "":
- plat += " "
- return 'nameid {} {}"{}";'.format(self.nameID, plat, escaped_string)
-
-
-class FeatureNameStatement(NameRecord):
- """Represents a ``sizemenuname`` or ``name`` statement."""
-
- def build(self, builder):
- """Calls the builder object's ``add_featureName`` callback."""
- NameRecord.build(self, builder)
- builder.add_featureName(self.nameID)
-
- def asFea(self, indent=""):
- if self.nameID == "size":
- tag = "sizemenuname"
- else:
- tag = "name"
- plat = simplify_name_attributes(self.platformID, self.platEncID, self.langID)
- if plat != "":
- plat += " "
- return '{} {}"{}";'.format(tag, plat, self.string)
-
-
-class STATNameStatement(NameRecord):
- """Represents a STAT table ``name`` statement."""
-
- def asFea(self, indent=""):
- plat = simplify_name_attributes(self.platformID, self.platEncID, self.langID)
- if plat != "":
- plat += " "
- return 'name {}"{}";'.format(plat, self.string)
-
-
-class SizeParameters(Statement):
- """A ``parameters`` statement."""
-
- def __init__(self, DesignSize, SubfamilyID, RangeStart, RangeEnd, location=None):
- Statement.__init__(self, location)
- self.DesignSize = DesignSize
- self.SubfamilyID = SubfamilyID
- self.RangeStart = RangeStart
- self.RangeEnd = RangeEnd
-
- def build(self, builder):
- """Calls the builder object's ``set_size_parameters`` callback."""
- builder.set_size_parameters(
- self.location,
- self.DesignSize,
- self.SubfamilyID,
- self.RangeStart,
- self.RangeEnd,
- )
-
- def asFea(self, indent=""):
- res = "parameters {:.1f} {}".format(self.DesignSize, self.SubfamilyID)
- if self.RangeStart != 0 or self.RangeEnd != 0:
- res += " {} {}".format(int(self.RangeStart * 10), int(self.RangeEnd * 10))
- return res + ";"
-
-
-class CVParametersNameStatement(NameRecord):
- """Represents a name statement inside a ``cvParameters`` block."""
-
- def __init__(
- self, nameID, platformID, platEncID, langID, string, block_name, location=None
- ):
- NameRecord.__init__(
- self, nameID, platformID, platEncID, langID, string, location=location
- )
- self.block_name = block_name
-
- def build(self, builder):
- """Calls the builder object's ``add_cv_parameter`` callback."""
- item = ""
- if self.block_name == "ParamUILabelNameID":
- item = "_{}".format(builder.cv_num_named_params_.get(self.nameID, 0))
- builder.add_cv_parameter(self.nameID)
- self.nameID = (self.nameID, self.block_name + item)
- NameRecord.build(self, builder)
-
- def asFea(self, indent=""):
- plat = simplify_name_attributes(self.platformID, self.platEncID, self.langID)
- if plat != "":
- plat += " "
- return 'name {}"{}";'.format(plat, self.string)
-
-
-class CharacterStatement(Statement):
- """
- Statement used in cvParameters blocks of Character Variant features (cvXX).
- The Unicode value may be written with either decimal or hexadecimal
- notation. The value must be preceded by '0x' if it is a hexadecimal value.
- The largest Unicode value allowed is 0xFFFFFF.
- """
-
- def __init__(self, character, tag, location=None):
- Statement.__init__(self, location)
- self.character = character
- self.tag = tag
-
- def build(self, builder):
- """Calls the builder object's ``add_cv_character`` callback."""
- builder.add_cv_character(self.character, self.tag)
-
- def asFea(self, indent=""):
- return "Character {:#x};".format(self.character)
-
-
-class BaseAxis(Statement):
- """An axis definition, being either a ``VertAxis.BaseTagList/BaseScriptList``
- pair or a ``HorizAxis.BaseTagList/BaseScriptList`` pair."""
-
- def __init__(self, bases, scripts, vertical, location=None):
- Statement.__init__(self, location)
- self.bases = bases #: A list of baseline tag names as strings
- self.scripts = scripts #: A list of script record tuples (script tag, default baseline tag, base coordinate)
- self.vertical = vertical #: Boolean; VertAxis if True, HorizAxis if False
-
- def build(self, builder):
- """Calls the builder object's ``set_base_axis`` callback."""
- builder.set_base_axis(self.bases, self.scripts, self.vertical)
-
- def asFea(self, indent=""):
- direction = "Vert" if self.vertical else "Horiz"
- scripts = [
- "{} {} {}".format(a[0], a[1], " ".join(map(str, a[2])))
- for a in self.scripts
- ]
- return "{}Axis.BaseTagList {};\n{}{}Axis.BaseScriptList {};".format(
- direction, " ".join(self.bases), indent, direction, ", ".join(scripts)
- )
-
-
-class OS2Field(Statement):
- """An entry in the ``OS/2`` table. Most ``values`` should be numbers or
- strings, apart from when the key is ``UnicodeRange``, ``CodePageRange``
- or ``Panose``, in which case it should be an array of integers."""
-
- def __init__(self, key, value, location=None):
- Statement.__init__(self, location)
- self.key = key
- self.value = value
-
- def build(self, builder):
- """Calls the builder object's ``add_os2_field`` callback."""
- builder.add_os2_field(self.key, self.value)
-
- def asFea(self, indent=""):
- def intarr2str(x):
- return " ".join(map(str, x))
-
- numbers = (
- "FSType",
- "TypoAscender",
- "TypoDescender",
- "TypoLineGap",
- "winAscent",
- "winDescent",
- "XHeight",
- "CapHeight",
- "WeightClass",
- "WidthClass",
- "LowerOpSize",
- "UpperOpSize",
- )
- ranges = ("UnicodeRange", "CodePageRange")
- keywords = dict([(x.lower(), [x, str]) for x in numbers])
- keywords.update([(x.lower(), [x, intarr2str]) for x in ranges])
- keywords["panose"] = ["Panose", intarr2str]
- keywords["vendor"] = ["Vendor", lambda y: '"{}"'.format(y)]
- if self.key in keywords:
- return "{} {};".format(
- keywords[self.key][0], keywords[self.key][1](self.value)
- )
- return "" # should raise exception
-
-
-class HheaField(Statement):
- """An entry in the ``hhea`` table."""
-
- def __init__(self, key, value, location=None):
- Statement.__init__(self, location)
- self.key = key
- self.value = value
-
- def build(self, builder):
- """Calls the builder object's ``add_hhea_field`` callback."""
- builder.add_hhea_field(self.key, self.value)
-
- def asFea(self, indent=""):
- fields = ("CaretOffset", "Ascender", "Descender", "LineGap")
- keywords = dict([(x.lower(), x) for x in fields])
- return "{} {};".format(keywords[self.key], self.value)
-
-
-class VheaField(Statement):
- """An entry in the ``vhea`` table."""
-
- def __init__(self, key, value, location=None):
- Statement.__init__(self, location)
- self.key = key
- self.value = value
-
- def build(self, builder):
- """Calls the builder object's ``add_vhea_field`` callback."""
- builder.add_vhea_field(self.key, self.value)
-
- def asFea(self, indent=""):
- fields = ("VertTypoAscender", "VertTypoDescender", "VertTypoLineGap")
- keywords = dict([(x.lower(), x) for x in fields])
- return "{} {};".format(keywords[self.key], self.value)
-
-
-class STATDesignAxisStatement(Statement):
- """A STAT table Design Axis
-
- Args:
- tag (str): a 4 letter axis tag
- axisOrder (int): an int
- names (list): a list of :class:`STATNameStatement` objects
- """
-
- def __init__(self, tag, axisOrder, names, location=None):
- Statement.__init__(self, location)
- self.tag = tag
- self.axisOrder = axisOrder
- self.names = names
- self.location = location
-
- def build(self, builder):
- builder.addDesignAxis(self, self.location)
-
- def asFea(self, indent=""):
- indent += SHIFT
- res = f"DesignAxis {self.tag} {self.axisOrder} {{ \n"
- res += ("\n" + indent).join([s.asFea(indent=indent) for s in self.names]) + "\n"
- res += "};"
- return res
-
-
-class ElidedFallbackName(Statement):
- """STAT table ElidedFallbackName
-
- Args:
- names: a list of :class:`STATNameStatement` objects
- """
-
- def __init__(self, names, location=None):
- Statement.__init__(self, location)
- self.names = names
- self.location = location
-
- def build(self, builder):
- builder.setElidedFallbackName(self.names, self.location)
-
- def asFea(self, indent=""):
- indent += SHIFT
- res = "ElidedFallbackName { \n"
- res += ("\n" + indent).join([s.asFea(indent=indent) for s in self.names]) + "\n"
- res += "};"
- return res
-
-
-class ElidedFallbackNameID(Statement):
- """STAT table ElidedFallbackNameID
-
- Args:
- value: an int pointing to an existing name table name ID
- """
-
- def __init__(self, value, location=None):
- Statement.__init__(self, location)
- self.value = value
- self.location = location
-
- def build(self, builder):
- builder.setElidedFallbackName(self.value, self.location)
-
- def asFea(self, indent=""):
- return f"ElidedFallbackNameID {self.value};"
-
-
-class STATAxisValueStatement(Statement):
- """A STAT table Axis Value Record
-
- Args:
- names (list): a list of :class:`STATNameStatement` objects
- locations (list): a list of :class:`AxisValueLocationStatement` objects
- flags (int): an int
- """
-
- def __init__(self, names, locations, flags, location=None):
- Statement.__init__(self, location)
- self.names = names
- self.locations = locations
- self.flags = flags
-
- def build(self, builder):
- builder.addAxisValueRecord(self, self.location)
-
- def asFea(self, indent=""):
- res = "AxisValue {\n"
- for location in self.locations:
- res += location.asFea()
-
- for nameRecord in self.names:
- res += nameRecord.asFea()
- res += "\n"
-
- if self.flags:
- flags = ["OlderSiblingFontAttribute", "ElidableAxisValueName"]
- flagStrings = []
- curr = 1
- for i in range(len(flags)):
- if self.flags & curr != 0:
- flagStrings.append(flags[i])
- curr = curr << 1
- res += f"flag {' '.join(flagStrings)};\n"
- res += "};"
- return res
-
-
-class AxisValueLocationStatement(Statement):
- """
- A STAT table Axis Value Location
-
- Args:
- tag (str): a 4 letter axis tag
- values (list): a list of ints and/or floats
- """
-
- def __init__(self, tag, values, location=None):
- Statement.__init__(self, location)
- self.tag = tag
- self.values = values
-
- def asFea(self, res=""):
- res += f"location {self.tag} "
- res += f"{' '.join(str(i) for i in self.values)};\n"
- return res
-
-
-class ConditionsetStatement(Statement):
- """
- A variable layout conditionset
-
- Args:
- name (str): the name of this conditionset
- conditions (dict): a dictionary mapping axis tags to a
- tuple of (min,max) userspace coordinates.
- """
-
- def __init__(self, name, conditions, location=None):
- Statement.__init__(self, location)
- self.name = name
- self.conditions = conditions
-
- def build(self, builder):
- builder.add_conditionset(self.location, self.name, self.conditions)
-
- def asFea(self, res="", indent=""):
- res += indent + f"conditionset {self.name} " + "{\n"
- for tag, (minvalue, maxvalue) in self.conditions.items():
- res += indent + SHIFT + f"{tag} {minvalue} {maxvalue};\n"
- res += indent + "}" + f" {self.name};\n"
- return res
-
-
-class VariationBlock(Block):
- """A variation feature block, applicable in a given set of conditions."""
-
- def __init__(self, name, conditionset, use_extension=False, location=None):
- Block.__init__(self, location)
- self.name, self.conditionset, self.use_extension = (
- name,
- conditionset,
- use_extension,
- )
-
- def build(self, builder):
- """Call the ``start_feature`` callback on the builder object, visit
- all the statements in this feature, and then call ``end_feature``."""
- builder.start_feature(self.location, self.name)
- if (
- self.conditionset != "NULL"
- and self.conditionset not in builder.conditionsets_
- ):
- raise FeatureLibError(
- f"variation block used undefined conditionset {self.conditionset}",
- self.location,
- )
-
- # language exclude_dflt statements modify builder.features_
- # limit them to this block with temporary builder.features_
- features = builder.features_
- builder.features_ = {}
- Block.build(self, builder)
- for key, value in builder.features_.items():
- items = builder.feature_variations_.setdefault(key, {}).setdefault(
- self.conditionset, []
- )
- items.extend(value)
- if key not in features:
- features[key] = [] # Ensure we make a feature record
- builder.features_ = features
- builder.end_feature()
-
- def asFea(self, indent=""):
- res = indent + "variation %s " % self.name.strip()
- res += self.conditionset + " "
- if self.use_extension:
- res += "useExtension "
- res += "{\n"
- res += Block.asFea(self, indent=indent)
- res += indent + "} %s;\n" % self.name.strip()
- return res
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/communal/read.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/communal/read.py
deleted file mode 100644
index 1098a9838110b48eac32c84909ae7407bbcc719f..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/dataset/communal/read.py
+++ /dev/null
@@ -1,214 +0,0 @@
-"""
-@Date: 2021/07/28
-@description:
-"""
-import os
-import numpy as np
-import cv2
-import json
-from PIL import Image
-from utils.conversion import xyz2uv, pixel2uv
-from utils.height import calc_ceil_ratio
-
-
-def read_image(image_path, shape=None):
- if shape is None:
- shape = [512, 1024]
- img = np.array(Image.open(image_path)).astype(np.float32) / 255
- if img.shape[0] != shape[0] or img.shape[1] != shape[1]:
- img = cv2.resize(img, dsize=tuple(shape[::-1]), interpolation=cv2.INTER_AREA)
-
- return np.array(img)
-
-
-def read_label(label_path, data_type='MP3D'):
-
- if data_type == 'MP3D':
- with open(label_path, 'r') as f:
- label = json.load(f)
- point_idx = [one['pointsIdx'][0] for one in label['layoutWalls']['walls']]
- camera_height = label['cameraHeight']
- room_height = label['layoutHeight']
- camera_ceiling_height = room_height - camera_height
- ratio = camera_ceiling_height / camera_height
-
- xyz = [one['xyz'] for one in label['layoutPoints']['points']]
- assert len(xyz) == len(point_idx), "len(xyz) != len(point_idx)"
- xyz = [xyz[i] for i in point_idx]
- xyz = np.asarray(xyz, dtype=np.float32)
- xyz[:, 2] *= -1
- xyz[:, 1] = camera_height
- corners = xyz2uv(xyz)
- elif data_type == 'Pano_S2D3D':
- with open(label_path, 'r') as f:
- lines = [line for line in f.readlines() if
- len([c for c in line.split(' ') if c and c[0].isnumeric()]) > 1]
-
- corners_list = np.array([line.strip().split() for line in lines], np.float32)
- uv_list = pixel2uv(corners_list)
- ceil_uv = uv_list[::2]
- floor_uv = uv_list[1::2]
- ratio = calc_ceil_ratio([ceil_uv, floor_uv], mode='mean')
- corners = floor_uv
- else:
- return None
-
- output = {
- 'ratio': np.array([ratio], dtype=np.float32),
- 'corners': corners,
- 'id': os.path.basename(label_path).split('.')[0]
- }
- return output
-
-
-def move_not_simple_image(data_dir, simple_panos):
- import shutil
- for house_index in os.listdir(data_dir):
- house_path = os.path.join(data_dir, house_index)
- if not os.path.isdir(house_path) or house_index == 'visualization':
- continue
-
- floor_plan_path = os.path.join(house_path, 'floor_plans')
- if os.path.exists(floor_plan_path):
- print(f'move:{floor_plan_path}')
- dst_floor_plan_path = floor_plan_path.replace('zind', 'zind2')
- os.makedirs(dst_floor_plan_path, exist_ok=True)
- shutil.move(floor_plan_path, dst_floor_plan_path)
-
- panos_path = os.path.join(house_path, 'panos')
- for pano in os.listdir(panos_path):
- pano_path = os.path.join(panos_path, pano)
- pano_index = '_'.join(pano.split('.')[0].split('_')[-2:])
- if f'{house_index}_{pano_index}' not in simple_panos and os.path.exists(pano_path):
- print(f'move:{pano_path}')
- dst_pano_path = pano_path.replace('zind', 'zind2')
- os.makedirs(os.path.dirname(dst_pano_path), exist_ok=True)
- shutil.move(pano_path, dst_pano_path)
-
-
-def read_zind(partition_path, simplicity_path, data_dir, mode, is_simple=True,
- layout_type='layout_raw', is_ceiling_flat=False, plan_y=1):
- with open(simplicity_path, 'r') as f:
- simple_tag = json.load(f)
- simple_panos = {}
- for k in simple_tag.keys():
- if not simple_tag[k]:
- continue
- split = k.split('_')
- house_index = split[0]
- pano_index = '_'.join(split[-2:])
- simple_panos[f'{house_index}_{pano_index}'] = True
-
- # move_not_simple_image(data_dir, simple_panos)
-
- pano_list = []
- with open(partition_path, 'r') as f1:
- house_list = json.load(f1)[mode]
-
- for house_index in house_list:
- with open(os.path.join(data_dir, house_index, "zind_data.json"), 'r') as f2:
- data = json.load(f2)
-
- panos = []
- merger = data['merger']
- for floor in merger.values():
- for complete_room in floor.values():
- for partial_room in complete_room.values():
- for pano_index in partial_room:
- pano = partial_room[pano_index]
- pano['index'] = pano_index
- panos.append(pano)
-
- for pano in panos:
- if layout_type not in pano:
- continue
- pano_index = pano['index']
-
- if is_simple and f'{house_index}_{pano_index}' not in simple_panos.keys():
- continue
-
- if is_ceiling_flat and not pano['is_ceiling_flat']:
- continue
-
- layout = pano[layout_type]
- # corners
- corner_xz = np.array(layout['vertices'])
- corner_xz[..., 0] = -corner_xz[..., 0]
- corner_xyz = np.insert(corner_xz, 1, pano['camera_height'], axis=1)
- corners = xyz2uv(corner_xyz).astype(np.float32)
-
- # ratio
- ratio = np.array([(pano['ceiling_height'] - pano['camera_height']) / pano['camera_height']], dtype=np.float32)
-
- # Our future work: detecting windows, doors, and openings
- objects = {
- 'windows': [],
- 'doors': [],
- 'openings': [],
- }
- for label_index, wdo_type in enumerate(["windows", "doors", "openings"]):
- if wdo_type not in layout:
- continue
-
- wdo_vertices = np.array(layout[wdo_type])
- if len(wdo_vertices) == 0:
- continue
-
- assert len(wdo_vertices) % 3 == 0
-
- for i in range(0, len(wdo_vertices), 3):
- # In the Zind dataset, the camera height is 1, and the default camera height in our code is also 1,
- # so the xyz coordinate here can be used directly
- # Since we're taking the opposite z-axis, we're changing the order of left and right
-
- left_bottom_xyz = np.array(
- [-wdo_vertices[i + 1][0], -wdo_vertices[i + 2][0], wdo_vertices[i + 1][1]])
- right_bottom_xyz = np.array(
- [-wdo_vertices[i][0], -wdo_vertices[i + 2][0], wdo_vertices[i][1]])
- center_bottom_xyz = (left_bottom_xyz + right_bottom_xyz) / 2
-
- center_top_xyz = center_bottom_xyz.copy()
- center_top_xyz[1] = -wdo_vertices[i + 2][1]
-
- center_boundary_xyz = center_bottom_xyz.copy()
- center_boundary_xyz[1] = plan_y
-
- uv = xyz2uv(np.array([left_bottom_xyz, right_bottom_xyz,
- center_bottom_xyz, center_top_xyz,
- center_boundary_xyz]))
-
- left_bottom_uv = uv[0]
- right_bottom_uv = uv[1]
- width_u = abs(right_bottom_uv[0] - left_bottom_uv[0])
- width_u = 1 - width_u if width_u > 0.5 else width_u
- assert width_u > 0, width_u
-
- center_bottom_uv = uv[2]
- center_top_uv = uv[3]
- height_v = center_bottom_uv[1] - center_top_uv[1]
-
- if height_v < 0:
- continue
-
- center_boundary_uv = uv[4]
- boundary_v = center_boundary_uv[1] - center_bottom_uv[1] if wdo_type == 'windows' else 0
- boundary_v = 0 if boundary_v < 0 else boundary_v
-
- center_u = center_bottom_uv[0]
-
- objects[wdo_type].append({
- 'width_u': width_u,
- 'height_v': height_v,
- 'boundary_v': boundary_v,
- 'center_u': center_u
- })
-
- pano_list.append({
- 'img_path': os.path.join(data_dir, house_index, pano['image_path']),
- 'corners': corners,
- 'objects': objects,
- 'ratio': ratio,
- 'id': f'{house_index}_{pano_index}',
- 'is_inside': pano['is_inside']
- })
- return pano_list
diff --git a/spaces/Dauzy/whisper-webui/LICENSE.md b/spaces/Dauzy/whisper-webui/LICENSE.md
deleted file mode 100644
index f5f4b8b5ecd27c09e4ef16e9662bcb7bb2bfc76f..0000000000000000000000000000000000000000
--- a/spaces/Dauzy/whisper-webui/LICENSE.md
+++ /dev/null
@@ -1,195 +0,0 @@
-Apache License
-==============
-
-_Version 2.0, January 2004_
-_<http://www.apache.org/licenses/>_
-
-### Terms and Conditions for use, reproduction, and distribution
-
-#### 1. Definitions
-
-“License” shall mean the terms and conditions for use, reproduction, and
-distribution as defined by Sections 1 through 9 of this document.
-
-“Licensor” shall mean the copyright owner or entity authorized by the copyright
-owner that is granting the License.
-
-“Legal Entity” shall mean the union of the acting entity and all other entities
-that control, are controlled by, or are under common control with that entity.
-For the purposes of this definition, “control” means **(i)** the power, direct or
-indirect, to cause the direction or management of such entity, whether by
-contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the
-outstanding shares, or **(iii)** beneficial ownership of such entity.
-
-“You” (or “Your”) shall mean an individual or Legal Entity exercising
-permissions granted by this License.
-
-“Source” form shall mean the preferred form for making modifications, including
-but not limited to software source code, documentation source, and configuration
-files.
-
-“Object” form shall mean any form resulting from mechanical transformation or
-translation of a Source form, including but not limited to compiled object code,
-generated documentation, and conversions to other media types.
-
-“Work” shall mean the work of authorship, whether in Source or Object form, made
-available under the License, as indicated by a copyright notice that is included
-in or attached to the work (an example is provided in the Appendix below).
-
-“Derivative Works” shall mean any work, whether in Source or Object form, that
-is based on (or derived from) the Work and for which the editorial revisions,
-annotations, elaborations, or other modifications represent, as a whole, an
-original work of authorship. For the purposes of this License, Derivative Works
-shall not include works that remain separable from, or merely link (or bind by
-name) to the interfaces of, the Work and Derivative Works thereof.
-
-“Contribution” shall mean any work of authorship, including the original version
-of the Work and any modifications or additions to that Work or Derivative Works
-thereof, that is intentionally submitted to Licensor for inclusion in the Work
-by the copyright owner or by an individual or Legal Entity authorized to submit
-on behalf of the copyright owner. For the purposes of this definition,
-“submitted” means any form of electronic, verbal, or written communication sent
-to the Licensor or its representatives, including but not limited to
-communication on electronic mailing lists, source code control systems, and
-issue tracking systems that are managed by, or on behalf of, the Licensor for
-the purpose of discussing and improving the Work, but excluding communication
-that is conspicuously marked or otherwise designated in writing by the copyright
-owner as “Not a Contribution.”
-
-“Contributor” shall mean Licensor and any individual or Legal Entity on behalf
-of whom a Contribution has been received by Licensor and subsequently
-incorporated within the Work.
-
-#### 2. Grant of Copyright License
-
-Subject to the terms and conditions of this License, each Contributor hereby
-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
-irrevocable copyright license to reproduce, prepare Derivative Works of,
-publicly display, publicly perform, sublicense, and distribute the Work and such
-Derivative Works in Source or Object form.
-
-#### 3. Grant of Patent License
-
-Subject to the terms and conditions of this License, each Contributor hereby
-grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
-irrevocable (except as stated in this section) patent license to make, have
-made, use, offer to sell, sell, import, and otherwise transfer the Work, where
-such license applies only to those patent claims licensable by such Contributor
-that are necessarily infringed by their Contribution(s) alone or by combination
-of their Contribution(s) with the Work to which such Contribution(s) was
-submitted. If You institute patent litigation against any entity (including a
-cross-claim or counterclaim in a lawsuit) alleging that the Work or a
-Contribution incorporated within the Work constitutes direct or contributory
-patent infringement, then any patent licenses granted to You under this License
-for that Work shall terminate as of the date such litigation is filed.
-
-#### 4. Redistribution
-
-You may reproduce and distribute copies of the Work or Derivative Works thereof
-in any medium, with or without modifications, and in Source or Object form,
-provided that You meet the following conditions:
-
-* **(a)** You must give any other recipients of the Work or Derivative Works a copy of
-this License; and
-* **(b)** You must cause any modified files to carry prominent notices stating that You
-changed the files; and
-* **(c)** You must retain, in the Source form of any Derivative Works that You distribute,
-all copyright, patent, trademark, and attribution notices from the Source form
-of the Work, excluding those notices that do not pertain to any part of the
-Derivative Works; and
-* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any
-Derivative Works that You distribute must include a readable copy of the
-attribution notices contained within such NOTICE file, excluding those notices
-that do not pertain to any part of the Derivative Works, in at least one of the
-following places: within a NOTICE text file distributed as part of the
-Derivative Works; within the Source form or documentation, if provided along
-with the Derivative Works; or, within a display generated by the Derivative
-Works, if and wherever such third-party notices normally appear. The contents of
-the NOTICE file are for informational purposes only and do not modify the
-License. You may add Your own attribution notices within Derivative Works that
-You distribute, alongside or as an addendum to the NOTICE text from the Work,
-provided that such additional attribution notices cannot be construed as
-modifying the License.
-
-You may add Your own copyright statement to Your modifications and may provide
-additional or different license terms and conditions for use, reproduction, or
-distribution of Your modifications, or for any such Derivative Works as a whole,
-provided Your use, reproduction, and distribution of the Work otherwise complies
-with the conditions stated in this License.
-
-#### 5. Submission of Contributions
-
-Unless You explicitly state otherwise, any Contribution intentionally submitted
-for inclusion in the Work by You to the Licensor shall be under the terms and
-conditions of this License, without any additional terms or conditions.
-Notwithstanding the above, nothing herein shall supersede or modify the terms of
-any separate license agreement you may have executed with Licensor regarding
-such Contributions.
-
-#### 6. Trademarks
-
-This License does not grant permission to use the trade names, trademarks,
-service marks, or product names of the Licensor, except as required for
-reasonable and customary use in describing the origin of the Work and
-reproducing the content of the NOTICE file.
-
-#### 7. Disclaimer of Warranty
-
-Unless required by applicable law or agreed to in writing, Licensor provides the
-Work (and each Contributor provides its Contributions) on an “AS IS” BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
-including, without limitation, any warranties or conditions of TITLE,
-NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
-solely responsible for determining the appropriateness of using or
-redistributing the Work and assume any risks associated with Your exercise of
-permissions under this License.
-
-#### 8. Limitation of Liability
-
-In no event and under no legal theory, whether in tort (including negligence),
-contract, or otherwise, unless required by applicable law (such as deliberate
-and grossly negligent acts) or agreed to in writing, shall any Contributor be
-liable to You for damages, including any direct, indirect, special, incidental,
-or consequential damages of any character arising as a result of this License or
-out of the use or inability to use the Work (including but not limited to
-damages for loss of goodwill, work stoppage, computer failure or malfunction, or
-any and all other commercial damages or losses), even if such Contributor has
-been advised of the possibility of such damages.
-
-#### 9. Accepting Warranty or Additional Liability
-
-While redistributing the Work or Derivative Works thereof, You may choose to
-offer, and charge a fee for, acceptance of support, warranty, indemnity, or
-other liability obligations and/or rights consistent with this License. However,
-in accepting such obligations, You may act only on Your own behalf and on Your
-sole responsibility, not on behalf of any other Contributor, and only if You
-agree to indemnify, defend, and hold each Contributor harmless for any liability
-incurred by, or claims asserted against, such Contributor by reason of your
-accepting any such warranty or additional liability.
-
-_END OF TERMS AND CONDITIONS_
-
-### APPENDIX: How to apply the Apache License to your work
-
-To apply the Apache License to your work, attach the following boilerplate
-notice, with the fields enclosed by brackets `[]` replaced with your own
-identifying information. (Don't include the brackets!) The text should be
-enclosed in the appropriate comment syntax for the file format. We also
-recommend that a file or class name and description of purpose be included on
-the same “printed page” as the copyright notice for easier identification within
-third-party archives.
-
- Copyright [yyyy] [name of copyright owner]
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
diff --git a/spaces/Detomo/ai-comic-generation/src/app/engine/community.ts b/spaces/Detomo/ai-comic-generation/src/app/engine/community.ts
deleted file mode 100644
index 33bc412fac7767d707861e125d1c1434e7cd286c..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/app/engine/community.ts
+++ /dev/null
@@ -1,135 +0,0 @@
-"use server"
-
-import { v4 as uuidv4 } from "uuid"
-
-import { CreatePostResponse, GetAppPostsResponse, Post, PostVisibility } from "@/types"
-import { filterOutBadWords } from "./censorship"
-
-const apiUrl = `${process.env.COMMUNITY_API_URL || ""}`
-const apiToken = `${process.env.COMMUNITY_API_TOKEN || ""}`
-const appId = `${process.env.COMMUNITY_API_ID || ""}`
-
-export async function postToCommunity({
- prompt,
- assetUrl,
-}: {
- prompt: string
- assetUrl: string
-}): Promise<Post> {
-
- prompt = filterOutBadWords(prompt)
-
- // if the community API is disabled,
- // we don't fail, we just mock
- if (!apiUrl) {
- const mockPost: Post = {
- postId: uuidv4(),
- appId: "mock",
- prompt,
- previewUrl: assetUrl,
- assetUrl,
- createdAt: new Date().toISOString(),
- visibility: "normal",
- upvotes: 0,
- downvotes: 0
- }
- return mockPost
- }
-
- if (!prompt) {
- console.error(`cannot call the community API without a prompt, aborting..`)
- throw new Error(`cannot call the community API without a prompt, aborting..`)
- }
- if (!assetUrl) {
- console.error(`cannot call the community API without an assetUrl, aborting..`)
- throw new Error(`cannot call the community API without an assetUrl, aborting..`)
- }
-
- try {
- console.log(`calling POST ${apiUrl}/posts/${appId} with prompt: ${prompt}`)
-
- const postId = uuidv4()
-
-    const post: Partial<Post> = { postId, appId, prompt, assetUrl }
-
- console.table(post)
-
- const res = await fetch(`${apiUrl}/posts/${appId}`, {
- method: "POST",
- headers: {
- Accept: "application/json",
- "Content-Type": "application/json",
- Authorization: `Bearer ${apiToken}`,
- },
- body: JSON.stringify(post),
- cache: 'no-store',
- // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache)
- // next: { revalidate: 1 }
- })
-
- // console.log("res:", res)
- // The return value is *not* serialized
- // You can return Date, Map, Set, etc.
-
- // Recommendation: handle errors
- if (res.status !== 200) {
- // This will activate the closest `error.js` Error Boundary
- throw new Error('Failed to fetch data')
- }
-
- const response = (await res.json()) as CreatePostResponse
- // console.log("response:", response)
- return response.post
- } catch (err) {
- const error = `failed to post to community: ${err}`
- console.error(error)
- throw new Error(error)
- }
-}
-
-export async function getLatestPosts(visibility?: PostVisibility): Promise<Post[]> {
-
- let posts: Post[] = []
-
- // if the community API is disabled we don't fail,
- // we just mock
- if (!apiUrl) {
- return posts
- }
-
- try {
- // console.log(`calling GET ${apiUrl}/posts with renderId: ${renderId}`)
- const res = await fetch(`${apiUrl}/posts/${appId}/${
- visibility || "all"
- }`, {
- method: "GET",
- headers: {
- Accept: "application/json",
- "Content-Type": "application/json",
- Authorization: `Bearer ${apiToken}`,
- },
- cache: 'no-store',
- // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache)
- // next: { revalidate: 1 }
- })
-
- // console.log("res:", res)
- // The return value is *not* serialized
- // You can return Date, Map, Set, etc.
-
- // Recommendation: handle errors
- if (res.status !== 200) {
- // This will activate the closest `error.js` Error Boundary
- throw new Error('Failed to fetch data')
- }
-
- const response = (await res.json()) as GetAppPostsResponse
- // console.log("response:", response)
- return Array.isArray(response?.posts) ? response?.posts : []
- } catch (err) {
- // const error = `failed to get posts: ${err}`
- // console.error(error)
- // throw new Error(error)
- return []
- }
-}
\ No newline at end of file
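The deleted `community.ts` above degrades gracefully: when `COMMUNITY_API_URL` is unset it does not fail, it returns a locally built mock post. A minimal Python sketch of that fallback pattern (field names mirror the `Post` shape used above; the real HTTP branch is elided, so this is illustration only, not the actual client):

```python
import uuid
from datetime import datetime, timezone

def post_to_community(prompt, asset_url, api_url=""):
    """Return a mock post when no community API URL is configured,
    mirroring the fallback in community.ts."""
    if not prompt or not asset_url:
        raise ValueError("prompt and asset_url are required")
    if not api_url:
        # Community API disabled: don't fail, just mock the response.
        return {
            "postId": str(uuid.uuid4()),
            "appId": "mock",
            "prompt": prompt,
            "assetUrl": asset_url,
            "createdAt": datetime.now(timezone.utc).isoformat(),
            "upvotes": 0,
            "downvotes": 0,
        }
    # A real implementation would POST to f"{api_url}/posts/{app_id}"
    # with a Bearer token, as the TypeScript version does.
    raise NotImplementedError
```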
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/configs/global_config.py b/spaces/DragGan/DragGan-Inversion/PTI/configs/global_config.py
deleted file mode 100644
index bf3a20e61b0baf5e85377570cdf0f235bade21bd..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/configs/global_config.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Device
-cuda_visible_devices = '0'
-device = 'cuda:0'
-
-# Logs
-training_step = 1
-image_rec_result_log_snapshot = 100
-pivotal_training_steps = 0
-model_snapshot_interval = 400
-
-# Run name to be updated during PTI
-run_name = ''
diff --git a/spaces/DragGan/DragGan/stylegan_human/torch_utils/custom_ops.py b/spaces/DragGan/DragGan/stylegan_human/torch_utils/custom_ops.py
deleted file mode 100644
index fda77a69777a69bd3eda96713c29f66fe3b016b9..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/torch_utils/custom_ops.py
+++ /dev/null
@@ -1,238 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import glob
-import torch
-import torch.utils.cpp_extension
-import importlib
-import hashlib
-import shutil
-from pathlib import Path
-import re
-import uuid
-
-from torch.utils.file_baton import FileBaton
-
-#----------------------------------------------------------------------------
-# Global options.
-
-verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full'
-
-#----------------------------------------------------------------------------
-# Internal helper funcs.
-
-def _find_compiler_bindir():
- patterns = [
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin',
- ]
- for pattern in patterns:
- matches = sorted(glob.glob(pattern))
- if len(matches):
- return matches[-1]
- return None
-
-def _get_mangled_gpu_name():
- name = torch.cuda.get_device_name().lower()
- out = []
- for c in name:
- if re.match('[a-z0-9_-]+', c):
- out.append(c)
- else:
- out.append('-')
- return ''.join(out)
-
-
-#----------------------------------------------------------------------------
-# Main entry point for compiling and loading C++/CUDA plugins.
-
-_cached_plugins = dict()
-
-def get_plugin(module_name, sources, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
-
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- # Compile and load.
- verbose_build = (verbosity == 'full')
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- source_dirs_set = set(os.path.dirname(source) for source in sources)
- if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ):
- all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file()))
-
- # Compute a combined hash digest for all source files in the same
- # custom op directory (usually .cu, .cpp, .py and .h files).
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
- build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access
- digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest())
-
- if not os.path.isdir(digest_build_dir):
- os.makedirs(digest_build_dir, exist_ok=True)
- baton = FileBaton(os.path.join(digest_build_dir, 'lock'))
- if baton.try_acquire():
- try:
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src)))
- finally:
- baton.release()
- else:
- # Someone else is copying source files under the digest dir,
- # wait until done and continue.
- baton.wait()
- digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir,
- verbose=verbose_build, sources=digest_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
-
-#----------------------------------------------------------------------------
-def get_plugin_v3(module_name, sources, headers=None, source_dir=None, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
- if headers is None:
- headers = []
- if source_dir is not None:
- sources = [os.path.join(source_dir, fname) for fname in sources]
- headers = [os.path.join(source_dir, fname) for fname in headers]
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
- verbose_build = (verbosity == 'full')
-
- # Compile and load.
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- # Some containers set TORCH_CUDA_ARCH_LIST to a list that can either
- # break the build or unnecessarily restrict what's available to nvcc.
- # Unset it to let nvcc decide based on what's available on the
- # machine.
- os.environ['TORCH_CUDA_ARCH_LIST'] = ''
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- #
-        # EDIT: We now do it regardless of TORCH_EXTENSIONS_DIR, in order to work
- # around the *.cu dependency bug in ninja config.
- #
- all_source_files = sorted(sources + headers)
- all_source_dirs = set(os.path.dirname(fname) for fname in all_source_files)
- if len(all_source_dirs) == 1: # and ('TORCH_EXTENSIONS_DIR' in os.environ):
-
- # Compute combined hash digest for all source files.
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
-
- # Select cached build directory name.
- source_digest = hash_md5.hexdigest()
- build_top_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access
- cached_build_dir = os.path.join(build_top_dir, f'{source_digest}-{_get_mangled_gpu_name()}')
-
- if not os.path.isdir(cached_build_dir):
- tmpdir = f'{build_top_dir}/srctmp-{uuid.uuid4().hex}'
- os.makedirs(tmpdir)
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(tmpdir, os.path.basename(src)))
- try:
- os.replace(tmpdir, cached_build_dir) # atomic
- except OSError:
- # source directory already exists, delete tmpdir and its contents.
- shutil.rmtree(tmpdir)
- if not os.path.isdir(cached_build_dir): raise
-
- # Compile.
- cached_sources = [os.path.join(cached_build_dir, os.path.basename(fname)) for fname in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir,
- verbose=verbose_build, sources=cached_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
-
- # Load.
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache dict.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
\ No newline at end of file
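Both `get_plugin` variants above key the cached build directory on a combined MD5 digest of all source files, so the directory name changes only when file contents change, which keeps incremental rebuilds fast. The hashing step in isolation, as a small Python sketch:

```python
import hashlib

def combined_digest(paths):
    # Hash the concatenated contents of all source files (sorted for a
    # stable order), as custom_ops.py does to name the cached build dir.
    h = hashlib.md5()
    for p in sorted(paths):
        with open(p, "rb") as f:
            h.update(f.read())
    return h.hexdigest()
```

Because the input list is sorted, the digest is independent of the order in which callers pass the paths.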
diff --git a/spaces/Duskfallcrew/duskfall-s-general-digital-art-model/README.md b/spaces/Duskfallcrew/duskfall-s-general-digital-art-model/README.md
deleted file mode 100644
index 6143c1cf38bdc506cf23051c443942373cc8dafc..0000000000000000000000000000000000000000
--- a/spaces/Duskfallcrew/duskfall-s-general-digital-art-model/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Duskfall S General Digital Art Model
-emoji: 👁
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ECCV2022/bytetrack/deploy/ncnn/cpp/include/lapjv.h b/spaces/ECCV2022/bytetrack/deploy/ncnn/cpp/include/lapjv.h
deleted file mode 100644
index 0e34385a647bec225827370ff0041a391e628477..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/deploy/ncnn/cpp/include/lapjv.h
+++ /dev/null
@@ -1,63 +0,0 @@
-#ifndef LAPJV_H
-#define LAPJV_H
-
-#define LARGE 1000000
-
-#if !defined TRUE
-#define TRUE 1
-#endif
-#if !defined FALSE
-#define FALSE 0
-#endif
-
-#define NEW(x, t, n) if ((x = (t *)malloc(sizeof(t) * (n))) == 0) { return -1; }
-#define FREE(x) if (x != 0) { free(x); x = 0; }
-#define SWAP_INDICES(a, b) { int_t _temp_index = a; a = b; b = _temp_index; }
-
-#if 0
-#include <assert.h>
-#define ASSERT(cond) assert(cond)
-#define PRINTF(fmt, ...) printf(fmt, ##__VA_ARGS__)
-#define PRINT_COST_ARRAY(a, n) \
- while (1) { \
- printf(#a" = ["); \
- if ((n) > 0) { \
- printf("%f", (a)[0]); \
- for (uint_t j = 1; j < n; j++) { \
- printf(", %f", (a)[j]); \
- } \
- } \
- printf("]\n"); \
- break; \
- }
-#define PRINT_INDEX_ARRAY(a, n) \
- while (1) { \
- printf(#a" = ["); \
- if ((n) > 0) { \
- printf("%d", (a)[0]); \
- for (uint_t j = 1; j < n; j++) { \
- printf(", %d", (a)[j]); \
- } \
- } \
- printf("]\n"); \
- break; \
- }
-#else
-#define ASSERT(cond)
-#define PRINTF(fmt, ...)
-#define PRINT_COST_ARRAY(a, n)
-#define PRINT_INDEX_ARRAY(a, n)
-#endif
-
-
-typedef signed int int_t;
-typedef unsigned int uint_t;
-typedef double cost_t;
-typedef char boolean;
-typedef enum fp_t { FP_1 = 1, FP_2 = 2, FP_DYNAMIC = 3 } fp_t;
-
-extern int_t lapjv_internal(
- const uint_t n, cost_t *cost[],
- int_t *x, int_t *y);
-
-#endif // LAPJV_H
\ No newline at end of file
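`lapjv_internal` declared above implements the Jonker-Volgenant algorithm for the linear assignment problem: pick one column per row so that the total cost is minimized. A brute-force Python reference solver for the same problem (exponential, illustration only; LAPJV solves it in roughly O(n³)):

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Exhaustively search all row-to-column permutations of a square
    cost matrix and return (assignment, total_cost) with minimal cost."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return list(best_perm), best_cost
```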
diff --git a/spaces/Ebost/animeganv2-self/README.md b/spaces/Ebost/animeganv2-self/README.md
deleted file mode 100644
index f6e37a93842200b7bdaf81afa0e346a734f38733..0000000000000000000000000000000000000000
--- a/spaces/Ebost/animeganv2-self/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Animeganv2 Self
-emoji: 🚀
-colorFrom: red
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Falah/object_detection/app.py b/spaces/Falah/object_detection/app.py
deleted file mode 100644
index d630c6e624870210aefc1eefa02735d460a85f01..0000000000000000000000000000000000000000
--- a/spaces/Falah/object_detection/app.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-from PIL import Image, ImageDraw
-
-checkpoint = "google/owlvit-base-patch32"
-detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
-
-def detect_and_visualize_objects(image):
- # Convert the image to RGB format
- image = image.convert("RGB")
-
- # Process the image using the object detection model
- predictions = detector(
- image,
- candidate_labels=["human face", "rocket"],
- )
-
- # Draw bounding boxes and labels on the image
- draw = ImageDraw.Draw(image)
- if len(predictions) == 0:
- draw.text((100, 100), "Object not found in image", fill="red")
- else:
- for prediction in predictions:
- box = prediction["box"]
- label = prediction["label"]
- score = prediction["score"]
-
- xmin, ymin, xmax, ymax = box.values()
- draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
- draw.text((xmin, ymin), f"{label}: {round(score, 2)}", fill="white")
-
- # Return the annotated image
- return image
-
-# Define the Gradio interface
-image_input = gr.inputs.Image(type="pil")
-image_output = gr.outputs.Image(type="pil")
-iface = gr.Interface(
- fn=detect_and_visualize_objects,
- inputs=image_input,
- outputs=image_output,
- title="Space and War Missile Detection System",
- description="Detect objects in an image using a pre-trained model and visualize the results.",
-
-
-)
-
-# Launch the Gradio interface
-iface.launch(debug=True)
diff --git a/spaces/GEM/DatasetCardForm/datacards/overview.py b/spaces/GEM/DatasetCardForm/datacards/overview.py
deleted file mode 100644
index b048753fb9c25e3e8375ec6ec8e33d5cb9d646de..0000000000000000000000000000000000000000
--- a/spaces/GEM/DatasetCardForm/datacards/overview.py
+++ /dev/null
@@ -1,276 +0,0 @@
-import json
-import streamlit as st
-
-from os.path import join as pjoin
-
-from .streamlit_utils import (
- make_multiselect,
- make_selectbox,
- make_text_area,
- make_text_input,
- make_radio,
-)
-
-N_FIELDS_WHERE = 9
-N_FIELDS_LANGUAGES = 8
-N_FIELDS_CREDIT = 5
-N_FIELDS_STRUCTURE = 7
-
-N_FIELDS = N_FIELDS_WHERE + N_FIELDS_LANGUAGES + N_FIELDS_CREDIT + N_FIELDS_STRUCTURE
-
-
-languages_bcp47 = [
- x
- for x in json.load(open(pjoin("resources", "bcp47.json"), encoding="utf-8"))[
- "subtags"
- ]
- if x["type"] == "language"
-]
-
-license_list = json.load(open(pjoin("resources", "licenses.json"), encoding="utf-8"))
-
-
-def overview_page():
- st.session_state.card_dict["overview"] = st.session_state.card_dict.get(
- "overview", {}
- )
- with st.expander("What is this dataset?", expanded=True):
- key_pref = ["overview", "what"]
- st.session_state.card_dict["overview"]["what"] = st.session_state.card_dict[
- "overview"
- ].get("what", {})
- make_text_area(
- label="Provide a summary of this dataset in 3-4 sentences.",
- key_list=key_pref + ["dataset"],
- help="[free text]",
- )
- with st.expander("Where to find the data and its documentation", expanded=False):
- key_pref = ["overview", "where"]
- st.session_state.card_dict["overview"]["where"] = st.session_state.card_dict[
- "overview"
- ].get("where", {})
- make_text_input(
- label="What is the webpage for the dataset (if it exists)?",
- key_list=key_pref + ["website"],
- help="[URL]",
- )
- make_text_input(
- label="What is the link to where the original dataset is hosted?",
- key_list=key_pref + ["data-url"],
- help="[URL]",
- )
- make_text_input(
- label="What is the link to the paper describing the dataset (open access preferred)?",
- key_list=key_pref + ["paper-url"],
- help="[URL]",
- )
- make_text_area(
-            label="Provide the BibTeX-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of Google Scholar-generated BibTeX.",
- key_list=key_pref + ["paper-bibtext"],
- help="[free text]",
- )
- make_radio(
- label="Does the dataset have an active leaderboard?",
- options=["no", "yes"],
- key_list=key_pref + ["has-leaderboard"],
- help="If no, enter N/A for the following two fields",
- )
- if st.session_state.card_dict["overview"]["where"]["has-leaderboard"] == "yes":
- make_text_input(
- label="Provide a link to the leaderboard.",
- key_list=key_pref + ["leaderboard-url"],
- help="[URL] or N/A",
- )
- make_text_area(
- label="Briefly describe how the leaderboard evaluates models.",
- key_list=key_pref + ["leaderboard-description"],
- help="[free text; a paragraph] or N/A",
- )
- else:
- st.session_state.card_dict["overview"]["where"]["leaderboard-url"] = "N/A"
- st.session_state.card_dict["overview"]["where"]["leaderboard-description"] = "N/A"
- make_text_input(
- label="If known, provide the name of at least one person the reader can contact for questions about the dataset.",
- key_list=key_pref + ["contact-name"],
- help="[free text]",
- )
- make_text_input(
- label="If known, provide the email of at least one person the reader can contact for questions about the dataset.",
- key_list=key_pref + ["contact-email"],
- help="[free text]",
- )
- with st.expander("Languages and Intended Use", expanded=False):
- key_pref = ["overview", "languages"]
- st.session_state.card_dict["overview"][
- "languages"
- ] = st.session_state.card_dict["overview"].get("languages", {})
- make_radio(
- label="Is the dataset multilingual?",
- options=["no", "yes"],
- key_list=key_pref + ["is-multilingual"],
- help="More than one language present in all of the text fields",
- )
- make_multiselect(
- label="What languages/dialects are covered in the dataset?",
- key_list=key_pref + ["language-names"],
- options=[", ".join(x["description"]) for x in languages_bcp47],
- help="This is a comprehensive list of languages obtained from the BCP-47 standard list.",
- )
- make_text_area(
- label="What dialects are covered? Are there multiple dialects per language?",
- key_list=key_pref + ["language-dialects"],
- help="[free text, paragraphs] - Describe the dialect(s) as appropriate.",
- )
- make_text_area(
- label="Whose language is in the dataset?",
- key_list=key_pref + ["language-speakers"],
- help="[free text, paragraphs] - Provide locally appropriate demographic information about the language producers, if available. Use ranges where reasonable in order to protect individuals’ privacy.",
- )
- make_text_area(
- label="What is the intended use of the dataset?",
- key_list=key_pref + ["intended-use"],
- help="[free text, paragraphs] - Describe how the dataset creators describe its purpose and intended use.",
- )
- make_selectbox(
- label="What is the license of the dataset?",
- key_list=key_pref + ["license"],
- options=license_list,
-            help="select `other` if missing from list, `unknown` if not provided.",
- )
- if "other" in st.session_state.card_dict["overview"]["languages"].get("license", []):
- make_text_input(
- label="What is the 'other' license of the dataset?",
- key_list=key_pref + ["license-other"],
- help="[free text]",
- )
- else:
- st.session_state.card_dict["overview"]["languages"]["license-other"] = "N/A"
-
-
- make_selectbox(
- label="What primary task does the dataset support?",
- key_list=key_pref + ["task"],
- options=[
- "", # default needs to be invalid value to make sure people actually fill in
- "Content Transfer",
- "Data-to-Text",
- "Dialog Response Generation",
- "Paraphrasing",
- "Question Generation",
- "Reasoning",
- "Simplification",
- "Style Transfer",
- "Summarization",
- "Text-to-Slide",
- "Other"
- ],
- help="Select `other` if the task is not included in the list.",
- )
- if "Other" in st.session_state.card_dict["overview"]["languages"].get("task", []):
- make_text_input(
- label="What is the primary task?",
- key_list=key_pref + ["task-other"],
- help="[free text]",
- )
- else:
- st.session_state.card_dict["overview"]["languages"]["task-other"] = "N/A"
-
- make_text_area(
- label="Provide a short description of the communicative goal of a model trained for this task on this dataset.",
- key_list=key_pref + ["communicative"],
- help="[free text, a paragraph] (e.g., describe a restaurant from a structured representation of its attributes)",
- )
- with st.expander("Credit", expanded=False):
- key_pref = ["overview", "credit"]
- st.session_state.card_dict["overview"][
- "credit"
- ] = st.session_state.card_dict["overview"].get("credit", {})
- make_multiselect(
- label="In what kind of organization did the dataset curation happen?",
- options=["industry", "academic", "independent", "other"],
- key_list=key_pref + ["organization-type"],
- )
- make_text_input(
- label="Name the organization(s).",
- key_list=key_pref + ["organization-names"],
- help="comma-separated",
- )
- make_text_input(
- label="Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s).",
- key_list=key_pref + ["creators"],
- help="name (affiliation); comma-separated",
- )
- make_text_input(
- label="Who funded the data creation?",
- key_list=key_pref + ["funding"],
-            help="[free text] enter N/A if unknown",
- )
- make_text_input(
- label="Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM.",
- key_list=key_pref + ["gem-added-by"],
- help="name (affiliation); comma-separated",
- )
- with st.expander("Structure", expanded=False):
- key_pref = ["overview", "structure"]
- st.session_state.card_dict["overview"]["structure"] = st.session_state.card_dict[
- "overview"
- ].get("structure", {})
- data_fields_help = """
- [free text; paragraphs]
- - Mention their data type, and whether and how they are used as part of the generation pipeline.
-        - Describe each field's attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc.
- - If the datasets contain example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- """
- make_text_area(
- label="List and describe the fields present in the dataset.",
- key_list=key_pref + ["data-fields"],
- help=data_fields_help,
- )
- make_text_area(
- label="How was the dataset structure determined?",
- key_list=key_pref + ["structure-description"],
- help="[free text; paragraph]",
- )
- make_text_area(
- label="How were the labels chosen?",
- key_list=key_pref + ["structure-labels"],
- help="[free text; paragraph]",
- )
- make_text_area(
- label="Provide a JSON formatted example of a typical instance in the dataset.",
- key_list=key_pref + ["structure-example"],
- help="[JSON]",
- )
- make_text_area(
- label="Describe and name the splits in the dataset if there are more than one.",
- key_list=key_pref + ["structure-splits"],
- help="[free text, paragraphs] - As appropriate, provide any descriptive statistics for the features, such as size, average lengths of input and output.",
- )
- make_text_area(
- label="Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.",
- key_list=key_pref + ["structure-splits-criteria"],
- help="[free text, paragraphs]",
- )
- make_text_area(
- label="What does an outlier of the dataset in terms of length/perplexity/embedding look like?",
- key_list=key_pref + ["structure-outlier"],
- help="[free text + json formatted text/file for an example]",
- )
-
-
-def overview_summary():
- total_filled = sum(
- [len(dct) for dct in st.session_state.card_dict.get("overview", {}).values()]
- )
- with st.expander(
- f"Dataset Overview Completion - {total_filled} of {N_FIELDS}", expanded=False
- ):
- completion_markdown = ""
- completion_markdown += (
- f"- **Overall completion:**\n - {total_filled} of {N_FIELDS} fields\n"
- )
- completion_markdown += f"- **Sub-section - Where to find:**\n - {len(st.session_state.card_dict.get('overview', {}).get('where', {}))} of {N_FIELDS_WHERE} fields\n"
- completion_markdown += f"- **Sub-section - Languages and Intended Use:**\n - {len(st.session_state.card_dict.get('overview', {}).get('languages', {}))} of {N_FIELDS_LANGUAGES} fields\n"
- completion_markdown += f"- **Sub-section - Credit:**\n - {len(st.session_state.card_dict.get('overview', {}).get('credit', {}))} of {N_FIELDS_CREDIT} fields\n"
- completion_markdown += f"- **Sub-section - Structure:**\n - {len(st.session_state.card_dict.get('overview', {}).get('structure', {}))} of {N_FIELDS_STRUCTURE} fields\n"
- st.markdown(completion_markdown)
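The completion counters in the deleted `overview_summary` above sum the number of entries in each sub-section dict. A self-contained sketch of the same bookkeeping, with a plain `card` dict standing in for `st.session_state.card_dict` (the helper name `count_filled` is ours, not part of the original app):

```python
def count_filled(card_dict, section):
    # Total filled fields = sum of entries across all sub-section dicts.
    return sum(len(sub) for sub in card_dict.get(section, {}).values())

# Stand-in for st.session_state.card_dict
card = {"overview": {"where": {"a": 1, "b": 2}, "credit": {"c": 3}, "structure": {}}}
print(count_filled(card, "overview"))  # 3
```

A missing section simply yields zero, which is why the original code can call `.get("overview", {})` without guarding against an uninitialized card.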
diff --git a/spaces/GT4SD/geodiff/README.md b/spaces/GT4SD/geodiff/README.md
deleted file mode 100644
index 73d215d09dcaba0f509e0ff40e69cebb47b1835c..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/geodiff/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: GeoDiff
-emoji: 💡
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.46.0
-app_file: app.py
-pinned: false
-python_version: 3.8.13
-pypi_version: 20.2.4
-duplicated_from: jannisborn/gt4sd-diffusers
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/morphing.py b/spaces/GaenKoki/voicevox/voicevox_engine/morphing.py
deleted file mode 100644
index d857aa11d8857772c4e119edfd57730932ced6fa..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/voicevox_engine/morphing.py
+++ /dev/null
@@ -1,208 +0,0 @@
-from copy import deepcopy
-from dataclasses import dataclass
-from itertools import chain
-from typing import Dict, List, Tuple
-
-import numpy as np
-import pyworld as pw
-from scipy.signal import resample
-
-from .metas.Metas import Speaker, SpeakerSupportPermittedSynthesisMorphing, StyleInfo
-from .metas.MetasStore import construct_lookup
-from .model import AudioQuery, MorphableTargetInfo, SpeakerNotFoundError
-from .synthesis_engine import SynthesisEngine
-
-
-# FIXME: ndarray type hint, https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder/blob/2b64f86197573497c685c785c6e0e743f407b63e/pyworld/pyworld.pyx#L398 # noqa
-@dataclass(frozen=True)
-class MorphingParameter:
- fs: int
- frame_period: float
- base_f0: np.ndarray
- base_aperiodicity: np.ndarray
- base_spectrogram: np.ndarray
- target_spectrogram: np.ndarray
-
-
-def create_morphing_parameter(
- base_wave: np.ndarray,
- target_wave: np.ndarray,
- fs: int,
-) -> MorphingParameter:
- frame_period = 1.0
- base_f0, base_time_axis = pw.harvest(base_wave, fs, frame_period=frame_period)
- base_spectrogram = pw.cheaptrick(base_wave, base_f0, base_time_axis, fs)
- base_aperiodicity = pw.d4c(base_wave, base_f0, base_time_axis, fs)
-
- target_f0, morph_time_axis = pw.harvest(target_wave, fs, frame_period=frame_period)
- target_spectrogram = pw.cheaptrick(target_wave, target_f0, morph_time_axis, fs)
- target_spectrogram.resize(base_spectrogram.shape)
-
- return MorphingParameter(
- fs=fs,
- frame_period=frame_period,
- base_f0=base_f0,
- base_aperiodicity=base_aperiodicity,
- base_spectrogram=base_spectrogram,
- target_spectrogram=target_spectrogram,
- )
-
-
-def get_morphable_targets(
- speakers: List[Speaker],
- base_speakers: List[int],
-) -> List[Dict[int, MorphableTargetInfo]]:
- """
-    speakers: information on all speakers
-    base_speakers: list of base speakers (style IDs) whose morphing availability should be checked
- """
- speaker_lookup = construct_lookup(speakers)
-
- morphable_targets_arr = []
- for base_speaker in base_speakers:
- morphable_targets = dict()
- for style in chain.from_iterable(speaker.styles for speaker in speakers):
- morphable_targets[style.id] = MorphableTargetInfo(
- is_morphable=is_synthesis_morphing_permitted(
- speaker_lookup=speaker_lookup,
- base_speaker=base_speaker,
- target_speaker=style.id,
- )
- )
- morphable_targets_arr.append(morphable_targets)
-
- return morphable_targets_arr
-
-
-def is_synthesis_morphing_permitted(
- speaker_lookup: Dict[int, Tuple[Speaker, StyleInfo]],
- base_speaker: int,
- target_speaker: int,
-) -> bool:
- """
-    Returns whether morphing is permitted between the given speakers.
-    Raises SpeakerNotFoundError if a speaker cannot be found.
- """
-
- base_speaker_data = speaker_lookup[base_speaker]
- target_speaker_data = speaker_lookup[target_speaker]
-
- if base_speaker_data is None or target_speaker_data is None:
- raise SpeakerNotFoundError(
- base_speaker if base_speaker_data is None else target_speaker
- )
-
- base_speaker_info, _ = base_speaker_data
- target_speaker_info, _ = target_speaker_data
-
- base_speaker_uuid = base_speaker_info.speaker_uuid
- target_speaker_uuid = target_speaker_info.speaker_uuid
-
- base_speaker_morphing_info: SpeakerSupportPermittedSynthesisMorphing = (
- base_speaker_info.supported_features.permitted_synthesis_morphing
- )
-
- target_speaker_morphing_info: SpeakerSupportPermittedSynthesisMorphing = (
- target_speaker_info.supported_features.permitted_synthesis_morphing
- )
-
-    # Return False if morphing is forbidden for either speaker
- if (
- base_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.NOTHING
- or target_speaker_morphing_info
- == SpeakerSupportPermittedSynthesisMorphing.NOTHING
- ):
- return False
-    # If either side permits morphing only with itself, require the same speaker
- if (
- base_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.SELF_ONLY
- or target_speaker_morphing_info
- == SpeakerSupportPermittedSynthesisMorphing.SELF_ONLY
- ):
- return base_speaker_uuid == target_speaker_uuid
-    # As a final check, make sure both sides explicitly permit morphing
- return (
- base_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.ALL
- and target_speaker_morphing_info == SpeakerSupportPermittedSynthesisMorphing.ALL
- )
-
-
-def synthesis_morphing_parameter(
- engine: SynthesisEngine,
- query: AudioQuery,
- base_speaker: int,
- target_speaker: int,
-) -> MorphingParameter:
- query = deepcopy(query)
-
-    # To work around a bug, run WORLD at the default sampling rate first, then convert to the requested rate
- query.outputSamplingRate = engine.default_sampling_rate
-
-    # Synthesize in mono because the result is fed to WORLD
- query.outputStereo = False
-
- base_wave = engine.synthesis(query=query, speaker_id=base_speaker).astype("float")
- target_wave = engine.synthesis(query=query, speaker_id=target_speaker).astype(
- "float"
- )
-
- return create_morphing_parameter(
- base_wave=base_wave,
- target_wave=target_wave,
- fs=query.outputSamplingRate,
- )
-
-
-def synthesis_morphing(
- morph_param: MorphingParameter,
- morph_rate: float,
- output_fs: int,
- output_stereo: bool = False,
-) -> np.ndarray:
- """
-    Generate morphed audio from the given parameters at the specified blend ratio.
-
-    Parameters
-    ----------
-    morph_param : MorphingParameter
-        Parameters created with `synthesis_morphing_parameter` or `create_morphing_parameter`
-
-    morph_rate : float
-        Morphing ratio.
-        0.0 stays close to the base speaker; 1.0 approaches the target speaker.
-
-    Returns
-    -------
-    generated : np.ndarray
-        The morphed audio
-
-    Raises
-    ------
-    ValueError
-        If morph_rate is outside the range [0, 1]
- """
-
- if morph_rate < 0.0 or morph_rate > 1.0:
-        raise ValueError("morph_rate must be within the range 0.0 to 1.0")
-
- morph_spectrogram = (
- morph_param.base_spectrogram * (1.0 - morph_rate)
- + morph_param.target_spectrogram * morph_rate
- )
-
- y_h = pw.synthesize(
- morph_param.base_f0,
- morph_spectrogram,
- morph_param.base_aperiodicity,
- morph_param.fs,
- morph_param.frame_period,
- )
-
-    # TODO: share this resampling logic with the one in synthesis_engine.py
- if output_fs != morph_param.fs:
- y_h = resample(y_h, output_fs * len(y_h) // morph_param.fs)
-
- if output_stereo:
- y_h = np.array([y_h, y_h]).T
-
- return y_h
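The core of the deleted `synthesis_morphing` is a linear interpolation between the base and target spectrograms before WORLD resynthesis. That blend can be sketched independently of `pyworld` (a minimal illustration on plain lists of per-frame values; the helper name `morph` is ours, not part of the engine):

```python
def morph(base, target, rate):
    """Linearly interpolate two spectra; rate=0.0 gives base, rate=1.0 gives target."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be within [0, 1]")
    return [b * (1.0 - rate) + t * rate for b, t in zip(base, target)]

base = [1.0, 2.0, 4.0]
target = [3.0, 2.0, 0.0]
print(morph(base, target, 0.5))  # [2.0, 2.0, 2.0]
```

The original code applies exactly this weighting elementwise to the full 2-D spectrogram arrays, while keeping the base speaker's F0 and aperiodicity fixed.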
diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/proteins_dataset.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/proteins_dataset.py
deleted file mode 100644
index e0b1c038a41c6e276275a7904e748ea9e31e6083..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/model/tf/proteins_dataset.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright 2021 DeepMind Technologies Limited
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Datasets consisting of proteins."""
-from typing import Dict, Mapping, Optional, Sequence
-from alphafold.model.tf import protein_features
-import numpy as np
-import tensorflow.compat.v1 as tf
-
-TensorDict = Dict[str, tf.Tensor]
-
-
-def parse_tfexample(
- raw_data: bytes,
- features: protein_features.FeaturesMetadata,
- key: Optional[str] = None) -> Dict[str, tf.train.Feature]:
- """Read a single TF Example proto and return a subset of its features.
-
- Args:
- raw_data: A serialized tf.Example proto.
- features: A dictionary of features, mapping string feature names to a tuple
- (dtype, shape). This dictionary should be a subset of
- protein_features.FEATURES (or the dictionary itself for all features).
- key: Optional string with the SSTable key of that tf.Example. This will be
- added into features as a 'key' but only if requested in features.
-
- Returns:
- A dictionary of features mapping feature names to features. Only the given
- features are returned, all other ones are filtered out.
- """
- feature_map = {
- k: tf.io.FixedLenSequenceFeature(shape=(), dtype=v[0], allow_missing=True)
- for k, v in features.items()
- }
- parsed_features = tf.io.parse_single_example(raw_data, feature_map)
- reshaped_features = parse_reshape_logic(parsed_features, features, key=key)
-
- return reshaped_features
-
-
-def _first(tensor: tf.Tensor) -> tf.Tensor:
- """Returns the 1st element - the input can be a tensor or a scalar."""
- return tf.reshape(tensor, shape=(-1,))[0]
-
-
-def parse_reshape_logic(
- parsed_features: TensorDict,
- features: protein_features.FeaturesMetadata,
- key: Optional[str] = None) -> TensorDict:
- """Transforms parsed serial features to the correct shape."""
- # Find out what is the number of sequences and the number of alignments.
- num_residues = tf.cast(_first(parsed_features["seq_length"]), dtype=tf.int32)
-
- if "num_alignments" in parsed_features:
- num_msa = tf.cast(_first(parsed_features["num_alignments"]), dtype=tf.int32)
- else:
- num_msa = 0
-
- if "template_domain_names" in parsed_features:
- num_templates = tf.cast(
- tf.shape(parsed_features["template_domain_names"])[0], dtype=tf.int32)
- else:
- num_templates = 0
-
- if key is not None and "key" in features:
- parsed_features["key"] = [key] # Expand dims from () to (1,).
-
- # Reshape the tensors according to the sequence length and num alignments.
- for k, v in parsed_features.items():
- new_shape = protein_features.shape(
- feature_name=k,
- num_residues=num_residues,
- msa_length=num_msa,
- num_templates=num_templates,
- features=features)
- new_shape_size = tf.constant(1, dtype=tf.int32)
- for dim in new_shape:
- new_shape_size *= tf.cast(dim, tf.int32)
-
- assert_equal = tf.assert_equal(
- tf.size(v), new_shape_size,
- name="assert_%s_shape_correct" % k,
- message="The size of feature %s (%s) could not be reshaped "
- "into %s" % (k, tf.size(v), new_shape))
- if "template" not in k:
- # Make sure the feature we are reshaping is not empty.
- assert_non_empty = tf.assert_greater(
- tf.size(v), 0, name="assert_%s_non_empty" % k,
- message="The feature %s is not set in the tf.Example. Either do not "
- "request the feature or use a tf.Example that has the "
- "feature set." % k)
- with tf.control_dependencies([assert_non_empty, assert_equal]):
- parsed_features[k] = tf.reshape(v, new_shape, name="reshape_%s" % k)
- else:
- with tf.control_dependencies([assert_equal]):
- parsed_features[k] = tf.reshape(v, new_shape, name="reshape_%s" % k)
-
- return parsed_features
-
-
-def _make_features_metadata(
- feature_names: Sequence[str]) -> protein_features.FeaturesMetadata:
- """Makes a feature name to type and shape mapping from a list of names."""
- # Make sure these features are always read.
- required_features = ["aatype", "sequence", "seq_length"]
- feature_names = list(set(feature_names) | set(required_features))
-
- features_metadata = {name: protein_features.FEATURES[name]
- for name in feature_names}
- return features_metadata
-
-
-def create_tensor_dict(
- raw_data: bytes,
- features: Sequence[str],
- key: Optional[str] = None,
- ) -> TensorDict:
- """Creates a dictionary of tensor features.
-
- Args:
- raw_data: A serialized tf.Example proto.
- features: A list of strings of feature names to be returned in the dataset.
- key: Optional string with the SSTable key of that tf.Example. This will be
- added into features as a 'key' but only if requested in features.
-
- Returns:
- A dictionary of features mapping feature names to features. Only the given
- features are returned, all other ones are filtered out.
- """
- features_metadata = _make_features_metadata(features)
- return parse_tfexample(raw_data, features_metadata, key)
-
-
-def np_to_tensor_dict(
- np_example: Mapping[str, np.ndarray],
- features: Sequence[str],
- ) -> TensorDict:
- """Creates dict of tensors from a dict of NumPy arrays.
-
- Args:
- np_example: A dict of NumPy feature arrays.
- features: A list of strings of feature names to be returned in the dataset.
-
- Returns:
- A dictionary of features mapping feature names to features. Only the given
- features are returned, all other ones are filtered out.
- """
- features_metadata = _make_features_metadata(features)
- tensor_dict = {k: tf.constant(v) for k, v in np_example.items()
- if k in features_metadata}
-
- # Ensures shapes are as expected. Needed for setting size of empty features
- # e.g. when no template hits were found.
- tensor_dict = parse_reshape_logic(tensor_dict, features_metadata)
- return tensor_dict
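The shape-validation pattern in the deleted `parse_reshape_logic` (the total element count must equal the product of the target shape before reshaping, otherwise fail with a named error) can be sketched in eager-mode NumPy, without the TF1 control-dependency machinery (the helper name `checked_reshape` is ours):

```python
import numpy as np

def checked_reshape(arr, new_shape, name):
    # Verify the element count against the target shape before reshaping,
    # mirroring the assert_equal guard in parse_reshape_logic.
    expected = int(np.prod(new_shape))
    if arr.size != expected:
        raise ValueError(
            f"The size of feature {name} ({arr.size}) could not be reshaped into {new_shape}"
        )
    return arr.reshape(new_shape)

flat = np.arange(12)
print(checked_reshape(flat, (3, 4), "aatype").shape)  # (3, 4)
```

In the original graph-mode code the same check has to be expressed as `tf.assert_equal` wired in via `tf.control_dependencies`, since the shapes are only known at session run time.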
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco.py
deleted file mode 100644
index 61a0cefe4e20b55cd3caaab7dde325a111275726..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './ms_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_64x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=64,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index 575e9d01343a4563e0d3ba89b361ea8e358d2dee..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './dnl_r50-d8_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/GuujiYae/Grand-Narukami-Shrine/Dockerfile b/spaces/GuujiYae/Grand-Narukami-Shrine/Dockerfile
deleted file mode 100644
index 5a74751c01e45d69dcaedffd79895255d009f95c..0000000000000000000000000000000000000000
--- a/spaces/GuujiYae/Grand-Narukami-Shrine/Dockerfile
+++ /dev/null
@@ -1,12 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/yae-miko/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-COPY public/ ./public
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/utils.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/utils.py
deleted file mode 100644
index 56fb4b3ba0db3d94896f6f4a2ea05af630ade552..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/utils.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import random
-from typing import List, Tuple
-from torch import nn, Tensor
-import os, shutil
-import torch
-import matplotlib.pyplot as plt
-import numpy as np
-import gc, cv2
-
-from .visualizer import post_processing_depth
-
-'''
-This module should not depend on other s_multimae modules.
-'''
-
-num_format = "{:,}".format
-
-def clean_cache() -> None:
- torch.cuda.empty_cache()
- gc.collect()
-
-def count_parameters(model: nn.Module) -> str:
- '''Count the number of learnable parameters of a model'''
- return num_format(sum(p.numel() for p in model.parameters() if p.requires_grad))
-
-def random_choice(p: float) -> bool:
- '''Return True if random float <= p '''
- return random.random() <= p
-
-def list_files(
- root_dir_path: str, max_files: int = None
-) -> Tuple[List[str], List[str], List[str]]:
- '''List all files in a directory that has extensions
- Folder structure:
- root_dir_path
- |---GT
- | |---image1.jpg
- | |---image2.jpg
- |---RGB
- | |---image1.png
- | |---image2.png
- |---depths
- | |---image1.png
- | |---image2.png
- Returns: rgbs, depths, gts
- '''
- depths_dir = os.path.join(root_dir_path, 'depths')
- gts_dir = os.path.join(root_dir_path, 'GT')
- rgbs_dir = os.path.join(root_dir_path, 'RGB')
-
- depth_files = list(sorted(os.listdir(depths_dir)))
- gt_files = list(sorted(os.listdir(gts_dir)))
- rgb_files = list(sorted(os.listdir(rgbs_dir)))
-
- depth_files_names = [f.split('.')[0] for f in depth_files]
- gt_files_names = [f.split('.')[0] for f in gt_files]
- rgb_files_names = [f.split('.')[0] for f in rgb_files]
-
- # Ensure integrity
-    assert depth_files_names == gt_files_names == rgb_files_names, \
-        f"Dataset {root_dir_path} failed the integrity check: file names differ across depths/GT/RGB"
-
- depths: List[str] = []
- gts: List[str] = []
- rgbs: List[str] = []
-
- if max_files is not None:
- depth_files = depth_files[:max_files]
- gt_files = gt_files[:max_files]
- rgb_files = rgb_files[:max_files]
-
- for depth_file, gt_file, rgb_file in zip(depth_files, gt_files, rgb_files):
- depths.append(os.path.join(depths_dir, depth_file))
- gts.append(os.path.join(gts_dir, gt_file))
- rgbs.append(os.path.join(rgbs_dir, rgb_file))
-
- return rgbs, depths, gts
-
-def scale_saliency_maps(inputs: Tensor) -> Tensor:
- '''Input: Tensor, shape of (B, C, H, W)'''
- min_v = torch.min(torch.flatten(inputs, 1), dim=1)[0].unsqueeze(1).unsqueeze(1).unsqueeze(1)
- max_v = torch.max(torch.flatten(inputs, 1), dim=1)[0].unsqueeze(1).unsqueeze(1).unsqueeze(1)
- return (inputs - min_v) / (max_v - min_v + 1e-8)
-
-def get_epoch_from_ckpt_path(ckpt_path: str) -> int:
- '''Example ckpt_path
- os.path.join(experiment_dir_path, 'exp_v2.3', 'checkpoint_100.pt')
- '''
- return int(ckpt_path.split('_')[-1].split('.')[0])
-
-def clean_dir(dir_path: str) -> None:
- '''Remove a directory if existed and create an empty directory'''
- if os.path.isdir(dir_path):
- shutil.rmtree(dir_path)
- os.makedirs(dir_path, exist_ok=True)
-
-def get_sota_type(experiment_name: str) -> int:
-    '''Return 0 for SOTAs, or the experiment version number (e.g. 4 for "exp_v4.x").'''
- if "exp_v" not in experiment_name:
- return 0
-
- half_right = experiment_name.split("exp_v")[1]
- return int(half_right.split('.')[0])
-
-def get_production_ckpt_path(experiment_name: str, epoch: int) -> str:
- return os.path.join('pretrained_models', 'multimae', experiment_name, f'checkpoint_{epoch}.pt')
-
-def convert_batch_tensors_to_numpy_images(images: Tensor) -> np.ndarray:
- ''' images of shape (batch_size, channels, width, height) '''
- images = torch.permute(images, (0, 2, 3, 1))
- images = images.numpy()
- if images.shape[3] == 1:
- return np.squeeze(images, axis=3)
- else:
- return images
-
-def join_horizontally(lst: List[np.ndarray]) -> np.ndarray:
- return np.concatenate(lst, axis=1)
-
-def join_vertically(lst: List[np.ndarray]) -> np.ndarray:
- return np.concatenate(lst, axis=0)
-
-def plot_batch_of_pairs(
- images: Tensor,
- depths: Tensor,
- gts: Tensor,
- save_file_path: str,
-) -> None:
- images = convert_batch_tensors_to_numpy_images(images)
- depths = convert_batch_tensors_to_numpy_images(depths)
- gts = convert_batch_tensors_to_numpy_images(gts)
- batch_size = images.shape[0]
- samples: List[np.ndarray] = []
-
- # fig, axes = plt.subplots(batch_size, 3, figsize=(3*batch_size, 20)) # (number of images, 3)
- for i in range(batch_size):
- samples.append(join_horizontally([
- ((images[i]+1.0)/2 * 255).astype(np.uint8),
- post_processing_depth(depths[i]),
- post_processing_depth(gts[i]),
- ]))
- # axes[i, 0].imshow(images[i])
- # axes[i, 1].imshow(depths[i])
- # axes[i, 2].imshow(gts[i])
- # plt.show()
-
- final = join_vertically(samples)
- cv2.imwrite(save_file_path, cv2.cvtColor(final, cv2.COLOR_RGB2BGR))
- print(f'Saved to file {save_file_path}')
-
-def plot_pairs(image: np.ndarray, depth: np.ndarray, gt: np.ndarray) -> None:
- batch_size = 1
- fig, axes = plt.subplots(batch_size, 3, figsize=(3*batch_size, 20)) # (number of images, 3)
- for i in range(batch_size):
- axes[i, 0].imshow(image)
- axes[i, 1].imshow(depth)
- axes[i, 2].imshow(gt)
- plt.show()
-
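The per-sample min-max scaling implemented by the deleted `scale_saliency_maps` (torch) has a direct NumPy equivalent, shown here as a sketch (the name `scale_per_sample` is ours):

```python
import numpy as np

def scale_per_sample(x, eps=1e-8):
    # Min-max normalize each sample of a (B, C, H, W) batch to roughly [0, 1];
    # eps guards against division by zero for constant-valued maps.
    flat = x.reshape(x.shape[0], -1)
    min_v = flat.min(axis=1).reshape(-1, 1, 1, 1)
    max_v = flat.max(axis=1).reshape(-1, 1, 1, 1)
    return (x - min_v) / (max_v - min_v + eps)

batch = np.arange(8, dtype=np.float64).reshape(2, 1, 2, 2)
scaled = scale_per_sample(batch)
print(scaled.min(), scaled.shape)
```

The chain of `.unsqueeze(1)` calls in the torch version plays the role of the `reshape(-1, 1, 1, 1)` here: it makes the per-sample min/max broadcastable against the 4-D batch.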
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/api.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/api.py
deleted file mode 100644
index d6bcabd194a4531801941d5e1d248dc134ce255f..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/api.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from starlette.responses import StreamingResponse
-from tts import MelToWav, TextToMel
-from advanced_tts import load_all_models, run_tts_paragraph
-from typing import Optional
-from pydantic import BaseModel
-from fastapi import FastAPI, HTTPException
-import uvicorn
-import base64
-import argparse
-import json
-import time
-from argparse import Namespace
-
-app = FastAPI()
-
-
-class TextJson(BaseModel):
- text: str
- lang: Optional[str] = "hi"
- noise_scale: Optional[float]=0.667
- length_scale: Optional[float]=1.0
- transliteration: Optional[int]=1
- number_conversion: Optional[int]=1
- split_sentences: Optional[int]=1
-
-
-
-
-@app.post("/TTS/")
-async def tts(input: TextJson):
- text = input.text
- lang = input.lang
-
- args = Namespace(**input.dict())
-
- args.wav = '../../results/api/'+str(int(time.time())) + '.wav'
-
- if text:
- sr, audio = run_tts_paragraph(args)
- else:
- raise HTTPException(status_code=400, detail={"error": "No text"})
-
-    ## return the output as a file
- audio = open(args.wav, mode='rb')
- return StreamingResponse(audio, media_type="audio/wav")
-
- # with open(args.wav, "rb") as audio_file:
- # encoded_bytes = base64.b64encode(audio_file.read())
- # encoded_string = encoded_bytes.decode()
- # return {"encoding": "base64", "data": encoded_string, "sr": sr}
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("-a", "--acoustic", required=True, type=str)
- parser.add_argument("-v", "--vocoder", required=True, type=str)
- parser.add_argument("-d", "--device", type=str, default="cpu")
- parser.add_argument("-L", "--lang", type=str, required=True)
-
- args = parser.parse_args()
-
- load_all_models(args)
-
- uvicorn.run(
- "api:app", host="0.0.0.0", port=6006, log_level="debug"
- )
diff --git a/spaces/Hazem/roop/roop/face_analyser.py b/spaces/Hazem/roop/roop/face_analyser.py
deleted file mode 100644
index 9c0afe458763edb22dc2332f527dfdba48575b1d..0000000000000000000000000000000000000000
--- a/spaces/Hazem/roop/roop/face_analyser.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import threading
-from typing import Any
-import insightface
-
-import roop.globals
-from roop.typing import Frame
-
-FACE_ANALYSER = None
-THREAD_LOCK = threading.Lock()
-
-
-def get_face_analyser() -> Any:
- global FACE_ANALYSER
-
- with THREAD_LOCK:
- if FACE_ANALYSER is None:
- FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers)
- FACE_ANALYSER.prepare(ctx_id=0, det_size=(640, 640))
- return FACE_ANALYSER
-
-
-def get_one_face(frame: Frame) -> Any:
- face = get_face_analyser().get(frame)
- try:
- return min(face, key=lambda x: x.bbox[0])
- except ValueError:
- return None
-
-
-def get_many_faces(frame: Frame) -> Any:
- try:
- return get_face_analyser().get(frame)
- except IndexError:
- return None
diff --git a/spaces/Hila/RobustViT/imagenet_finetune_rrr.py b/spaces/Hila/RobustViT/imagenet_finetune_rrr.py
deleted file mode 100644
index e6f8bc0f6f7f8f0c8966270f6d306121d38ac534..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/imagenet_finetune_rrr.py
+++ /dev/null
@@ -1,570 +0,0 @@
-import argparse
-import os
-import random
-import shutil
-import time
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.parallel
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-import torch.optim
-import torch.multiprocessing as mp
-import torch.utils.data
-import torch.utils.data.distributed
-import torchvision.transforms as transforms
-import torchvision.datasets as datasets
-import torchvision.models as models
-import torch.nn.functional as F
-from segmentation_dataset import SegmentationDataset, VAL_PARTITION, TRAIN_PARTITION
-import numpy as np
-
-# Uncomment the expected model below
-
-# ViT
-from ViT.ViT import vit_base_patch16_224 as vit
-# from ViT.ViT import vit_large_patch16_224 as vit
-
-# ViT-AugReg
-# from ViT.ViT_new import vit_small_patch16_224 as vit
-# from ViT.ViT_new import vit_base_patch16_224 as vit
-# from ViT.ViT_new import vit_large_patch16_224 as vit
-
-# DeiT
-# from ViT.ViT import deit_base_patch16_224 as vit
-# from ViT.ViT import deit_small_patch16_224 as vit
-
-from ViT.explainer import generate_relevance, get_image_with_relevance
-import torchvision
-import cv2
-from torch.utils.tensorboard import SummaryWriter
-import json
-
-model_names = sorted(name for name in models.__dict__
- if name.islower() and not name.startswith("__")
- and callable(models.__dict__[name]))
-model_names.append("vit")
-
-parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
-parser.add_argument('--data', metavar='DATA',
- help='path to dataset')
-parser.add_argument('--seg_data', metavar='SEG_DATA',
- help='path to segmentation dataset')
-parser.add_argument('-a', '--arch', metavar='ARCH', default='resnet18',
- choices=model_names,
- help='model architecture: ' +
- ' | '.join(model_names) +
- ' (default: resnet18)')
-parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
- help='number of data loading workers (default: 4)')
-parser.add_argument('--epochs', default=50, type=int, metavar='N',
- help='number of total epochs to run')
-parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
- help='manual epoch number (useful on restarts)')
-parser.add_argument('-b', '--batch-size', default=8, type=int,
- metavar='N',
- help='mini-batch size (default: 256), this is the total '
- 'batch size of all GPUs on the current node when '
- 'using Data Parallel or Distributed Data Parallel')
-parser.add_argument('--lr', '--learning-rate', default=3e-6, type=float,
- metavar='LR', help='initial learning rate', dest='lr')
-parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
- help='momentum')
-parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
- metavar='W', help='weight decay (default: 1e-4)',
- dest='weight_decay')
-parser.add_argument('-p', '--print-freq', default=10, type=int,
- metavar='N', help='print frequency (default: 10)')
-parser.add_argument('--resume', default='', type=str, metavar='PATH',
- help='path to latest checkpoint (default: none)')
-parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
- help='evaluate model on validation set')
-parser.add_argument('--pretrained', dest='pretrained', action='store_true',
- help='use pre-trained model')
-parser.add_argument('--world-size', default=-1, type=int,
- help='number of nodes for distributed training')
-parser.add_argument('--rank', default=-1, type=int,
- help='node rank for distributed training')
-parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str,
- help='url used to set up distributed training')
-parser.add_argument('--dist-backend', default='nccl', type=str,
- help='distributed backend')
-parser.add_argument('--seed', default=None, type=int,
- help='seed for initializing training. ')
-parser.add_argument('--gpu', default=None, type=int,
- help='GPU id to use.')
-parser.add_argument('--save_interval', default=20, type=int,
- help='interval to save segmentation results.')
-parser.add_argument('--num_samples', default=3, type=int,
- help='number of samples per class for training')
-parser.add_argument('--multiprocessing-distributed', action='store_true',
- help='Use multi-processing distributed training to launch '
- 'N processes per node, which has N GPUs. This is the '
- 'fastest way to use PyTorch for either single node or '
- 'multi node data parallel training')
-parser.add_argument('--lambda_seg', default=0.8, type=float,
- help='influence of segmentation loss.')
-parser.add_argument('--lambda_acc', default=0.2, type=float,
- help='influence of accuracy loss.')
-parser.add_argument('--experiment_folder', default=None, type=str,
- help='path to folder to use for experiment.')
-parser.add_argument('--num_classes', default=500, type=int,
-                    help='number of classes used for training.')
-parser.add_argument('--temperature', default=1, type=float,
- help='temperature for softmax (mostly for DeiT).')
-
-best_loss = float('inf')
-
-def main():
- args = parser.parse_args()
-
- if args.experiment_folder is None:
- args.experiment_folder = f'experiment/' \
- f'lr_{args.lr}_seg_{args.lambda_seg}_acc_{args.lambda_acc}'
- if args.temperature != 1:
- args.experiment_folder = args.experiment_folder + f'_tempera_{args.temperature}'
- if args.batch_size != 8:
- args.experiment_folder = args.experiment_folder + f'_bs_{args.batch_size}'
- if args.num_classes != 500:
- args.experiment_folder = args.experiment_folder + f'_num_classes_{args.num_classes}'
- if args.num_samples != 3:
- args.experiment_folder = args.experiment_folder + f'_num_samples_{args.num_samples}'
- if args.epochs != 150:
- args.experiment_folder = args.experiment_folder + f'_num_epochs_{args.epochs}'
-
- if os.path.exists(args.experiment_folder):
- raise Exception(f"Experiment path {args.experiment_folder} already exists!")
- os.mkdir(args.experiment_folder)
- os.mkdir(f'{args.experiment_folder}/train_samples')
- os.mkdir(f'{args.experiment_folder}/val_samples')
-
- with open(f'{args.experiment_folder}/commandline_args.txt', 'w') as f:
- json.dump(args.__dict__, f, indent=2)
-
- if args.seed is not None:
- random.seed(args.seed)
- torch.manual_seed(args.seed)
- cudnn.deterministic = True
- warnings.warn('You have chosen to seed training. '
- 'This will turn on the CUDNN deterministic setting, '
- 'which can slow down your training considerably! '
- 'You may see unexpected behavior when restarting '
- 'from checkpoints.')
-
- if args.gpu is not None:
- warnings.warn('You have chosen a specific GPU. This will completely '
- 'disable data parallelism.')
-
- if args.dist_url == "env://" and args.world_size == -1:
- args.world_size = int(os.environ["WORLD_SIZE"])
-
- args.distributed = args.world_size > 1 or args.multiprocessing_distributed
-
- ngpus_per_node = torch.cuda.device_count()
- if args.multiprocessing_distributed:
- # Since we have ngpus_per_node processes per node, the total world_size
- # needs to be adjusted accordingly
- args.world_size = ngpus_per_node * args.world_size
- # Use torch.multiprocessing.spawn to launch distributed processes: the
- # main_worker process function
- mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
- else:
- # Simply call main_worker function
- main_worker(args.gpu, ngpus_per_node, args)
-
-
-def main_worker(gpu, ngpus_per_node, args):
- global best_loss
- args.gpu = gpu
-
- if args.gpu is not None:
- print("Use GPU: {} for training".format(args.gpu))
-
- if args.distributed:
- if args.dist_url == "env://" and args.rank == -1:
- args.rank = int(os.environ["RANK"])
- if args.multiprocessing_distributed:
- # For multiprocessing distributed training, rank needs to be the
- # global rank among all the processes
- args.rank = args.rank * ngpus_per_node + gpu
- dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
- world_size=args.world_size, rank=args.rank)
- # create model
- print("=> creating model")
- model = vit(pretrained=True).cuda()
- model.train()
- print("done")
-
- if not torch.cuda.is_available():
- print('using CPU, this will be slow')
- elif args.distributed:
- # For multiprocessing distributed, DistributedDataParallel constructor
- # should always set the single device scope, otherwise,
- # DistributedDataParallel will use all available devices.
- if args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model.cuda(args.gpu)
- # When using a single GPU per process and per
- # DistributedDataParallel, we need to divide the batch size
- # ourselves based on the total number of GPUs we have
- args.batch_size = int(args.batch_size / ngpus_per_node)
- args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- else:
- model.cuda()
- # DistributedDataParallel will divide and allocate batch_size to all
- # available GPUs if device_ids are not set
- model = torch.nn.parallel.DistributedDataParallel(model)
- elif args.gpu is not None:
- torch.cuda.set_device(args.gpu)
- model = model.cuda(args.gpu)
- else:
- # DataParallel will divide and allocate batch_size to all available GPUs
- print("start")
- model = torch.nn.DataParallel(model).cuda()
-
- # define loss function (criterion) and optimizer
- criterion = nn.CrossEntropyLoss().cuda(args.gpu)
- optimizer = torch.optim.AdamW(model.parameters(), args.lr, weight_decay=args.weight_decay)
-
- # optionally resume from a checkpoint
- if args.resume:
- if os.path.isfile(args.resume):
- print("=> loading checkpoint '{}'".format(args.resume))
- if args.gpu is None:
- checkpoint = torch.load(args.resume)
- else:
- # Map model to be loaded to specified single gpu.
- loc = 'cuda:{}'.format(args.gpu)
- checkpoint = torch.load(args.resume, map_location=loc)
- args.start_epoch = checkpoint['epoch']
- best_loss = checkpoint['best_loss']
- if args.gpu is not None:
- # best_loss may be from a checkpoint from a different GPU
- best_loss = best_loss.to(args.gpu)
- model.load_state_dict(checkpoint['state_dict'])
- optimizer.load_state_dict(checkpoint['optimizer'])
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.resume, checkpoint['epoch']))
- else:
- print("=> no checkpoint found at '{}'".format(args.resume))
-
- cudnn.benchmark = True
-
- train_dataset = SegmentationDataset(args.seg_data, args.data, partition=TRAIN_PARTITION, train_classes=args.num_classes,
- num_samples=args.num_samples)
-
- if args.distributed:
- train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
- else:
- train_sampler = None
-
- train_loader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.batch_size, shuffle=False,
- num_workers=args.workers, pin_memory=True, sampler=train_sampler)
-
- val_dataset = SegmentationDataset(args.seg_data, args.data, partition=VAL_PARTITION, train_classes=args.num_classes,
- num_samples=1)
-
- val_loader = torch.utils.data.DataLoader(
- val_dataset, batch_size=5, shuffle=False,
- num_workers=args.workers, pin_memory=True)
-
- if args.evaluate:
- validate(val_loader, model, criterion, 0, args)
- return
-
- for epoch in range(args.start_epoch, args.epochs):
- if args.distributed:
- train_sampler.set_epoch(epoch)
- adjust_learning_rate(optimizer, epoch, args)
-
- log_dir = os.path.join(args.experiment_folder, 'logs')
- logger = SummaryWriter(log_dir=log_dir)
- args.logger = logger
-
- # train for one epoch
- train(train_loader, model, criterion, optimizer, epoch, args)
-
- # evaluate on validation set
- loss1 = validate(val_loader, model, criterion, epoch, args)
-
- # remember best acc@1 and save checkpoint
- is_best = loss1 < best_loss
- best_loss = min(loss1, best_loss)
-
- if not args.multiprocessing_distributed or (args.multiprocessing_distributed
- and args.rank % ngpus_per_node == 0):
- save_checkpoint({
- 'epoch': epoch + 1,
- 'state_dict': model.state_dict(),
- 'best_loss': best_loss,
- 'optimizer' : optimizer.state_dict(),
- }, is_best, folder=args.experiment_folder)
-
-def train(train_loader, model, criterion, optimizer, epoch, args):
- losses = AverageMeter('Loss', ':.4e')
- top1 = AverageMeter('Acc@1', ':6.2f')
- top5 = AverageMeter('Acc@5', ':6.2f')
- orig_top1 = AverageMeter('Acc@1_orig', ':6.2f')
- orig_top5 = AverageMeter('Acc@5_orig', ':6.2f')
- progress = ProgressMeter(
- len(train_loader),
- [losses, top1, top5, orig_top1, orig_top5],
- prefix="Epoch: [{}]".format(epoch))
-
- orig_model = vit(pretrained=True).cuda()
- orig_model.eval()
-
- # switch to train mode
- model.train()
-
- for i, (seg_map, image_ten, class_name) in enumerate(train_loader):
- if torch.cuda.is_available():
- image_ten = image_ten.cuda(args.gpu, non_blocking=True)
- seg_map = seg_map.cuda(args.gpu, non_blocking=True)
- class_name = class_name.cuda(args.gpu, non_blocking=True)
-
-
- image_ten.requires_grad = True
- output = model(image_ten)
-
- # segmentation loss
- EPS = 10e-12
- y_pred = torch.sum(torch.log(F.softmax(output, dim=1) + EPS))
- relevance = torch.autograd.grad(y_pred, image_ten, retain_graph=True)[0]
- reverse_seg_map = seg_map.clone()
- reverse_seg_map[reverse_seg_map == 1] = -1
- reverse_seg_map[reverse_seg_map == 0] = 1
- reverse_seg_map[reverse_seg_map == -1] = 0
- rrr_loss = (relevance * reverse_seg_map)**2
- segmentation_loss = rrr_loss.sum()
-
- # classification loss
- with torch.no_grad():
- output_orig = orig_model(image_ten)
- if args.temperature != 1:
- output = output / args.temperature
- classification_loss = criterion(output, class_name.flatten())
-
- loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss
-
- # debugging output
- if i % args.save_interval == 0:
- orig_relevance = generate_relevance(orig_model, image_ten, index=class_name)
- for j in range(image_ten.shape[0]):
- image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j]))
- new_vis = get_image_with_relevance(image_ten[j]*relevance[j], torch.ones_like(image_ten[j]))
- old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j])
- gt = get_image_with_relevance(image_ten[j], seg_map[j])
- h_img = cv2.hconcat([image, gt, old_vis, new_vis])
- cv2.imwrite(f'{args.experiment_folder}/train_samples/res_{i}_{j}.jpg', h_img)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, class_name, topk=(1, 5))
- losses.update(loss.item(), image_ten.size(0))
- top1.update(acc1[0], image_ten.size(0))
- top5.update(acc5[0], image_ten.size(0))
-
- # metrics for original vit
- acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5))
- orig_top1.update(acc1_orig[0], image_ten.size(0))
- orig_top5.update(acc5_orig[0], image_ten.size(0))
-
- # compute gradient and do SGD step
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- if i % args.print_freq == 0:
- progress.display(i)
- args.logger.add_scalar('{}/{}'.format('train', 'segmentation_loss'), segmentation_loss,
- epoch*len(train_loader)+i)
- args.logger.add_scalar('{}/{}'.format('train', 'classification_loss'), classification_loss,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'orig_top1'), acc1_orig,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'top1'), acc1,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'orig_top5'), acc5_orig,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'top5'), acc5,
- epoch * len(train_loader) + i)
- args.logger.add_scalar('{}/{}'.format('train', 'tot_loss'), loss,
- epoch * len(train_loader) + i)
-
-
-def validate(val_loader, model, criterion, epoch, args):
- mse_criterion = torch.nn.MSELoss(reduction='mean')
-
- losses = AverageMeter('Loss', ':.4e')
- top1 = AverageMeter('Acc@1', ':6.2f')
- top5 = AverageMeter('Acc@5', ':6.2f')
- orig_top1 = AverageMeter('Acc@1_orig', ':6.2f')
- orig_top5 = AverageMeter('Acc@5_orig', ':6.2f')
- progress = ProgressMeter(
- len(val_loader),
- [losses, top1, top5, orig_top1, orig_top5],
-        prefix="Epoch: [{}]".format(epoch))
-
- # switch to evaluate mode
- model.eval()
-
- orig_model = vit(pretrained=True).cuda()
- orig_model.eval()
-
- with torch.no_grad():
- for i, (seg_map, image_ten, class_name) in enumerate(val_loader):
- if args.gpu is not None:
- image_ten = image_ten.cuda(args.gpu, non_blocking=True)
- if torch.cuda.is_available():
- seg_map = seg_map.cuda(args.gpu, non_blocking=True)
- class_name = class_name.cuda(args.gpu, non_blocking=True)
-
- with torch.enable_grad():
- image_ten.requires_grad = True
- output = model(image_ten)
-
- # segmentation loss
- EPS = 10e-12
- y_pred = torch.sum(torch.log(F.softmax(output, dim=1) + EPS))
- relevance = torch.autograd.grad(y_pred, image_ten, retain_graph=True)[0]
-
- reverse_seg_map = seg_map.clone()
- reverse_seg_map[reverse_seg_map == 1] = -1
- reverse_seg_map[reverse_seg_map == 0] = 1
- reverse_seg_map[reverse_seg_map == -1] = 0
- rrr_loss = (relevance * reverse_seg_map) ** 2
- segmentation_loss = rrr_loss.sum()
-
- # classification loss
- output = model(image_ten)
- with torch.no_grad():
- output_orig = orig_model(image_ten)
- if args.temperature != 1:
- output = output / args.temperature
- classification_loss = criterion(output, class_name.flatten())
-
- loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss
-
- # save results
- if i % args.save_interval == 0:
- with torch.enable_grad():
- orig_relevance = generate_relevance(orig_model, image_ten, index=class_name)
- for j in range(image_ten.shape[0]):
- image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j]))
- new_vis = get_image_with_relevance(image_ten[j]*relevance[j], torch.ones_like(image_ten[j]))
- old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j])
- gt = get_image_with_relevance(image_ten[j], seg_map[j])
- h_img = cv2.hconcat([image, gt, old_vis, new_vis])
- cv2.imwrite(f'{args.experiment_folder}/val_samples/res_{i}_{j}.jpg', h_img)
-
- # measure accuracy and record loss
- acc1, acc5 = accuracy(output, class_name, topk=(1, 5))
- losses.update(loss.item(), image_ten.size(0))
- top1.update(acc1[0], image_ten.size(0))
- top5.update(acc5[0], image_ten.size(0))
-
- # metrics for original vit
- acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5))
- orig_top1.update(acc1_orig[0], image_ten.size(0))
- orig_top5.update(acc5_orig[0], image_ten.size(0))
-
- if i % args.print_freq == 0:
- progress.display(i)
- args.logger.add_scalar('{}/{}'.format('val', 'segmentation_loss'), segmentation_loss,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'classification_loss'), classification_loss,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'orig_top1'), acc1_orig,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'top1'), acc1,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'orig_top5'), acc5_orig,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'top5'), acc5,
- epoch * len(val_loader) + i)
- args.logger.add_scalar('{}/{}'.format('val', 'tot_loss'), loss,
- epoch * len(val_loader) + i)
-
- # TODO: this should also be done with the ProgressMeter
- print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
- .format(top1=top1, top5=top5))
-
- return losses.avg
-
-
-def save_checkpoint(state, is_best, folder, filename='checkpoint.pth.tar'):
-    torch.save(state, f'{folder}/{filename}')
-    if is_best:
-        shutil.copyfile(f'{folder}/{filename}', f'{folder}/model_best.pth.tar')
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
- def __init__(self, name, fmt=':f'):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
- return fmtstr.format(**self.__dict__)
-
-
-class ProgressMeter(object):
- def __init__(self, num_batches, meters, prefix=""):
- self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
- self.meters = meters
- self.prefix = prefix
-
- def display(self, batch):
- entries = [self.prefix + self.batch_fmtstr.format(batch)]
- entries += [str(meter) for meter in self.meters]
- print('\t'.join(entries))
-
- def _get_batch_fmtstr(self, num_batches):
-        num_digits = len(str(num_batches))
- fmt = '{:' + str(num_digits) + 'd}'
- return '[' + fmt + '/' + fmt.format(num_batches) + ']'
-
-def adjust_learning_rate(optimizer, epoch, args):
-    """Sets the learning rate to the initial LR decayed by a factor of 0.85 every 2 epochs"""
- lr = args.lr * (0.85 ** (epoch // 2))
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
-
-
-def accuracy(output, target, topk=(1,)):
- """Computes the accuracy over the k top predictions for the specified values of k"""
- with torch.no_grad():
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
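The training loop above penalizes input-gradient relevance that falls outside the ground-truth object mask (a "right for the right reasons"-style loss). As a minimal numpy sketch, with a toy relevance map standing in for the real input gradient, note that for a 0/1 mask the three-step value swap in the loop reduces to simple inversion:

```python
import numpy as np

# Toy relevance map (stand-in for the input gradient) and binary object mask.
relevance = np.array([[0.2, 0.9], [0.4, 0.1]])
seg_map = np.array([[0, 1], [1, 0]])  # 1 = object pixels

# For a 0/1 mask, the three-step value swap in the training loop
# (1 -> -1, 0 -> 1, -1 -> 0) is equivalent to simple inversion:
reverse_seg_map = 1 - seg_map

# Penalize squared relevance that lands outside the object.
rrr_loss = ((relevance * reverse_seg_map) ** 2).sum()
print(round(float(rrr_loss), 4))
```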
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/shorten_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/shorten_dataset.py
deleted file mode 100644
index 6ebb5d88feb3f29d1512a0873df304915d051209..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/shorten_dataset.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-from fairseq.data import data_utils
-
-from . import BaseWrapperDataset
-
-
-class TruncateDataset(BaseWrapperDataset):
- """Truncate a sequence by returning the first truncation_length tokens"""
-
- def __init__(self, dataset, truncation_length):
- super().__init__(dataset)
- assert truncation_length is not None
- self.truncation_length = truncation_length
- self.dataset = dataset
-
- def __getitem__(self, index):
- item = self.dataset[index]
- item_len = item.size(0)
- if item_len > self.truncation_length:
- item = item[: self.truncation_length]
- return item
-
- @property
- def sizes(self):
- return np.minimum(self.dataset.sizes, self.truncation_length)
-
- def __len__(self):
- return len(self.dataset)
-
-
-class RandomCropDataset(TruncateDataset):
- """Truncate a sequence by returning a random crop of truncation_length tokens"""
-
- def __init__(self, dataset, truncation_length, seed=1):
- super().__init__(dataset, truncation_length)
- self.seed = seed
- self.epoch = 0
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return True # only the crop changes, not item sizes
-
- def set_epoch(self, epoch, **unused):
- super().set_epoch(epoch)
- self.epoch = epoch
-
- def __getitem__(self, index):
- with data_utils.numpy_seed(self.seed, self.epoch, index):
- item = self.dataset[index]
- item_len = item.size(0)
- excess = item_len - self.truncation_length
- if excess > 0:
- start_idx = np.random.randint(0, excess)
- item = item[start_idx : start_idx + self.truncation_length]
- return item
-
-
-def maybe_shorten_dataset(
- dataset,
- split,
- shorten_data_split_list,
- shorten_method,
- tokens_per_sample,
- seed,
-):
- truncate_split = (
- split in shorten_data_split_list.split(",") or len(shorten_data_split_list) == 0
- )
- if shorten_method == "truncate" and truncate_split:
- dataset = TruncateDataset(dataset, tokens_per_sample)
- elif shorten_method == "random_crop" and truncate_split:
- dataset = RandomCropDataset(dataset, tokens_per_sample, seed)
- return dataset
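The two wrappers above differ only in which window of the sequence they keep. A minimal plain-Python sketch of the same idea (illustrative helper names, not the fairseq API):

```python
import random

def truncate(tokens, max_len):
    # TruncateDataset: keep the first max_len tokens.
    return tokens[:max_len]

def random_crop(tokens, max_len, rng):
    # RandomCropDataset: keep a contiguous window of max_len tokens
    # starting at a (seeded) random offset, so crops are reproducible
    # per (seed, epoch, index) as in the wrapper above.
    excess = len(tokens) - max_len
    if excess <= 0:
        return tokens
    start = rng.randrange(excess)
    return tokens[start:start + max_len]

tokens = list(range(10))
print(truncate(tokens, 4))   # [0, 1, 2, 3]
crop = random_crop(tokens, 4, random.Random(1))
print(len(crop))             # 4
```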
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/linearized_convolution.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/linearized_convolution.py
deleted file mode 100644
index f7e156cb0c75cb375447859c8b6749311372c35e..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/linearized_convolution.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.incremental_decoding_utils import with_incremental_state
-
-from .conv_tbc import ConvTBC
-
-from typing import Dict, Optional
-from torch import Tensor
-
-@with_incremental_state
-class LinearizedConvolution(ConvTBC):
- """An optimized version of nn.Conv1d.
-
- At training time, this module uses ConvTBC, which is an optimized version
- of Conv1d. At inference time, it optimizes incremental generation (i.e.,
- one time step at a time) by replacing the convolutions with linear layers.
- Note that the input order changes from training to inference.
- """
-
- def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
- super().__init__(in_channels, out_channels, kernel_size, **kwargs)
- self._linearized_weight = None
- self.register_backward_hook(self._clear_linearized_weight)
-
- def state_dict(self, destination=None, prefix="", keep_vars=False):
- state = ConvTBC.state_dict(self, destination, prefix, keep_vars=keep_vars)
- # don't store redundant _linearized_weight in checkpoints
- if prefix + "_linearized_weight" in state:
- del state[prefix + "_linearized_weight"]
- return state
-
- def upgrade_state_dict_named(self, state_dict, name):
- prefix = name + "." if name != "" else ""
- if prefix + "_linearized_weight" in state_dict:
- del state_dict[prefix + "_linearized_weight"]
-
- @torch.jit.export
- def forward(self, input, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None):
- """
- Args:
- incremental_state: Used to buffer signal; if not None, then input is
- expected to contain a single frame. If the input order changes
- between time steps, call reorder_incremental_state.
- Input:
- Time x Batch x Channel during training
- Batch x Time x Channel during inference
- """
- if incremental_state is None:
- output = self.conv_tbc(input)
- if self.kernel_size[0] > 1 and self.padding[0] > 0:
- # remove future timesteps added by padding
- output = output[: -self.padding[0], :, :]
- return output
-
- # reshape weight
- weight = self._get_linearized_weight()
- kw = self.kernel_size[0]
-
- bsz = input.size(0) # input: bsz x len x dim
- if kw > 1:
- input = input.data
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is None:
- input_buffer = input.new(bsz, kw, input.size(2)).zero_()
- self._set_input_buffer(incremental_state, input_buffer)
- else:
- # shift buffer
- input_buffer[:, :-1, :] = input_buffer[:, 1:, :].clone()
- # append next input
- input_buffer[:, -1, :] = input[:, -1, :]
- input = input_buffer
- with torch.no_grad():
- output = F.linear(input.view(bsz, -1), weight, self.bias)
- return output.view(bsz, 1, -1)
-
- @torch.jit.unused
- def reorder_incremental_state(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], new_order):
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is not None:
- input_buffer = input_buffer.index_select(0, new_order)
- self._set_input_buffer(incremental_state, input_buffer)
-
- @torch.jit.unused
- def _get_input_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]]):
- return utils.get_incremental_state(self, incremental_state, "input_buffer")
-
- @torch.jit.unused
- def _set_input_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], new_buffer):
- return utils.set_incremental_state(
- self, incremental_state, "input_buffer", new_buffer
- )
-
- @torch.jit.unused
- def _get_linearized_weight(self):
- if self._linearized_weight is None:
- kw = self.kernel_size[0]
- weight = self.weight.transpose(2, 1).transpose(1, 0).contiguous()
- assert weight.size() == (self.out_channels, kw, self.in_channels)
- return weight.view(self.out_channels, -1)
- return self._linearized_weight
-
- @torch.jit.unused
- def _clear_linearized_weight(self, *args):
- self._linearized_weight = None
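The incremental path above shifts a `(batch, kw, dim)` buffer one frame per step and applies a linear map to the flattened buffer. A small numpy sketch (toy shapes, single output channel, no fairseq dependencies) showing that this matches the convolution's output at the last position:

```python
import numpy as np

# Incremental decoding with a kernel-size-3 "linearized" convolution: shift
# the buffer, append the newest frame, and apply a linear map over the
# flattened last kw frames.
kw, dim = 3, 2
rng = np.random.default_rng(0)
weight = rng.standard_normal(kw * dim)  # flattened conv kernel, 1 output ch

def full_conv_last(x):
    # Convolution output at the last valid position.
    return x[-kw:].reshape(-1) @ weight

buffer = np.zeros((kw, dim))
stream = rng.standard_normal((5, dim))
for frame in stream:
    buffer[:-1] = buffer[1:]   # shift buffer left by one frame
    buffer[-1] = frame         # append the newest frame
    incremental = buffer.reshape(-1) @ weight

# After >= kw frames, the incremental output equals the full convolution's
# output at the final position.
print(np.allclose(incremental, full_conv_last(stream)))
```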
diff --git a/spaces/Illumotion/Koboldcpp/examples/main/README.md b/spaces/Illumotion/Koboldcpp/examples/main/README.md
deleted file mode 100644
index a9561c383c0cba7873808626cc4114e25dc1865d..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/examples/main/README.md
+++ /dev/null
@@ -1,310 +0,0 @@
-# llama.cpp/example/main
-
-This example program allows you to use various LLaMA language models in an easy and efficient way. It is specifically designed to work with the [llama.cpp](https://github.com/ggerganov/llama.cpp) project, which provides a plain C/C++ implementation with optional 4-bit quantization support for faster, lower memory inference, and is optimized for desktop CPUs. This program can be used to perform various inference tasks with LLaMA models, including generating text based on user-provided prompts and chat-like interactions with reverse prompts.
-
-## Table of Contents
-
-1. [Quick Start](#quick-start)
-2. [Common Options](#common-options)
-3. [Input Prompts](#input-prompts)
-4. [Interaction](#interaction)
-5. [Context Management](#context-management)
-6. [Generation Flags](#generation-flags)
-7. [Performance Tuning and Memory Options](#performance-tuning-and-memory-options)
-8. [Additional Options](#additional-options)
-
-## Quick Start
-
-To get started right away, run the following command, making sure to use the correct path for the model you have:
-
-#### Unix-based systems (Linux, macOS, etc.):
-
-```bash
-./main -m models/7B/ggml-model.bin --prompt "Once upon a time"
-```
-
-#### Windows:
-
-```powershell
-main.exe -m models\7B\ggml-model.bin --prompt "Once upon a time"
-```
-
-For an interactive experience, try this command:
-
-#### Unix-based systems (Linux, macOS, etc.):
-
-```bash
-./main -m models/7B/ggml-model.bin -n -1 --color -r "User:" --in-prefix " " -i -p \
-'User: Hi
-AI: Hello. I am an AI chatbot. Would you like to talk?
-User: Sure!
-AI: What would you like to talk about?
-User:'
-```
-
-#### Windows:
-
-```powershell
-main.exe -m models\7B\ggml-model.bin -n -1 --color -r "User:" --in-prefix " " -i -e -p "User: Hi\nAI: Hello. I am an AI chatbot. Would you like to talk?\nUser: Sure!\nAI: What would you like to talk about?\nUser:"
-```
-
-The following command generates "infinite" text from a starting prompt (you can use `Ctrl-C` to stop it):
-
-#### Unix-based systems (Linux, macOS, etc.):
-
-```bash
-./main -m models/7B/ggml-model.bin --ignore-eos -n -1 --random-prompt
-```
-
-#### Windows:
-
-```powershell
-main.exe -m models\7B\ggml-model.bin --ignore-eos -n -1 --random-prompt
-```
-
-## Common Options
-
-In this section, we cover the most commonly used options for running the `main` program with the LLaMA models:
-
-- `-m FNAME, --model FNAME`: Specify the path to the LLaMA model file (e.g., `models/7B/ggml-model.bin`).
-- `-i, --interactive`: Run the program in interactive mode, allowing you to provide input directly and receive real-time responses.
-- `-ins, --instruct`: Run the program in instruction mode, which is particularly useful when working with Alpaca models.
-- `-n N, --n-predict N`: Set the number of tokens to predict when generating text. Adjusting this value can influence the length of the generated text.
-- `-c N, --ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.
-
-## Input Prompts
-
-The `main` program provides several ways to interact with the LLaMA models using input prompts:
-
-- `--prompt PROMPT`: Provide a prompt directly as a command-line option.
-- `--file FNAME`: Provide a file containing a prompt or multiple prompts.
-- `--interactive-first`: Run the program in interactive mode and wait for input right away. (More on this below.)
-- `--random-prompt`: Start with a randomized prompt.
-
-## Interaction
-
-The `main` program offers a seamless way to interact with LLaMA models, allowing users to engage in real-time conversations or provide instructions for specific tasks. The interactive mode can be triggered using various options, including `--interactive`, `--interactive-first`, and `--instruct`.
-
-In interactive mode, users can participate in text generation by injecting their input during the process. Users can press `Ctrl+C` at any time to interject and type their input, followed by pressing `Return` to submit it to the LLaMA model. To submit additional lines without finalizing input, users can end the current line with a backslash (`\`) and continue typing.
-
-### Interaction Options
-
-- `-i, --interactive`: Run the program in interactive mode, allowing users to engage in real-time conversations or provide specific instructions to the model.
-- `--interactive-first`: Run the program in interactive mode and immediately wait for user input before starting the text generation.
-- `-ins, --instruct`: Run the program in instruction mode, which is specifically designed to work with Alpaca models that excel in completing tasks based on user instructions.
-- `--color`: Enable colorized output to visually distinguish between prompts, user input, and generated text.
-
-By understanding and utilizing these interaction options, you can create engaging and dynamic experiences with the LLaMA models, tailoring the text generation process to your specific needs.
-
-### Reverse Prompts
-
-Reverse prompts are a powerful way to create a chat-like experience with a LLaMA model by pausing the text generation when specific text strings are encountered:
-
-- `-r PROMPT, --reverse-prompt PROMPT`: Specify one or multiple reverse prompts to pause text generation and switch to interactive mode. For example, `-r "User:"` can be used to jump back into the conversation whenever it's the user's turn to speak. This helps create a more interactive and conversational experience. However, the reverse prompt doesn't work when it ends with a space.
-
-To overcome this limitation, you can use the `--in-prefix` flag to add a space or any other characters after the reverse prompt.
-
-### In-Prefix
-
-The `--in-prefix` flag is used to add a prefix to your input; primarily, this is used to insert a space after the reverse prompt. Here's an example of how to use the `--in-prefix` flag in conjunction with the `--reverse-prompt` flag:
-
-```sh
-./main -r "User:" --in-prefix " "
-```
-
-### In-Suffix
-
-The `--in-suffix` flag is used to add a suffix after your input. This is useful for adding an "Assistant:" prompt after the user's input. It's added after the new-line character (`\n`) that's automatically added to the end of the user's input. Here's an example of how to use the `--in-suffix` flag in conjunction with the `--reverse-prompt` flag:
-
-```sh
-./main -r "User:" --in-prefix " " --in-suffix "Assistant:"
-```
-
-### Instruction Mode
-
-Instruction mode is particularly useful when working with Alpaca models, which are designed to follow user instructions for specific tasks:
-
-- `-ins, --instruct`: Enable instruction mode to leverage the capabilities of Alpaca models in completing tasks based on user-provided instructions.
-
-Technical detail: the user's input is internally prefixed with the reverse prompt (or `### Instruction:` as the default), and followed by `### Response:` (except if you just press Return without any input, to keep generating a longer response).
-
-## Context Management
-
-During text generation, LLaMA models have a limited context size, which means they can only consider a certain number of tokens from the input and generated text. When the context fills up, the model resets internally, potentially losing some information from the beginning of the conversation or instructions. Context management options help maintain continuity and coherence in these situations.
-
-### Context Size
-
-The `--ctx-size` option allows you to set the size of the prompt context used by the LLaMA models during text generation. A larger context size helps the model to better comprehend and generate responses for longer input or conversations.
-
-- `-c N, --ctx-size N`: Set the size of the prompt context (default: 512). The LLaMA models were built with a context of 2048, which will yield the best results on longer input/inference. However, increasing the context size beyond 2048 may lead to unpredictable results.
-
-### Extended Context Size
-
-Some fine-tuned models have extended the context length by scaling RoPE. For example, if the original pretrained model has a context length (max sequence length) of 4096 (4k) and the fine-tuned model has 32k, that is a scaling factor of 8. It should work by setting the above `--ctx-size` to 32768 (32k) and `--rope-scale` to 8.
-
-- `--rope-scale N`: Where N is the linear scaling factor used by the fine-tuned model.
-
-### Keep Prompt
-
-The `--keep` option allows users to retain the original prompt when the model runs out of context, ensuring a connection to the initial instruction or conversation topic is maintained.
-
-- `--keep N`: Specify the number of tokens from the initial prompt to retain when the model resets its internal context. By default, this value is set to 0 (meaning no tokens are kept). Use `-1` to retain all tokens from the initial prompt.
-
-By utilizing context management options like `--ctx-size` and `--keep`, you can maintain a more coherent and consistent interaction with the LLaMA models, ensuring that the generated text remains relevant to the original prompt or conversation.
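The context-reset behavior described above can be sketched in a few lines of Python (a rough model of the idea, not the actual llama.cpp implementation): keep the first `n_keep` tokens, discard the older half of the remainder, and keep the recent half.

```python
def reset_context(tokens, n_keep):
    # Keep the first n_keep tokens, drop the older half of the rest, and
    # keep the recent half (roughly what happens on a context reset).
    rest = tokens[n_keep:]
    return tokens[:n_keep] + rest[len(rest) // 2:]

tokens = list(range(8))
print(reset_context(tokens, 2))   # [0, 1, 5, 6, 7]
```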
-
-## Generation Flags
-
-The following options allow you to control the text generation process and fine-tune the diversity, creativity, and quality of the generated text according to your needs. By adjusting these options and experimenting with different combinations of values, you can find the best settings for your specific use case.
-
-### Number of Tokens to Predict
-
-- `-n N, --n-predict N`: Set the number of tokens to predict when generating text (default: 128, -1 = infinity, -2 = until context filled)
-
-The `--n-predict` option controls the number of tokens the model generates in response to the input prompt. By adjusting this value, you can influence the length of the generated text. A higher value will result in longer text, while a lower value will produce shorter text.
-
-A value of -1 will enable infinite text generation, even though we have a finite context window. When the context window is full, some of the earlier tokens (half of the tokens after `--n-keep`) will be discarded. The context must then be re-evaluated before generation can resume. On large models and/or large context windows, this will result in a significant pause in output.
-
-If the pause is undesirable, a value of -2 will stop generation immediately when the context is filled.
-
-It is important to note that the generated text may be shorter than the specified number of tokens if an End-of-Sequence (EOS) token or a reverse prompt is encountered. In interactive mode text generation will pause and control will be returned to the user. In non-interactive mode, the program will end. In both cases, the text generation may stop before reaching the specified `n-predict` value. If you want the model to keep going without ever producing End-of-Sequence on its own, you can use the `--ignore-eos` parameter.
-
-### Temperature
-
-- `--temp N`: Adjust the randomness of the generated text (default: 0.8).
-
-Temperature is a hyperparameter that controls the randomness of the generated text. It affects the probability distribution of the model's output tokens. A higher temperature (e.g., 1.5) makes the output more random and creative, while a lower temperature (e.g., 0.5) makes the output more focused, deterministic, and conservative. The default value is 0.8, which provides a balance between randomness and determinism. At the extreme, a temperature of 0 will always pick the most likely next token, leading to identical outputs in each run.
-
-Example usage: `--temp 0.5`
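
Mechanically, temperature simply divides each logit before the softmax. The following is a simplified Python sketch of that math, not llama.cpp's actual implementation:

```python
import math

def softmax_with_temperature(logits, temp):
    """Scale logits by 1/temp, then apply softmax; temp -> 0 approaches greedy argmax."""
    if temp <= 0:
        # Greedy decoding: all probability mass on the most likely token.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [x / temp for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With `logits = [2.0, 1.0, 0.1]`, a temperature of 0.5 concentrates noticeably more probability on the first token than a temperature of 1.5 does.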
-
-### Repeat Penalty
-
-- `--repeat-penalty N`: Control the repetition of token sequences in the generated text (default: 1.1).
-- `--repeat-last-n N`: Last n tokens to consider for penalizing repetition (default: 64, 0 = disabled, -1 = ctx-size).
-- `--no-penalize-nl`: Disable penalization for newline tokens when applying the repeat penalty.
-
-The `repeat-penalty` option helps prevent the model from generating repetitive or monotonous text. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. The default value is 1.1.
-
-The `repeat-last-n` option controls the number of tokens in the history to consider for penalizing repetition. A larger value will look further back in the generated text to prevent repetitions, while a smaller value will only consider recent tokens. A value of 0 disables the penalty, and a value of -1 sets the number of tokens considered equal to the context size (`ctx-size`).
-
-Use the `--no-penalize-nl` option to disable newline penalization when applying the repeat penalty. This option is particularly useful for generating chat conversations, dialogues, code, poetry, or any text where newline tokens play a significant role in structure and formatting. Disabling newline penalization helps maintain the natural flow and intended formatting in these specific use cases.
-
-Example usage: `--repeat-penalty 1.15 --repeat-last-n 128 --no-penalize-nl`
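
The core idea can be sketched in a few lines of Python. This is a simplified illustration of the penalty, not the actual llama.cpp code, and it omits the newline exception:

```python
def apply_repeat_penalty(logits, recent_tokens, penalty=1.1):
    """Penalize tokens seen in the last-n window: divide positive logits by
    the penalty and multiply negative ones, so repeats always become less likely."""
    out = list(logits)
    for tok in set(recent_tokens):
        if out[tok] > 0:
            out[tok] /= penalty   # shrink positive logits
        else:
            out[tok] *= penalty   # push negative logits further down
    return out
```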
-
-### Top-K Sampling
-
-- `--top-k N`: Limit the next token selection to the K most probable tokens (default: 40).
-
-Top-k sampling is a text generation method that selects the next token only from the top k most likely tokens predicted by the model. It helps reduce the risk of generating low-probability or nonsensical tokens, but it may also limit the diversity of the output. A higher value for top-k (e.g., 100) will consider more tokens and lead to more diverse text, while a lower value (e.g., 10) will focus on the most probable tokens and generate more conservative text. The default value is 40.
-
-Example usage: `--top-k 30`
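
A simplified Python sketch of the idea (illustrative only, not the actual llama.cpp code; ties at the cutoff are all kept here):

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens, zero the rest, and renormalize."""
    if k <= 0 or k >= len(probs):
        return list(probs)
    cutoff = sorted(probs, reverse=True)[k - 1]
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]
```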
-
-### Top-P Sampling
-
-- `--top-p N`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.9).
-
-Top-p sampling, also known as nucleus sampling, is another text generation method that selects the next token from a subset of tokens that together have a cumulative probability of at least p. This method provides a balance between diversity and quality by considering both the probabilities of tokens and the number of tokens to sample from. A higher value for top-p (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. The default value is 0.9.
-
-Example usage: `--top-p 0.95`
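
The filtering step of nucleus sampling can be sketched like this (an illustrative approximation, not the actual llama.cpp code):

```python
def top_p_filter(probs, p):
    """Keep the smallest set of most-probable tokens whose cumulative
    probability reaches p, zero the rest, and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= p:
            break
    filtered = [probs[i] if i in kept else 0.0 for i in range(len(probs))]
    total = sum(filtered)
    return [x / total for x in filtered]
```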
-
-### Tail Free Sampling (TFS)
-
-- `--tfs N`: Enable tail free sampling with parameter z (default: 1.0, 1.0 = disabled).
-
-Tail free sampling (TFS) is a text generation technique that aims to reduce the impact of less likely tokens, which may be less relevant, less coherent, or nonsensical, on the output. Similar to Top-P, it tries to determine the bulk of the most likely tokens dynamically, but TFS filters out logits based on the second derivative of their probabilities: tokens are added until the sum of the second derivatives reaches the parameter z. In short, TFS looks at how quickly the probabilities of the tokens decrease and cuts off the tail of unlikely tokens using the parameter z. Typical values for z are in the range of 0.9 to 0.95. A value of 1.0 would include all tokens and thus disables the effect of TFS.
-
-Example usage: `--tfs 0.95`
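
The tail-cutting step can be sketched roughly as follows. This is an illustrative approximation of the technique, not the actual llama.cpp implementation:

```python
def tail_free_filter(probs, z):
    """Cut the tail of unlikely tokens using the normalized absolute second
    derivative of the sorted probabilities (rough sketch of TFS)."""
    if z >= 1.0 or len(probs) < 3:
        return list(probs)
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    p = [probs[i] for i in order]
    first = [p[i] - p[i + 1] for i in range(len(p) - 1)]
    second = [abs(first[i] - first[i + 1]) for i in range(len(first) - 1)]
    total = sum(second)
    if total == 0:
        return list(probs)
    second = [s / total for s in second]
    keep = len(p)  # default: keep everything
    cum = 0.0
    for i, s in enumerate(second):
        cum += s
        if cum > z:
            keep = i  # cut the tail once the curvature budget z is spent
            break
    kept = set(order[:max(keep, 1)])  # always keep at least the top token
    filtered = [probs[i] if i in kept else 0.0 for i in range(len(probs))]
    norm = sum(filtered)
    return [x / norm for x in filtered]
```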
-
-### Locally Typical Sampling
-
-- `--typical N`: Enable locally typical sampling with parameter p (default: 1.0, 1.0 = disabled).
-
-Locally typical sampling promotes the generation of contextually coherent and diverse text by sampling tokens that are typical or expected based on the surrounding context. By setting the parameter p between 0 and 1, you can control the balance between producing text that is locally coherent and diverse. A value closer to 1 will promote more contextually coherent tokens, while a value closer to 0 will promote more diverse tokens. A value equal to 1 disables locally typical sampling.
-
-Example usage: `--typical 0.9`
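
A rough sketch of the filtering step: compute the distribution's entropy, rank tokens by how close their surprisal is to it, and keep tokens until their cumulative probability reaches p. Illustrative only, not the actual llama.cpp code:

```python
import math

def locally_typical_filter(probs, p):
    """Keep tokens whose surprisal is closest to the distribution's entropy,
    up to cumulative probability p, then renormalize."""
    if p >= 1.0:
        return list(probs)
    entropy = -sum(q * math.log(q) for q in probs if q > 0)

    def dist(i):
        # distance of token i's surprisal from the entropy
        return abs(-math.log(probs[i]) - entropy) if probs[i] > 0 else float("inf")

    order = sorted(range(len(probs)), key=dist)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= p:
            break
    filtered = [probs[i] if i in kept else 0.0 for i in range(len(probs))]
    total = sum(filtered)
    return [x / total for x in filtered]
```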
-
-### Mirostat Sampling
-
-- `--mirostat N`: Enable Mirostat sampling, controlling perplexity during text generation (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0).
-- `--mirostat-lr N`: Set the Mirostat learning rate, parameter eta (default: 0.1).
-- `--mirostat-ent N`: Set the Mirostat target entropy, parameter tau (default: 5.0).
-
-Mirostat is an algorithm that actively maintains the quality of generated text within a desired range during text generation. It aims to strike a balance between coherence and diversity, avoiding low-quality output caused by excessive repetition (boredom traps) or incoherence (confusion traps).
-
-The `--mirostat-lr` option sets the Mirostat learning rate (eta). The learning rate influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. The default value is `0.1`.
-
-The `--mirostat-ent` option sets the Mirostat target entropy (tau), which represents the desired perplexity value for the generated text. Adjusting the target entropy allows you to control the balance between coherence and diversity in the generated text. A lower value will result in more focused and coherent text, while a higher value will lead to more diverse and potentially less coherent text. The default value is `5.0`.
-
-Example usage: `--mirostat 2 --mirostat-lr 0.05 --mirostat-ent 3.0`
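
One step of the Mirostat 2.0 feedback loop can be sketched as follows. This is a simplified illustration of the algorithm, not the actual llama.cpp code; `mu` is the internal truncation threshold (commonly initialized to 2·tau), and the helper name is hypothetical:

```python
import math
import random

def mirostat_v2_step(probs, mu, tau, eta, rng=random.random):
    """One Mirostat 2.0 step: drop tokens whose surprise exceeds mu,
    sample from the rest, then nudge mu toward the target entropy tau."""
    # surprise of token i is -log2(probs[i])
    allowed = [i for i, q in enumerate(probs) if q > 0 and -math.log2(q) <= mu]
    if not allowed:
        # always keep at least the single most likely token
        allowed = [max(range(len(probs)), key=lambda i: probs[i])]
    total = sum(probs[i] for i in allowed)
    r = rng() * total
    choice = allowed[-1]
    acc = 0.0
    for i in allowed:
        acc += probs[i]
        if acc >= r:
            choice = i
            break
    observed = -math.log2(probs[choice])
    mu -= eta * (observed - tau)  # feedback keeps perplexity near 2**tau
    return choice, mu
```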
-
-### Logit Bias
-
-- `-l TOKEN_ID(+/-)BIAS, --logit-bias TOKEN_ID(+/-)BIAS`: Modify the likelihood of a token appearing in the generated text completion.
-
-The logit bias option allows you to manually adjust the likelihood of specific tokens appearing in the generated text. By providing a token ID and a positive or negative bias value, you can increase or decrease the probability of that token being generated.
-
-For example, use `--logit-bias 15043+1` to increase the likelihood of the token 'Hello', or `--logit-bias 15043-1` to decrease its likelihood. Using a value of negative infinity, `--logit-bias 15043-inf` ensures that the token `Hello` is never produced.
-
-A more practical use case might be to prevent the generation of `\code{begin}` and `\code{end}` by setting the `\` token (29905) to negative infinity with `-l 29905-inf`. (This is due to the prevalence of LaTeX codes that show up in LLaMA model inference.)
-
-Example usage: `--logit-bias 29905-inf`
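
The effect is just an additive adjustment to the chosen tokens' logits before sampling; a minimal sketch (illustrative, not the actual llama.cpp code):

```python
def apply_logit_bias(logits, bias_map):
    """Add a per-token bias to the logits; a bias of -inf bans a token outright."""
    out = list(logits)
    for token_id, bias in bias_map.items():
        out[token_id] += bias
    return out
```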
-
-### RNG Seed
-
-- `-s SEED, --seed SEED`: Set the random number generator (RNG) seed (default: -1, -1 = random seed).
-
-The RNG seed is used to initialize the random number generator that influences the text generation process. By setting a specific seed value, you can obtain consistent and reproducible results across multiple runs with the same input and settings. This can be helpful for testing, debugging, or comparing the effects of different options on the generated text to see when they diverge. If the seed is set to a value less than 0, a random seed will be used, which will result in different outputs on each run.
-
-## Performance Tuning and Memory Options
-
-These options help improve the performance and memory usage of the LLaMA models. By adjusting these settings, you can fine-tune the model's behavior to better suit your system's capabilities and achieve optimal performance for your specific use case.
-
-### Number of Threads
-
-- `-t N, --threads N`: Set the number of threads to use during generation. For optimal performance, it is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). Using the correct number of threads can greatly improve performance.
-- `-tb N, --threads-batch N`: Set the number of threads to use during batch and prompt processing. In some systems, it is beneficial to use a higher number of threads during batch processing than during generation. If not specified, the number of threads used for batch processing will be the same as the number of threads used for generation.
-
-### Mlock
-
-- `--mlock`: Lock the model in memory, preventing it from being swapped out when memory-mapped. This can improve performance but trades away some of the advantages of memory-mapping by requiring more RAM to run and potentially slowing down load times as the model loads into RAM.
-
-### No Memory Mapping
-
-- `--no-mmap`: Do not memory-map the model. By default, models are mapped into memory, which allows the system to load only the necessary parts of the model as needed. However, if the model is larger than your total amount of RAM or if your system is low on available memory, using mmap might increase the risk of pageouts, negatively impacting performance. Disabling mmap results in slower load times but may reduce pageouts if you're not using `--mlock`. Note that if the model is larger than the total amount of RAM, turning off mmap would prevent the model from loading at all.
-
-### NUMA support
-
-- `--numa`: Attempt optimizations that help on some systems with non-uniform memory access. This currently consists of pinning an equal proportion of the threads to the cores on each NUMA node, and disabling prefetch and readahead for mmap. The latter causes mapped pages to be faulted in on first access instead of all at once, and in combination with pinning threads to NUMA nodes, more of the pages end up on the NUMA node where they are used. Note that if the model is already in the system page cache, for example because of a previous run without this option, this will have little effect unless you drop the page cache first. This can be done by rebooting the system or on Linux by writing '3' to '/proc/sys/vm/drop_caches' as root.
-
-### Memory Float 32
-
-- `--memory-f32`: Use 32-bit floats instead of 16-bit floats for memory key+value. This doubles the context memory requirement and cached prompt file size but does not appear to increase generation quality in a measurable way. Not recommended.
-
-### Batch Size
-
-- `-b N, --batch-size N`: Set the batch size for prompt processing (default: 512). This large batch size benefits users who have BLAS installed and enabled it during the build. If you don't have BLAS enabled ("BLAS=0"), you can use a smaller number, such as 8, to see the prompt progress as it's evaluated in some situations.
-
-### Prompt Caching
-
-- `--prompt-cache FNAME`: Specify a file to cache the model state after the initial prompt. This can significantly speed up the startup time when you're using longer prompts. The file is created during the first run and is reused and updated in subsequent runs. **Note**: Restoring a cached prompt does not imply restoring the exact state of the session at the point it was saved. So even when specifying a specific seed, you are not guaranteed to get the same sequence of tokens as the original generation.
-
-### Grammars
-
-- `--grammar GRAMMAR`, `--grammar-file FILE`: Specify a grammar (defined inline or in a file) to constrain model output to a specific format. For example, you could force the model to output JSON or to speak only in emojis. See the [GBNF guide](../../grammars/README.md) for details on the syntax.
-
-### Quantization
-
-For information about 4-bit quantization, which can significantly improve performance and reduce memory usage, please refer to llama.cpp's primary [README](../../README.md#prepare-data--run).
-
-## Additional Options
-
-These options provide extra functionality and customization when running the LLaMA models:
-
-- `-h, --help`: Display a help message showing all available options and their default values. This is particularly useful for checking the latest options and default values, as they can change frequently, and the information in this document may become outdated.
-- `--verbose-prompt`: Print the prompt before generating text.
-- `-ngl N, --n-gpu-layers N`: When compiled with appropriate support (currently CLBlast or cuBLAS), this option allows offloading some layers to the GPU for computation. Generally results in increased performance.
-- `-mg i, --main-gpu i`: When using multiple GPUs this option controls which GPU is used for small tensors for which the overhead of splitting the computation across all GPUs is not worthwhile. The GPU in question will use slightly more VRAM to store a scratch buffer for temporary results. By default GPU 0 is used. Requires cuBLAS.
-- `-ts SPLIT, --tensor-split SPLIT`: When using multiple GPUs this option controls how large tensors should be split across all GPUs. `SPLIT` is a comma-separated list of non-negative values that assigns the proportion of data that each GPU should get in order. For example, "3,2" will assign 60% of the data to GPU 0 and 40% to GPU 1. By default the data is split in proportion to VRAM but this may not be optimal for performance. Requires cuBLAS.
-- `--lora FNAME`: Apply a LoRA (Low-Rank Adaptation) adapter to the model (implies --no-mmap). This allows you to adapt the pretrained model to specific tasks or domains.
-- `--lora-base FNAME`: Optional model to use as a base for the layers modified by the LoRA adapter. This flag is used in conjunction with the `--lora` flag, and specifies the base model for the adaptation.
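
For illustration, the `--tensor-split` proportions are simply the listed values normalized to fractions of the total, as in the "3,2" → 60%/40% example. A hypothetical helper, not llama.cpp code:

```python
def parse_tensor_split(split):
    """Turn a --tensor-split string like "3,2" into per-GPU fractions."""
    parts = [float(x) for x in split.split(",")]
    if any(v < 0 for v in parts):
        raise ValueError("tensor-split values must be non-negative")
    total = sum(parts)
    return [v / total for v in parts]
```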
diff --git a/spaces/Insuz/Mocha/app.py b/spaces/Insuz/Mocha/app.py
deleted file mode 100644
index 647661ef3c3718a18c9b5d6360abe337f1617095..0000000000000000000000000000000000000000
--- a/spaces/Insuz/Mocha/app.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import time
-
-import gradio as gr
-from gradio.themes.utils.theme_dropdown import create_theme_dropdown
-
-dropdown, js = create_theme_dropdown()
-
-with gr.Blocks(theme='Insuz/Mocha') as demo:
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=10):
- gr.Markdown(
- """
- # Theme preview: `Mocha`
- To use this theme, set `theme='Insuz/Mocha'` in `gr.Blocks()` or `gr.Interface()`.
- You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version
- of this theme.
- """
- )
- with gr.Column(scale=3):
- with gr.Box():
- dropdown.render()
- toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True)
-
- dropdown.change(None, dropdown, None, _js=js)
- toggle_dark.click(
- None,
- _js="""
- () => {
- document.body.classList.toggle('dark');
- }
- """,
- )
-
- name = gr.Textbox(
- label="Name",
- info="Full name, including middle name. No special characters.",
- placeholder="John Doe",
- value="John Doe",
- interactive=True,
- )
-
- with gr.Row():
- slider1 = gr.Slider(label="Slider 1")
- slider2 = gr.Slider(label="Slider 2")
- gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group")
-
- with gr.Row():
- with gr.Column(variant="panel", scale=1):
- gr.Markdown("## Panel 1")
- radio = gr.Radio(
- ["A", "B", "C"],
- label="Radio",
- info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
- )
- drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False)
- drop_2 = gr.Dropdown(
- ["Option A", "Option B", "Option C"],
- multiselect=True,
- value=["Option A"],
- label="Dropdown",
- interactive=True,
- )
- check = gr.Checkbox(label="Go")
- with gr.Column(variant="panel", scale=2):
- img = gr.Image(
- "https://gradio.app/assets/img/header-image.jpg", label="Image"
- ).style(height=320)
- with gr.Row():
- go_btn = gr.Button("Go", label="Primary Button", variant="primary")
- clear_btn = gr.Button(
- "Clear", label="Secondary Button", variant="secondary"
- )
-
- def go(*args):
- time.sleep(3)
- return "https://gradio.app/assets/img/header-image.jpg"
-
- go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go")
-
- def clear():
- time.sleep(0.2)
- return None
-
- clear_btn.click(clear, None, img)
-
- with gr.Row():
- btn1 = gr.Button("Button 1").style(size="sm")
- btn2 = gr.UploadButton().style(size="sm")
- stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style(
- size="sm"
- )
-
- with gr.Row():
- gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe")
- gr.JSON(
- value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON"
- )
- gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1})
- gr.File()
- with gr.Row():
- gr.ColorPicker()
- gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4")
- gr.Gallery(
- [
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg",
- "lion",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png",
- "logo",
- ),
- (
- "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg",
- "tower",
- ),
- ]
- ).style(height="200px", grid=2)
-
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot")
- chat_btn = gr.Button("Add messages")
-
- def chat(history):
- time.sleep(2)
- yield [["How are you?", "I am good."]]
-
- chat_btn.click(
- lambda history: history
- + [["How are you?", "I am good."]]
- + (time.sleep(2) or []),
- chatbot,
- chatbot,
- )
- with gr.Column(scale=1):
- with gr.Accordion("Advanced Settings"):
- gr.Markdown("Hello")
- gr.Number(label="Chatbot control 1")
- gr.Number(label="Chatbot control 2")
- gr.Number(label="Chatbot control 3")
-
-
-if __name__ == "__main__":
- demo.queue().launch()
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/dataset.py b/spaces/JUNGU/VToonify/vtoonify/model/stylegan/dataset.py
deleted file mode 100644
index 7713ea2f8bc94d202d2dfbe830af3cb96b1e803d..0000000000000000000000000000000000000000
--- a/spaces/JUNGU/VToonify/vtoonify/model/stylegan/dataset.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from io import BytesIO
-
-import lmdb
-from PIL import Image
-from torch.utils.data import Dataset
-
-
-class MultiResolutionDataset(Dataset):
- def __init__(self, path, transform, resolution=256):
- self.env = lmdb.open(
- path,
- max_readers=32,
- readonly=True,
- lock=False,
- readahead=False,
- meminit=False,
- )
-
- if not self.env:
- raise IOError('Cannot open lmdb dataset', path)
-
- with self.env.begin(write=False) as txn:
- self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8'))
-
- self.resolution = resolution
- self.transform = transform
-
- def __len__(self):
- return self.length
-
- def __getitem__(self, index):
- with self.env.begin(write=False) as txn:
- key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8')
- img_bytes = txn.get(key)
-
- buffer = BytesIO(img_bytes)
- img = Image.open(buffer)
- img = self.transform(img)
-
- return img
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py
deleted file mode 100644
index bbd950699c2495880236883861d9e199f900eae8..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/psnr_ssim.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import cv2
-import numpy as np
-
-from basicsr.metrics.metric_util import reorder_image, to_y_channel
-from basicsr.utils.registry import METRIC_REGISTRY
-
-
-@METRIC_REGISTRY.register()
-def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate PSNR (Peak Signal-to-Noise Ratio).
-
- Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the PSNR calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: psnr result.
- """
-
- assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20. * np.log10(255. / np.sqrt(mse))
-
-
-def _ssim(img1, img2):
- """Calculate SSIM (structural similarity) for one channel images.
-
- It is called by func:`calculate_ssim`.
-
- Args:
- img1 (ndarray): Images with range [0, 255] with order 'HWC'.
- img2 (ndarray): Images with range [0, 255] with order 'HWC'.
-
- Returns:
- float: ssim result.
- """
-
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-@METRIC_REGISTRY.register()
-def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate SSIM (structural similarity).
-
- Ref:
- Image quality assessment: From error visibility to structural similarity
-
- The results are the same as that of the official released MATLAB code in
- https://ece.uwaterloo.ca/~z70wang/research/ssim/.
-
- For three-channel images, SSIM is calculated for each channel and then
- averaged.
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the SSIM calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: ssim result.
- """
-
- assert img1.shape == img2.shape, (f'Image shapes are different: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- ssims = []
- for i in range(img1.shape[2]):
- ssims.append(_ssim(img1[..., i], img2[..., i]))
- return np.array(ssims).mean()
diff --git a/spaces/Jeff2323/ai-comic-factory/src/components/ui/select.tsx b/spaces/Jeff2323/ai-comic-factory/src/components/ui/select.tsx
deleted file mode 100644
index 704239634b359b9e680dab25275e205e72579f82..0000000000000000000000000000000000000000
--- a/spaces/Jeff2323/ai-comic-factory/src/components/ui/select.tsx
+++ /dev/null
@@ -1,121 +0,0 @@
-"use client"
-
-import * as React from "react"
-import * as SelectPrimitive from "@radix-ui/react-select"
-import { Check, ChevronDown } from "lucide-react"
-
-import { cn } from "@/lib/utils"
-
-const Select = SelectPrimitive.Root
-
-const SelectGroup = SelectPrimitive.Group
-
-const SelectValue = SelectPrimitive.Value
-
-const SelectTrigger = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Trigger>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>
->(({ className, children, ...props }, ref) => (
-  <SelectPrimitive.Trigger
-    ref={ref}
-    className={cn(
-      "flex h-10 w-full items-center justify-between rounded-md border border-input bg-transparent px-3 py-2 text-sm ring-offset-background placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
-      className
-    )}
-    {...props}
-  >
-    {children}
-    <SelectPrimitive.Icon asChild>
-      <ChevronDown className="h-4 w-4 opacity-50" />
-    </SelectPrimitive.Icon>
-  </SelectPrimitive.Trigger>
-))
-SelectTrigger.displayName = SelectPrimitive.Trigger.displayName
-
-const SelectContent = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Content>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>
->(({ className, children, position = "popper", ...props }, ref) => (
-  <SelectPrimitive.Portal>
-    <SelectPrimitive.Content
-      ref={ref}
-      className={cn(
-        "relative z-50 min-w-[8rem] overflow-hidden rounded-md border bg-popover text-popover-foreground shadow-md data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2",
-        position === "popper" &&
-          "data-[side=bottom]:translate-y-1 data-[side=left]:-translate-x-1 data-[side=right]:translate-x-1 data-[side=top]:-translate-y-1",
-        className
-      )}
-      position={position}
-      {...props}
-    >
-      <SelectPrimitive.Viewport
-        className={cn(
-          "p-1",
-          position === "popper" &&
-            "h-[var(--radix-select-trigger-height)] w-full min-w-[var(--radix-select-trigger-width)]"
-        )}
-      >
-        {children}
-      </SelectPrimitive.Viewport>
-    </SelectPrimitive.Content>
-  </SelectPrimitive.Portal>
-))
-SelectContent.displayName = SelectPrimitive.Content.displayName
-
-const SelectLabel = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Label>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>
->(({ className, ...props }, ref) => (
-  <SelectPrimitive.Label
-    ref={ref}
-    className={cn("py-1.5 pl-8 pr-2 text-sm font-semibold", className)}
-    {...props}
-  />
-))
-SelectLabel.displayName = SelectPrimitive.Label.displayName
-
-const SelectItem = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Item>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
->(({ className, children, ...props }, ref) => (
-  <SelectPrimitive.Item
-    ref={ref}
-    className={cn(
-      "relative flex w-full cursor-default select-none items-center rounded-sm py-1.5 pl-8 pr-2 text-sm outline-none focus:bg-accent focus:text-accent-foreground data-[disabled]:pointer-events-none data-[disabled]:opacity-50",
-      className
-    )}
-    {...props}
-  >
-    <span className="absolute left-2 flex h-3.5 w-3.5 items-center justify-center">
-      <SelectPrimitive.ItemIndicator>
-        <Check className="h-4 w-4" />
-      </SelectPrimitive.ItemIndicator>
-    </span>
-    <SelectPrimitive.ItemText>{children}</SelectPrimitive.ItemText>
-  </SelectPrimitive.Item>
-))
-SelectItem.displayName = SelectPrimitive.Item.displayName
-
-const SelectSeparator = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Separator>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>
->(({ className, ...props }, ref) => (
-  <SelectPrimitive.Separator
-    ref={ref}
-    className={cn("-mx-1 my-1 h-px bg-muted", className)}
-    {...props}
-  />
-))
-SelectSeparator.displayName = SelectPrimitive.Separator.displayName
-
-export {
- Select,
- SelectGroup,
- SelectValue,
- SelectTrigger,
- SelectContent,
- SelectLabel,
- SelectItem,
- SelectSeparator,
-}
diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/hubert_model.py
deleted file mode 100644
index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/hubert_model.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import copy
-from typing import Optional, Tuple
-import random
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-class Hubert(nn.Module):
- def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
- super().__init__()
- self._mask = mask
- self.feature_extractor = FeatureExtractor()
- self.feature_projection = FeatureProjection()
- self.positional_embedding = PositionalConvEmbedding()
- self.norm = nn.LayerNorm(768)
- self.dropout = nn.Dropout(0.1)
- self.encoder = TransformerEncoder(
- nn.TransformerEncoderLayer(
- 768, 12, 3072, activation="gelu", batch_first=True
- ),
- 12,
- )
- self.proj = nn.Linear(768, 256)
-
- self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
- self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
- def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- mask = None
- if self.training and self._mask:
- mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
- x[mask] = self.masked_spec_embed.to(x.dtype)
- return x, mask
-
- def encode(
- self, x: torch.Tensor, layer: Optional[int] = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- x = self.feature_extractor(x)
- x = self.feature_projection(x.transpose(1, 2))
- x, mask = self.mask(x)
- x = x + self.positional_embedding(x)
- x = self.dropout(self.norm(x))
- x = self.encoder(x, output_layer=layer)
- return x, mask
-
- def logits(self, x: torch.Tensor) -> torch.Tensor:
- logits = torch.cosine_similarity(
- x.unsqueeze(2),
- self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
- dim=-1,
- )
- return logits / 0.1
-
- def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- x, mask = self.encode(x)
- x = self.proj(x)
- logits = self.logits(x)
- return logits, mask
-
-
-class HubertSoft(Hubert):
- def __init__(self):
- super().__init__()
-
- @torch.inference_mode()
- def units(self, wav: torch.Tensor) -> torch.Tensor:
- wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
- x, _ = self.encode(wav)
- return self.proj(x)
-
-
-class FeatureExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
- self.norm0 = nn.GroupNorm(512, 512)
- self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
- self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = F.gelu(self.norm0(self.conv0(x)))
- x = F.gelu(self.conv1(x))
- x = F.gelu(self.conv2(x))
- x = F.gelu(self.conv3(x))
- x = F.gelu(self.conv4(x))
- x = F.gelu(self.conv5(x))
- x = F.gelu(self.conv6(x))
- return x
-
-
-class FeatureProjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = nn.LayerNorm(512)
- self.projection = nn.Linear(512, 768)
- self.dropout = nn.Dropout(0.1)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.norm(x)
- x = self.projection(x)
- x = self.dropout(x)
- return x
-
-
-class PositionalConvEmbedding(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv = nn.Conv1d(
- 768,
- 768,
- kernel_size=128,
- padding=128 // 2,
- groups=16,
- )
- self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.conv(x.transpose(1, 2))
- x = F.gelu(x[:, :, :-1])
- return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
- ) -> None:
- super(TransformerEncoder, self).__init__()
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
- )
- self.num_layers = num_layers
-
- def forward(
- self,
- src: torch.Tensor,
- mask: torch.Tensor = None,
- src_key_padding_mask: torch.Tensor = None,
- output_layer: Optional[int] = None,
- ) -> torch.Tensor:
- output = src
- for layer in self.layers[:output_layer]:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
- )
- return output
-
-
-def _compute_mask(
- shape: Tuple[int, int],
- mask_prob: float,
- mask_length: int,
- device: torch.device,
- min_masks: int = 0,
-) -> torch.Tensor:
- batch_size, sequence_length = shape
-
- if mask_length < 1:
- raise ValueError("`mask_length` has to be bigger than 0.")
-
- if mask_length > sequence_length:
- raise ValueError(
- f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}"
- )
-
- # compute number of masked spans in batch
- num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
- num_masked_spans = max(num_masked_spans, min_masks)
-
- # make sure num masked indices <= sequence_length
- if num_masked_spans * mask_length > sequence_length:
- num_masked_spans = sequence_length // mask_length
-
- # SpecAugment mask to fill
- mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
- # uniform distribution to sample from, make sure that offset samples are < sequence_length
- uniform_dist = torch.ones(
- (batch_size, sequence_length - (mask_length - 1)), device=device
- )
-
- # get random indices to mask
- mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
- # expand masked indices to masked spans
- mask_indices = (
- mask_indices.unsqueeze(dim=-1)
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- offsets = (
- torch.arange(mask_length, device=device)[None, None, :]
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- mask_idxs = mask_indices + offsets
-
- # scatter indices to mask
- mask = mask.scatter(1, mask_idxs, True)
-
- return mask
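The span-expansion trick above (sample start indices, add an arange of offsets, scatter into a boolean mask) can be mirrored without torch. `make_span_mask` below is a hypothetical pure-Python sketch for a single batch-free sequence, not part of this module:

```python
import random

def make_span_mask(sequence_length: int, mask_prob: float, mask_length: int,
                   min_masks: int = 0, rng: random.Random = None) -> list:
    """Return a boolean mask with contiguous masked spans (SpecAugment-style)."""
    rng = rng or random.Random()
    if mask_length < 1 or mask_length > sequence_length:
        raise ValueError("mask_length must be in [1, sequence_length]")
    # Number of spans, with a random rounding offset as in the torch version
    num_spans = int(mask_prob * sequence_length / mask_length + rng.random())
    num_spans = max(num_spans, min_masks)
    num_spans = min(num_spans, sequence_length // mask_length)
    mask = [False] * sequence_length
    # Sample distinct span starts so each full span fits inside the sequence
    starts = rng.sample(range(sequence_length - mask_length + 1), num_spans)
    for s in starts:
        for offset in range(mask_length):
            mask[s + offset] = True
    return mask
```

Spans may overlap, so the fraction of masked positions can fall below `mask_prob`, matching the behaviour of the tensor version.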
-
-
-def hubert_soft(
- path: str
-) -> HubertSoft:
- r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
- Args:
- path (str): path of a pretrained model
- """
- hubert = HubertSoft()
- checkpoint = torch.load(path)
- consume_prefix_in_state_dict_if_present(checkpoint, "module.")
- hubert.load_state_dict(checkpoint)
- hubert.eval()
- return hubert
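`consume_prefix_in_state_dict_if_present` used above strips a `"module."` prefix (left over from `DataParallel` checkpoints) from every key. Its effect can be sketched on a plain dict; `strip_prefix` is a hypothetical stand-in, not the torch API:

```python
def strip_prefix(state_dict: dict, prefix: str) -> dict:
    """Return a copy with `prefix` removed from any key that starts with it."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

ckpt = {'module.conv.weight': 1, 'module.conv.bias': 2}
print(strip_prefix(ckpt, 'module.'))  # {'conv.weight': 1, 'conv.bias': 2}
```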
diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/english.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/english.py
deleted file mode 100644
index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/text/english.py
+++ /dev/null
@@ -1,188 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-
-# Regular expression matching whitespace:
-
-
-import re
-import inflect
-from unidecode import unidecode
-import eng_to_ipa as ipa
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-
-# List of (ipa, lazy ipa) pairs:
-_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('æ', 'e'),
- ('ɑ', 'a'),
- ('ɔ', 'o'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ɛ', 'e'),
- ('ɪ', 'i'),
- ('ʊ', 'u'),
- ('ʒ', 'ʥ'),
- ('ʤ', 'ʥ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, lazy ipa2) pairs:
-_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ð', 'z'),
- ('θ', 's'),
- ('ʒ', 'ʑ'),
- ('ʤ', 'dʑ'),
- ('ˈ', '↓'),
-]]
-
-# List of (ipa, ipa2) pairs
-_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('r', 'ɹ'),
- ('ʤ', 'dʒ'),
- ('ʧ', 'tʃ')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def collapse_whitespace(text):
- return re.sub(r'\s+', ' ', text)
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
-
-
-def mark_dark_l(text):
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text)
-
-
-def english_to_ipa(text):
- text = unidecode(text).lower()
- text = expand_abbreviations(text)
- text = normalize_numbers(text)
- phonemes = ipa.convert(text)
- phonemes = collapse_whitespace(phonemes)
- return phonemes
-
-
-def english_to_lazy_ipa(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def english_to_ipa2(text):
- text = english_to_ipa(text)
- text = mark_dark_l(text)
- for regex, replacement in _ipa_to_ipa2:
- text = re.sub(regex, replacement, text)
- return text.replace('...', '…')
-
-
-def english_to_lazy_ipa2(text):
- text = english_to_ipa(text)
- for regex, replacement in _lazy_ipa2:
- text = re.sub(regex, replacement, text)
- return text
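The lazy-IPA substitution passes above apply each `(pattern, replacement)` pair in list order. A minimal standalone sketch of the `_lazy_ipa2` pass (same data, assumed input string):

```python
import re

_lazy_ipa2 = [(re.compile(p), r) for p, r in [
    ('r', 'ɹ'), ('ð', 'z'), ('θ', 's'), ('ʒ', 'ʑ'), ('ʤ', 'dʑ'), ('ˈ', '↓'),
]]

def to_lazy_ipa2(phonemes: str) -> str:
    # Substitutions are applied sequentially, so order in the list matters
    for regex, replacement in _lazy_ipa2:
        phonemes = re.sub(regex, replacement, phonemes)
    return phonemes

print(to_lazy_ipa2('ˈθriː'))  # ↓sɹiː
```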
diff --git a/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/AlgorithmsInfo/decTreeInfo.py b/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/AlgorithmsInfo/decTreeInfo.py
deleted file mode 100644
index e795c6fd7770ce2b66665c534686e0bfdb14872f..0000000000000000000000000000000000000000
--- a/spaces/Jorgerv97/Herramienta_interactiva_ensenyanza_tecnicas_aprendizaje_supervisado_salud/AlgorithmsInfo/decTreeInfo.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from shiny import module, ui, reactive, render
-from shiny.types import ImgData
-from pathlib import Path
-
-explanation_img_path = Path(__file__).parent.parent / "images"
-
-
-@module.ui
-def decTree_def_ui():
- return ui.div(
- ui.div(
- ui.markdown("Un árbol de decisión es un algoritmo de aprendizaje automático que se utiliza para **clasificar elementos de datos siguiendo una estructura similar a un árbol**. Partiendo del nodo más alto, el nodo raíz, cada nodo del árbol representa una prueba en una variable de entrada o atributo. Dependiendo del resultado de dicha prueba, el algoritmo se bifurca hacia el siguiente nodo correspondiente en un nuevo nivel hasta llegar a un nodo hoja, que representa la decisión final de clasificación. Los nodos hoja contienen los resultados de clasificación. El árbol de decisión es una **forma intuitiva de modelar la lógica de toma de decisiones** y ha sido uno de los algoritmos más utilizados en el aprendizaje automático.")
- , style="padding-right:50px; text-align: justify; text-justify: inter-word;"
- ),
- ui.div(
- ui.markdown("A continuación, se muestra un ejemplo de una representación de un árbol de decisión simple, que cuenta con 3 variables (C1, C2 y C3). Según los resultados de las pruebas con dichas variables se termina clasificando la muestra en una de las dos clases existentes.")
- , style="padding-right:50px; padding-bottom:10px; text-align: justify; text-justify: inter-word;"
- ),
- ui.output_image("dec_tree_expl_image", height="260px"),
- )
-
-@module.ui
-def decTree_howTo_ui():
- return ui.div(
- {"id": "dec_tree_how_generate"},
- ui.input_action_button("dec_tree_show_how_info", "¿Cómo se genera el modelo de árbol de decisión? ▽"
- , style="padding: 30px 0px 10px 0px; background: white; border: none; font-weight: bold; text-decoration: underline; border: 0 !important; box-shadow: 0 0 !important; transition: 0.1s !important; background-color: transparent !important;"),
-
- )
-
-@module.ui
-def decTree_performance_ui():
- return ui.div(
- ui.div(
- ui.markdown("""**No hay un umbral exacto para considerar un modelo como bueno**, ya que depende del contexto y las necesidades del problema. En general, en aplicaciones relacionadas con el ámbito sanitario se busca maximizar tanto la precisión (para minimizar falsos positivos) como la sensibilidad o TVP (para minimizar falsos negativos), por lo que **se busca obtener un valor alto de F1**. En este ejemplo el valor de F1 puede superar el 90% pero es muy fácil sobreajustar el modelo con un árbol de decisión.
-
-
-*Consejo: editar la profundidad máxima del árbol es un buen punto de inicio para evitar el sobreajuste.*""")
- , style="padding-top:30px; padding-right:50px; text-align: justify; text-justify: inter-word;"
- ),
- )
-
-
-@module.server
-def decTree_server(input, output, session):
-
- @reactive.Effect
- @reactive.event(input.dec_tree_show_how_info)
- def _():
- show_dec_tree_how_gen_button = input.dec_tree_show_how_info()
- if show_dec_tree_how_gen_button % 2 == 1:
- ui.update_action_button("dec_tree_show_how_info", label="¿Cómo se genera el modelo de árbol de decisión? △")
- ui.insert_ui(
- ui.div({"id": "inserted-dec-tree-how-gen-info"},
- ui.markdown("""Todos los modelos siguen los mismos pasos para ser creados:
-- Primero debemos **elegir los ajustes del modelo** que queremos crear. En este caso, disponemos de los siguientes ajustes:
- - **Criterion**: La función utilizada para medir la calidad de una división.
- - **Splitter**: La estrategia utilizada para elegir la división en cada nodo.
- - **Max Depth**: La profundidad máxima del árbol. Si es None, los nodos se expandirán hasta que todas las hojas sean puras o hasta que todas las hojas contengan menos muestras que min_samples_split.
- - **Min samples split**: El número mínimo de muestras requeridas para dividir un nodo interno.
- - **Min samples leaf**: El número mínimo de muestras requeridas para estar en un nodo hoja.
- - **Max features**: El número de características a considerar al buscar la mejor división.
-- Después debemos **elegir las características** que queremos usar para predecir el resultado. No todas las características pueden ser relevantes para el modelo y puede que nos encontremos algunas que aporten ruido a nuestros resultados. Si es la primera vez que creas el modelo, selecciona todas las características de momento.
-- Por último, **¡genera el modelo!**"""
- ),
- style="border: solid 0px grey; border-radius: 10px; background:#eceef1 ;margin-right:50px; padding:15px 20px 10px 20px; text-align: justify; text-justify: inter-word;",
- ),
- selector="#dec_tree_how_generate",
- where="beforeEnd",
- )
- else:
- ui.update_action_button("dec_tree_show_how_info", label="¿Cómo se genera el modelo de árbol de decisión? ▽")
- ui.remove_ui("#inserted-dec-tree-how-gen-info")
-
- @output
- @render.image
- def dec_tree_expl_image():
- img: ImgData = {"src": str(explanation_img_path / "dec_tree_expl.png"), "height":"250px", "style":"display:block; margin-left:25%;"}
- return img
\ No newline at end of file
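The node-by-node test logic described in the explanatory text above (root test, branch on the outcome, descend until a leaf gives the class) can be illustrated with a tiny hand-built tree. This is a hypothetical sketch, unrelated to the scikit-learn models the app actually trains:

```python
def classify(sample: dict) -> str:
    """Walk a hard-coded two-level decision tree; each node tests one feature."""
    if sample['C1'] <= 0.5:      # root node test on C1
        return 'class A'         # leaf
    if sample['C2'] <= 2.0:      # internal node test on C2
        return 'class A'         # leaf
    return 'class B'             # leaf reached after both tests route right

print(classify({'C1': 0.2, 'C2': 5.0}))  # class A
print(classify({'C1': 0.9, 'C2': 5.0}))  # class B
```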
diff --git a/spaces/KPCGD/bingo/src/components/button-scroll-to-bottom.tsx b/spaces/KPCGD/bingo/src/components/button-scroll-to-bottom.tsx
deleted file mode 100644
index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/components/button-scroll-to-bottom.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-import { Button, type ButtonProps } from '@/components/ui/button'
-import { IconArrowDown } from '@/components/ui/icons'
-
-export function ButtonScrollToBottom({ className, ...props }: ButtonProps) {
- const isAtBottom = useAtBottom()
-
- return (
-   <Button
-     className={cn(
-       'absolute right-4 bottom-20 z-10 bg-background transition-opacity duration-300 sm:right-8',
-       isAtBottom ? 'opacity-0' : 'opacity-100',
-       className
-     )}
-     onClick={() =>
-       window.scrollTo({
-         top: document.body.offsetHeight,
-         behavior: 'smooth'
-       })
-     }
-     {...props}
-   >
-     <IconArrowDown />
-     <span className="sr-only">Scroll to bottom</span>
-   </Button>
- )
-}
diff --git a/spaces/KatieChau/text-generator/app.py b/spaces/KatieChau/text-generator/app.py
deleted file mode 100644
index 15a53d74b700aea82666da21941cf32142aafaa7..0000000000000000000000000000000000000000
--- a/spaces/KatieChau/text-generator/app.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Enter a prompt and compare the outputs of three text-generation models."
-
-#variables, functions and parameters
-model1 = gr.Interface.load("huggingface/gpt2")
-model2 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model3 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B")
-
-#functions, parameters and variables
-Parallel(model1, model2, model3, title=title, description=description).launch()
-
-
diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/iou_matching.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/iou_matching.py
deleted file mode 100644
index c7e0f7a41c1d95d4bd6ca04245c5abb9b3ed6156..0000000000000000000000000000000000000000
--- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/sort/iou_matching.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# vim: expandtab:ts=4:sw=4
-from __future__ import absolute_import
-import numpy as np
-from . import linear_assignment
-
-# Compute the IoU between two boxes
-def iou(bbox, candidates):
- """Compute intersection over union.
-
- Parameters
- ----------
- bbox : ndarray
- A bounding box in format `(top left x, top left y, width, height)`.
- candidates : ndarray
- A matrix of candidate bounding boxes (one per row) in the same format
- as `bbox`.
-
- Returns
- -------
- ndarray
- The intersection over union in [0, 1] between the `bbox` and each
- candidate. A higher score means a larger fraction of the `bbox` is
- occluded by the candidate.
-
- """
- bbox_tl, bbox_br = bbox[:2], bbox[:2] + bbox[2:]
- candidates_tl = candidates[:, :2]
- candidates_br = candidates[:, :2] + candidates[:, 2:]
-
- # np.c_ Translates slice objects to concatenation along the second axis.
- tl = np.c_[np.maximum(bbox_tl[0], candidates_tl[:, 0])[:, np.newaxis],
- np.maximum(bbox_tl[1], candidates_tl[:, 1])[:, np.newaxis]]
- br = np.c_[np.minimum(bbox_br[0], candidates_br[:, 0])[:, np.newaxis],
- np.minimum(bbox_br[1], candidates_br[:, 1])[:, np.newaxis]]
- wh = np.maximum(0., br - tl)
-
- area_intersection = wh.prod(axis=1)
- area_bbox = bbox[2:].prod()
- area_candidates = candidates[:, 2:].prod(axis=1)
- return area_intersection / (area_bbox + area_candidates - area_intersection)
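The vectorized tlwh IoU above can be checked by hand. A minimal pure-Python version for a single candidate box (hypothetical helper, not part of the module) is:

```python
def iou_tlwh(bbox, candidate):
    """IoU of two boxes in (top-left x, top-left y, width, height) format."""
    ax, ay, aw, ah = bbox
    bx, by, bw, bh = candidate
    # Intersection rectangle: max of top-lefts, min of bottom-rights
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou_tlwh((0, 0, 10, 10), (0, 0, 10, 10)))   # 1.0 (identical boxes)
print(iou_tlwh((0, 0, 10, 10), (20, 20, 5, 5)))   # 0.0 (disjoint boxes)
```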
-
-# Compute the IoU distance cost matrix between tracks and detections
-def iou_cost(tracks, detections, track_indices=None,
- detection_indices=None):
- """An intersection over union distance metric.
-
- Computes the IoU distance matrix between the given tracks and detections.
-
- Parameters
- ----------
- tracks : List[deep_sort.track.Track]
- A list of tracks.
- detections : List[deep_sort.detection.Detection]
- A list of detections.
- track_indices : Optional[List[int]]
- A list of indices to tracks that should be matched. Defaults to
- all `tracks`.
- detection_indices : Optional[List[int]]
- A list of indices to detections that should be matched. Defaults
- to all `detections`.
-
- Returns
- -------
- ndarray
- Returns a cost matrix of shape
- len(track_indices), len(detection_indices) where entry (i, j) is
- `1 - iou(tracks[track_indices[i]], detections[detection_indices[j]])`.
-
- """
- if track_indices is None:
- track_indices = np.arange(len(tracks))
- if detection_indices is None:
- detection_indices = np.arange(len(detections))
-
- cost_matrix = np.zeros((len(track_indices), len(detection_indices)))
- for row, track_idx in enumerate(track_indices):
- if tracks[track_idx].time_since_update > 1:
- cost_matrix[row, :] = linear_assignment.INFTY_COST
- continue
-
- bbox = tracks[track_idx].to_tlwh()
- candidates = np.asarray([detections[i].tlwh for i in detection_indices])
- cost_matrix[row, :] = 1. - iou(bbox, candidates)
- return cost_matrix
diff --git a/spaces/Kevin676/AutoGPT/autogpt/memory/pinecone.py b/spaces/Kevin676/AutoGPT/autogpt/memory/pinecone.py
deleted file mode 100644
index 27fcd62482d0cf44e02fa1c339195be58cb745b0..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/memory/pinecone.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import pinecone
-from colorama import Fore, Style
-
-from autogpt.llm_utils import create_embedding_with_ada
-from autogpt.logs import logger
-from autogpt.memory.base import MemoryProviderSingleton
-
-
-class PineconeMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- pinecone_api_key = cfg.pinecone_api_key
- pinecone_region = cfg.pinecone_region
- pinecone.init(api_key=pinecone_api_key, environment=pinecone_region)
- dimension = 1536
- metric = "cosine"
- pod_type = "p1"
- table_name = "auto-gpt"
- # this assumes we don't start with memory.
- # for now this works.
- # we'll need a more complicated and robust system if we want to start with
- # memory.
- self.vec_num = 0
-
- try:
- pinecone.whoami()
- except Exception as e:
- logger.typewriter_log(
- "FAILED TO CONNECT TO PINECONE",
- Fore.RED,
- Style.BRIGHT + str(e) + Style.RESET_ALL,
- )
- logger.double_check(
- "Please ensure you have set up and configured Pinecone properly for use. "
- + f"You can check out {Fore.CYAN + Style.BRIGHT}"
- "https://github.com/Torantulino/Auto-GPT#-pinecone-api-key-setup"
- f"{Style.RESET_ALL} to ensure you've set up everything correctly."
- )
- exit(1)
-
- if table_name not in pinecone.list_indexes():
- pinecone.create_index(
- table_name, dimension=dimension, metric=metric, pod_type=pod_type
- )
- self.index = pinecone.Index(table_name)
-
- def add(self, data):
- vector = create_embedding_with_ada(data)
- # no metadata here. We may wish to change that long term.
- self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
- _text = f"Inserting data into memory at index: {self.vec_num}:\n data: {data}"
- self.vec_num += 1
- return _text
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.index.delete(deleteAll=True)
- return "Obliviated"
-
- def get_relevant(self, data, num_relevant=5):
- """
- Returns all the data in the memory that is relevant to the given data.
- :param data: The data to compare to.
- :param num_relevant: The number of relevant data to return. Defaults to 5
- """
- query_embedding = create_embedding_with_ada(data)
- results = self.index.query(
- query_embedding, top_k=num_relevant, include_metadata=True
- )
- sorted_results = sorted(results.matches, key=lambda x: x.score)
- return [str(item["metadata"]["raw_text"]) for item in sorted_results]
-
- def get_stats(self):
- return self.index.describe_index_stats()
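The retrieval pattern in `get_relevant` (embed the query, rank stored vectors by similarity, return the top-k raw texts) can be sketched without Pinecone or an embedding model; the in-memory store and cosine ranking below are assumptions of the sketch, not AutoGPT code:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def get_relevant(query_vec, store, k=5):
    """store: list of (vector, raw_text); return the k most similar texts."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

store = [([1.0, 0.0], 'apples'), ([0.0, 1.0], 'rockets'), ([0.9, 0.1], 'pears')]
print(get_relevant([1.0, 0.0], store, k=2))  # ['apples', 'pears']
```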
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/fregan/loss.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/fregan/loss.py
deleted file mode 100644
index e37dc64e29446ecdd9dce03290f4e0eba58fb3d7..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/fregan/loss.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import torch
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss*2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
\ No newline at end of file
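The least-squares GAN objectives above reduce to means of `(1 - d)²` over real scores and `d²` over fake scores. A dependency-free sketch on flat lists of scores (assumed shapes, no torch):

```python
def lsgan_discriminator_loss(real_scores, fake_scores):
    """LSGAN discriminator loss: push real scores to 1, fake scores to 0."""
    r_loss = sum((1 - d) ** 2 for d in real_scores) / len(real_scores)
    g_loss = sum(d ** 2 for d in fake_scores) / len(fake_scores)
    return r_loss + g_loss

def lsgan_generator_loss(fake_scores):
    """LSGAN generator loss: rewarded when fake scores approach 1."""
    return sum((1 - d) ** 2 for d in fake_scores) / len(fake_scores)

# A perfect discriminator (real -> 1, fake -> 0) has zero loss
print(lsgan_discriminator_loss([1.0, 1.0], [0.0, 0.0]))  # 0.0
```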
diff --git a/spaces/Kevin676/Speechbrain-Speech-enhancement/README.md b/spaces/Kevin676/Speechbrain-Speech-enhancement/README.md
deleted file mode 100644
index 7556741b5767c451105574636e9affff570c4960..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Speechbrain-Speech-enhancement/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Speechbrain Speech Enhancement
-emoji: 👁
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
-duplicated_from: akhaliq/Speechbrain-Speech-enhancement
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/KyanChen/FunSR/test_inr_liif_metasr_aliif.py b/spaces/KyanChen/FunSR/test_inr_liif_metasr_aliif.py
deleted file mode 100644
index e8c471fc58c54e710e5129ad88a0cf103f0de72b..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/FunSR/test_inr_liif_metasr_aliif.py
+++ /dev/null
@@ -1,216 +0,0 @@
-import argparse
-import json
-import os
-
-import math
-from functools import partial
-
-import cv2
-import numpy as np
-import yaml
-import torch
-from einops import rearrange
-from torch.utils.data import DataLoader
-from tqdm import tqdm
-
-import datasets
-import models
-import utils
-
-device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
-
-def batched_predict(model, inp, coord, bsize):
- with torch.no_grad():
- pred = model(inp, coord)
- return pred
-
-
-def eval_psnr(loader, class_names, model,
- data_norm=None, eval_type=None, save_fig=False,
- scale_ratio=1, save_path=None, verbose=False, crop_border=4,
- cal_metrics=True,
- ):
- crop_border = int(crop_border) if crop_border else crop_border
- print('crop border: ', crop_border)
- model.eval()
-
- if data_norm is None:
- data_norm = {
- 'inp': {'sub': [0], 'div': [1]},
- 'gt': {'sub': [0], 'div': [1]}
- }
- t = data_norm['inp']
- inp_sub = torch.FloatTensor(t['sub']).view(1, -1, 1, 1).to(device)
- inp_div = torch.FloatTensor(t['div']).view(1, -1, 1, 1).to(device)
- t = data_norm['gt']
- gt_sub = torch.FloatTensor(t['sub']).view(1, 1, -1).to(device)
- gt_div = torch.FloatTensor(t['div']).view(1, 1, -1).to(device)
-
- if eval_type is None:
- metric_fn = [utils.calculate_psnr_pt, utils.calculate_ssim_pt]
- elif eval_type == 'psnr+ssim':
- metric_fn = [utils.calculate_psnr_pt, utils.calculate_ssim_pt]
- elif eval_type.startswith('div2k'):
- scale = int(eval_type.split('-')[1])
- metric_fn = partial(utils.calc_psnr, dataset='div2k', scale=scale)
- elif eval_type.startswith('benchmark'):
- scale = int(eval_type.split('-')[1])
- metric_fn = partial(utils.calc_psnr, dataset='benchmark', scale=scale)
- else:
- raise NotImplementedError
-
- val_res_psnr = utils.Averager(class_names)
- val_res_ssim = utils.Averager(class_names)
-
- pbar = tqdm(loader, leave=False, desc='val')
- for batch in pbar:
- for k, v in batch.items():
- if torch.is_tensor(v):
- batch[k] = v.to(device)
-
- inp = (batch['inp'] - inp_sub) / inp_div
-
- with torch.no_grad():
- pred = model(inp, batch['coord'], batch['cell'])
- pred = pred * gt_div + gt_sub
-
- if eval_type is not None: # reshape for shaving-eval
- ih, iw = batch['inp'].shape[-2:]
- s = math.sqrt(batch['coord'].shape[1] / (ih * iw))
- if s > 1:
- shape = [batch['inp'].shape[0], round(ih * s), round(iw * s), 3]
- else:
- shape = [batch['inp'].shape[0], 32, batch['coord'].shape[1]//32, 3]
-
- pred = pred.view(*shape) \
- .permute(0, 3, 1, 2).contiguous()
- batch['gt'] = batch['gt'].view(*shape) \
- .permute(0, 3, 1, 2).contiguous()
- if cal_metrics:
- res_psnr = metric_fn[0](
- pred,
- batch['gt'],
- crop_border=crop_border
- )
- res_ssim = metric_fn[1](
- pred,
- batch['gt'],
- crop_border=crop_border
- )
- else:
- res_psnr = torch.ones(len(pred))
- res_ssim = torch.ones(len(pred))
-
- file_names = batch.get('filename', None)
- if file_names is not None and save_fig:
- for idx in range(len(batch['inp'])):
- ori_img = batch['inp'][idx].cpu().numpy() * 255
- ori_img = np.clip(ori_img, a_min=0, a_max=255)
- ori_img = ori_img.astype(np.uint8)
- ori_img = rearrange(ori_img, 'C H W -> H W C')
-
- pred_img = pred[idx].cpu().numpy() * 255
- pred_img = np.clip(pred_img, a_min=0, a_max=255)
- pred_img = pred_img.astype(np.uint8)
- pred_img = rearrange(pred_img, 'C H W -> H W C')
-
- gt_img = batch['gt'][idx].cpu().numpy() * 255
- gt_img = np.clip(gt_img, a_min=0, a_max=255)
- gt_img = gt_img.astype(np.uint8)
- gt_img = rearrange(gt_img, 'C H W -> H W C')
-
- psnr = res_psnr[idx].cpu().numpy()
- ssim = res_ssim[idx].cpu().numpy()
- ori_file_name = f'{save_path}/{file_names[idx]}_Ori.png'
- cv2.imwrite(ori_file_name, ori_img)
- pred_file_name = f'{save_path}/{file_names[idx]}_{scale_ratio}X_{psnr:.2f}_{ssim:.4f}.png'
- cv2.imwrite(pred_file_name, pred_img)
- gt_file_name = f'{save_path}/{file_names[idx]}_GT.png'
- cv2.imwrite(gt_file_name, gt_img)
-
- val_res_psnr.add(batch['class_name'], res_psnr)
- val_res_ssim.add(batch['class_name'], res_ssim)
-
- if verbose:
- pbar.set_description(
- 'val psnr: {:.4f} ssim: {:.4f}'.format(val_res_psnr.item()['all'], val_res_ssim.item()['all']))
-
- return val_res_psnr.item(), val_res_ssim.item()
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='configs/test_INR_mysr.yaml')
- parser.add_argument('--model', default='checkpoints/EXP20220610_5/epoch-best.pth')
- parser.add_argument('--scale_ratio', default=4, type=float)
- parser.add_argument('--save_fig', default=False, type=bool)
- parser.add_argument('--save_path', default='tmp', type=str)
- parser.add_argument('--cal_metrics', default=True, type=bool)
- parser.add_argument('--return_class_metrics', default=False, type=bool)
- parser.add_argument('--dataset_name', default='UC', type=str)
- args = parser.parse_args()
-
- with open(args.config, 'r') as f:
- config = yaml.load(f, Loader=yaml.FullLoader)
- root_split_file = {'UC':
- {
- 'root_path': '/data/kyanchen/datasets/UC/256',
- 'split_file': 'data_split/UC_split.json'
- },
- 'AID':
- {
- 'root_path': '/data/kyanchen/datasets/AID',
- 'split_file': 'data_split/AID_split.json'
- }
- }
- config['test_dataset']['dataset']['args']['root_path'] = root_split_file[args.dataset_name]['root_path']
- config['test_dataset']['dataset']['args']['split_file'] = root_split_file[args.dataset_name]['split_file']
-
- config['test_dataset']['wrapper']['args']['scale_ratio'] = args.scale_ratio
-
- spec = config['test_dataset']
- dataset = datasets.make(spec['dataset'])
- dataset = datasets.make(spec['wrapper'], args={'dataset': dataset})
- loader = DataLoader(dataset, batch_size=spec['batch_size'], num_workers=0, pin_memory=True, shuffle=False,
- drop_last=False)
- if not os.path.exists(args.model):
- raise FileNotFoundError(f'checkpoint not found: {args.model}')
- model_spec = torch.load(args.model)['model']
- print(model_spec['args'])
- model = models.make(model_spec, load_sd=True).to(device)
-
- file_names = json.load(open(config['test_dataset']['dataset']['args']['split_file']))['test']
- class_names = list(set([os.path.basename(os.path.dirname(x)) for x in file_names]))
-
- crop_border = config['test_dataset']['wrapper']['args']['scale_ratio'] + 5
- dataset_name = os.path.basename(config['test_dataset']['dataset']['args']['split_file']).split('_')[0]
- max_scale = {'UC': 5, 'AID': 12}
- if args.scale_ratio > max_scale[dataset_name]:
- crop_border = int((args.scale_ratio - max_scale[dataset_name]) / 2 * 48)
-
- if args.save_fig:
- os.makedirs(args.save_path, exist_ok=True)
-
- res = eval_psnr(
- loader, class_names, model,
- data_norm=config.get('data_norm'),
- eval_type=config.get('eval_type'),
- crop_border=crop_border,
- verbose=True,
- save_fig=args.save_fig,
- scale_ratio=args.scale_ratio,
- save_path=args.save_path,
- cal_metrics=args.cal_metrics
- )
-
- if args.return_class_metrics:
- keys = list(res[0].keys())
- keys.sort()
- print('psnr')
- for k in keys:
- print(f'{k}: {res[0][k]:0.2f}')
- print('ssim')
- for k in keys:
- print(f'{k}: {res[1][k]:0.4f}')
- print(f'psnr: {res[0]["all"]:0.2f}')
- print(f'ssim: {res[1]["all"]:0.4f}')
diff --git a/spaces/Lasion/NCKH_2023/app.py b/spaces/Lasion/NCKH_2023/app.py
deleted file mode 100644
index 641ce8ac2e76d8999958b3a93185a882ad00079f..0000000000000000000000000000000000000000
--- a/spaces/Lasion/NCKH_2023/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-import cv2
-from ultralytics import YOLO
-
-def run(source):
- global model
- res = model(source, conf=.5, iou=.5)
- res_plotted = res[0].plot()
- # converting BGR to RGB
- result = cv2.cvtColor(res_plotted, cv2.COLOR_BGR2RGB)
- return result
-
-model = YOLO("yolov8n-nckh2023.pt") # Select YOLO model
-
-gr.Interface(
- run,
- inputs=gr.Image(label="Upload image", type="filepath"),
- outputs=gr.Image(label="Your result"),
- title="Motorcyclist, helmet, and license plate detection",
-).launch()
diff --git a/spaces/Latryna/roop/roop/globals.py b/spaces/Latryna/roop/roop/globals.py
deleted file mode 100644
index 77fd391db235b878ce1f91765596bd76adb06697..0000000000000000000000000000000000000000
--- a/spaces/Latryna/roop/roop/globals.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from typing import List
-
-source_path = None
-target_path = None
-output_path = None
-frame_processors: List[str] = []
-keep_fps = None
-keep_audio = None
-keep_frames = None
-many_faces = None
-video_encoder = None
-video_quality = None
-max_memory = None
-execution_providers: List[str] = []
-execution_threads = None
-headless = None
-log_level = 'error'
diff --git a/spaces/LayBraid/SpaceVector_v0/app.py b/spaces/LayBraid/SpaceVector_v0/app.py
deleted file mode 100644
index 02f8807780f02d540688fe3467c67782b9c59fbf..0000000000000000000000000000000000000000
--- a/spaces/LayBraid/SpaceVector_v0/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import streamlit as st
-import text_to_image
-import home
-import contributing
-
-
-PAGES = {
- "Home": home,
- "Retrieve Images given Text": text_to_image,
- "Contribute to Space Vector": contributing,
-}
-
-st.sidebar.title("Space Vector")
-st.sidebar.image("space.jpeg")
-st.sidebar.markdown("""
- SpaceVector is a semantic search engine. It allows you to find your texts in images.
-""")
-selection = st.sidebar.radio("Go to", list(PAGES.keys()))
-page = PAGES[selection]
-page.app()
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/fixes/local_fixes.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/fixes/local_fixes.py
deleted file mode 100644
index a7abad699332af42bdcb29f31eb3370423421cb4..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/fixes/local_fixes.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import os
-import time
-import shutil
-import requests
-import zipfile
-
-def insert_new_line(file_name, line_to_find, text_to_insert):
- lines = []
- with open(file_name, 'r', encoding='utf-8') as read_obj:
- lines = read_obj.readlines()
- already_exists = False
- with open(file_name + '.tmp', 'w', encoding='utf-8') as write_obj:
- for i in range(len(lines)):
- write_obj.write(lines[i])
- if lines[i].strip() == line_to_find:
- # If next line exists and starts with sys.path.append, skip
- if i+1 < len(lines) and lines[i+1].strip().startswith("sys.path.append"):
- print('It was already fixed! Skip adding a line...')
- already_exists = True
- break
- else:
- write_obj.write(text_to_insert + '\n')
- # If no existing sys.path.append line was found, replace the original file
- if not already_exists:
- os.replace(file_name + '.tmp', file_name)
- return True
- else:
- # If existing line was found, delete temporary file
- os.remove(file_name + '.tmp')
- return False
-
-def replace_in_file(file_name, old_text, new_text):
- with open(file_name, 'r', encoding='utf-8') as file:
- file_contents = file.read()
-
- if old_text in file_contents:
- file_contents = file_contents.replace(old_text, new_text)
- with open(file_name, 'w', encoding='utf-8') as file:
- file.write(file_contents)
- return True
-
- return False
-
-
-def find_torchcrepe_directory(directory):
- """
- Recursively searches for the topmost folder named 'torchcrepe' within a directory.
- Returns the path of the directory found or None if none is found.
- """
- for root, dirs, files in os.walk(directory):
- if 'torchcrepe' in dirs:
- return os.path.join(root, 'torchcrepe')
- return None
-
-def download_and_extract_torchcrepe():
- url = 'https://github.com/maxrmorrison/torchcrepe/archive/refs/heads/master.zip'
- temp_dir = 'temp_torchcrepe'
- destination_dir = os.getcwd()
-
- try:
- torchcrepe_dir_path = os.path.join(destination_dir, 'torchcrepe')
-
- if os.path.exists(torchcrepe_dir_path):
- print("Skipping the torchcrepe download. The folder already exists.")
- return
-
- # Download the file
- print("Starting torchcrepe download...")
- response = requests.get(url)
-
- # Raise an error if the GET request was unsuccessful
- response.raise_for_status()
- print("Download completed.")
-
- # Save the downloaded file
- zip_file_path = os.path.join(temp_dir, 'master.zip')
- os.makedirs(temp_dir, exist_ok=True)
- with open(zip_file_path, 'wb') as file:
- file.write(response.content)
- print(f"Zip file saved to {zip_file_path}")
-
- # Extract the zip file
- print("Extracting content...")
- with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
- zip_file.extractall(temp_dir)
- print("Extraction completed.")
-
- # Locate the torchcrepe folder and move it to the destination directory
- torchcrepe_dir = find_torchcrepe_directory(temp_dir)
- if torchcrepe_dir:
- shutil.move(torchcrepe_dir, destination_dir)
- print(f"Moved the torchcrepe directory to {destination_dir}!")
- else:
- print("The torchcrepe directory could not be located.")
-
- except Exception as e:
- print("torchcrepe was not downloaded successfully:", e)
-
- # Clean up temporary directory
- if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir)
-
-# Run the function
-download_and_extract_torchcrepe()
-
-temp_dir = 'temp_torchcrepe'
-
-if os.path.exists(temp_dir):
- shutil.rmtree(temp_dir)
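
The download-and-extract flow in the deleted `local_fixes.py` above (fetch a GitHub archive, unzip to a temp dir, locate the target folder, move it out, clean up) can be exercised without the network by building the zip in memory. A minimal sketch; `extract_topmost` is an illustrative stand-in for `find_torchcrepe_directory` plus the move step:

```python
import io
import os
import shutil
import tempfile
import zipfile

def extract_topmost(zip_bytes, member_name, destination):
    # Extract the archive to a temp dir, locate the named folder, move it out.
    temp_dir = tempfile.mkdtemp()
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
            zf.extractall(temp_dir)
        for root, dirs, _files in os.walk(temp_dir):
            if member_name in dirs:
                shutil.move(os.path.join(root, member_name), destination)
                return os.path.join(destination, member_name)
        return None
    finally:
        shutil.rmtree(temp_dir, ignore_errors=True)

# Build a zip in memory that mimics the GitHub "repo-master/..." archive layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("repo-master/torchcrepe/__init__.py", "# package marker\n")

dest = tempfile.mkdtemp()
path = extract_topmost(buf.getvalue(), "torchcrepe", dest)
print(os.path.isfile(os.path.join(path, "__init__.py")))  # True
```

The `finally` clause mirrors the original's cleanup: the temp dir is removed whether or not the folder was found.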
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/plot/scheme.py b/spaces/Lianjd/stock_dashboard/backtrader/plot/scheme.py
deleted file mode 100644
index ac03acdef120e8e13ad4e0bb3c3569a2054c7076..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/plot/scheme.py
+++ /dev/null
@@ -1,189 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-
-tableau20 = [
- 'steelblue', # 0
- 'lightsteelblue', # 1
- 'darkorange', # 2
- 'peachpuff', # 3
- 'green', # 4
- 'lightgreen', # 5
- 'crimson', # 6
- 'lightcoral', # 7
- 'mediumpurple', # 8
- 'thistle', # 9
- 'saddlebrown', # 10
- 'rosybrown', # 11
- 'orchid', # 12
- 'lightpink', # 13
- 'gray', # 14
- 'lightgray', # 15
- 'olive', # 16
- 'palegoldenrod', # 17
- 'mediumturquoise', # 18
- 'paleturquoise', # 19
-]
-
-tableau10 = [
- 'blue', # 'steelblue', # 0
- 'darkorange', # 1
- 'green', # 2
- 'crimson', # 3
- 'mediumpurple', # 4
- 'saddlebrown', # 5
- 'orchid', # 6
- 'gray', # 7
- 'olive', # 8
- 'mediumturquoise', # 9
-]
-
-tableau10_light = [
- 'lightsteelblue', # 0
- 'peachpuff', # 1
- 'lightgreen', # 2
- 'lightcoral', # 3
- 'thistle', # 4
- 'rosybrown', # 5
- 'lightpink', # 6
- 'lightgray', # 7
- 'palegoldenrod', # 8
- 'paleturquoise', # 9
-]
-
-tab10_index = [3, 0, 2, 1, 2, 4, 5, 6, 7, 8, 9]
-
-
-class PlotScheme(object):
- def __init__(self):
- # whether to pack the chart tightly on only the x axis or also
- # on the y axis (see matplotlib)
- self.ytight = False
-
- # y-margin (top/bottom) for the subcharts. This will not overrule the
- # option plotinfo.plotymargin
- self.yadjust = 0.0
- # Each new line is in z-order below the previous one. Change it to False
- # to have lines painted above the previous line
- self.zdown = True
- # Rotation of the date labels on the x axis
- self.tickrotation = 15
-
- # How many "subparts" a major chart (datas) takes in the overall chart
- # This is proportional to the total number of subcharts
- self.rowsmajor = 5
-
- # How many "subparts" a minor chart (indicators/observers) takes in the
- # overall chart. This is proportional to the total number of subcharts
- # Together with rowsmajor, this defines a proportion ratio between data
- # charts and indicators/observers charts
- self.rowsminor = 1
-
- # Distance in between subcharts
- self.plotdist = 0.0
-
- # Have a grid in the background of all charts
- self.grid = True
-
- # Default plotstyle for the OHLC bars (line -> line on close)
- # Other options: 'bar' and 'candle'
- self.style = 'line'
-
- # Default color for the 'line on close' plot
- self.loc = 'black'
- # Default color for a bullish bar/candle (0.75 -> intensity of gray)
- self.barup = '0.75'
- # Default color for a bearish bar/candle
- self.bardown = 'red'
- # Level of transparency to apply to bars/candles (NOT USED)
- self.bartrans = 1.0
-
- # Whether the candlesticks should be filled or transparent
- self.barupfill = True
- self.bardownfill = True
-
- # Opacity for the filled candlesticks (1.0 opaque - 0.0 transparent)
- self.baralpha = 1.0
-
- # Alpha blending for fill areas between lines (_fill_gt and _fill_lt)
- self.fillalpha = 0.20
-
- # Whether to plot volume or not. Note: if the data in question has no
- # volume values, volume plotting will be skipped even if this is True
- self.volume = True
-
- # Whether to overlay the volume on the data or use a separate subchart
- self.voloverlay = True
- # Scaling of the volume to the data when plotting as overlay
- self.volscaling = 0.33
- # Pushing overlay volume up for better visibility. Experimentation
- # needed if the volume and data overlap too much
- self.volpushup = 0.00
-
- # Default colour for the volume of a bullish day
- self.volup = '#aaaaaa' # 0.66 of gray
- # Default colour for the volume of a bearish day
- self.voldown = '#cc6073' # (204, 96, 115)
- # Transparency to apply to the volume when overlaying
- self.voltrans = 0.50
-
- # Transparency for text labels (NOT USED CURRENTLY)
- self.subtxttrans = 0.66
- # Default font text size for labels on the chart
- self.subtxtsize = 9
-
- # Transparency for the legend (NOT USED CURRENTLY)
- self.legendtrans = 0.25
- # Whether indicators have a legend displayed in their charts
- self.legendind = True
- # Location of the legend for indicators (see matplotlib)
- self.legendindloc = 'upper left'
-
- # Location of the legend for datafeeds (see matplotlib)
- self.legenddataloc = 'upper left'
-
- # Plot the last value of a line after the Object name
- self.linevalues = True
-
- # Plot a tag at the end of each line with the last value
- self.valuetags = True
-
- # Default color for horizontal lines (see plotinfo.plothlines)
- self.hlinescolor = '0.66' # shade of gray
- # Default style for horizontal lines
- self.hlinesstyle = '--'
- # Default width for horizontal lines
- self.hlineswidth = 1.0
-
- # Default color scheme: Tableau 10
- self.lcolors = tableau10
-
- # strftime Format string for the display of ticks on the x axis
- self.fmt_x_ticks = '%Y-%m-%d %H:%M'
-
- # strftime Format string for the display of data points values
- self.fmt_x_data = None
-
- def color(self, idx):
- colidx = tab10_index[idx % len(tab10_index)]
- return self.lcolors[colidx]
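
The `color` method above wraps the line index modulo `len(tab10_index)` before indexing into the palette, so any number of plotted lines cycles through the scheme. A standalone sketch using the same tables from the deleted `scheme.py`:

```python
tableau10 = ['blue', 'darkorange', 'green', 'crimson', 'mediumpurple',
             'saddlebrown', 'orchid', 'gray', 'olive', 'mediumturquoise']
tab10_index = [3, 0, 2, 1, 2, 4, 5, 6, 7, 8, 9]

def color(idx):
    # Wrap the index around the lookup table so any line index maps to a palette entry.
    return tableau10[tab10_index[idx % len(tab10_index)]]

print(color(0), color(11))  # crimson crimson  (idx 11 wraps back to entry 0)
```

Note the indirection through `tab10_index`: it reorders the palette so adjacent lines get visually distinct colors rather than neighbors in the list.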
diff --git a/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_utils.py b/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_utils.py
deleted file mode 100644
index 5fc7a505fad66f2903ce9f3cff06dea15b128080..0000000000000000000000000000000000000000
--- a/spaces/LightSY/W2L-TD/facelib/detection/retinaface/retinaface_utils.py
+++ /dev/null
@@ -1,478 +0,0 @@
-import numpy as np
-import torch  # still required by the torch-based helpers below (point_form, intersect, match, encode, nms)
-# import torchvision
-from itertools import product as product
-from math import ceil
-from numpy import array
-
-
-def box_area(boxes: array):
- """
- :param boxes: [N, 4]
- :return: [N]
- """
- return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
-
-
-def box_iou(box1: array, box2: array):
- """
- :param box1: [N, 4]
- :param box2: [M, 4]
- :return: [N, M]
- """
- area1 = box_area(box1) # N
- area2 = box_area(box2) # M
- # broadcasting: trailing dimensions of the two arrays are compared back to front; each pair must match or one must be 1
- lt = np.maximum(box1[:, np.newaxis, :2], box2[:, :2])
- rb = np.minimum(box1[:, np.newaxis, 2:], box2[:, 2:])
- wh = rb - lt
- wh = np.maximum(0, wh) # [N, M, 2]
- inter = wh[:, :, 0] * wh[:, :, 1]
- iou = inter / (area1[:, np.newaxis] + area2 - inter)
- return iou # NxM
-
-
-def numpy_nms(boxes: array, scores: array, iou_threshold: float):
- idxs = scores.argsort() # indices that sort the scores in ascending order [N]
- keep = []
- while idxs.size > 0: # loop while any candidate boxes remain
- max_score_index = idxs[-1]
- max_score_box = boxes[max_score_index][None, :]
- keep.append(max_score_index)
- if idxs.size == 1:
- break
- idxs = idxs[:-1] # drop the top-scoring box from the candidates; compare the rest against it by IoU
- other_boxes = boxes[idxs] # [?, 4]
- ious = box_iou(max_score_box, other_boxes) # compare the top box against the remaining boxes, shape 1xM
- idxs = idxs[ious[0] <= iou_threshold]
-
- keep = np.array(keep)
- return keep
-
-
-class PriorBox(object):
-
- def __init__(self, cfg, image_size=None, phase='train'):
- super(PriorBox, self).__init__()
- self.min_sizes = cfg['min_sizes']
- self.steps = cfg['steps']
- self.clip = cfg['clip']
- self.image_size = image_size
- self.feature_maps = [[ceil(self.image_size[0] / step), ceil(self.image_size[1] / step)] for step in self.steps]
- self.name = 's'
-
- def forward(self):
- anchors = []
- for k, f in enumerate(self.feature_maps):
- min_sizes = self.min_sizes[k]
- for i, j in product(range(f[0]), range(f[1])):
- for min_size in min_sizes:
- s_kx = min_size / self.image_size[1]
- s_ky = min_size / self.image_size[0]
- dense_cx = [x * self.steps[k] / self.image_size[1] for x in [j + 0.5]]
- dense_cy = [y * self.steps[k] / self.image_size[0] for y in [i + 0.5]]
- for cy, cx in product(dense_cy, dense_cx):
- anchors += [cx, cy, s_kx, s_ky]
-
- # back to torch land
- # output = torch.Tensor(anchors).view(-1, 4)
- output = np.array(anchors).reshape(-1, 4)
- if self.clip:
- output = np.clip(output, 0, 1) # numpy arrays have no clamp_; np.clip replaces torch's clamp_(min=0, max=1)
- return output
-
-
-def py_cpu_nms(dets, thresh):
- """Pure Python NMS baseline."""
- # keep = torchvision.ops.nms(
- # boxes=torch.Tensor(dets[:, :4]),
- # scores=torch.Tensor(dets[:, 4]),
- # iou_threshold=thresh,
- # )
- keep = numpy_nms(boxes=dets[:, :4], scores=dets[:, 4], iou_threshold=thresh)
- return list(keep)
-
-
-def point_form(boxes):
- """ Convert prior_boxes to (xmin, ymin, xmax, ymax)
- representation for comparison to point form ground truth data.
- Args:
- boxes: (tensor) center-size default boxes from priorbox layers.
- Return:
- boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.
- """
- return torch.cat(
- (
- boxes[:, :2] - boxes[:, 2:] / 2, # xmin, ymin
- boxes[:, :2] + boxes[:, 2:] / 2),
- 1) # xmax, ymax
-
-
-def center_size(boxes):
- """ Convert point_form boxes to (cx, cy, w, h)
- representation for comparison to center-size form ground truth data.
- Args:
- boxes: (tensor) point_form boxes
- Return:
- boxes: (tensor) Converted cx, cy, w, h form of boxes.
- """
- return torch.cat(
- ((boxes[:, 2:] + boxes[:, :2]) / 2, # cx, cy
- boxes[:, 2:] - boxes[:, :2]), # w, h
- 1)
-
-
-def intersect(box_a, box_b):
- """ We resize both tensors to [A,B,2] without new malloc:
- [A,2] -> [A,1,2] -> [A,B,2]
- [B,2] -> [1,B,2] -> [A,B,2]
- Then we compute the area of intersect between box_a and box_b.
- Args:
- box_a: (tensor) bounding boxes, Shape: [A,4].
- box_b: (tensor) bounding boxes, Shape: [B,4].
- Return:
- (tensor) intersection area, Shape: [A,B].
- """
- A = box_a.size(0)
- B = box_b.size(0)
- max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2), box_b[:, 2:].unsqueeze(0).expand(A, B, 2))
- min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2), box_b[:, :2].unsqueeze(0).expand(A, B, 2))
- inter = torch.clamp((max_xy - min_xy), min=0)
- return inter[:, :, 0] * inter[:, :, 1]
-
-
-def jaccard(box_a, box_b):
- """Compute the jaccard overlap of two sets of boxes. The jaccard overlap
- is simply the intersection over union of two boxes. Here we operate on
- ground truth boxes and default boxes.
- E.g.:
- A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)
- Args:
- box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]
- box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]
- Return:
- jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]
- """
- inter = intersect(box_a, box_b)
- area_a = ((box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1])).unsqueeze(1).expand_as(inter) # [A,B]
- area_b = ((box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1])).unsqueeze(0).expand_as(inter) # [A,B]
- union = area_a + area_b - inter
- return inter / union # [A,B]
-
-
-def matrix_iou(a, b):
- """
- return iou of a and b, numpy version for data augmentation
- """
- lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])
- rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])
-
- area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)
- area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
- area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)
- return area_i / (area_a[:, np.newaxis] + area_b - area_i)
-
-
-def matrix_iof(a, b):
- """
- return iof of a and b, numpy version for data augmentation
- """
- lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])
- rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])
-
- area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)
- area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
- return area_i / np.maximum(area_a[:, np.newaxis], 1)
-
-
-def match(threshold, truths, priors, variances, labels, landms, loc_t, conf_t, landm_t, idx):
- """Match each prior box with the ground truth box of the highest jaccard
- overlap, encode the bounding boxes, then return the matched indices
- corresponding to both confidence and location preds.
- Args:
- threshold: (float) The overlap threshold used when matching boxes.
- truths: (tensor) Ground truth boxes, Shape: [num_obj, 4].
- priors: (tensor) Prior boxes from priorbox layers, Shape: [n_priors,4].
- variances: (tensor) Variances corresponding to each prior coord,
- Shape: [num_priors, 4].
- labels: (tensor) All the class labels for the image, Shape: [num_obj].
- landms: (tensor) Ground truth landms, Shape [num_obj, 10].
- loc_t: (tensor) Tensor to be filled w/ encoded location targets.
- conf_t: (tensor) Tensor to be filled w/ matched indices for conf preds.
- landm_t: (tensor) Tensor to be filled w/ encoded landm targets.
- idx: (int) current batch index
- Return:
- The matched indices corresponding to 1)location 2)confidence
- 3)landm preds.
- """
- # jaccard index
- overlaps = jaccard(truths, point_form(priors))
- # (Bipartite Matching)
- # [1,num_objects] best prior for each ground truth
- best_prior_overlap, best_prior_idx = overlaps.max(1, keepdim=True)
-
- # ignore hard gt
- valid_gt_idx = best_prior_overlap[:, 0] >= 0.2
- best_prior_idx_filter = best_prior_idx[valid_gt_idx, :]
- if best_prior_idx_filter.shape[0] <= 0:
- loc_t[idx] = 0
- conf_t[idx] = 0
- return
-
- # [1,num_priors] best ground truth for each prior
- best_truth_overlap, best_truth_idx = overlaps.max(0, keepdim=True)
- best_truth_idx.squeeze_(0)
- best_truth_overlap.squeeze_(0)
- best_prior_idx.squeeze_(1)
- best_prior_idx_filter.squeeze_(1)
- best_prior_overlap.squeeze_(1)
- best_truth_overlap.index_fill_(0, best_prior_idx_filter, 2) # ensure best prior
- # TODO refactor: index best_prior_idx with long tensor
- # ensure every gt matches with its prior of max overlap
- for j in range(best_prior_idx.size(0)): # decide which ground-truth box each anchor predicts
- best_truth_idx[best_prior_idx[j]] = j
- matches = truths[best_truth_idx] # Shape: [num_priors,4] gather the matched bbox for every anchor
- conf = labels[best_truth_idx] # Shape: [num_priors] gather the matched label for every anchor
- conf[best_truth_overlap < threshold] = 0 # matches with overlap below the threshold are labeled background (negatives)
- loc = encode(matches, priors, variances)
-
- matches_landm = landms[best_truth_idx]
- landm = encode_landm(matches_landm, priors, variances)
- loc_t[idx] = loc # [num_priors,4] encoded offsets to learn
- conf_t[idx] = conf # [num_priors] top class label for each prior
- landm_t[idx] = landm
-
-
-def encode(matched, priors, variances):
- """Encode the variances from the priorbox layers into the ground truth boxes
- we have matched (based on jaccard overlap) with the prior boxes.
- Args:
- matched: (tensor) Coords of ground truth for each prior in point-form
- Shape: [num_priors, 4].
- priors: (tensor) Prior boxes in center-offset form
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- encoded boxes (tensor), Shape: [num_priors, 4]
- """
-
- # dist b/t match center and prior's center
- g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2]
- # encode variance
- g_cxcy /= (variances[0] * priors[:, 2:])
- # match wh / prior wh
- g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:]
- g_wh = torch.log(g_wh) / variances[1]
- # return target for smooth_l1_loss
- return torch.cat([g_cxcy, g_wh], 1) # [num_priors,4]
-
-
-def encode_landm(matched, priors, variances):
- """Encode the variances from the priorbox layers into the ground truth boxes
- we have matched (based on jaccard overlap) with the prior boxes.
- Args:
- matched: (tensor) Coords of ground truth for each prior in point-form
- Shape: [num_priors, 10].
- priors: (tensor) Prior boxes in center-offset form
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- encoded landm (tensor), Shape: [num_priors, 10]
- """
-
- # dist b/t match center and prior's center
- matched = torch.reshape(matched, (matched.size(0), 5, 2))
- priors_cx = priors[:, 0].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)
- priors_cy = priors[:, 1].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)
- priors_w = priors[:, 2].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)
- priors_h = priors[:, 3].unsqueeze(1).expand(matched.size(0), 5).unsqueeze(2)
- priors = torch.cat([priors_cx, priors_cy, priors_w, priors_h], dim=2)
- g_cxcy = matched[:, :, :2] - priors[:, :, :2]
- # encode variance
- g_cxcy /= (variances[0] * priors[:, :, 2:])
- # g_cxcy /= priors[:, :, 2:]
- g_cxcy = g_cxcy.reshape(g_cxcy.size(0), -1)
- # return target for smooth_l1_loss
- return g_cxcy
-
-
-# Adapted from https://github.com/Hakuyume/chainer-ssd
-def decode(loc, priors, variances):
- """Decode locations from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- loc (tensor): location predictions for loc layers,
- Shape: [num_priors,4]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded bounding box predictions
- """
-
- # boxes = torch.cat((priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],
- # priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1)
-
- boxes = np.concatenate((priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],
- priors[:, 2:] * np.exp(loc[:, 2:] * variances[1])), 1)
-
-
- boxes[:, :2] -= boxes[:, 2:] / 2
- boxes[:, 2:] += boxes[:, :2]
- return boxes
-
-
-def decode_landm(pre, priors, variances):
- """Decode landm from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- pre (tensor): landm predictions for loc layers,
- Shape: [num_priors,10]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded landm predictions
- """
- tmp = (
- priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 2:4] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 4:6] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 6:8] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 8:10] * variances[0] * priors[:, 2:],
- )
- # landms = torch.cat(tmp, dim=1)
- landms = np.concatenate(tmp, axis=1)
- return landms
-
-
-def batched_decode(b_loc, priors, variances):
- """Decode locations from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- b_loc (tensor): location predictions for loc layers,
- Shape: [num_batches,num_priors,4]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [1,num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded bounding box predictions
- """
- # boxes = (
- # priors[:, :, :2] + b_loc[:, :, :2] * variances[0] * priors[:, :, 2:],
- # priors[:, :, 2:] * torch.exp(b_loc[:, :, 2:] * variances[1]),
- # )
- # boxes = torch.cat(boxes, dim=2)
- boxes = (
- priors[:, :, :2] + b_loc[:, :, :2] * variances[0] * priors[:, :, 2:],
- priors[:, :, 2:] * np.exp(b_loc[:, :, 2:] * variances[1]),
- )
- boxes = np.concatenate(boxes, axis=2)
-
- boxes[:, :, :2] -= boxes[:, :, 2:] / 2
- boxes[:, :, 2:] += boxes[:, :, :2]
- return boxes
-
-
-def batched_decode_landm(pre, priors, variances):
- """Decode landm from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- pre (tensor): landm predictions for loc layers,
- Shape: [num_batches,num_priors,10]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [1,num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded landm predictions
- """
- landms = (
- priors[:, :, :2] + pre[:, :, :2] * variances[0] * priors[:, :, 2:],
- priors[:, :, :2] + pre[:, :, 2:4] * variances[0] * priors[:, :, 2:],
- priors[:, :, :2] + pre[:, :, 4:6] * variances[0] * priors[:, :, 2:],
- priors[:, :, :2] + pre[:, :, 6:8] * variances[0] * priors[:, :, 2:],
- priors[:, :, :2] + pre[:, :, 8:10] * variances[0] * priors[:, :, 2:],
- )
- # landms = torch.cat(landms, dim=2)
- landms = np.concatenate(landms, axis=2)
- return landms
-
-
-def log_sum_exp(x):
- """Utility function for computing log_sum_exp while determining
- This will be used to determine unaveraged confidence loss across
- all examples in a batch.
- Args:
- x (Variable(tensor)): conf_preds from conf layers
- """
- x_max = x.data.max()
- return torch.log(torch.sum(torch.exp(x - x_max), 1, keepdim=True)) + x_max
-
-
-# Original author: Francisco Massa:
-# https://github.com/fmassa/object-detection.torch
-# Ported to PyTorch by Max deGroot (02/01/2017)
-def nms(boxes, scores, overlap=0.5, top_k=200):
- """Apply non-maximum suppression at test time to avoid detecting too many
- overlapping bounding boxes for a given object.
- Args:
- boxes: (tensor) The location preds for the img, Shape: [num_priors,4].
- scores: (tensor) The class predscores for the img, Shape:[num_priors].
- overlap: (float) The overlap thresh for suppressing unnecessary boxes.
- top_k: (int) The Maximum number of box preds to consider.
- Return:
- The indices of the kept boxes with respect to num_priors.
- """
-
- keep = torch.Tensor(scores.size(0)).fill_(0).long()
- if boxes.numel() == 0:
- return keep
- x1 = boxes[:, 0]
- y1 = boxes[:, 1]
- x2 = boxes[:, 2]
- y2 = boxes[:, 3]
- area = torch.mul(x2 - x1, y2 - y1)
- v, idx = scores.sort(0) # sort in ascending order
- # I = I[v >= 0.01]
- idx = idx[-top_k:] # indices of the top-k largest vals
- xx1 = boxes.new()
- yy1 = boxes.new()
- xx2 = boxes.new()
- yy2 = boxes.new()
- w = boxes.new()
- h = boxes.new()
-
- # keep = torch.Tensor()
- count = 0
- while idx.numel() > 0:
- i = idx[-1] # index of current largest val
- # keep.append(i)
- keep[count] = i
- count += 1
- if idx.size(0) == 1:
- break
- idx = idx[:-1] # remove kept element from view
- # load bboxes of next highest vals
- torch.index_select(x1, 0, idx, out=xx1)
- torch.index_select(y1, 0, idx, out=yy1)
- torch.index_select(x2, 0, idx, out=xx2)
- torch.index_select(y2, 0, idx, out=yy2)
- # store element-wise max with next highest score
- xx1 = torch.clamp(xx1, min=x1[i])
- yy1 = torch.clamp(yy1, min=y1[i])
- xx2 = torch.clamp(xx2, max=x2[i])
- yy2 = torch.clamp(yy2, max=y2[i])
- w.resize_as_(xx2)
- h.resize_as_(yy2)
- w = xx2 - xx1
- h = yy2 - yy1
- # check sizes of xx1 and xx2.. after each iteration
- w = torch.clamp(w, min=0.0)
- h = torch.clamp(h, min=0.0)
- inter = w * h
- # IoU = i / (area(a) + area(b) - i)
- rem_areas = torch.index_select(area, 0, idx) # load remaining areas)
- union = (rem_areas - inter) + area[i]
- IoU = inter / union # store result in iou
- # keep only elements with an IoU <= overlap
- idx = idx[IoU.le(overlap)]
- return keep, count
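
The NumPy NMS path above (`box_area`, `box_iou`, `numpy_nms`) replaces the commented-out `torchvision.ops.nms` call. A minimal self-contained sketch of the same greedy algorithm, with illustrative boxes:

```python
import numpy as np

def box_area(boxes):
    # Area of [N, 4] boxes in (x1, y1, x2, y2) form.
    return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])

def box_iou(box1, box2):
    # Pairwise IoU between [N, 4] and [M, 4] boxes -> [N, M] via broadcasting.
    area1, area2 = box_area(box1), box_area(box2)
    lt = np.maximum(box1[:, None, :2], box2[:, :2])   # top-left of intersection
    rb = np.minimum(box1[:, None, 2:], box2[:, 2:])   # bottom-right of intersection
    wh = np.maximum(0, rb - lt)                       # clamp empty intersections to 0
    inter = wh[:, :, 0] * wh[:, :, 1]
    return inter / (area1[:, None] + area2 - inter)

def numpy_nms(boxes, scores, iou_threshold):
    # Greedy NMS: keep the highest-scoring box, drop candidates overlapping it too much.
    idxs = scores.argsort()  # ascending; best box is at the end
    keep = []
    while idxs.size > 0:
        i = idxs[-1]
        keep.append(i)
        if idxs.size == 1:
            break
        idxs = idxs[:-1]
        ious = box_iou(boxes[i][None, :], boxes[idxs])
        idxs = idxs[ious[0] <= iou_threshold]
    return np.array(keep)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(numpy_nms(boxes, scores, 0.5))  # the two overlapping boxes collapse to one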
diff --git a/spaces/Lijiahui/bingAI/Dockerfile b/spaces/Lijiahui/bingAI/Dockerfile
deleted file mode 100644
index d3bb59c4379a753da705c5b0783adedb89a0a6ab..0000000000000000000000000000000000000000
--- a/spaces/Lijiahui/bingAI/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git so the project can be cloned from GitHub
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app
-
-# Build the Go project; -ldflags="-s -w" strips symbols to shrink the binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the user token environment variable (the value here is a random placeholder string)
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bEL9J09R26D"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/vt_fns/vt_state.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/vt_fns/vt_state.py
deleted file mode 100644
index 18187286383ce2f3e881510852cf3aba7e6c43d1..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/crazy_functions/vt_fns/vt_state.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import pickle
-
-class VoidTerminalState():
- def __init__(self):
- self.reset_state()
-
- def reset_state(self):
- self.has_provided_explaination = False
-
- def lock_plugin(self, chatbot):
- chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
- chatbot._cookies['plugin_state'] = pickle.dumps(self)
-
- def unlock_plugin(self, chatbot):
- self.reset_state()
- chatbot._cookies['lock_plugin'] = None
- chatbot._cookies['plugin_state'] = pickle.dumps(self)
-
- def set_state(self, chatbot, key, value):
- setattr(self, key, value)
- chatbot._cookies['plugin_state'] = pickle.dumps(self)
-
- @staticmethod
- def get_state(chatbot):
- state = chatbot._cookies.get('plugin_state', None)
- if state is not None: state = pickle.loads(state)
- else: state = VoidTerminalState()
- state.chatbot = chatbot
- return state
\ No newline at end of file
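
The cookie-pickling pattern in the deleted `vt_state.py` above (serialize the state object into `chatbot._cookies`, restore it later) can be sketched with a plain dict standing in for the chatbot cookies; `PluginState`, `cookies`, and `demo_plugin` are hypothetical stand-ins:

```python
import pickle

class PluginState:
    """Hypothetical stand-in for VoidTerminalState: a small picklable state object."""
    def __init__(self):
        self.has_explained = False

cookies = {}  # stand-in for chatbot._cookies

# Lock: record which plugin holds the lock and persist the state alongside it.
state = PluginState()
state.has_explained = True
cookies["lock_plugin"] = "demo_plugin"
cookies["plugin_state"] = pickle.dumps(state)

# Restore: unpickle the saved state, falling back to a fresh object if absent.
raw = cookies.get("plugin_state")
restored = pickle.loads(raw) if raw is not None else PluginState()
print(restored.has_explained)  # True
```

Persisting via pickle means any attribute set with `set_state` survives across requests, at the cost of trusting the cookie store — unpickling attacker-controlled bytes is unsafe, so this only works because the cookies never leave the server.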
diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/export.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/export.py
deleted file mode 100644
index 6ab01ef1fd122dfba0cad8a05468eb5d6cc5677b..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/export.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import ONNXVITS_models
-import utils
-from text.symbols import symbols
-from text import text_to_sequence
-import torch
-import commons
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-def get_label(text, label):
- if f'[{label}]' in text:
- return True, text.replace(f'[{label}]', '')
- else:
- return False, text
-
-hps_ms = utils.get_hparams_from_file("/content/drive/MyDrive/moe/config.json")
-net_g_ms = ONNXVITS_models.SynthesizerTrn(
- len(symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers,
- **hps_ms.model)
-_ = net_g_ms.eval()
-
-_ = utils.load_checkpoint("/content/drive/MyDrive/moe/G_909000.pth", net_g_ms)
-
-text1 = get_text("[JA]ありがとうございます。[JA]", hps_ms)
-stn_tst = text1
-with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- sid = torch.tensor([0])
- o = net_g_ms(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)
\ No newline at end of file
diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/cleaners.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/cleaners.py
deleted file mode 100644
index 7358b44249ffaef44c50c309b0fbb52c7527d547..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/cleaners.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import re
-#from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2
-from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa, japanese_to_ipa2, japanese_to_ipa3
-from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo, chinese_to_romaji, chinese_to_lazy_ipa, chinese_to_ipa, chinese_to_ipa2
-# from text.sanskrit import devanagari_to_ipa
-# from text.english import english_to_lazy_ipa, english_to_ipa2, english_to_lazy_ipa2
-# from text.thai import num_to_thai, latin_to_thai
-# from text.shanghainese import shanghainese_to_ipa
-# from text.cantonese import cantonese_to_ipa
-# from text.ngu_dialect import ngu_dialect_to_ipa
-
-
-def japanese_cleaners(text):
- text = japanese_to_romaji_with_accent(text)
- if re.match('[A-Za-z]', text[-1]):
- text += '.'
- return text
-
-
-def japanese_cleaners2(text):
- return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…')
-
-
-def korean_cleaners(text):
- '''Pipeline for Korean text (requires latin_to_hangul, number_to_hangul, divide_hangul from text.korean, not imported above)'''
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text = divide_hangul(text)
- if re.match('[\u3131-\u3163]', text[-1]):
- text += '.'
- return text
-
-
-def chinese_cleaners(text):
- '''Pipeline for Chinese text'''
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- if re.match('[ˉˊˇˋ˙]', text[-1]):
- text += '。'
- return text
-
-
-def zh_ja_mixture_cleaners(text):
- chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text)
- japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text)
- for chinese_text in chinese_texts:
- cleaned_text = chinese_to_romaji(chinese_text[4:-4])
- text = text.replace(chinese_text, cleaned_text+' ', 1)
- for japanese_text in japanese_texts:
- cleaned_text = japanese_to_romaji_with_accent(
- japanese_text[4:-4]).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')
- text = text.replace(japanese_text, cleaned_text+' ', 1)
- text = text[:-1]
- if re.match('[A-Za-zɯɹəɥ→↓↑]', text[-1]):
- text += '.'
- return text
-
-
-def sanskrit_cleaners(text):
- text = text.replace('॥', '।').replace('ॐ', 'ओम्')
- if text[-1] != '।':
- text += ' ।'
- return text
-
-
-def cjks_cleaners(text):
- chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text)
- japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text)
- korean_texts = re.findall(r'\[KO\].*?\[KO\]', text)
- sanskrit_texts = re.findall(r'\[SA\].*?\[SA\]', text)
- english_texts = re.findall(r'\[EN\].*?\[EN\]', text)
- for chinese_text in chinese_texts:
- cleaned_text = chinese_to_lazy_ipa(chinese_text[4:-4])
- text = text.replace(chinese_text, cleaned_text+' ', 1)
- for japanese_text in japanese_texts:
- cleaned_text = japanese_to_ipa(japanese_text[4:-4])
- text = text.replace(japanese_text, cleaned_text+' ', 1)
- for korean_text in korean_texts:
- cleaned_text = korean_to_lazy_ipa(korean_text[4:-4])
- text = text.replace(korean_text, cleaned_text+' ', 1)
- for sanskrit_text in sanskrit_texts:
- cleaned_text = devanagari_to_ipa(sanskrit_text[4:-4])
- text = text.replace(sanskrit_text, cleaned_text+' ', 1)
- for english_text in english_texts:
- cleaned_text = english_to_lazy_ipa(english_text[4:-4])
- text = text.replace(english_text, cleaned_text+' ', 1)
- text = text[:-1]
- if re.match(r'[^\.,!\?\-…~]', text[-1]):
- text += '.'
- return text
-
-
-def cjke_cleaners(text):
- chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text)
- japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text)
- korean_texts = re.findall(r'\[KO\].*?\[KO\]', text)
- english_texts = re.findall(r'\[EN\].*?\[EN\]', text)
- for chinese_text in chinese_texts:
- cleaned_text = chinese_to_lazy_ipa(chinese_text[4:-4])
- cleaned_text = cleaned_text.replace(
- 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')
- text = text.replace(chinese_text, cleaned_text+' ', 1)
- for japanese_text in japanese_texts:
- cleaned_text = japanese_to_ipa(japanese_text[4:-4])
- cleaned_text = cleaned_text.replace('ʧ', 'tʃ').replace(
- 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')
- text = text.replace(japanese_text, cleaned_text+' ', 1)
- for korean_text in korean_texts:
- cleaned_text = korean_to_ipa(korean_text[4:-4])
- text = text.replace(korean_text, cleaned_text+' ', 1)
- #for english_text in english_texts:
- # cleaned_text = english_to_ipa2(english_text[4:-4])
- # cleaned_text = cleaned_text.replace('ɑ', 'a').replace(
- # 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')
- # text = text.replace(english_text, cleaned_text+' ', 1)
- text = text[:-1]
- if re.match(r'[^\.,!\?\-…~]', text[-1]):
- text += '.'
- return text
-
-
-def cjke_cleaners2(text):
- chinese_texts = re.findall(r'\[ZH\].*?\[ZH\]', text)
- japanese_texts = re.findall(r'\[JA\].*?\[JA\]', text)
- korean_texts = re.findall(r'\[KO\].*?\[KO\]', text)
- english_texts = re.findall(r'\[EN\].*?\[EN\]', text)
- for chinese_text in chinese_texts:
- cleaned_text = chinese_to_ipa(chinese_text[4:-4])
- text = text.replace(chinese_text, cleaned_text+' ', 1)
- for japanese_text in japanese_texts:
- cleaned_text = japanese_to_ipa2(japanese_text[4:-4])
- text = text.replace(japanese_text, cleaned_text+' ', 1)
- for korean_text in korean_texts:
- cleaned_text = korean_to_ipa(korean_text[4:-4])
- text = text.replace(korean_text, cleaned_text+' ', 1)
- for english_text in english_texts:
- cleaned_text = english_to_ipa2(english_text[4:-4])
- text = text.replace(english_text, cleaned_text+' ', 1)
- text = text[:-1]
- if re.match(r'[^\.,!\?\-…~]', text[-1]):
- text += '.'
- return text
-
-
-def thai_cleaners(text):
- text = num_to_thai(text)
- text = latin_to_thai(text)
- return text
-
-
-def shanghainese_cleaners(text):
- text = shanghainese_to_ipa(text)
- if re.match(r'[^\.,!\?\-…~]', text[-1]):
- text += '.'
- return text
-
-
-def chinese_dialect_cleaners(text):
- text = re.sub(r'\[MD\](.*?)\[MD\]',
- lambda x: chinese_to_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\[TW\](.*?)\[TW\]',
- lambda x: chinese_to_ipa2(x.group(1), True)+' ', text)
- text = re.sub(r'\[JA\](.*?)\[JA\]',
- lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text)
- text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5',
- '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text)
- text = re.sub(r'\[GD\](.*?)\[GD\]',
- lambda x: cantonese_to_ipa(x.group(1))+' ', text)
- text = re.sub(r'\[EN\](.*?)\[EN\]',
- lambda x: english_to_lazy_ipa2(x.group(1))+' ', text)
- text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group(
- 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text)
- text = re.sub(r'\s+$', '', text)
- text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text)
- return text
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/dataset/range_transform.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/dataset/range_transform.py
deleted file mode 100644
index ae1b0b3b2a01a061b9b2220a93cdf7f7a6357bfb..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/dataset/range_transform.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import torchvision.transforms as transforms
-
-im_mean = (124, 116, 104)
-
-im_normalization = transforms.Normalize(
- mean=[0.485, 0.456, 0.406],
- std=[0.229, 0.224, 0.225]
- )
-
-inv_im_trans = transforms.Normalize(
- mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
- std=[1/0.229, 1/0.224, 1/0.225])
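The `inv_im_trans` constants are the algebraic inverse of `im_normalization`: if `y = (x - m) / s`, then applying a second `Normalize` with mean `-m/s` and std `1/s` gives `(y + m/s) * s = x`. A quick pure-Python check of that identity (no torchvision needed):

```python
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

def normalize(x, m, s):
    # Same per-channel operation torchvision's Normalize performs.
    return (x - m) / s

x = 0.7  # an arbitrary pixel value
for m, s in zip(mean, std):
    y = normalize(x, m, s)                # forward ImageNet normalization
    x_back = normalize(y, -m / s, 1 / s)  # inv_im_trans parameters
    assert abs(x_back - x) < 1e-12
print("inverse recovers the input")
```

This is why the inverse transform needs no extra state: it is just another `Normalize` with derived parameters.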
diff --git a/spaces/MarkMcCormack/Automated-Grading-Dashboard/app.py b/spaces/MarkMcCormack/Automated-Grading-Dashboard/app.py
deleted file mode 100644
index bc2f7ca7387acda506545b95de28e83536eb0dd4..0000000000000000000000000000000000000000
--- a/spaces/MarkMcCormack/Automated-Grading-Dashboard/app.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import streamlit as st
-from studentDashboard import run as studentDashboardRun
-from teacherDashboard import run as teacherDashboardRun
-from langchain.llms import OpenAI
-from langchain.chains import LLMChain
-import pymongo
-import utils
-
-st.set_page_config(
- page_title="LLM/GPT Teacher Dashboard for Pedagogical Analysis of Students",
- page_icon="🧮",
- layout="wide",
-)
-
-left, right = st.columns(2)
-
-def main():
- import studentDashboard as studentDashboard
-
- st.title("🤖🧮 LLM/GPT Teacher Dashboard for Pedagogical Analysis of Students")
-
- with left:
- api_key = st.text_input("Enter your API key", type="password")
-
- if st.button("Submit API Key!"):
- if api_key:
- studentDashboard.llmOpenAI = OpenAI(openai_api_key=api_key, temperature=0.9)
- studentDashboard.chain = LLMChain(llm=studentDashboard.llmOpenAI, prompt=studentDashboard.promptTemplate)
- st.success("API key submitted successfully!")
- else:
- st.error("Please enter your API key.")
-
- with right:
- db_credentials = st.text_input("Enter your MongoDB credentials", type="password")
-
- if st.button("Submit Credentials!") and not utils.database:
- if db_credentials:
- utils.database = pymongo.MongoClient(db_credentials)
- st.success("Credentials submitted successfully!")
- else:
- st.error("Please enter your credentials.")
-
- studentDashboard, teacherDashboard, studentGroup = st.tabs([
- "Individual Student Profile",
- "Teacher Classroom Dashboard",
- "Student Group Dashboard"
- ])
-
- with studentDashboard:
- studentDashboardRun()
-
- with teacherDashboard:
- teacherDashboardRun()
-
- with studentGroup:
- #run()
- pass
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/Mayanand/emotion-recognition/face_module.py b/spaces/Mayanand/emotion-recognition/face_module.py
deleted file mode 100644
index 19c83c1cd245d4a92db5215a1ad2649a14ebf1f6..0000000000000000000000000000000000000000
--- a/spaces/Mayanand/emotion-recognition/face_module.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import mediapipe as mp
-mp_face_detection = mp.solutions.face_detection
-
-def get_face_coords(image):
- with mp_face_detection.FaceDetection(
- model_selection=1, min_detection_confidence=0.5) as face_detection:
- #image = cv2.imread(file)
- # Convert the BGR image to RGB and process it with MediaPipe Face Detection.
- results = face_detection.process(image)
- # Draw face detections of each face.
- if not results.detections:
-            return False  # no face detected; callers must check before unpacking coords
-
- # shape of image
- h, w, _ = image.shape
-
- t = results.detections[0].location_data.relative_bounding_box
- height = t.height * h
- ymin = t.ymin * h
- width = t.width * w
- xmin = t.xmin * w
- xmax = xmin + width
- ymax = ymin + height
- return int(xmin), int(ymin), int(xmax), int(ymax)
\ No newline at end of file
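MediaPipe reports `relative_bounding_box` fields as fractions of the image size, so the conversion above is only a scale by width and height followed by truncation to `int`. The same arithmetic in a standalone helper (no MediaPipe required; the function name is illustrative):

```python
def rel_bbox_to_pixels(xmin, ymin, width, height, img_w, img_h):
    # Scale a relative bounding box into integer pixel coordinates,
    # returning (xmin, ymin, xmax, ymax) as get_face_coords does.
    x0 = xmin * img_w
    y0 = ymin * img_h
    return int(x0), int(y0), int(x0 + width * img_w), int(y0 + height * img_h)

print(rel_bbox_to_pixels(0.25, 0.1, 0.5, 0.5, 640, 480))  # (160, 48, 480, 288)
```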
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv_custom/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv_custom/__init__.py
deleted file mode 100644
index 4b958738b9fd93bfcec239c550df1d9a44b8c536..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv_custom/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from .checkpoint import load_checkpoint
-
-__all__ = ['load_checkpoint']
\ No newline at end of file
diff --git a/spaces/Milancheeks/AI_Music_Team/app.py b/spaces/Milancheeks/AI_Music_Team/app.py
deleted file mode 100644
index de81bdc0581576427849217a82ca2121a035f5a8..0000000000000000000000000000000000000000
--- a/spaces/Milancheeks/AI_Music_Team/app.py
+++ /dev/null
@@ -1,307 +0,0 @@
-import openai
-
-
-ai_role_dict = {
-"music_director": "You are an Experienced Music Director who has 15+ Years experience in the industry",
-"lyricist": "You are an Experienced Lyricist, who has written hit songs in several languages",
-"freelance_lyricist": "You are an Experienced Freelance Lyricist, who has helped writing songs in several languages",
-"music_composer": "You are an Experienced Music Composer, who has composed songs of several genre and arrangements over the years",
-"sound_engineer": "You are an Experienced Sound Engineer, who can provide expert feedback on the arrangement being used."
-}
-
-languages = [
-"Afrikaans",
-"Albanian",
-"Amharic",
-"Arabic",
-"Armenian",
-"Assamese",
-"Aymara",
-"Azerbaijani",
-"Bhojpuri",
-"Basque",
-"Belarusian",
-"Bengali",
-"Bambara",
-"Bosnian",
-"Bulgarian",
-"Burmese (Myanmar)",
-"Catalan",
-"Cebuano",
-"Chewa (Chichewa)",
-"Chinese (Simplified)",
-"Chinese (Traditional)",
-"Corsican",
-"Croatian",
-"Czech",
-"Danish",
-"Dogri",
-"Dutch",
-"English",
-"Esperanto",
-"Estonian",
-"Ewe",
-"Finnish",
-"French",
-"Galician",
-"Georgian",
-"German",
-"Greek",
-"Guarani",
-"Gujarati",
-"Haitian Creole",
-"Hausa",
-"Hawaiian",
-"Hebrew",
-"Hindi",
-"Hmong",
-"Hungarian",
-"Icelandic",
-"Igbo",
-"Ilocano",
-"Indonesian",
-"Irish",
-"Italian",
-"Japanese",
-"Javanese",
-"Kannada",
-"Kazakh",
-"Khmer",
-"Kinyarwanda",
-"Konkani",
-"Korean",
-"Krio",
-"Kurdish (Kurmanji)",
-"Kurdish (Sorani)",
-"Kyrgyz",
-"Lao",
-"Latin",
-"Latvian",
-"Lingala",
-"Lithuanian",
-"Luganda",
-"Luxembourgish",
-"Macedonian",
-"Maithili",
-"Malagasy",
-"Malay",
-"Malayalam",
-"Maldivian (Dhivehi)",
-"Maltese",
-"Māori (Maori)",
-"Marathi",
-"Meitei (Manipuri, Meiteilon)",
-"Mizo",
-"Mongolian",
-"Nepali",
-"Northern Sotho (Sepedi)",
-"Norwegian (Bokmål)",
-"Odia (Oriya)",
-"Oromo",
-"Pashto",
-"Persian",
-"Polish",
-"Portuguese",
-"Punjabi (Gurmukhi)",
-"Quechua",
-"Romanian",
-"Russian",
-"Samoan",
-"Sanskrit",
-"Scottish Gaelic (Scots Gaelic)",
-"Serbian",
-"Shona",
-"Sindhi",
-"Sinhala",
-"Slovak",
-"Slovenian",
-"Somali",
-"Sotho (Sesotho)",
-"Spanish",
-"Sundanese",
-"Swahili",
-"Swedish",
-"Tagalog (Filipino)",
-"Tajik",
-"Tamil",
-"Tatar",
-"Telugu",
-"Thai",
-"Tigrinya",
-"Tsonga",
-"Turkish",
-"Turkmen",
-"Twi",
-"Ukrainian",
-"Urdu",
-"Uyghur",
-"Uzbek",
-"Vietnamese",
-"Welsh",
-"West Frisian (Frisian)",
-"Xhosa",
-"Yiddish",
-"Yoruba",
-"Zulu"
-]
-
-from tenacity import (
- retry,
- stop_after_attempt,
- wait_random_exponential,
-) # for exponential backoff
-
-@retry(wait=wait_random_exponential(min=1, max=100), stop=stop_after_attempt(8))
-def get_response(ai_role, query, model):
-
- response = openai.ChatCompletion.create(
- model=model,
- messages=[
- {"role": "system", "content": "{}".format(ai_role)},
- {"role": "user", "content": "{}".format(query)},
- ]
- )
-
- return response['choices'][0]['message']['content']
-
-def write_intermediate_outputs(filename, text):
- with open(filename, 'w') as fw:
- fw.write(text)
-
-    sample_file_path = f'./{filename}'
-
- return sample_file_path
-
-def write_and_compose(model, api_key, language, genre, keywords, emotion):
- openai.api_key = api_key
- initial_lyrics = get_response(ai_role_dict['freelance_lyricist'], "Write structured lyrics of a {} {} song with the following keywords - {}, and use the following emotion - {}".format(language, genre, keywords, emotion), model)
-
- query_feedback = '''The Freelance Lyricist submitted these lyrics:
-
-{}
-
-Provide suitable feedback (in bullet-points)
-'''
-
- feedback1 = get_response(ai_role_dict['music_director'], query_feedback.format(initial_lyrics), model)
- feedback2 = get_response(ai_role_dict['lyricist'], query_feedback.format(initial_lyrics), model)
-
- # Workflow: Step 3
-
- feedback = '''After seeing the lyrics you initially submitted -
-
-{}
-
-the music director provided the following feedback -
-{}
-
-the lyricist provided the following feedback as well -
-{}
-
-Incorporate this feedback, and make suggested changes to the lyrics based on the feedback only
-'''
-
- final_lyrics = get_response(ai_role_dict['freelance_lyricist'], feedback.format(initial_lyrics, feedback1, feedback2), model)
-
- # Workflow: Step 4
-
- query_composer = '''Given the lyrics of the {} {} song on {} in the emotion - {} -
-
-{}
-
-write a suitable chord progression (for each line of the same lyrics), followed by the suitable arrangement required to sing and record the song (in bullet points)'''
-
- composition_1 = get_response(ai_role_dict['music_composer'], query_composer.format(language, genre, keywords, emotion, final_lyrics), model)
-
- query_sound_engineer = '''Given the lyrics of the {} {} song on {} in the emotion - {} -
-
-{}
-
-with a Chord Progression and Arrangement (suggested by the Music Composer) -
-
-{}
-
-could you write improvements that could be made to the Arrangement (in bullet points)? If the current arrangement is up to the mark, write "No change in the arrangement required"
-'''
-
- composition_2 = get_response(ai_role_dict['sound_engineer'], query_sound_engineer.format(language, genre, keywords, emotion, final_lyrics, composition_1), model)
-
- final_query = '''Given the lyrics of the {} {} song on {} in the emotion - {} -
-
-{}
-
-with a Chord Progression and Arrangement (suggested by the Music Composer) -
-
-{}
-
-and further improvements on the Arrangement (suggested by the Sound Engineer)
-
-{}
-
-- suggest any further improvements that could be made to the (a) Chord Progression (b) Arrangement.
-- After that, Write 10 "="s in the next line
-- After that, Write the final Chord Progression and Arrangement
-- Also, write a suitable title for the song
-
-'''
-
- final_response = get_response(ai_role_dict['music_director'], final_query.format(language, genre, keywords, emotion, final_lyrics, composition_1, composition_2), model)
-
- final_improvements = final_response.split('==========')[0]
-
- final_chord_prog_and_composition = final_response.split('==========')[-1]
-
- # return initial_lyrics, feedback1, feedback2, final_lyrics, composition_1, composition_2, final_improvements, final_chord_prog_and_composition
-
- output_file_list = []
- output_file_list.append(write_intermediate_outputs('step_2.txt', initial_lyrics))
- output_file_list.append(write_intermediate_outputs('step_3A.txt', feedback1))
- output_file_list.append(write_intermediate_outputs('step_3B.txt', feedback2))
- output_file_list.append(write_intermediate_outputs('step_5.txt', composition_1))
- output_file_list.append(write_intermediate_outputs('step_6.txt', composition_2))
- output_file_list.append(write_intermediate_outputs('step_7.txt', final_improvements))
-
- return final_lyrics, final_chord_prog_and_composition, output_file_list
-
-import gradio as gr
-
-description = '''
-
-# Objective -
-
-Given specific Language, Genre, Keywords, and Emotion of your choice, make a Brand New Song without lifting a finger!
-
-1. Get lyrics of a new song
-2. Get a suitable chord progression
-3. Get a suitable musical arrangement for singing and recording the song
-4. Cherry on top - Get a suitable song title!
-
-
-# AI Music Team is composed of several GPT agents with the following "personas" -
-
-1. Experienced Music Director who has 15+ years of experience in the industry
-2. Experienced Lyricist, who has written hit songs in several languages
-3. Experienced Freelance Lyricist, who has helped write songs in several languages
-4. Experienced Music Composer, who has composed songs of several genres and arrangements over the years
-5. Experienced Sound Engineer, who can provide expert feedback on the arrangement being used
-
-
-# Workflow (Intermediate outputs/results are output as downloadable files) -
-
-1. Get Inputs from user (OpenAI API Endpoint, API Key, language, keywords, genre, emotion for the song). Check out [this link](https://platform.openai.com/account/api-keys) to get your API Key
-2. Experienced Freelance Lyricist writes a lyrics draft (**see `step_2.txt`**)
-3. Experienced Music Director and Experienced Lyricist provide feedback (**see `step_3A.txt` & `step_3B.txt` respectively**)
-4. Experienced Freelance Lyricist incorporates the feedback; **lyrics are finalized here**
-5. Experienced Music Composer will provide a chord progression, and an arrangement of instruments (**see `step_5.txt`**)
-6. Experienced Sound Engineer will provide ways to improve on the existing arrangement (**see `step_6.txt`**)
-7. Finally, Music Director will provide improvements (**see `step_7.txt`**), resulting in the **final Chord Progression, Arrangement, and Song Title**
-
-'''
-
-demo = gr.Interface(title = 'Write and Compose brand new Songs using an Elite *AI Music Team*', description = description,
- fn=write_and_compose,
- inputs=[gr.Radio(["gpt-3.5-turbo", "gpt-4"], value="gpt-3.5-turbo", label = "Choose the OpenAI API Endpoint"), gr.Textbox(label="API Key (Check out [this link](https://platform.openai.com/account/api-keys) to get your API Key)"), gr.Dropdown(choices=languages, value='English', label="Language of the lyrics"), gr.Textbox(label="Genre"), gr.Textbox(label="Keywords (separated by comma)"), gr.Textbox(label="Emotion")], # model, api_key, language, genre, keywords, emotion
- # outputs=[gr.Textbox(label="Lyrics after Step #2"), gr.Textbox(label="Feedback provided by Music Director in Step #3"), gr.Textbox(label="Feedback provided by Lyricist in Step #3"), gr.Textbox(label="Final Lyrics of the song after Step #4"), gr.Textbox(label="Chord Progression and Arrangement suggested by Music Composer in Step #5"), gr.Textbox(label="Arrangement improvements suggested by Sound Engineer in Step #6"), gr.Textbox(label="Chord and Arrangement improvements suggested by Music Director in Step #7"), gr.Textbox(label="Final Chord Progression, Arrangment, and Song Title")], # initial_lyrics, feedback1, feedback2, final_lyrics, composition_1, composition_2, final_improvements, final_chord_prog_and_composition
- outputs=[gr.Textbox(label="Final Lyrics of the song after Step #4"), gr.Textbox(label="Final Chord Progression, Arrangement, and Song Title"), gr.File(label='Intermediate Outputs')], # initial_lyrics, feedback1, feedback2, final_lyrics, composition_1, composition_2, final_improvements, final_chord_prog_and_composition
-)
-demo.launch()
\ No newline at end of file
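The draft → feedback → revision workflow in `write_and_compose` is plain sequential prompting: each persona's output is spliced verbatim into the next persona's prompt. The shape of that loop can be sketched with a stubbed `get_response` (the real app calls the OpenAI chat API; the stub below is purely illustrative):

```python
def get_response(role, query):
    # Stub standing in for the OpenAI call in app.py; it echoes just
    # enough structure to show how one reply feeds the next prompt.
    return f"[{role}] reply to: {query[:30]}"

draft = get_response("freelance_lyricist", "Write lyrics about rain")
feedback = get_response(
    "music_director",
    "The Freelance Lyricist submitted:\n{}\nProvide feedback".format(draft))
final = get_response(
    "freelance_lyricist",
    "Original:\n{}\nFeedback:\n{}\nRevise accordingly".format(draft, feedback))
print(final.startswith("[freelance_lyricist]"))
```

Because every step is an independent completion, any stage's output can also be written to disk as an intermediate artifact, which is what the `step_*.txt` files above do.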
diff --git a/spaces/MirageML/lowpoly-landscape/README.md b/spaces/MirageML/lowpoly-landscape/README.md
deleted file mode 100644
index 6191898eafb27be3c9417560ca2881780e458888..0000000000000000000000000000000000000000
--- a/spaces/MirageML/lowpoly-landscape/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Lowpoly Landscape
-emoji: 😻
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/_base_sdmgr_novisual.py b/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/_base_sdmgr_novisual.py
deleted file mode 100644
index 5e85de2f78f020bd5695858098ad143dbbd09ed0..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/kie/sdmgr/_base_sdmgr_novisual.py
+++ /dev/null
@@ -1,35 +0,0 @@
-num_classes = 26
-
-model = dict(
- type='SDMGR',
- kie_head=dict(
- type='SDMGRHead',
- visual_dim=16,
- num_classes=num_classes,
- module_loss=dict(type='SDMGRModuleLoss'),
- postprocessor=dict(type='SDMGRPostProcessor')),
- dictionary=dict(
- type='Dictionary',
- dict_file='{{ fileDirname }}/../../../dicts/sdmgr_dict.txt',
- with_padding=True,
- with_unknown=True,
- unknown_token=None),
-)
-
-train_pipeline = [
- dict(type='LoadKIEAnnotations'),
- dict(type='Resize', scale=(1024, 512), keep_ratio=True),
- dict(type='PackKIEInputs')
-]
-test_pipeline = [
- dict(type='LoadKIEAnnotations'),
- dict(type='Resize', scale=(1024, 512), keep_ratio=True),
- dict(type='PackKIEInputs'),
-]
-
-val_evaluator = dict(
- type='F1Metric',
- mode='macro',
- num_classes=num_classes,
- ignored_classes=[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 25])
-test_evaluator = val_evaluator
diff --git a/spaces/NN520/AI/src/pages/api/blob.ts b/spaces/NN520/AI/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/Nee001/bing0/src/components/chat-history.tsx b/spaces/Nee001/bing0/src/components/chat-history.tsx
deleted file mode 100644
index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/components/chat-history.tsx
+++ /dev/null
@@ -1,48 +0,0 @@
-import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons"
-
-export function ChatHistory() {
- return (
-    <div>
-      <div>历史记录</div>
-      <div>
-        <div>无标题的聊天</div>
-        <div>上午1:42</div>
-        <IconEdit />
-        <IconTrash />
-        <IconMore />
-        <IconDownload />
-      </div>
-    </div>
- )
-}
diff --git a/spaces/Nephele/bert-vits2-multi-voice/models.py b/spaces/Nephele/bert-vits2-multi-voice/models.py
deleted file mode 100644
index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000
--- a/spaces/Nephele/bert-vits2-multi-voice/models.py
+++ /dev/null
@@ -1,707 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-
-from commons import init_weights, get_padding
-from text import symbols, num_tones, num_languages
-class DurationDiscriminator(nn.Module): #vits2
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.dur_proj = nn.Conv1d(1, filter_channels, 1)
-
- self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_1 = modules.LayerNorm(filter_channels)
- self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.pre_out_norm_2 = modules.LayerNorm(filter_channels)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- self.output_layer = nn.Sequential(
- nn.Linear(filter_channels, 1),
- nn.Sigmoid()
- )
-
- def forward_probability(self, x, x_mask, dur, g=None):
- dur = self.dur_proj(dur)
- x = torch.cat([x, dur], dim=1)
- x = self.pre_out_conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_1(x)
- x = self.drop(x)
- x = self.pre_out_conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.pre_out_norm_2(x)
- x = self.drop(x)
- x = x * x_mask
- x = x.transpose(1, 2)
- output_prob = self.output_layer(x)
- return output_prob
-
- def forward(self, x, x_mask, dur_r, dur_hat, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
-
- output_probs = []
- for dur in [dur_r, dur_hat]:
- output_prob = self.forward_probability(x, x_mask, dur, g)
- output_probs.append(output_prob)
-
- return output_probs
-
-class TransformerCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- n_flows=4,
- gin_channels=0,
- share_parameter=False
- ):
-
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
-
- self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None
-
- for i in range(n_flows):
- self.flows.append(
- modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # it needs to be removed from future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
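The stack above multiplies every intermediate activation by `x_mask`, so padded frames never leak into the convolutions. A minimal NumPy sketch of this masking convention (the `[b, 1, t]` mask broadcasting over `[b, c, t]` features mirrors the shapes used in this file; the variable names are illustrative):

```python
import numpy as np

# Batch of two sequences, 3 channels, max length 5; the second sequence has length 3.
lengths = np.array([5, 3])
t_max = 5
x = np.random.randn(2, 3, t_max)

# Build a [b, 1, t] mask from the lengths (same role as commons.sequence_mask here).
x_mask = (np.arange(t_max)[None, None, :] < lengths[:, None, None]).astype(x.dtype)

# Multiplying before (and after) each layer zeroes out the padded frames.
x_masked = x * x_mask
```

Re-applying the mask after each conv is what keeps the padding from contaminating later layers.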
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=0):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
- self.emb = nn.Embedding(len(symbols), hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
- self.tone_emb = nn.Embedding(num_tones, hidden_channels)
- nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5)
- self.language_emb = nn.Embedding(num_languages, hidden_channels)
- nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5)
- self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, tone, language, bert, g=None):
- x = (self.emb(x) + self.tone_emb(tone) + self.language_emb(language)
-      + self.bert_proj(bert).transpose(1, 2)) * math.sqrt(self.hidden_channels)  # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask, g=g)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
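Each `ResidualCouplingLayer` above transforms half the channels conditioned on the other, untouched half, which is what makes the block exactly invertible (the layers here use `mean_only=True`, i.e. shift-only, but the inversion logic is the same). A toy NumPy affine coupling, with a hypothetical linear conditioner standing in for the WaveNet module, shows the forward/reverse round trip:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1, 2))  # stand-in for the learned conditioner weights

def conditioner(x0):
    # hypothetical conditioner: returns (log-scale, shift) from the untouched half
    h = x0 @ W
    return h[..., :1], h[..., 1:]

def coupling_forward(x):
    x0, x1 = x[..., :1], x[..., 1:]
    logs, t = conditioner(x0)
    return np.concatenate([x0, x1 * np.exp(logs) + t], axis=-1)

def coupling_reverse(y):
    y0, y1 = y[..., :1], y[..., 1:]
    logs, t = conditioner(y0)  # y0 == x0, so the same (logs, t) are recovered
    return np.concatenate([y0, (y1 - t) * np.exp(-logs)], axis=-1)

x = rng.standard_normal((4, 2))
assert np.allclose(coupling_reverse(coupling_forward(x)), x)
```

Because the conditioner only ever sees the half that passes through unchanged, the reverse pass can recompute the exact scale and shift and undo them.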
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
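The sampling line `z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask` is the reparameterization trick: it draws from N(m, exp(logs)^2) while keeping gradients flowing through `m` and `logs`. A NumPy sketch (shapes follow the `[b, d, t]` convention in this file):

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal((2, 4, 8))   # predicted mean,    [b, d, t]
logs = np.full((2, 4, 8), -20.0)     # predicted log-std; exp(-20) is ~0

z = m + rng.standard_normal(m.shape) * np.exp(logs)

# As the log-std goes to -inf, the sample collapses onto the mean.
assert np.allclose(z, m, atol=1e-6)
```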
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
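The `1d to 2d` step above reflect-pads the waveform to a multiple of `period` and folds it into a `[b, c, t // period, period]` grid, so the 2D convolutions see the signal's periodic structure along one axis. A NumPy sketch of just that reshaping (the `% period` form of the pad amount is a slight generalization of the code's `if` branch):

```python
import numpy as np

period = 3
b, c, t = 1, 1, 10
x = np.arange(t, dtype=np.float64).reshape(b, c, t)

n_pad = (period - t % period) % period  # 2 extra samples here
x = np.pad(x, ((0, 0), (0, 0), (0, n_pad)), mode="reflect")
x2d = x.reshape(b, c, (t + n_pad) // period, period)

assert x2d.shape == (1, 1, 4, 3)
```

Each row of the last two axes is one period-length chunk; the final row `[9, 8, 7]` comes from the reflect padding.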
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-class ReferenceEncoder(nn.Module):
- '''
- inputs --- [N, Ty/r, n_mels*r] mels
- outputs --- [N, ref_enc_gru_size]
- '''
-
- def __init__(self, spec_channels, gin_channels=0):
-
- super().__init__()
- self.spec_channels = spec_channels
- ref_enc_filters = [32, 32, 64, 64, 128, 128]
- K = len(ref_enc_filters)
- filters = [1] + ref_enc_filters
- convs = [weight_norm(nn.Conv2d(in_channels=filters[i],
- out_channels=filters[i + 1],
- kernel_size=(3, 3),
- stride=(2, 2),
- padding=(1, 1))) for i in range(K)]
- self.convs = nn.ModuleList(convs)
- # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)])
-
- out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
- self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels,
- hidden_size=256 // 2,
- batch_first=True)
- self.proj = nn.Linear(128, gin_channels)
-
- def forward(self, inputs, mask=None):
- N = inputs.size(0)
- out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
- for conv in self.convs:
- out = conv(out)
- # out = wn(out)
- out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
-
- out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
- T = out.size(1)
- N = out.size(0)
- out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
-
- self.gru.flatten_parameters()
- memory, out = self.gru(out) # out --- [1, N, 128]
-
- return self.proj(out.squeeze(0))
-
- def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
- for i in range(n_convs):
- L = (L - kernel_size + 2 * pad) // stride + 1
- return L
-
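`calculate_channels` repeatedly applies the standard conv output-size formula `L_out = (L_in - k + 2p) // s + 1` to find how far the K strided convs shrink the frequency axis before the GRU. For the reference encoder's six stride-2, kernel-3, padding-1 convs, a pure-Python check:

```python
def conv_out_len(L, kernel_size=3, stride=2, pad=1, n_convs=6):
    # same recurrence as ReferenceEncoder.calculate_channels
    for _ in range(n_convs):
        L = (L - kernel_size + 2 * pad) // stride + 1
    return L

# e.g. 128 spectrogram channels shrink to 2 after the six convs
assert conv_out_len(128) == 2
```

This value, times the last filter count, is the GRU's `input_size`.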
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=256,
- gin_channels=256,
- use_sdp=True,
- n_flow_layer = 4,
- n_layers_trans_flow = 3,
- flow_share_parameter = False,
- use_transformer_flow = True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
- self.n_layers_trans_flow = n_layers_trans_flow
- self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True)
- self.use_sdp = use_sdp
- self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
- self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
- self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
- self.current_mas_noise_scale = self.mas_noise_scale_initial
- if self.use_spk_conditioned_encoder and gin_channels > 0:
- self.enc_gin_channels = gin_channels
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- gin_channels=self.enc_gin_channels)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- if use_transformer_flow:
- self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter)
- else:
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels)
- self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers >= 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
- else:
- self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert):
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
- if self.use_noise_scaled_mas:
- epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale
- neg_cent = neg_cent + epsilon
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
-
- l_length_sdp = self.sdp(x, x_mask, w, g=g)
- l_length_sdp = l_length_sdp / torch.sum(x_mask)
-
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- l_length = l_length_dp + l_length_sdp
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_)
-
- def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None):
- #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
- # g = self.gst(y)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1)
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g)
- logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)  # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
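At inference the predicted log-durations are exponentiated, ceiled, and expanded into a hard monotonic alignment (`commons.generate_path` in this codebase). The expansion can be sketched with cumulative sums; this reimplementation is illustrative, not the library function:

```python
import numpy as np

def generate_path(durations):
    """durations: [t_tokens] ints -> attn: [t_frames, t_tokens] 0/1 monotonic path."""
    cum = np.cumsum(durations)
    t_frames = int(cum[-1])
    frame_idx = np.arange(t_frames)[:, None]
    starts = np.concatenate([[0], cum[:-1]])[None, :]
    # frame i attends to token j iff cum[j-1] <= i < cum[j]
    attn = (frame_idx >= starts) & (frame_idx < cum[None, :])
    return attn.astype(np.float64)

attn = generate_path(np.array([2, 1, 3]))
assert attn.shape == (6, 3)
assert np.all(attn.sum(axis=1) == 1)          # each frame maps to exactly one token
assert np.all(attn.sum(axis=0) == [2, 1, 3])  # each token gets its predicted duration
```

Multiplying this matrix with `m_p` and `logs_p` (as in the `torch.matmul` lines above) repeats each token's prior statistics for its duration.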
diff --git a/spaces/NikeZoldyck/green-screen-composition-transfer/README.md b/spaces/NikeZoldyck/green-screen-composition-transfer/README.md
deleted file mode 100644
index ef17d16684705a79da2afb0d2e3298e089c71964..0000000000000000000000000000000000000000
--- a/spaces/NikeZoldyck/green-screen-composition-transfer/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Green Screen Composition Transfer
-emoji: 🌍
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NimaBoscarino/climategan/eval_masker.py b/spaces/NimaBoscarino/climategan/eval_masker.py
deleted file mode 100644
index 72b0671b2a62f72da6e600f929b4c735e5e3a5cc..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/eval_masker.py
+++ /dev/null
@@ -1,796 +0,0 @@
-"""
-Compute metrics of the performance of the masker using a set of ground-truth labels
-
-run eval_masker.py --model "/miniscratch/_groups/ccai/checkpoints/model/"
-
-"""
-print("Imports...", end="")
-import os
-import os.path
-from argparse import ArgumentParser
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-from comet_ml import Experiment
-import torch
-import yaml
-from skimage.color import rgba2rgb
-from skimage.io import imread, imsave
-from skimage.transform import resize
-from skimage.util import img_as_ubyte
-from torchvision.transforms import ToTensor
-
-from climategan.data import encode_mask_label
-from climategan.eval_metrics import (
- masker_classification_metrics,
- get_confusion_matrix,
- edges_coherence_std_min,
- boxplot_metric,
- clustermap_metric,
-)
-from climategan.transforms import PrepareTest
-from climategan.trainer import Trainer
-from climategan.utils import find_images
-
-dict_metrics = {
- "names": {
- "tpr": "TPR, Recall, Sensitivity",
- "tnr": "TNR, Specificity, Selectivity",
- "fpr": "FPR",
- "fpt": "False positives relative to image size",
- "fnr": "FNR, Miss rate",
- "fnt": "False negatives relative to image size",
- "mpr": "May positive rate (MPR)",
- "mnr": "May negative rate (MNR)",
- "accuracy": "Accuracy (ignoring may)",
- "error": "Error (ignoring may)",
- "f05": "F0.05 score",
- "precision": "Precision",
- "edge_coherence": "Edge coherence",
- "accuracy_must_may": "Accuracy (ignoring cannot)",
- },
- "threshold": {
- "tpr": 0.95,
- "tnr": 0.95,
- "fpr": 0.05,
- "fpt": 0.01,
- "fnr": 0.05,
- "fnt": 0.01,
- "accuracy": 0.95,
- "error": 0.05,
- "f05": 0.95,
- "precision": 0.95,
- "edge_coherence": 0.02,
- "accuracy_must_may": 0.5,
- },
- "key_metrics": ["f05", "error", "edge_coherence", "mnr"],
-}
-
-print("Ok.")
-
-
-def parsed_args():
- """Parse and returns command-line args
-
- Returns:
- argparse.Namespace: the parsed arguments
- """
- parser = ArgumentParser()
- parser.add_argument(
- "--model",
- type=str,
- help="Path to a pre-trained model",
- )
- parser.add_argument(
- "--images_dir",
- default="/miniscratch/_groups/ccai/data/omnigan/masker-test-set/imgs",
- type=str,
- help="Directory containing the original test images",
- )
- parser.add_argument(
- "--labels_dir",
- default="/miniscratch/_groups/ccai/data/omnigan/masker-test-set/labels",
- type=str,
- help="Directory containing the labeled images",
- )
- parser.add_argument(
- "--image_size",
- default=640,
- type=int,
- help="The height and width of the pre-processed images",
- )
- parser.add_argument(
- "--max_files",
- default=-1,
- type=int,
- help="Limit loaded samples",
- )
- parser.add_argument(
- "--bin_value", default=0.5, type=float, help="Mask binarization threshold"
- )
- parser.add_argument(
- "-y",
- "--yaml",
- default=None,
- type=str,
- help="load a yaml file to parametrize the evaluation",
- )
- parser.add_argument(
- "-t", "--tags", nargs="*", help="Comet.ml tags", default=[], type=str
- )
- parser.add_argument(
- "-p",
- "--plot",
- action="store_true",
- default=False,
- help="Plot masker images & their metrics overlays",
- )
- parser.add_argument(
- "--no_paint",
- action="store_true",
- default=False,
- help="Do not log painted images",
- )
- parser.add_argument(
- "--write_metrics",
- action="store_true",
- default=False,
- help="If True, write CSV file and maps images in model's path directory",
- )
- parser.add_argument(
- "--load_metrics",
- action="store_true",
- default=False,
- help="If True, load predictions and metrics instead of re-computing",
- )
- parser.add_argument(
- "--prepare_torch",
- action="store_true",
- default=False,
- help="If True, pre-process images as torch tensors",
- )
- parser.add_argument(
- "--output_csv",
- default=None,
- type=str,
- help="Filename of the output CSV with the metrics of all models",
- )
-
- return parser.parse_args()
-
-
-def uint8(array):
- return array.astype(np.uint8)
-
-
-def crop_and_resize(image_path, label_path):
- """
- Resizes an image so that it keeps the aspect ratio and the smallest dimension
- is 640, then crops this resized image in its center so that the output is 640x640
- without aspect ratio distortion
-
- Args:
- image_path (Path or str): Path to an image
- label_path (Path or str): Path to the image's associated label
-
- Returns:
- tuple((np.ndarray, np.ndarray)): (new image, new label)
- """
-
- img = imread(image_path)
- lab = imread(label_path)
-
- # if img.shape[-1] == 4:
- # img = uint8(rgba2rgb(img) * 255)
-
- # TODO: remove (debug)
- if img.shape[:2] != lab.shape[:2]:
- print(
- "\nWARNING: shape mismatch: im -> ({}) {}, lab -> ({}) {}".format(
- img.shape[:2], image_path.name, lab.shape[:2], label_path.name
- )
- )
- # breakpoint()
-
- # resize keeping aspect ratio: smallest dim is 640
- i_h, i_w = img.shape[:2]
- if i_h < i_w:
- i_size = (640, int(640 * i_w / i_h))
- else:
- i_size = (int(640 * i_h / i_w), 640)
-
- l_h, l_w = lab.shape[:2]
- if l_h < l_w:
- l_size = (640, int(640 * l_w / l_h))
- else:
- l_size = (int(640 * l_h / l_w), 640)
-
- r_img = resize(img, i_size, preserve_range=True, anti_aliasing=True)
- r_img = uint8(r_img)
-
- r_lab = resize(lab, l_size, preserve_range=True, anti_aliasing=False, order=0)
- r_lab = uint8(r_lab)
-
- # crop in the center
- H, W = r_img.shape[:2]
-
- top = (H - 640) // 2
- left = (W - 640) // 2
-
- rc_img = r_img[top : top + 640, left : left + 640, :]
- rc_lab = (
- r_lab[top : top + 640, left : left + 640, :]
- if r_lab.ndim == 3
- else r_lab[top : top + 640, left : left + 640]
- )
-
- return rc_img, rc_lab
-
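The size and offset arithmetic in `crop_and_resize` can be checked on its own, without skimage: resize so the smallest side is 640 while preserving aspect ratio, then take the central 640x640 window. A pure-Python sketch (the helper name is hypothetical):

```python
def resize_and_crop_geometry(h, w, side=640):
    # resize so the smallest dimension equals `side`, preserving aspect ratio
    if h < w:
        new_h, new_w = side, int(side * w / h)
    else:
        new_h, new_w = int(side * h / w), side
    # central crop offsets
    top = (new_h - side) // 2
    left = (new_w - side) // 2
    return (new_h, new_w), (top, left)

size, offset = resize_and_crop_geometry(720, 1280)
assert size == (640, 1137)   # int(640 * 1280 / 720) == 1137
assert offset == (0, 248)
```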
-
-def plot_images(
- output_filename,
- img,
- label,
- pred,
- metrics_dict,
- maps_dict,
- edge_coherence=-1,
- pred_edge=None,
- label_edge=None,
- dpi=300,
- alpha=0.5,
- vmin=0.0,
- vmax=1.0,
- fontsize="xx-small",
- cmap={
- "fp": "Reds",
- "fn": "Reds",
- "may_neg": "Oranges",
- "may_pos": "Purples",
- "pred": "Greens",
- },
-):
- f, axes = plt.subplots(1, 5, dpi=dpi)
-
- # FPR (predicted mask on cannot flood)
- axes[0].imshow(img)
- fp_map_plt = axes[0].imshow( # noqa: F841
- maps_dict["fp"], vmin=vmin, vmax=vmax, cmap=cmap["fp"], alpha=alpha
- )
- axes[0].axis("off")
- axes[0].set_title("FPR: {:.4f}".format(metrics_dict["fpr"]), fontsize=fontsize)
-
- # FNR (missed mask on must flood)
- axes[1].imshow(img)
- fn_map_plt = axes[1].imshow( # noqa: F841
- maps_dict["fn"], vmin=vmin, vmax=vmax, cmap=cmap["fn"], alpha=alpha
- )
- axes[1].axis("off")
- axes[1].set_title("FNR: {:.4f}".format(metrics_dict["fnr"]), fontsize=fontsize)
-
- # May flood
- axes[2].imshow(img)
- if edge_coherence != -1:
- title = "MNR: {:.2f} | MPR: {:.2f}\nEdge coh.: {:.4f}".format(
- metrics_dict["mnr"], metrics_dict["mpr"], edge_coherence
- )
- # alpha_here = alpha / 4.
- # pred_edge_plt = axes[2].imshow(
- # 1.0 - pred_edge, cmap="gray", alpha=alpha_here
- # )
- # label_edge_plt = axes[2].imshow(
- # 1.0 - label_edge, cmap="gray", alpha=alpha_here
- # )
- else:
- title = "MNR: {:.2f} | MPR: {:.2f}".format(metrics_dict["mnr"], metrics_dict["mpr"])
- # alpha_here = alpha / 2.
- may_neg_map_plt = axes[2].imshow( # noqa: F841
- maps_dict["may_neg"], vmin=vmin, vmax=vmax, cmap=cmap["may_neg"], alpha=alpha
- )
- may_pos_map_plt = axes[2].imshow( # noqa: F841
- maps_dict["may_pos"], vmin=vmin, vmax=vmax, cmap=cmap["may_pos"], alpha=alpha
- )
- axes[2].set_title(title, fontsize=fontsize)
- axes[2].axis("off")
-
- # Prediction
- axes[3].imshow(img)
- pred_mask = axes[3].imshow( # noqa: F841
- pred, vmin=vmin, vmax=vmax, cmap=cmap["pred"], alpha=alpha
- )
- axes[3].set_title("Predicted mask", fontsize=fontsize)
- axes[3].axis("off")
-
- # Labels
- axes[4].imshow(img)
- label_mask = axes[4].imshow(label, alpha=alpha) # noqa: F841
- axes[4].set_title("Labels", fontsize=fontsize)
- axes[4].axis("off")
-
- f.savefig(
- output_filename,
- dpi=f.dpi,
- bbox_inches="tight",
- facecolor="white",
- transparent=False,
- )
- plt.close(f)
-
-
-def load_ground(ground_output_path, ref_image_path):
- gop = Path(ground_output_path)
- rip = Path(ref_image_path)
-
- ground_paths = list((gop / "eval-metrics" / "pred").glob(f"{rip.stem}.jpg")) + list(
- (gop / "eval-metrics" / "pred").glob(f"{rip.stem}.png")
- )
- if len(ground_paths) == 0:
- raise ValueError(
- f"Could not find a ground match in {str(gop)} for image {str(rip)}"
- )
- elif len(ground_paths) > 1:
- raise ValueError(
- f"Found more than 1 ground match in {str(gop)} for image {str(rip)}:"
- + f" {list(map(str, ground_paths))}"
- )
- ground_path = ground_paths[0]
- _, ground = crop_and_resize(rip, ground_path)
- if ground.ndim == 3:
- ground = ground[:, :, 0]
- ground = (ground > 0).astype(np.float32)
- return torch.from_numpy(ground).unsqueeze(0).unsqueeze(0).cuda()
-
-
-def get_inferences(
- image_arrays, model_path, image_paths, paint=False, bin_value=0.5, verbose=0
-):
- """
- Obtains the mask predictions of a model for a set of images
-
- Parameters
- ----------
- image_arrays : array-like
- A list of (1, CH, H, W) images
-
- image_paths: list(Path)
- A list of paths for images, in the same order as image_arrays
-
- model_path : str
- The path to a pre-trained model
-
- Returns
- -------
- masks : list
- A list of (H, W) predicted masks
- """
- device = torch.device("cpu")
- torch.set_grad_enabled(False)
- to_tensor = ToTensor()
-
- is_ground = "ground" in Path(model_path).name
- is_instagan = "instagan" in Path(model_path).name
-
- if is_ground or is_instagan:
- # we only care about the painter here
- ground_path = model_path
- model_path = (
- "/miniscratch/_groups/ccai/experiments/runs/ablation-v1/out--38858350"
- )
-
- xs = [to_tensor(array).unsqueeze(0) for array in image_arrays]
- xs = [x.to(torch.float32).to(device) for x in xs]
- xs = [(x - 0.5) * 2 for x in xs]
- trainer = Trainer.resume_from_path(
- model_path, inference=True, new_exp=None, device=device
- )
- masks = []
- painted = []
- for idx, x in enumerate(xs):
- if verbose > 0:
- print(idx, "/", len(xs), end="\r")
-
- if not is_ground and not is_instagan:
- m = trainer.G.mask(x=x)
- else:
- m = load_ground(ground_path, image_paths[idx])
-
- masks.append(m.squeeze().cpu())
- if paint:
- p = trainer.G.paint(m > bin_value, x)
- painted.append(p.squeeze().cpu())
- return masks, painted
-
-
-if __name__ == "__main__":
- # -----------------------------
- # ----- Parse arguments -----
- # -----------------------------
- args = parsed_args()
- print("Args:\n" + "\n".join([f" {k:20}: {v}" for k, v in vars(args).items()]))
-
- # Determine output dir
- try:
- tmp_dir = Path(os.environ["SLURM_TMPDIR"])
- except Exception as e:
- print(e)
- tmp_dir = Path(input("Enter tmp output directory: ")).resolve()
-
- plot_dir = tmp_dir / "plots"
- plot_dir.mkdir(parents=True, exist_ok=True)
-
- # Build paths to data
- imgs_paths = sorted(
- find_images(args.images_dir, recursive=False), key=lambda x: x.name
- )
- labels_paths = sorted(
- find_images(args.labels_dir, recursive=False),
- key=lambda x: x.name.replace("_labeled.", "."),
- )
- if args.max_files > 0:
- imgs_paths = imgs_paths[: args.max_files]
- labels_paths = labels_paths[: args.max_files]
-
- print(f"Loading {len(imgs_paths)} images and labels...")
-
- # Pre-process images: resize + crop
- # TODO: ? make cropping more flexible, not only central
- if not args.prepare_torch:
- ims_labs = [crop_and_resize(i, l) for i, l in zip(imgs_paths, labels_paths)]
- imgs = [d[0] for d in ims_labs]
- labels = [d[1] for d in ims_labs]
- else:
- prepare = PrepareTest()
- imgs = prepare(imgs_paths, normalize=False, rescale=False)
- labels = prepare(labels_paths, normalize=False, rescale=False)
-
- imgs = [i.squeeze(0).permute(1, 2, 0).numpy().astype(np.uint8) for i in imgs]
- labels = [
- lab.squeeze(0).permute(1, 2, 0).numpy().astype(np.uint8) for lab in labels
- ]
- imgs = [rgba2rgb(img) if img.shape[-1] == 4 else img for img in imgs]
- print(" Done.")
-
- # Encode labels
- print("Encode labels...", end="", flush=True)
- # HW label
- labels = [np.squeeze(encode_mask_label(label, "flood")) for label in labels]
- print("Done.")
-
- if args.yaml:
- y_path = Path(args.yaml)
- assert y_path.exists()
- assert y_path.suffix in {".yaml", ".yml"}
- with y_path.open("r") as f:
- data = yaml.safe_load(f)
- assert "models" in data
-
- evaluations = list(data["models"])
- else:
- evaluations = [args.model]
-
- for e, eval_path in enumerate(evaluations):
- print("\n>>>>> Evaluation", e, ":", eval_path)
- print("=" * 50)
- print("=" * 50)
-
- model_metrics_path = Path(eval_path) / "eval-metrics"
- model_metrics_path.mkdir(exist_ok=True)
- if args.load_metrics:
- f_csv = model_metrics_path / "eval_masker.csv"
- pred_out = model_metrics_path / "pred"
- if f_csv.exists() and pred_out.exists():
- print("Skipping model because pre-computed metrics exist")
- continue
-
- # Initialize New Comet Experiment
- exp = Experiment(
- project_name="climategan-masker-metrics", display_summary_level=0
- )
-
- # Obtain mask predictions
- # TODO: remove (debug)
- print("Obtain mask predictions", end="", flush=True)
-
- preds, painted = get_inferences(
- imgs,
- eval_path,
- imgs_paths,
- paint=not args.no_paint,
- bin_value=args.bin_value,
- verbose=1,
- )
- preds = [pred.numpy() for pred in preds]
- print(" Done.")
-
- if args.bin_value > 0:
- preds = [pred > args.bin_value for pred in preds]
-
- # Compute metrics
- df = pd.DataFrame(
- columns=[
- "tpr",
- "tpt",
- "tnr",
- "tnt",
- "fpr",
- "fpt",
- "fnr",
- "fnt",
- "mnr",
- "mpr",
- "accuracy",
- "error",
- "precision",
- "f05",
- "accuracy_must_may",
- "edge_coherence",
- "filename",
- ]
- )
-
- print("Compute metrics and plot images")
- for idx, (img, label, pred) in enumerate(zip(*(imgs, labels, preds))):
- print(idx, "/", len(imgs), end="\r")
-
- # Basic classification metrics
- metrics_dict, maps_dict = masker_classification_metrics(
- pred, label, labels_dict={"cannot": 0, "must": 1, "may": 2}
- )
-
- # Edges coherence
- edge_coherence, pred_edge, label_edge = edges_coherence_std_min(pred, label)
-
- series_dict = {
- "tpr": metrics_dict["tpr"],
- "tpt": metrics_dict["tpt"],
- "tnr": metrics_dict["tnr"],
- "tnt": metrics_dict["tnt"],
- "fpr": metrics_dict["fpr"],
- "fpt": metrics_dict["fpt"],
- "fnr": metrics_dict["fnr"],
- "fnt": metrics_dict["fnt"],
- "mnr": metrics_dict["mnr"],
- "mpr": metrics_dict["mpr"],
- "accuracy": metrics_dict["accuracy"],
- "error": metrics_dict["error"],
- "precision": metrics_dict["precision"],
- "f05": metrics_dict["f05"],
- "accuracy_must_may": metrics_dict["accuracy_must_may"],
- "edge_coherence": edge_coherence,
- "filename": str(imgs_paths[idx].name),
- }
- df.loc[idx] = pd.Series(series_dict)
-
- for k, v in series_dict.items():
- if k == "filename":
- continue
- exp.log_metric(f"img_{k}", v, step=idx)
-
- # Confusion matrix
- confmat, _ = get_confusion_matrix(
- metrics_dict["tpr"],
- metrics_dict["tnr"],
- metrics_dict["fpr"],
- metrics_dict["fnr"],
- metrics_dict["mnr"],
- metrics_dict["mpr"],
- )
- confmat = np.around(confmat, decimals=3)
- exp.log_confusion_matrix(
- file_name=imgs_paths[idx].name + ".json",
- title=imgs_paths[idx].name,
- matrix=confmat,
- labels=["Cannot", "Must", "May"],
- row_label="Predicted",
- column_label="Ground truth",
- )
-
- if args.plot:
- # Plot prediction images
- fig_filename = plot_dir / imgs_paths[idx].name
- plot_images(
- fig_filename,
- img,
- label,
- pred,
- metrics_dict,
- maps_dict,
- edge_coherence,
- pred_edge,
- label_edge,
- )
- exp.log_image(fig_filename)
- if not args.no_paint:
- masked = img * (1 - pred[..., None])
- flooded = img_as_ubyte(
- (painted[idx].permute(1, 2, 0).cpu().numpy() + 1) / 2
- )
- combined = np.concatenate([img, masked, flooded], 1)
- exp.log_image(combined, imgs_paths[idx].name)
-
- if args.write_metrics:
- pred_out = model_metrics_path / "pred"
- pred_out.mkdir(exist_ok=True)
- imsave(
- pred_out / f"{imgs_paths[idx].stem}_pred.png",
- pred.astype(np.uint8),
- )
- for k, v in maps_dict.items():
- metric_out = model_metrics_path / k
- metric_out.mkdir(exist_ok=True)
- imsave(
- metric_out / f"{imgs_paths[idx].stem}_{k}.png",
- v.astype(np.uint8),
- )
-
- # --------------------------------
- # ----- END OF IMAGES LOOP -----
- # --------------------------------
-
- if args.write_metrics:
- print(f"Writing metrics in {str(model_metrics_path)}")
- f_csv = model_metrics_path / "eval_masker.csv"
- df.to_csv(f_csv, index_label="idx")
-
- print(" Done.")
- # Summary statistics
- means = df.mean(axis=0)
- confmat_mean, confmat_std = get_confusion_matrix(
- df.tpr, df.tnr, df.fpr, df.fnr, df.mnr, df.mpr
- )
- confmat_mean = np.around(confmat_mean, decimals=3)
- confmat_std = np.around(confmat_std, decimals=3)
-
- # Log to comet
- exp.log_confusion_matrix(
- file_name="confusion_matrix_mean.json",
- title="confusion_matrix_mean.json",
- matrix=confmat_mean,
- labels=["Cannot", "Must", "May"],
- row_label="Predicted",
- column_label="Ground truth",
- )
- exp.log_confusion_matrix(
- file_name="confusion_matrix_std.json",
- title="confusion_matrix_std.json",
- matrix=confmat_std,
- labels=["Cannot", "Must", "May"],
- row_label="Predicted",
- column_label="Ground truth",
- )
- exp.log_metrics(dict(means))
- exp.log_table("metrics.csv", df)
- exp.log_html(df.to_html(col_space="80px"))
- exp.log_parameters(vars(args))
- exp.log_parameter("eval_path", str(eval_path))
- exp.add_tag("eval_masker")
- if args.tags:
- exp.add_tags(args.tags)
- exp.log_parameter("model_id", Path(eval_path).name)
-
- # Close comet
- exp.end()
-
- # --------------------------------
- # ----- END OF MODELS LOOP -----
- # --------------------------------
-
- # Compare models
- if (args.load_metrics or args.write_metrics) and len(evaluations) > 1:
- print(
- "Plots for comparing the input models will be created and logged to comet"
- )
-
- # Initialize New Comet Experiment
- exp = Experiment(
- project_name="climategan-masker-metrics", display_summary_level=0
- )
- if args.tags:
- exp.add_tags(args.tags)
-
- # Build DataFrame with all models
- print("Building pandas DataFrame...")
- models_df = {}
- for (m, model_path) in enumerate(evaluations):
- model_path = Path(model_path)
- with open(model_path / "opts.yaml", "r") as f:
- opt = yaml.safe_load(f)
- model_feats = ", ".join(
- [
- t
- for t in sorted(opt["comet"]["tags"])
- if "branch" not in t and "ablation" not in t and "trash" not in t
- ]
- )
- model_id = f"{model_path.parent.name[-2:]}/{model_path.name}"
- df_m = pd.read_csv(
- model_path / "eval-metrics" / "eval_masker.csv", index_col=False
- )
- df_m["model"] = [model_id] * len(df_m)
- df_m["model_idx"] = [m] * len(df_m)
- df_m["model_feats"] = [model_feats] * len(df_m)
- models_df.update({model_id: df_m})
- df = pd.concat(list(models_df.values()), ignore_index=True)
- df["model_img_idx"] = df.model.astype(str) + "-" + df.idx.astype(str)
- df.rename(columns={"idx": "img_idx"}, inplace=True)
- dict_models_labels = {
- k: f"{v['model_idx'][0]}: {v['model_feats'][0]}"
- for k, v in models_df.items()
- }
- print("Done")
-
- if args.output_csv:
- print(f"Writing DataFrame to {args.output_csv}")
- df.to_csv(args.output_csv, index_label="model_img_idx")
-
- # Determine images with low metrics in any model
- print("Constructing filter based on metrics thresholds...")
- idx_not_good_in_any = []
- for idx in df.img_idx.unique():
- df_th = df.loc[
- (
- # TODO: rethink thresholds
- (df.tpr <= dict_metrics["threshold"]["tpr"])
- | (df.fpr >= dict_metrics["threshold"]["fpr"])
- | (df.edge_coherence >= dict_metrics["threshold"]["edge_coherence"])
- )
- & ((df.img_idx == idx) & (df.model.isin(df.model.unique())))
- ]
- if len(df_th) > 0:
- idx_not_good_in_any.append(idx)
- filters = {"all": df.img_idx.unique(), "not_good_in_any": idx_not_good_in_any}
- print("Done")
-
- # Boxplots of metrics
- print("Plotting boxplots of metrics...")
- for k, f in filters.items():
- print(f"\tDistribution of [{k}] images...")
- for metric in dict_metrics["names"].keys():
- fig_filename = plot_dir / f"boxplot_{metric}_{k}.png"
- if metric in ["mnr", "mpr", "accuracy_must_may"]:
- boxplot_metric(
- fig_filename,
- df.loc[df.img_idx.isin(f)],
- metric=metric,
- dict_metrics=dict_metrics["names"],
- do_stripplot=True,
- dict_models=dict_models_labels,
- order=list(df.model.unique()),
- )
- else:
- boxplot_metric(
- fig_filename,
- df.loc[df.img_idx.isin(f)],
- metric=metric,
- dict_metrics=dict_metrics["names"],
- dict_models=dict_models_labels,
- fliersize=1.0,
- order=list(df.model.unique()),
- )
- exp.log_image(fig_filename)
- print("Done")
-
- # Cluster Maps
- print("Plotting clustermaps...")
- for k, f in filters.items():
- print(f"\tDistribution of [{k}] images...")
- for metric in dict_metrics["names"].keys():
- fig_filename = plot_dir / f"clustermap_{metric}_{k}.png"
- df_mf = df.loc[df.img_idx.isin(f)].pivot(index="img_idx", columns="model", values=metric)
- clustermap_metric(
- output_filename=fig_filename,
- df=df_mf,
- metric=metric,
- dict_metrics=dict_metrics["names"],
- method="average",
- cluster_metric="euclidean",
- dict_models=dict_models_labels,
- row_cluster=False,
- )
- exp.log_image(fig_filename)
- print("Done")
-
- # Close comet
- exp.end()
diff --git a/spaces/NoriZC/vits-models/monotonic_align/core.py b/spaces/NoriZC/vits-models/monotonic_align/core.py
deleted file mode 100644
index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000
--- a/spaces/NoriZC/vits-models/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
\ No newline at end of file
diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/config.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/config.py
deleted file mode 100644
index 1c21312f3de971bfa008254c6035cebc09f05e4c..0000000000000000000000000000000000000000
--- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/config.py
+++ /dev/null
@@ -1,45 +0,0 @@
-librispeech_datasets = {
- "train": {
- "clean": ["LibriSpeech/train-clean-100", "LibriSpeech/train-clean-360"],
- "other": ["LibriSpeech/train-other-500"]
- },
- "test": {
- "clean": ["LibriSpeech/test-clean"],
- "other": ["LibriSpeech/test-other"]
- },
- "dev": {
- "clean": ["LibriSpeech/dev-clean"],
- "other": ["LibriSpeech/dev-other"]
- },
-}
-libritts_datasets = {
- "train": {
- "clean": ["LibriTTS/train-clean-100", "LibriTTS/train-clean-360"],
- "other": ["LibriTTS/train-other-500"]
- },
- "test": {
- "clean": ["LibriTTS/test-clean"],
- "other": ["LibriTTS/test-other"]
- },
- "dev": {
- "clean": ["LibriTTS/dev-clean"],
- "other": ["LibriTTS/dev-other"]
- },
-}
-voxceleb_datasets = {
- "voxceleb1" : {
- "train": ["VoxCeleb1/wav"],
- "test": ["VoxCeleb1/test_wav"]
- },
- "voxceleb2" : {
- "train": ["VoxCeleb2/dev/aac"],
- "test": ["VoxCeleb2/test_wav"]
- }
-}
-
-other_datasets = [
- "LJSpeech-1.1",
- "VCTK-Corpus/wav48",
-]
-
-anglophone_nationalites = ["australia", "canada", "ireland", "uk", "usa"]
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh
deleted file mode 100644
index e74953194d41f0d93855d41b2acef08556d92477..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-# you can change cmd.sh depending on what type of queue you are using.
-# If you have no queueing system and want to run on a local machine, you
- can change all instances of 'queue.pl' to run.pl (but be careful and run
-# commands one by one: most recipes will exhaust the memory on your
-# machine). queue.pl works with GridEngine (qsub). slurm.pl works
-# with slurm. Different queues are configured differently, with different
-# queue names and different ways of specifying things like memory;
-# to account for these differences you can create and edit the file
-# conf/queue.conf to match your queue's configuration. Search for
-# conf/queue.conf in http://kaldi-asr.org/doc/queue.html for more information,
-# or search for the string 'default_config' in utils/queue.pl or utils/slurm.pl.
-
-export train_cmd="run.pl --mem 2G"
-export decode_cmd="run.pl --mem 4G"
-export mkgraph_cmd="run.pl --mem 8G"
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/utils.py
deleted file mode 100644
index d93eb532ef84f0e2bc708b777229ab2cb76ca14b..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/encoders/utils.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq.data import encoders
-
-
-def get_whole_word_mask(args, dictionary):
- bpe = encoders.build_bpe(args)
- if bpe is not None:
-
- def is_beginning_of_word(i):
- if i < dictionary.nspecial:
- # special elements are always considered beginnings
- return True
- tok = dictionary[i]
- if tok.startswith("madeupword"):
- return True
- try:
- return bpe.is_beginning_of_word(tok)
- except ValueError:
- return True
-
- mask_whole_words = torch.ByteTensor(
- list(map(is_beginning_of_word, range(len(dictionary))))
- )
- return mask_whole_words
- return None
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/multilingual/multilingual_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/multilingual/multilingual_utils.py
deleted file mode 100644
index b4e0f9828cabfdbe375d05d9152b58bdbd6de7dc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/multilingual/multilingual_utils.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from enum import Enum
-from typing import Dict, List, Optional, Sequence
-
-import torch
-from fairseq.data import Dictionary
-
-
-class EncoderLangtok(Enum):
- """
- Prepend either the source or target language token (src/tgt)
- to the beginning of the source sentence.
- """
-
- src = "src"
- tgt = "tgt"
-
-
-class LangTokSpec(Enum):
- main = "main"
- mono_dae = "mono_dae"
-
-
-class LangTokStyle(Enum):
- multilingual = "multilingual"
- mbart = "mbart"
-
-
-@torch.jit.export
-def get_lang_tok(
- lang: str, lang_tok_style: str, spec: str = LangTokSpec.main.value
-) -> str:
- # TOKEN_STYLES can't be defined outside this fn since it needs to be
- # TorchScriptable.
- TOKEN_STYLES: Dict[str, str] = {
- LangTokStyle.mbart.value: "[{}]",
- LangTokStyle.multilingual.value: "__{}__",
- }
-
- if spec.endswith("dae"):
- lang = f"{lang}_dae"
- elif spec.endswith("mined"):
- lang = f"{lang}_mined"
- style = TOKEN_STYLES[lang_tok_style]
- return style.format(lang)
-
-
-def augment_dictionary(
- dictionary: Dictionary,
- language_list: List[str],
- lang_tok_style: str,
- langtoks_specs: Sequence[str] = (LangTokSpec.main.value,),
- extra_data: Optional[Dict[str, str]] = None,
-) -> None:
- for spec in langtoks_specs:
- for language in language_list:
- dictionary.add_symbol(
- get_lang_tok(lang=language, lang_tok_style=lang_tok_style, spec=spec)
- )
-
- if lang_tok_style == LangTokStyle.mbart.value or (
- extra_data is not None and LangTokSpec.mono_dae.value in extra_data
- ):
- dictionary.add_symbol("")
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/multilingual_translation.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/multilingual_translation.py
deleted file mode 100644
index 4f85ab4832a6c7cbe57a99a3efc6987125d956fc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/multilingual_translation.py
+++ /dev/null
@@ -1,462 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import contextlib
-import logging
-import os
-from collections import OrderedDict
-from argparse import ArgumentError
-
-import torch
-from fairseq import metrics, options, utils
-from fairseq.data import (
- Dictionary,
- LanguagePairDataset,
- RoundRobinZipDatasets,
- TransformEosLangPairDataset,
-)
-from fairseq.models import FairseqMultiModel
-from fairseq.tasks.translation import load_langpair_dataset
-
-from . import LegacyFairseqTask, register_task
-
-
-logger = logging.getLogger(__name__)
-
-
-def _lang_token(lang: str):
- return "__{}__".format(lang)
-
-
-def _lang_token_index(dic: Dictionary, lang: str):
- """Return language token index."""
- idx = dic.index(_lang_token(lang))
- assert idx != dic.unk_index, "cannot find language token for lang {}".format(lang)
- return idx
-
-
-@register_task("multilingual_translation")
-class MultilingualTranslationTask(LegacyFairseqTask):
- """A task for training multiple translation models simultaneously.
-
- We iterate round-robin over batches from multiple language pairs, ordered
- according to the `--lang-pairs` argument.
-
- The training loop is roughly:
-
- for i in range(len(epoch)):
- for lang_pair in args.lang_pairs:
- batch = next_batch_for_lang_pair(lang_pair)
- loss = criterion(model_for_lang_pair(lang_pair), batch)
- loss.backward()
- optimizer.step()
-
- In practice, `next_batch_for_lang_pair` is abstracted in a FairseqDataset
- (e.g., `RoundRobinZipDatasets`) and `model_for_lang_pair` is a model that
- implements the `FairseqMultiModel` interface.
-
- During inference it is required to specify a single `--source-lang` and
- `--target-lang`, which indicates the inference language direction.
- `--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to
- the same value as training.
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('data', metavar='DIR', help='path to data directory')
- parser.add_argument('--lang-pairs', default=None, metavar='PAIRS',
- help='comma-separated list of language pairs (in training order): en-de,en-fr,de-fr')
- parser.add_argument('-s', '--source-lang', default=None, metavar='SRC',
- help='source language (only needed for inference)')
- parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET',
- help='target language (only needed for inference)')
- parser.add_argument('--left-pad-source', default='True', type=str, metavar='BOOL',
- help='pad the source on the left (default: True)')
- parser.add_argument('--left-pad-target', default='False', type=str, metavar='BOOL',
- help='pad the target on the left (default: False)')
- try:
- parser.add_argument('--max-source-positions', default=1024, type=int, metavar='N',
- help='max number of tokens in the source sequence')
- parser.add_argument('--max-target-positions', default=1024, type=int, metavar='N',
- help='max number of tokens in the target sequence')
- except ArgumentError:
- # this might have already been defined. Once we transition this to hydra it should be fine to add it here.
- pass
- parser.add_argument('--upsample-primary', default=1, type=int,
- help='amount to upsample primary dataset')
- parser.add_argument('--encoder-langtok', default=None, type=str, choices=['src', 'tgt'],
- metavar='SRCTGT',
- help='replace beginning-of-sentence in source sentence with source or target '
- 'language token. (src/tgt)')
- parser.add_argument('--decoder-langtok', action='store_true',
- help='replace beginning-of-sentence in target sentence with target language token')
- # fmt: on
-
- def __init__(self, args, dicts, training):
- super().__init__(args)
- self.dicts = dicts
- self.training = training
- if training:
- self.lang_pairs = args.lang_pairs
- else:
- self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)]
- # eval_lang_pairs for multilingual translation is usually all of the
- # lang_pairs. However for other multitask settings or when we want to
- # optimize for certain languages we want to use a different subset. Thus
- # the eval_lang_pairs class variable is provided for classes that extend
- # this class.
- self.eval_lang_pairs = self.lang_pairs
- # model_lang_pairs will be used to build encoder-decoder model pairs in
- # models.build_model(). This allows multitask sub-classes to
- # build models for pairs other than the input lang_pairs.
- self.model_lang_pairs = self.lang_pairs
- self.langs = list(dicts.keys())
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- dicts, training = cls.prepare(args, **kwargs)
- return cls(args, dicts, training)
-
- @classmethod
- def update_args(cls, args):
- args.left_pad_source = utils.eval_bool(args.left_pad_source)
- args.left_pad_target = utils.eval_bool(args.left_pad_target)
-
- if args.lang_pairs is None:
- raise ValueError(
- "--lang-pairs is required. List all the language pairs in the training objective."
- )
- if isinstance(args.lang_pairs, str):
- args.lang_pairs = args.lang_pairs.split(",")
-
- @classmethod
- def prepare(cls, args, **kargs):
- cls.update_args(args)
- sorted_langs = sorted(
- list({x for lang_pair in args.lang_pairs for x in lang_pair.split("-")})
- )
- if args.source_lang is not None or args.target_lang is not None:
- training = False
- else:
- training = True
-
- # load dictionaries
- dicts = OrderedDict()
- for lang in sorted_langs:
- paths = utils.split_paths(args.data)
- assert len(paths) > 0
- dicts[lang] = cls.load_dictionary(
- os.path.join(paths[0], "dict.{}.txt".format(lang))
- )
- if len(dicts) > 0:
- assert dicts[lang].pad() == dicts[sorted_langs[0]].pad()
- assert dicts[lang].eos() == dicts[sorted_langs[0]].eos()
- assert dicts[lang].unk() == dicts[sorted_langs[0]].unk()
- if args.encoder_langtok is not None or args.decoder_langtok:
- for lang_to_add in sorted_langs:
- dicts[lang].add_symbol(_lang_token(lang_to_add))
- logger.info("[{}] dictionary: {} types".format(lang, len(dicts[lang])))
- return dicts, training
-
- def get_encoder_langtok(self, src_lang, tgt_lang):
- if self.args.encoder_langtok is None:
- return self.dicts[src_lang].eos()
- if self.args.encoder_langtok == "src":
- return _lang_token_index(self.dicts[src_lang], src_lang)
- else:
- return _lang_token_index(self.dicts[src_lang], tgt_lang)
-
- def get_decoder_langtok(self, tgt_lang):
- if not self.args.decoder_langtok:
- return self.dicts[tgt_lang].eos()
- return _lang_token_index(self.dicts[tgt_lang], tgt_lang)
-
- def alter_dataset_langtok(
- self,
- lang_pair_dataset,
- src_eos=None,
- src_lang=None,
- tgt_eos=None,
- tgt_lang=None,
- ):
- if self.args.encoder_langtok is None and not self.args.decoder_langtok:
- return lang_pair_dataset
-
- new_src_eos = None
- if (
- self.args.encoder_langtok is not None
- and src_eos is not None
- and src_lang is not None
- and tgt_lang is not None
- ):
- new_src_eos = self.get_encoder_langtok(src_lang, tgt_lang)
- else:
- src_eos = None
-
- new_tgt_bos = None
- if self.args.decoder_langtok and tgt_eos is not None and tgt_lang is not None:
- new_tgt_bos = self.get_decoder_langtok(tgt_lang)
- else:
- tgt_eos = None
-
- return TransformEosLangPairDataset(
- lang_pair_dataset,
- src_eos=src_eos,
- new_src_eos=new_src_eos,
- tgt_bos=tgt_eos,
- new_tgt_bos=new_tgt_bos,
- )
-
- def load_dataset(self, split, epoch=1, **kwargs):
- """Load a dataset split."""
- paths = utils.split_paths(self.args.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
-
- def language_pair_dataset(lang_pair):
- src, tgt = lang_pair.split("-")
- langpair_dataset = load_langpair_dataset(
- data_path,
- split,
- src,
- self.dicts[src],
- tgt,
- self.dicts[tgt],
- combine=True,
- dataset_impl=self.args.dataset_impl,
- upsample_primary=self.args.upsample_primary,
- left_pad_source=self.args.left_pad_source,
- left_pad_target=self.args.left_pad_target,
- max_source_positions=self.args.max_source_positions,
- max_target_positions=self.args.max_target_positions,
- )
- return self.alter_dataset_langtok(
- langpair_dataset,
- src_eos=self.dicts[src].eos(),
- src_lang=src,
- tgt_eos=self.dicts[tgt].eos(),
- tgt_lang=tgt,
- )
-
- self.datasets[split] = RoundRobinZipDatasets(
- OrderedDict(
- [
- (lang_pair, language_pair_dataset(lang_pair))
- for lang_pair in self.lang_pairs
- ]
- ),
- eval_key=None
- if self.training
- else "%s-%s" % (self.args.source_lang, self.args.target_lang),
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None):
- if constraints is not None:
- raise NotImplementedError(
- "Constrained decoding with the multilingual_translation task is not supported"
- )
-
- lang_pair = "%s-%s" % (self.args.source_lang, self.args.target_lang)
- return RoundRobinZipDatasets(
- OrderedDict(
- [
- (
- lang_pair,
- self.alter_dataset_langtok(
- LanguagePairDataset(
- src_tokens, src_lengths, self.source_dictionary
- ),
- src_eos=self.source_dictionary.eos(),
- src_lang=self.args.source_lang,
- tgt_eos=self.target_dictionary.eos(),
- tgt_lang=self.args.target_lang,
- ),
- )
- ]
- ),
- eval_key=lang_pair,
- )
-
- def build_model(self, args):
- def check_args():
- messages = []
- if (
- len(set(self.args.lang_pairs).symmetric_difference(args.lang_pairs))
- != 0
- ):
- messages.append(
- "--lang-pairs should include all the language pairs {}.".format(
- args.lang_pairs
- )
- )
- if self.args.encoder_langtok != args.encoder_langtok:
- messages.append(
- "--encoder-langtok should be {}.".format(args.encoder_langtok)
- )
- if self.args.decoder_langtok != args.decoder_langtok:
- messages.append(
- "--decoder-langtok should {} be set.".format(
- "" if args.decoder_langtok else "not"
- )
- )
-
- if len(messages) > 0:
- raise ValueError(" ".join(messages))
-
- # Update args -> the fact that the constructor here
- # changes the args object doesn't mean you get the same one here
- self.update_args(args)
-
- # Check if task args are consistent with model args
- check_args()
-
- from fairseq import models
-
- model = models.build_model(args, self)
- if not isinstance(model, FairseqMultiModel):
- raise ValueError(
- "MultilingualTranslationTask requires a FairseqMultiModel architecture"
- )
- return model
-
- def _per_lang_pair_train_loss(
- self, lang_pair, model, update_num, criterion, sample, optimizer, ignore_grad
- ):
- loss, sample_size, logging_output = criterion(
- model.models[lang_pair], sample[lang_pair]
- )
- if ignore_grad:
- loss *= 0
- optimizer.backward(loss)
- return loss, sample_size, logging_output
-
- def train_step(
- self, sample, model, criterion, optimizer, update_num, ignore_grad=False
- ):
- model.train()
- from collections import defaultdict
-
- agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, defaultdict(float)
- curr_lang_pairs = [
- lang_pair
- for lang_pair in self.model_lang_pairs
- if sample[lang_pair] is not None and len(sample[lang_pair]) != 0
- ]
-
- for idx, lang_pair in enumerate(curr_lang_pairs):
-
- def maybe_no_sync():
- if (
- self.args.distributed_world_size > 1
- and hasattr(model, "no_sync")
- and idx < len(curr_lang_pairs) - 1
- ):
- return model.no_sync()
- else:
- return contextlib.ExitStack() # dummy contextmanager
-
- with maybe_no_sync():
- loss, sample_size, logging_output = self._per_lang_pair_train_loss(
- lang_pair,
- model,
- update_num,
- criterion,
- sample,
- optimizer,
- ignore_grad,
- )
- agg_loss += loss.detach().item()
- # TODO make summing of the sample sizes configurable
- agg_sample_size += sample_size
- for k in logging_output:
- agg_logging_output[k] += logging_output[k]
- agg_logging_output[f"{lang_pair}:{k}"] += logging_output[k]
- return agg_loss, agg_sample_size, agg_logging_output
-
- def _per_lang_pair_valid_loss(self, lang_pair, model, criterion, sample):
- return criterion(model.models[lang_pair], sample[lang_pair])
-
- def valid_step(self, sample, model, criterion):
- model.eval()
- with torch.no_grad():
- from collections import defaultdict
-
- agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, defaultdict(float)
- for lang_pair in self.eval_lang_pairs:
- if (
- lang_pair not in sample
- or sample[lang_pair] is None
- or len(sample[lang_pair]) == 0
- ):
- continue
- loss, sample_size, logging_output = self._per_lang_pair_valid_loss(
- lang_pair, model, criterion, sample
- )
- agg_loss += loss.data.item()
- # TODO make summing of the sample sizes configurable
- agg_sample_size += sample_size
- for k in logging_output:
- agg_logging_output[k] += logging_output[k]
- agg_logging_output[f"{lang_pair}:{k}"] += logging_output[k]
- return agg_loss, agg_sample_size, agg_logging_output
-
- def inference_step(
- self, generator, models, sample, prefix_tokens=None, constraints=None
- ):
- with torch.no_grad():
- if self.args.decoder_langtok:
- bos_token = _lang_token_index(
- self.target_dictionary, self.args.target_lang
- )
- else:
- bos_token = self.target_dictionary.eos()
- return generator.generate(
- models,
- sample,
- prefix_tokens=prefix_tokens,
- constraints=constraints,
- bos_token=bos_token,
- )
-
- def reduce_metrics(self, logging_outputs, criterion):
- with metrics.aggregate():
- # pass 'sample_size', 'nsentences', 'ntokens' stats to fairseq_task
- super().reduce_metrics(logging_outputs, criterion)
- for k in ["sample_size", "nsentences", "ntokens"]:
- metrics.log_scalar(k, sum(l[k] for l in logging_outputs))
-
- @property
- def source_dictionary(self):
- if self.training:
- return next(iter(self.dicts.values()))
- else:
- return self.dicts[self.args.source_lang]
-
- @property
- def target_dictionary(self):
- if self.training:
- return next(iter(self.dicts.values()))
- else:
- return self.dicts[self.args.target_lang]
-
- def max_positions(self):
- """Return the max sentence length allowed by the task."""
- if len(self.datasets.values()) == 0:
- return {
- "%s-%s"
- % (self.args.source_lang, self.args.target_lang): (
- self.args.max_source_positions,
- self.args.max_target_positions,
- )
- }
- return OrderedDict(
- [
- (key, (self.args.max_source_positions, self.args.max_target_positions))
- for split in self.datasets.keys()
- for key in self.datasets[split].datasets.keys()
- ]
- )
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/nat/fairseq_nat_model.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/nat/fairseq_nat_model.py
deleted file mode 100644
index b09394112f57d9e82f2a4cbc371af888281b9e8a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/nat/fairseq_nat_model.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-from fairseq.models.transformer import (
- TransformerDecoder,
- TransformerEncoder,
- TransformerModel,
-)
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-
-
-def ensemble_encoder(func):
- def wrapper(self, *args, **kwargs):
- if self.ensemble_models is None or len(self.ensemble_models) == 1:
- return func(self, *args, **kwargs)
- encoder_outs = [func(model, *args, **kwargs, return_all_hiddens=True) for model in self.ensemble_models]
- _encoder_out = encoder_outs[0].copy()
-
- def stack(key):
- outs = [e[key][0] for e in encoder_outs]
- return [torch.stack(outs, -1) if outs[0] is not None else None]
-
- _encoder_out["encoder_out"] = stack("encoder_out")
- _encoder_out["encoder_embedding"] = stack("encoder_embedding")
-
- num_layers = len(_encoder_out["encoder_states"])
- if num_layers > 0:
- _encoder_out["encoder_states"] = [
- torch.stack([e["encoder_states"][i] for e in encoder_outs], -1)
- for i in range(num_layers)
- ]
- return _encoder_out
-
- return wrapper
-
-
-def ensemble_decoder(func):
- def wrapper(self, normalize=False, encoder_out=None, *args, **kwargs):
- if self.ensemble_models is None or len(self.ensemble_models) == 1:
- return func(
- self, normalize=normalize, encoder_out=encoder_out, *args, **kwargs
- )
-
- def _replace(encoder_out, new_val):
- new_encoder_out = encoder_out.copy()
- new_encoder_out["encoder_out"] = [new_val]
- return new_encoder_out
-
- action_outs = [
- func(
- model,
- normalize=normalize,
- encoder_out=_replace(
- encoder_out,
- encoder_out["encoder_out"][0][:, :, :, i]
- ),
- *args,
- **kwargs
- )
- for i, model in enumerate(self.ensemble_models)
- ]
-
- if not isinstance(action_outs[0], tuple): # single return value
- action_outs = [[a] for a in action_outs]
- else:
- action_outs = [list(a) for a in action_outs]
-
- ensembled_outs = []
- for i in range(len(action_outs[0])):
- if i == 0 and normalize:
- ensembled_outs += [
- torch.logsumexp(
- torch.stack([a[i] for a in action_outs], -1), dim=-1
- )
- - math.log(len(self.ensemble_models))
- ]
- elif action_outs[0][i] is not None:
- ensembled_outs += [torch.stack([a[i] for a in action_outs], -1)]
- else:
- ensembled_outs += [None]
-
- if len(ensembled_outs) == 1:
- return ensembled_outs[0]
- return tuple(ensembled_outs)
-
- return wrapper
-
-
-class FairseqNATModel(TransformerModel):
- """
- Abstract class for all nonautoregressive-based models
- """
-
- def __init__(self, args, encoder, decoder):
- super().__init__(args, encoder, decoder)
- self.tgt_dict = decoder.dictionary
- self.bos = decoder.dictionary.bos()
- self.eos = decoder.dictionary.eos()
- self.pad = decoder.dictionary.pad()
- self.unk = decoder.dictionary.unk()
-
- self.ensemble_models = None
-
- @property
- def allow_length_beam(self):
- return False
-
- @property
- def allow_ensemble(self):
- return True
-
- def enable_ensemble(self, models):
- self.encoder.ensemble_models = [m.encoder for m in models]
- self.decoder.ensemble_models = [m.decoder for m in models]
-
- @staticmethod
- def add_args(parser):
- TransformerModel.add_args(parser)
- parser.add_argument(
- "--apply-bert-init",
- action="store_true",
- help="use custom param initialization for BERT",
- )
-
- @classmethod
- def build_decoder(cls, args, tgt_dict, embed_tokens):
- decoder = FairseqNATDecoder(args, tgt_dict, embed_tokens)
- if getattr(args, "apply_bert_init", False):
- decoder.apply(init_bert_params)
- return decoder
-
- @classmethod
- def build_encoder(cls, args, src_dict, embed_tokens):
- encoder = FairseqNATEncoder(args, src_dict, embed_tokens)
- if getattr(args, "apply_bert_init", False):
- encoder.apply(init_bert_params)
- return encoder
-
- def forward_encoder(self, encoder_inputs):
- return self.encoder(*encoder_inputs)
-
- def forward_decoder(self, *args, **kwargs):
- raise NotImplementedError
-
- def initialize_output_tokens(self, *args, **kwargs):
- raise NotImplementedError
-
- def forward(self, *args, **kwargs):
- raise NotImplementedError
-
-
-class FairseqNATEncoder(TransformerEncoder):
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(args, dictionary, embed_tokens)
- self.ensemble_models = None
-
- @ensemble_encoder
- def forward(self, *args, **kwargs):
- return super().forward(*args, **kwargs)
-
-
-class FairseqNATDecoder(TransformerDecoder):
- def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False):
- super().__init__(args, dictionary, embed_tokens, no_encoder_attn)
- self.ensemble_models = None
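The `ensemble_decoder` wrapper above combines the models' normalized scores with `torch.logsumexp(...) - log(num_models)`, which is averaging probabilities in log space. A minimal pure-Python sketch of that identity (no torch; `avg_log_probs` is a hypothetical helper name):

```python
import math

def avg_log_probs(log_probs):
    # Average probabilities in log space, as the ensemble_decoder
    # wrapper does: logsumexp over models minus log(num_models).
    m = max(log_probs)
    lse = m + math.log(sum(math.exp(lp - m) for lp in log_probs))
    return lse - math.log(len(log_probs))

# Averaging probabilities 0.5 and 0.25 in log space gives 0.375.
avg = avg_log_probs([math.log(0.5), math.log(0.25)])
print(math.exp(avg))
```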
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/constraints/validate.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/constraints/validate.py
deleted file mode 100644
index d531ad9f39b1df42c98fe8f26ad61fe53a9ac0c5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/constraints/validate.py
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-
-"""Reads in a fairseq output file, and verifies that the constraints
-(C- lines) are present in the output (the first H- line). Assumes that
-constraints are listed prior to the first hypothesis.
-"""
-
-constraints = []
-found = 0
-total = 0
-for line in sys.stdin:
- if line.startswith("C-"):
- constraints.append(line.rstrip().split("\t")[1])
- elif line.startswith("H-"):
- text = line.split("\t")[2]
-
- for constraint in constraints:
- total += 1
- if constraint in text:
- found += 1
- else:
- print(f"No {constraint} in {text}", file=sys.stderr)
-
- constraints = []
-
-print(f"Found {found} / {total} = {100 * found / total:.1f}%")
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/data/replabels.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/data/replabels.py
deleted file mode 100644
index 441f1bd432b95865fc981c6c695cee299b07ed62..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_recognition/data/replabels.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Replabel transforms for use with flashlight's ASG criterion.
-"""
-
-
-def replabel_symbol(i):
- """
- Replabel symbols used in flashlight, currently just "1", "2", ...
- This prevents training with numeral tokens, so this might change in the future
- """
- return str(i)
-
-
-def pack_replabels(tokens, dictionary, max_reps):
- """
- Pack a token sequence so that repeated symbols are replaced by replabels
- """
- if len(tokens) == 0 or max_reps <= 0:
- return tokens
-
- replabel_value_to_idx = [0] * (max_reps + 1)
- for i in range(1, max_reps + 1):
- replabel_value_to_idx[i] = dictionary.index(replabel_symbol(i))
-
- result = []
- prev_token = -1
- num_reps = 0
- for token in tokens:
- if token == prev_token and num_reps < max_reps:
- num_reps += 1
- else:
- if num_reps > 0:
- result.append(replabel_value_to_idx[num_reps])
- num_reps = 0
- result.append(token)
- prev_token = token
- if num_reps > 0:
- result.append(replabel_value_to_idx[num_reps])
- return result
-
-
-def unpack_replabels(tokens, dictionary, max_reps):
- """
- Unpack a token sequence so that replabels are replaced by repeated symbols
- """
- if len(tokens) == 0 or max_reps <= 0:
- return tokens
-
- replabel_idx_to_value = {}
- for i in range(1, max_reps + 1):
- replabel_idx_to_value[dictionary.index(replabel_symbol(i))] = i
-
- result = []
- prev_token = -1
- for token in tokens:
- try:
- for _ in range(replabel_idx_to_value[token]):
- result.append(prev_token)
- prev_token = -1
- except KeyError:
- result.append(token)
- prev_token = token
- return result
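To see what `pack_replabels` produces, here is a self-contained sketch of the same algorithm with a toy stand-in for the fairseq dictionary (`ToyDict` and its index scheme are assumptions for illustration): a run of three 5s with `max_reps=2` becomes the token once plus the replabel for "2".

```python
class ToyDict:
    # Hypothetical stand-in for a fairseq dictionary: replabel
    # symbols "1" and "2" map to indices 101 and 102.
    def index(self, sym):
        return 100 + int(sym)

def pack(tokens, dictionary, max_reps):
    # Same algorithm as pack_replabels above: repeated runs of a
    # token collapse into the token followed by a replabel index.
    rep_idx = [0] + [dictionary.index(str(i)) for i in range(1, max_reps + 1)]
    result, prev, reps = [], -1, 0
    for t in tokens:
        if t == prev and reps < max_reps:
            reps += 1
        else:
            if reps:
                result.append(rep_idx[reps])
                reps = 0
            result.append(t)
            prev = t
    if reps:
        result.append(rep_idx[reps])
    return result

print(pack([5, 5, 5, 7], ToyDict(), 2))  # [5, 102, 7]
```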
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamicconv_layer/setup.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamicconv_layer/setup.py
deleted file mode 100644
index 6a21f7e2ee0840a3b251522275a0b32a856951d7..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamicconv_layer/setup.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from setuptools import setup
-from torch.utils.cpp_extension import BuildExtension, CUDAExtension
-
-
-setup(
- name="dynamicconv_layer",
- ext_modules=[
- CUDAExtension(
- name="dynamicconv_cuda",
- sources=[
- "dynamicconv_cuda.cpp",
- "dynamicconv_cuda_kernel.cu",
- ],
- ),
- ],
- cmdclass={"build_ext": BuildExtension},
-)

diff --git a/spaces/OFA-Sys/OFA-vqa/run_scripts/caption/train_caption_stage1.sh b/spaces/OFA-Sys/OFA-vqa/run_scripts/caption/train_caption_stage1.sh
deleted file mode 100644
index 08cf67ee91eebe144996fcf559c0684dc81e1494..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/run_scripts/caption/train_caption_stage1.sh
+++ /dev/null
@@ -1,104 +0,0 @@
-#!/usr/bin/env bash
-
-log_dir=./stage1_logs
-save_dir=./stage1_checkpoints
-mkdir -p $log_dir $save_dir
-
-bpe_dir=../../utils/BPE
-user_dir=../../ofa_module
-
-data_dir=../../dataset/caption_data
-data=${data_dir}/caption_stage1_train.tsv,${data_dir}/caption_val.tsv
-restore_file=../../checkpoints/ofa_large.pt
-selected_cols=0,4,2
-
-task=caption
-arch=ofa_large
-criterion=ajust_label_smoothed_cross_entropy
-label_smoothing=0.1
-lr=1e-5
-max_epoch=5
-warmup_ratio=0.06
-batch_size=8
-update_freq=4
-resnet_drop_path_rate=0.0
-encoder_drop_path_rate=0.1
-decoder_drop_path_rate=0.1
-dropout=0.1
-attention_dropout=0.0
-max_src_length=80
-max_tgt_length=20
-num_bins=1000
-patch_image_size=480
-eval_cider_cached=${data_dir}/cider_cached_tokens/coco-valid-words.p
-drop_worst_ratio=0.2
-
-for max_epoch in {2,}; do
- echo "max_epoch "${max_epoch}
- for warmup_ratio in {0.06,}; do
- echo "warmup_ratio "${warmup_ratio}
- for drop_worst_after in {2500,}; do
- echo "drop_worst_after "${drop_worst_after}
-
- log_file=${log_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}".log"
- save_path=${save_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}
- mkdir -p $save_path
-
- CUDA_VISIBLE_DEVICES=0,1,2,3 python3 ../../train.py \
- $data \
- --selected-cols=${selected_cols} \
- --bpe-dir=${bpe_dir} \
- --user-dir=${user_dir} \
- --restore-file=${restore_file} \
- --reset-optimizer --reset-dataloader --reset-meters \
- --save-dir=${save_path} \
- --task=${task} \
- --arch=${arch} \
- --criterion=${criterion} \
- --label-smoothing=${label_smoothing} \
- --batch-size=${batch_size} \
- --update-freq=${update_freq} \
- --encoder-normalize-before \
- --decoder-normalize-before \
- --share-decoder-input-output-embed \
- --share-all-embeddings \
- --layernorm-embedding \
- --patch-layernorm-embedding \
- --code-layernorm-embedding \
- --resnet-drop-path-rate=${resnet_drop_path_rate} \
- --encoder-drop-path-rate=${encoder_drop_path_rate} \
- --decoder-drop-path-rate=${decoder_drop_path_rate} \
- --dropout=${dropout} \
- --attention-dropout=${attention_dropout} \
- --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \
- --lr-scheduler=polynomial_decay --lr=${lr} \
- --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \
- --log-format=simple --log-interval=10 \
- --fixed-validation-seed=7 \
- --no-epoch-checkpoints --keep-best-checkpoints=1 \
- --save-interval=1 --validate-interval=1 \
- --save-interval-updates=500 --validate-interval-updates=500 \
- --eval-cider \
- --eval-cider-cached-tokens=${eval_cider_cached} \
- --eval-args='{"beam":5,"max_len_b":16,"no_repeat_ngram_size":3}' \
- --best-checkpoint-metric=cider --maximize-best-checkpoint-metric \
- --max-src-length=${max_src_length} \
- --max-tgt-length=${max_tgt_length} \
- --find-unused-parameters \
- --freeze-encoder-embedding \
- --freeze-decoder-embedding \
- --add-type-embedding \
- --scale-attn \
- --scale-fc \
- --scale-heads \
- --disable-entangle \
- --num-bins=${num_bins} \
- --patch-image-size=${patch_image_size} \
- --drop-worst-ratio=${drop_worst_ratio} \
- --drop-worst-after=${drop_worst_after} \
- --fp16 \
- --fp16-scale-window=512 \
- --num-workers=0 >> ${log_file} 2>&1
- done
- done
-done
\ No newline at end of file
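The flags in the script above imply the effective batch size per optimizer step. Assuming the usual fairseq semantics (per-GPU batch size, multiplied by gradient accumulation `--update-freq` and the number of GPUs), that works out to:

```python
# Effective batch size implied by the training flags above
# (assumption: fairseq multiplies per-GPU batch size by
# update_freq and the data-parallel world size).
batch_size = 8    # --batch-size
update_freq = 4   # --update-freq
num_gpus = 4      # CUDA_VISIBLE_DEVICES=0,1,2,3
effective = batch_size * update_freq * num_gpus
print(effective)  # 128 sequences per optimizer step
```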
diff --git a/spaces/Omnibus/MusicGen/audiocraft/modules/activations.py b/spaces/Omnibus/MusicGen/audiocraft/modules/activations.py
deleted file mode 100644
index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/MusicGen/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
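`CustomGLU.forward` above splits the input in half along `dim` and gates the first half with the activation of the second. A minimal 1-D sketch without torch (plain lists; `glu` is a hypothetical helper, not the module itself):

```python
import math

def glu(x, activation):
    # Minimal 1-D sketch of CustomGLU.forward: split the vector in
    # half and gate the first half with activation(second half).
    assert len(x) % 2 == 0
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return [ai * activation(bi) for ai, bi in zip(a, b)]

sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
# sigmoid(0) == 0.5, so the gate halves each value in the first half.
print(glu([2.0, 4.0, 0.0, 0.0], sigmoid))  # [1.0, 2.0]
```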
diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/LICENSE.md b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/LICENSE.md
deleted file mode 100644
index f288702d2fa16d3cdf0035b15a9fcbc552cd88e7..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/LICENSE.md
+++ /dev/null
@@ -1,674 +0,0 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users. We, the Free Software Foundation, use the
-GNU General Public License for most of our software; it applies also to
-any other work released this way by its authors. You can apply it to
-your programs, too.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you have
-certain responsibilities if you distribute copies of the software, or if
-you modify it: responsibilities to respect the freedom of others.
-
- For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
- Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
- For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
- Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the manufacturer
-can do so. This is fundamentally incompatible with the aim of
-protecting users' freedom to change the software. The systematic
-pattern of such abuse occurs in the area of products for individuals to
-use, which is precisely where it is most unacceptable. Therefore, we
-have designed this version of the GPL to prohibit the practice for those
-products. If such problems arise substantially in other domains, we
-stand ready to extend this provision to those domains in future versions
-of the GPL, as needed to protect the freedom of users.
-
- Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish to
-avoid the special danger that patents applied to a free program could
-make it effectively proprietary. To prevent this, the GPL assures that
-patents cannot be used to render the program non-free.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Use with the GNU Affero General Public License.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
-    along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    <program>  Copyright (C) <year>  <name of author>
- This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
- This is free software, and you are welcome to redistribute it
- under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
- The GNU General Public License does not permit incorporating your program
-into proprietary programs. If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library. If this is what you want to do, use the GNU Lesser General
-Public License instead of this License. But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/__init__.py
deleted file mode 100644
index aed4fa323c2c8001593fdfdcd878a21718800167..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .models import *
diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/tools/easyconvert.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/tools/easyconvert.py
deleted file mode 100644
index 3649a93f947d47beb872fdc3f933d0b81fc56b37..0000000000000000000000000000000000000000
--- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/tools/easyconvert.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from .geometry import *
-
-def nfeats_of(rottype):
- if rottype in ["rotvec", "axisangle"]:
- return 3
- elif rottype in ["rotquat", "quaternion"]:
- return 4
- elif rottype in ["rot6d", "6drot", "rotation6d"]:
- return 6
- elif rottype in ["rotmat"]:
- return 9
- else:
- raise TypeError("This rotation type doesn't have features.")
-
-
-def axis_angle_to(newtype, rotations):
- if newtype in ["matrix"]:
- rotations = axis_angle_to_matrix(rotations)
- return rotations
- elif newtype in ["rotmat"]:
- rotations = axis_angle_to_matrix(rotations)
- rotations = matrix_to("rotmat", rotations)
- return rotations
- elif newtype in ["rot6d", "6drot", "rotation6d"]:
- rotations = axis_angle_to_matrix(rotations)
- rotations = matrix_to("rot6d", rotations)
- return rotations
- elif newtype in ["rotquat", "quaternion"]:
- rotations = axis_angle_to_quaternion(rotations)
- return rotations
- elif newtype in ["rotvec", "axisangle"]:
- return rotations
- else:
- raise NotImplementedError
-
-
-def matrix_to(newtype, rotations):
- if newtype in ["matrix"]:
- return rotations
- if newtype in ["rotmat"]:
- rotations = rotations.reshape((*rotations.shape[:-2], 9))
- return rotations
- elif newtype in ["rot6d", "6drot", "rotation6d"]:
- rotations = matrix_to_rotation_6d(rotations)
- return rotations
- elif newtype in ["rotquat", "quaternion"]:
- rotations = matrix_to_quaternion(rotations)
- return rotations
- elif newtype in ["rotvec", "axisangle"]:
- rotations = matrix_to_axis_angle(rotations)
- return rotations
- else:
- raise NotImplementedError
-
-
-def to_matrix(oldtype, rotations):
- if oldtype in ["matrix"]:
- return rotations
- if oldtype in ["rotmat"]:
- rotations = rotations.reshape((*rotations.shape[:-2], 3, 3))
- return rotations
- elif oldtype in ["rot6d", "6drot", "rotation6d"]:
- rotations = rotation_6d_to_matrix(rotations)
- return rotations
- elif oldtype in ["rotquat", "quaternion"]:
- rotations = quaternion_to_matrix(rotations)
- return rotations
- elif oldtype in ["rotvec", "axisangle"]:
- rotations = axis_angle_to_matrix(rotations)
- return rotations
- else:
- raise NotImplementedError
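
The deleted `easyconvert.py` above leans on geometry helpers (`matrix_to_rotation_6d`, `rotation_6d_to_matrix`, `axis_angle_to_matrix`, …) imported from a sibling `geometry` module. As a reference point, the rot6d-to-matrix mapping can be sketched in plain NumPy — a hypothetical `rotation_6d_to_matrix_np` following the common row convention (Gram-Schmidt on the first two rows); the actual `geometry` module may use a different layout:

```python
import numpy as np

def rotation_6d_to_matrix_np(d6):
    """Map a 6D rotation representation (first two matrix rows, possibly
    unnormalized) to a proper 3x3 rotation matrix via Gram-Schmidt."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1      # remove the b1 component of a2
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)              # third row completes a right-handed basis
    return np.stack((b1, b2, b3), axis=0)

# The identity's first two rows map back to the identity matrix.
R = rotation_6d_to_matrix_np(np.array([1., 0., 0., 0., 1., 0.]))
print(np.allclose(R, np.eye(3)))  # True
```

By construction the output is orthonormal with determinant +1, which is why `matrix_to("rot6d", ...)` followed by the inverse conversion is a safe round trip even for slightly noisy inputs.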
diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/simple_augment.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/simple_augment.py
deleted file mode 100644
index 77776cd134046dc012e021d0ab80c1e0b90d2275..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/simple_augment.py
+++ /dev/null
@@ -1,478 +0,0 @@
-import math
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-import numpy as np
-
-from torch import distributed as dist
-#from distributed import reduce_sum
-from models.stylegan2.op2 import upfirdn2d
-
-def reduce_sum(tensor):
- if not dist.is_available():
- return tensor
-
- if not dist.is_initialized():
- return tensor
-
- tensor = tensor.clone()
- dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
-
- return tensor
-
-
-class AdaptiveAugment:
- def __init__(self, ada_aug_target, ada_aug_len, update_every, device):
- self.ada_aug_target = ada_aug_target
- self.ada_aug_len = ada_aug_len
- self.update_every = update_every
-
- self.ada_update = 0
- self.ada_aug_buf = torch.tensor([0.0, 0.0], device=device)
- self.r_t_stat = 0
- self.ada_aug_p = 0
-
- @torch.no_grad()
- def tune(self, real_pred):
- self.ada_aug_buf += torch.tensor(
- (torch.sign(real_pred).sum().item(), real_pred.shape[0]),
- device=real_pred.device,
- )
- self.ada_update += 1
-
- if self.ada_update % self.update_every == 0:
- self.ada_aug_buf = reduce_sum(self.ada_aug_buf)
- pred_signs, n_pred = self.ada_aug_buf.tolist()
-
- self.r_t_stat = pred_signs / n_pred
-
- if self.r_t_stat > self.ada_aug_target:
- sign = 1
-
- else:
- sign = -1
-
- self.ada_aug_p += sign * n_pred / self.ada_aug_len
- self.ada_aug_p = min(1, max(0, self.ada_aug_p))
- self.ada_aug_buf.mul_(0)
- self.ada_update = 0
-
- return self.ada_aug_p
-
-
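
For context, `AdaptiveAugment.tune` above implements the StyleGAN2-ADA heuristic: track `r_t`, the mean sign of the discriminator's outputs on real images, and nudge the augmentation probability `p` up when `r_t` overshoots a target (the discriminator is overfitting) and down otherwise. A minimal plain-Python sketch of that update — the `target` and `aug_len` values here are illustrative, not the trainer's actual settings:

```python
# Sketch of the ADA p-update: r_t is the mean sign of discriminator
# outputs on reals; p moves toward keeping r_t at the target value.
def ada_step(p, real_pred_signs, target=0.6, aug_len=500_000):
    n = len(real_pred_signs)
    r_t = sum(real_pred_signs) / n
    sign = 1 if r_t > target else -1
    p += sign * n / aug_len          # step size proportional to batch size
    return min(1.0, max(0.0, p)), r_t

p = 0.0
# Discriminator confident on every real (r_t = 1.0 > 0.6) -> p increases.
p, r_t = ada_step(p, [1] * 64)
print(p > 0.0, r_t)  # True 1.0
```

The class version accumulates `(sum of signs, count)` in `ada_aug_buf` across `update_every` batches (all-reduced across workers) before taking one such step, which is why `p` changes slowly and smoothly during training.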
-SYM6 = (
- 0.015404109327027373,
- 0.0034907120842174702,
- -0.11799011114819057,
- -0.048311742585633,
- 0.4910559419267466,
- 0.787641141030194,
- 0.3379294217276218,
- -0.07263752278646252,
- -0.021060292512300564,
- 0.04472490177066578,
- 0.0017677118642428036,
- -0.007800708325034148,
-)
-
-
-def translate_mat(t_x, t_y, device="cpu"):
- batch = t_x.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- translate = torch.stack((t_x, t_y), 1)
- mat[:, :2, 2] = translate
-
- return mat
-
-
-def rotate_mat(theta, device="cpu"):
- batch = theta.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- sin_t = torch.sin(theta)
- cos_t = torch.cos(theta)
- rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2)
- mat[:, :2, :2] = rot
-
- return mat
-
-
-def scale_mat(s_x, s_y, device="cpu"):
- batch = s_x.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- mat[:, 0, 0] = s_x
- mat[:, 1, 1] = s_y
-
- return mat
-
-
-def translate3d_mat(t_x, t_y, t_z):
- batch = t_x.shape[0]
-
- mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- translate = torch.stack((t_x, t_y, t_z), 1)
- mat[:, :3, 3] = translate
-
- return mat
-
-
-def rotate3d_mat(axis, theta):
- batch = theta.shape[0]
-
- u_x, u_y, u_z = axis
-
- eye = torch.eye(3).unsqueeze(0)
- cross = torch.tensor([(0, -u_z, u_y), (u_z, 0, -u_x), (-u_y, u_x, 0)]).unsqueeze(0)
- outer = torch.tensor(axis)
- outer = (outer.unsqueeze(1) * outer).unsqueeze(0)
-
- sin_t = torch.sin(theta).view(-1, 1, 1)
- cos_t = torch.cos(theta).view(-1, 1, 1)
-
- rot = cos_t * eye + sin_t * cross + (1 - cos_t) * outer
-
- eye_4 = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- eye_4[:, :3, :3] = rot
-
- return eye_4
-
-
-def scale3d_mat(s_x, s_y, s_z):
- batch = s_x.shape[0]
-
- mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- mat[:, 0, 0] = s_x
- mat[:, 1, 1] = s_y
- mat[:, 2, 2] = s_z
-
- return mat
-
-
-def luma_flip_mat(axis, i):
- batch = i.shape[0]
-
- eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- axis = torch.tensor(axis + (0,))
- flip = 2 * torch.ger(axis, axis) * i.view(-1, 1, 1)
-
- return eye - flip
-
-
-def saturation_mat(axis, i):
- batch = i.shape[0]
-
- eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- axis = torch.tensor(axis + (0,))
- axis = torch.ger(axis, axis)
- saturate = axis + (eye - axis) * i.view(-1, 1, 1)
-
- return saturate
-
-
-def lognormal_sample(size, mean=0, std=1, device="cpu"):
- return torch.empty(size, device=device).log_normal_(mean=mean, std=std)
-
-
-def category_sample(size, categories, device="cpu"):
- category = torch.tensor(categories, device=device)
- sample = torch.randint(high=len(categories), size=(size,), device=device)
-
- return category[sample]
-
-
-def uniform_sample(size, low, high, device="cpu"):
- return torch.empty(size, device=device).uniform_(low, high)
-
-
-def normal_sample(size, mean=0, std=1, device="cpu"):
- return torch.empty(size, device=device).normal_(mean, std)
-
-
-def bernoulli_sample(size, p, device="cpu"):
- return torch.empty(size, device=device).bernoulli_(p)
-
-
-def random_mat_apply(p, transform, prev, eye, device="cpu"):
- size = transform.shape[0]
- select = bernoulli_sample(size, p, device=device).view(size, 1, 1)
- select_transform = select * transform + (1 - select) * eye
-
- return select_transform @ prev
-
-
-def sample_affine(p, size, height, width, device="cpu"):
- G = torch.eye(3, device=device).unsqueeze(0).repeat(size, 1, 1)
- eye = G
-
- # flip
- #param = category_sample(size, (0, 1))
- #Gc = scale_mat(1 - 2.0 * param, torch.ones(size), device=device)
- #G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n')
-
- # 90 rotate
- #param = category_sample(size, (0, 3))
- #Gc = rotate_mat(-math.pi / 2 * param, device=device)
- #G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n')
-
- # integer translate
- param = uniform_sample(size, -0.125, 0.125)
- param_height = torch.round(param * height) / height
- param_width = torch.round(param * width) / width
- Gc = translate_mat(param_width, param_height, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('integer translate', G, translate_mat(param_width, param_height), sep='\n')
-
- # isotropic scale
- param = lognormal_sample(size, std=0.1 * math.log(2))
- Gc = scale_mat(param, param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('isotropic scale', G, scale_mat(param, param), sep='\n')
-
- p_rot = 1 - math.sqrt(1 - p)
-
- # pre-rotate
- param = uniform_sample(size, -math.pi * 0.25, math.pi * 0.25)
- Gc = rotate_mat(-param, device=device)
- G = random_mat_apply(p_rot, Gc, G, eye, device=device)
- # print('pre-rotate', G, rotate_mat(-param), sep='\n')
-
- # anisotropic scale
- param = lognormal_sample(size, std=0.1 * math.log(2))
- Gc = scale_mat(param, 1 / param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n')
-
- # post-rotate
- param = uniform_sample(size, -math.pi * 0.25, math.pi * 0.25)
- Gc = rotate_mat(-param, device=device)
- G = random_mat_apply(p_rot, Gc, G, eye, device=device)
- # print('post-rotate', G, rotate_mat(-param), sep='\n')
-
- # fractional translate
- param = normal_sample(size, std=0.125)
- Gc = translate_mat(param, param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('fractional translate', G, translate_mat(param, param), sep='\n')
-
- return G
-
-
-def sample_color(p, size):
- C = torch.eye(4).unsqueeze(0).repeat(size, 1, 1)
- eye = C
- axis_val = 1 / math.sqrt(3)
- axis = (axis_val, axis_val, axis_val)
-
- # brightness
- param = normal_sample(size, std=0.2)
- Cc = translate3d_mat(param, param, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # contrast
- param = lognormal_sample(size, std=0.5 * math.log(2))
- Cc = scale3d_mat(param, param, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # luma flip
- param = category_sample(size, (0, 1))
- Cc = luma_flip_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # hue rotation
- param = uniform_sample(size, -math.pi, math.pi)
- Cc = rotate3d_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # saturation
- param = lognormal_sample(size, std=1 * math.log(2))
- Cc = saturation_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- return C
-
-
-def make_grid(shape, x0, x1, y0, y1, device):
- n, c, h, w = shape
- grid = torch.empty(n, h, w, 3, device=device)
- grid[:, :, :, 0] = torch.linspace(x0, x1, w, device=device)
- grid[:, :, :, 1] = torch.linspace(y0, y1, h, device=device).unsqueeze(-1)
- grid[:, :, :, 2] = 1
-
- return grid
-
-
-def affine_grid(grid, mat):
- n, h, w, _ = grid.shape
- return (grid.view(n, h * w, 3) @ mat.transpose(1, 2)).view(n, h, w, 2)
-
-
-def get_padding(G, height, width, kernel_size):
- device = G.device
-
- cx = (width - 1) / 2
- cy = (height - 1) / 2
- cp = torch.tensor(
- [(-cx, -cy, 1), (cx, -cy, 1), (cx, cy, 1), (-cx, cy, 1)], device=device
- )
- cp = G @ cp.T
-
- pad_k = kernel_size // 4
-
- pad = cp[:, :2, :].permute(1, 0, 2).flatten(1)
- pad = torch.cat((-pad, pad)).max(1).values
- pad = pad + torch.tensor([pad_k * 2 - cx, pad_k * 2 - cy] * 2, device=device)
- pad = pad.max(torch.tensor([0, 0] * 2, device=device))
- pad = pad.min(torch.tensor([width - 1, height - 1] * 2, device=device))
-
- pad_x1, pad_y1, pad_x2, pad_y2 = pad.ceil().to(torch.int32)
-
- return pad_x1, pad_x2, pad_y1, pad_y2
-
-
-def try_sample_affine_and_pad(img, p, kernel_size, G=None):
- batch, _, height, width = img.shape
-
- G_try = G
-
- if G is None:
- G_try = torch.inverse(sample_affine(p, batch, height, width))
-
- pad_x1, pad_x2, pad_y1, pad_y2 = get_padding(G_try, height, width, kernel_size)
-
- img_pad = F.pad(img, (pad_x1, pad_x2, pad_y1, pad_y2), mode="reflect")
-
- return img_pad, G_try, (pad_x1, pad_x2, pad_y1, pad_y2)
-
-
-class GridSampleForward(autograd.Function):
- @staticmethod
- def forward(ctx, input, grid):
- out = F.grid_sample(
- input, grid, mode="bilinear", padding_mode="zeros", align_corners=False
- )
- ctx.save_for_backward(input, grid)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, grid = ctx.saved_tensors
- grad_input, grad_grid = GridSampleBackward.apply(grad_output, input, grid)
-
- return grad_input, grad_grid
-
-
-class GridSampleBackward(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input, grid):
- op = torch._C._jit_get_operation("aten::grid_sampler_2d_backward")
- grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
- ctx.save_for_backward(grid)
-
- return grad_input, grad_grid
-
- @staticmethod
- def backward(ctx, grad_grad_input, grad_grad_grid):
- grid, = ctx.saved_tensors
- grad_grad_output = None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = GridSampleForward.apply(grad_grad_input, grid)
-
- return grad_grad_output, None, None
-
-
-grid_sample = GridSampleForward.apply
-
-
-def scale_mat_single(s_x, s_y):
- return torch.tensor(((s_x, 0, 0), (0, s_y, 0), (0, 0, 1)), dtype=torch.float32)
-
-
-def translate_mat_single(t_x, t_y):
- return torch.tensor(((1, 0, t_x), (0, 1, t_y), (0, 0, 1)), dtype=torch.float32)
-
-
-def random_apply_affine(img, p, G=None, antialiasing_kernel=SYM6):
- kernel = antialiasing_kernel
- len_k = len(kernel)
-
- kernel = torch.as_tensor(kernel).to(img)
- # kernel = torch.ger(kernel, kernel).to(img)
- kernel_flip = torch.flip(kernel, (0,))
-
- img_pad, G, (pad_x1, pad_x2, pad_y1, pad_y2) = try_sample_affine_and_pad(
- img, p, len_k, G
- )
-
- G_inv = (
- translate_mat_single((pad_x1 - pad_x2).item() / 2, (pad_y1 - pad_y2).item() / 2)
- @ G
- )
- up_pad = (
- (len_k + 2 - 1) // 2,
- (len_k - 2) // 2,
- (len_k + 2 - 1) // 2,
- (len_k - 2) // 2,
- )
- img_2x = upfirdn2d(img_pad, kernel.unsqueeze(0), up=(2, 1), pad=(*up_pad[:2], 0, 0))
- img_2x = upfirdn2d(img_2x, kernel.unsqueeze(1), up=(1, 2), pad=(0, 0, *up_pad[2:]))
- G_inv = scale_mat_single(2, 2) @ G_inv @ scale_mat_single(1 / 2, 1 / 2)
- G_inv = translate_mat_single(-0.5, -0.5) @ G_inv @ translate_mat_single(0.5, 0.5)
- batch_size, channel, height, width = img.shape
- pad_k = len_k // 4
- shape = (batch_size, channel, (height + pad_k * 2) * 2, (width + pad_k * 2) * 2)
- G_inv = (
- scale_mat_single(2 / img_2x.shape[3], 2 / img_2x.shape[2])
- @ G_inv
- @ scale_mat_single(1 / (2 / shape[3]), 1 / (2 / shape[2]))
- )
- grid = F.affine_grid(G_inv[:, :2, :].to(img_2x), shape, align_corners=False)
- img_affine = grid_sample(img_2x, grid)
- d_p = -pad_k * 2
- down_pad = (
- d_p + (len_k - 2 + 1) // 2,
- d_p + (len_k - 2) // 2,
- d_p + (len_k - 2 + 1) // 2,
- d_p + (len_k - 2) // 2,
- )
- img_down = upfirdn2d(
- img_affine, kernel_flip.unsqueeze(0), down=(2, 1), pad=(*down_pad[:2], 0, 0)
- )
- img_down = upfirdn2d(
- img_down, kernel_flip.unsqueeze(1), down=(1, 2), pad=(0, 0, *down_pad[2:])
- )
-
- return img_down, G
-
-
-def apply_color(img, mat):
- batch = img.shape[0]
- img = img.permute(0, 2, 3, 1)
- mat_mul = mat[:, :3, :3].transpose(1, 2).view(batch, 1, 3, 3)
- mat_add = mat[:, :3, 3].view(batch, 1, 1, 3)
- img = img @ mat_mul + mat_add
- img = img.permute(0, 3, 1, 2)
-
- return img
-
-
-def random_apply_color(img, p, C=None):
- if C is None:
- C = sample_color(p, img.shape[0])
-
- img = apply_color(img, C.to(img))
-
- return img, C
-
-
-def augment(img, p, transform_matrix=(None, None)):
- img, G = random_apply_affine(img, p, transform_matrix[0])
- img, C = random_apply_color(img, p, transform_matrix[1])
-
- return img, (G, C)
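The helpers above build 2D affine transforms as 3x3 matrices in homogeneous coordinates, so scaling and translation compose by plain matrix multiplication (as in the `G_inv` chains inside `random_apply_affine`). A minimal pure-Python sketch of that composition — nested lists stand in for the torch tensors the original uses:

```python
# Pure-Python sketch of the homogeneous-coordinate transforms used above.
# scale_mat / translate_mat mirror scale_mat_single / translate_mat_single.

def scale_mat(s_x, s_y):
    return [[s_x, 0, 0], [0, s_y, 0], [0, 0, 1]]

def translate_mat(t_x, t_y):
    return [[1, 0, t_x], [0, 1, t_y], [0, 0, 1]]

def matmul(a, b):
    # 3x3 matrix product, standing in for the @ operator on tensors.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(mat, x, y):
    # Apply a 3x3 homogeneous transform to the point (x, y).
    vx = mat[0][0] * x + mat[0][1] * y + mat[0][2]
    vy = mat[1][0] * x + mat[1][1] * y + mat[1][2]
    return vx, vy

# matmul(S, T) applied to a point translates first, then scales,
# so the translation offset gets scaled as well:
m = matmul(scale_mat(2, 2), translate_mat(3, 0))
print(apply(m, 1, 1))  # -> (8, 2): (1 + 3) * 2 = 8, 1 * 2 = 2
```

Order matters: composing the same two matrices the other way around, `matmul(translate_mat(3, 0), scale_mat(2, 2))`, scales first and maps (1, 1) to (5, 2).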
diff --git a/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/README.md b/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/README.md
deleted file mode 100644
index a4e4d994277b0ddf86f6bf76c9149a2632024d8b..0000000000000000000000000000000000000000
--- a/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Text To Music
-emoji: ⚡
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: unknown
-duplicated_from: Mubert/Text-to-Music
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/hrnet.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/hrnet.py
deleted file mode 100644
index 331ebf3ccb8597b3f507670753789073fc3c946d..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/hrnet.py
+++ /dev/null
@@ -1,555 +0,0 @@
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init,
- kaiming_init)
-from annotator.uniformer.mmcv.runner import load_checkpoint
-from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm
-
-from annotator.uniformer.mmseg.ops import Upsample, resize
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-from .resnet import BasicBlock, Bottleneck
-
-
-class HRModule(nn.Module):
- """High-Resolution Module for HRNet.
-
- In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/exchange
- across the branches is performed in this module.
- """
-
- def __init__(self,
- num_branches,
- blocks,
- num_blocks,
- in_channels,
- num_channels,
- multiscale_output=True,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True)):
- super(HRModule, self).__init__()
- self._check_branches(num_branches, num_blocks, in_channels,
- num_channels)
-
- self.in_channels = in_channels
- self.num_branches = num_branches
-
- self.multiscale_output = multiscale_output
- self.norm_cfg = norm_cfg
- self.conv_cfg = conv_cfg
- self.with_cp = with_cp
- self.branches = self._make_branches(num_branches, blocks, num_blocks,
- num_channels)
- self.fuse_layers = self._make_fuse_layers()
- self.relu = nn.ReLU(inplace=False)
-
- def _check_branches(self, num_branches, num_blocks, in_channels,
- num_channels):
- """Check branches configuration."""
- if num_branches != len(num_blocks):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_BLOCKS(' \
- f'{len(num_blocks)})'
- raise ValueError(error_msg)
-
- if num_branches != len(num_channels):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_CHANNELS(' \
- f'{len(num_channels)})'
- raise ValueError(error_msg)
-
- if num_branches != len(in_channels):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_INCHANNELS(' \
- f'{len(in_channels)})'
- raise ValueError(error_msg)
-
- def _make_one_branch(self,
- branch_index,
- block,
- num_blocks,
- num_channels,
- stride=1):
- """Build one branch."""
- downsample = None
- if stride != 1 or \
- self.in_channels[branch_index] != \
- num_channels[branch_index] * block.expansion:
- downsample = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- self.in_channels[branch_index],
- num_channels[branch_index] * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- build_norm_layer(self.norm_cfg, num_channels[branch_index] *
- block.expansion)[1])
-
- layers = []
- layers.append(
- block(
- self.in_channels[branch_index],
- num_channels[branch_index],
- stride,
- downsample=downsample,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
- self.in_channels[branch_index] = \
- num_channels[branch_index] * block.expansion
- for i in range(1, num_blocks[branch_index]):
- layers.append(
- block(
- self.in_channels[branch_index],
- num_channels[branch_index],
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*layers)
-
- def _make_branches(self, num_branches, block, num_blocks, num_channels):
- """Build multiple branches."""
- branches = []
-
- for i in range(num_branches):
- branches.append(
- self._make_one_branch(i, block, num_blocks, num_channels))
-
- return nn.ModuleList(branches)
-
- def _make_fuse_layers(self):
- """Build fuse layer."""
- if self.num_branches == 1:
- return None
-
- num_branches = self.num_branches
- in_channels = self.in_channels
- fuse_layers = []
- num_out_branches = num_branches if self.multiscale_output else 1
- for i in range(num_out_branches):
- fuse_layer = []
- for j in range(num_branches):
- if j > i:
- fuse_layer.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[i],
- kernel_size=1,
- stride=1,
- padding=0,
- bias=False),
- build_norm_layer(self.norm_cfg, in_channels[i])[1],
- # we set align_corners=False for HRNet
- Upsample(
- scale_factor=2**(j - i),
- mode='bilinear',
- align_corners=False)))
- elif j == i:
- fuse_layer.append(None)
- else:
- conv_downsamples = []
- for k in range(i - j):
- if k == i - j - 1:
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[i],
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- in_channels[i])[1]))
- else:
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[j],
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- in_channels[j])[1],
- nn.ReLU(inplace=False)))
- fuse_layer.append(nn.Sequential(*conv_downsamples))
- fuse_layers.append(nn.ModuleList(fuse_layer))
-
- return nn.ModuleList(fuse_layers)
-
- def forward(self, x):
- """Forward function."""
- if self.num_branches == 1:
- return [self.branches[0](x[0])]
-
- for i in range(self.num_branches):
- x[i] = self.branches[i](x[i])
-
- x_fuse = []
- for i in range(len(self.fuse_layers)):
- y = 0
- for j in range(self.num_branches):
- if i == j:
- y += x[j]
- elif j > i:
- y = y + resize(
- self.fuse_layers[i][j](x[j]),
- size=x[i].shape[2:],
- mode='bilinear',
- align_corners=False)
- else:
- y += self.fuse_layers[i][j](x[j])
- x_fuse.append(self.relu(y))
- return x_fuse
-
-
-@BACKBONES.register_module()
-class HRNet(nn.Module):
- """HRNet backbone.
-
- High-Resolution Representations for Labeling Pixels and Regions
- arXiv: https://arxiv.org/abs/1904.04514
-
- Args:
- extra (dict): detailed configuration for each stage of HRNet.
- in_channels (int): Number of input image channels. Normally 3.
- conv_cfg (dict): dictionary to construct and config conv layer.
- norm_cfg (dict): dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
-
- Example:
- >>> from annotator.uniformer.mmseg.models import HRNet
- >>> import torch
- >>> extra = dict(
- >>> stage1=dict(
- >>> num_modules=1,
- >>> num_branches=1,
- >>> block='BOTTLENECK',
- >>> num_blocks=(4, ),
- >>> num_channels=(64, )),
- >>> stage2=dict(
- >>> num_modules=1,
- >>> num_branches=2,
- >>> block='BASIC',
- >>> num_blocks=(4, 4),
- >>> num_channels=(32, 64)),
- >>> stage3=dict(
- >>> num_modules=4,
- >>> num_branches=3,
- >>> block='BASIC',
- >>> num_blocks=(4, 4, 4),
- >>> num_channels=(32, 64, 128)),
- >>> stage4=dict(
- >>> num_modules=3,
- >>> num_branches=4,
- >>> block='BASIC',
- >>> num_blocks=(4, 4, 4, 4),
- >>> num_channels=(32, 64, 128, 256)))
- >>> self = HRNet(extra, in_channels=1)
- >>> self.eval()
- >>> inputs = torch.rand(1, 1, 32, 32)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 32, 8, 8)
- (1, 64, 4, 4)
- (1, 128, 2, 2)
- (1, 256, 1, 1)
- """
-
- blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck}
-
- def __init__(self,
- extra,
- in_channels=3,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=False,
- with_cp=False,
- zero_init_residual=False):
- super(HRNet, self).__init__()
- self.extra = extra
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.norm_eval = norm_eval
- self.with_cp = with_cp
- self.zero_init_residual = zero_init_residual
-
- # stem net
- self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- in_channels,
- 64,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False)
-
- self.add_module(self.norm1_name, norm1)
- self.conv2 = build_conv_layer(
- self.conv_cfg,
- 64,
- 64,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False)
-
- self.add_module(self.norm2_name, norm2)
- self.relu = nn.ReLU(inplace=True)
-
- # stage 1
- self.stage1_cfg = self.extra['stage1']
- num_channels = self.stage1_cfg['num_channels'][0]
- block_type = self.stage1_cfg['block']
- num_blocks = self.stage1_cfg['num_blocks'][0]
-
- block = self.blocks_dict[block_type]
- stage1_out_channels = num_channels * block.expansion
- self.layer1 = self._make_layer(block, 64, num_channels, num_blocks)
-
- # stage 2
- self.stage2_cfg = self.extra['stage2']
- num_channels = self.stage2_cfg['num_channels']
- block_type = self.stage2_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition1 = self._make_transition_layer([stage1_out_channels],
- num_channels)
- self.stage2, pre_stage_channels = self._make_stage(
- self.stage2_cfg, num_channels)
-
- # stage 3
- self.stage3_cfg = self.extra['stage3']
- num_channels = self.stage3_cfg['num_channels']
- block_type = self.stage3_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition2 = self._make_transition_layer(pre_stage_channels,
- num_channels)
- self.stage3, pre_stage_channels = self._make_stage(
- self.stage3_cfg, num_channels)
-
- # stage 4
- self.stage4_cfg = self.extra['stage4']
- num_channels = self.stage4_cfg['num_channels']
- block_type = self.stage4_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition3 = self._make_transition_layer(pre_stage_channels,
- num_channels)
- self.stage4, pre_stage_channels = self._make_stage(
- self.stage4_cfg, num_channels)
-
- @property
- def norm1(self):
- """nn.Module: the normalization layer named "norm1" """
- return getattr(self, self.norm1_name)
-
- @property
- def norm2(self):
- """nn.Module: the normalization layer named "norm2" """
- return getattr(self, self.norm2_name)
-
- def _make_transition_layer(self, num_channels_pre_layer,
- num_channels_cur_layer):
- """Make transition layer."""
- num_branches_cur = len(num_channels_cur_layer)
- num_branches_pre = len(num_channels_pre_layer)
-
- transition_layers = []
- for i in range(num_branches_cur):
- if i < num_branches_pre:
- if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
- transition_layers.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- num_channels_pre_layer[i],
- num_channels_cur_layer[i],
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- num_channels_cur_layer[i])[1],
- nn.ReLU(inplace=True)))
- else:
- transition_layers.append(None)
- else:
- conv_downsamples = []
- for j in range(i + 1 - num_branches_pre):
- in_channels = num_channels_pre_layer[-1]
- out_channels = num_channels_cur_layer[i] \
- if j == i - num_branches_pre else in_channels
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels,
- out_channels,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg, out_channels)[1],
- nn.ReLU(inplace=True)))
- transition_layers.append(nn.Sequential(*conv_downsamples))
-
- return nn.ModuleList(transition_layers)
-
- def _make_layer(self, block, inplanes, planes, blocks, stride=1):
- """Make each layer."""
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- build_norm_layer(self.norm_cfg, planes * block.expansion)[1])
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- downsample=downsample,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
- inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(
- block(
- inplanes,
- planes,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*layers)
-
- def _make_stage(self, layer_config, in_channels, multiscale_output=True):
- """Make each stage."""
- num_modules = layer_config['num_modules']
- num_branches = layer_config['num_branches']
- num_blocks = layer_config['num_blocks']
- num_channels = layer_config['num_channels']
- block = self.blocks_dict[layer_config['block']]
-
- hr_modules = []
- for i in range(num_modules):
- # multi_scale_output is only used for the last module
- if not multiscale_output and i == num_modules - 1:
- reset_multiscale_output = False
- else:
- reset_multiscale_output = True
-
- hr_modules.append(
- HRModule(
- num_branches,
- block,
- num_blocks,
- in_channels,
- num_channels,
- reset_multiscale_output,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*hr_modules), in_channels
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
-
- if self.zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottleneck):
- constant_init(m.norm3, 0)
- elif isinstance(m, BasicBlock):
- constant_init(m.norm2, 0)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu(x)
- x = self.conv2(x)
- x = self.norm2(x)
- x = self.relu(x)
- x = self.layer1(x)
-
- x_list = []
- for i in range(self.stage2_cfg['num_branches']):
- if self.transition1[i] is not None:
- x_list.append(self.transition1[i](x))
- else:
- x_list.append(x)
- y_list = self.stage2(x_list)
-
- x_list = []
- for i in range(self.stage3_cfg['num_branches']):
- if self.transition2[i] is not None:
- x_list.append(self.transition2[i](y_list[-1]))
- else:
- x_list.append(y_list[i])
- y_list = self.stage3(x_list)
-
- x_list = []
- for i in range(self.stage4_cfg['num_branches']):
- if self.transition3[i] is not None:
- x_list.append(self.transition3[i](y_list[-1]))
- else:
- x_list.append(y_list[i])
- y_list = self.stage4(x_list)
-
- return y_list
-
- def train(self, mode=True):
- """Convert the model into training mode while keeping the normalization
- layers frozen."""
- super(HRNet, self).train(mode)
- if mode and self.norm_eval:
- for m in self.modules():
- # trick: eval has an effect on BatchNorm only
- if isinstance(m, _BatchNorm):
- m.eval()
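The channel bookkeeping in `_make_layer` / `_make_stage` follows one rule: a branch's output width is `num_channels * block.expansion`, with expansion 4 for `Bottleneck` and 1 for `BasicBlock`. A small standalone sketch of that arithmetic, using the stage configs from the docstring example above (anything else here is illustrative):

```python
# Sketch of HRNet's per-stage channel arithmetic: output channels are
# num_channels * block.expansion (4 for BOTTLENECK, 1 for BASIC).

EXPANSION = {'BOTTLENECK': 4, 'BASIC': 1}

def stage_out_channels(stage_cfg):
    exp = EXPANSION[stage_cfg['block']]
    return [c * exp for c in stage_cfg['num_channels']]

# Configs taken from the `extra` dict in the HRNet docstring example.
extra = {
    'stage1': {'block': 'BOTTLENECK', 'num_channels': (64,)},
    'stage2': {'block': 'BASIC', 'num_channels': (32, 64)},
    'stage4': {'block': 'BASIC', 'num_channels': (32, 64, 128, 256)},
}

print(stage_out_channels(extra['stage1']))  # -> [256]
print(stage_out_channels(extra['stage4']))  # -> [32, 64, 128, 256]
```

This is why `stage1_out_channels` comes out as 256 even though stage 1 is configured with 64 channels, and why the BASIC stages keep their configured widths — matching the (32, 64, 128, 256) output shapes in the docstring.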
diff --git a/spaces/RGBD-SOD/bbsnet/prepare_samples.py b/spaces/RGBD-SOD/bbsnet/prepare_samples.py
deleted file mode 100644
index 1f0d646a87572ed338b5304aa9565c36f0eece2e..0000000000000000000000000000000000000000
--- a/spaces/RGBD-SOD/bbsnet/prepare_samples.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import os
-import shutil
-from typing import List, Tuple
-
-from PIL import Image
-from datasets import load_dataset
-
-
-dataset = load_dataset("RGBD-SOD/test", "v1", split="train", cache_dir="data")
-SAMPLES_DIR = "samples"
-
-
-def prepare_samples():
- samples: List[Tuple[str, str, str]] = []
- for sample in dataset:
- rgb: Image.Image = sample["rgb"]
- depth: Image.Image = sample["depth"]
- gt: Image.Image = sample["gt"]
- name: str = sample["name"]
- dir_path = os.path.join(SAMPLES_DIR, name)
- shutil.rmtree(dir_path, ignore_errors=True)
- os.makedirs(dir_path, exist_ok=True)
- rgb_path = os.path.join(dir_path, "rgb.jpg")
- rgb.save(rgb_path)
- depth_path = os.path.join(dir_path, "depth.jpg")
- depth.save(depth_path)
- gt_path = os.path.join(dir_path, "gt.png")
- gt.save(gt_path)
-
- samples.append((rgb_path, depth_path, gt_path))
- return samples
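`prepare_samples()` writes one directory per sample under `SAMPLES_DIR`, each holding `rgb.jpg`, `depth.jpg`, and `gt.png`. A rough sketch of just that directory layout, with empty placeholder files standing in for the PIL images the real function saves (the helper name and sample names here are hypothetical):

```python
# Sketch of the on-disk layout prepare_samples() produces: one directory
# per sample containing rgb.jpg, depth.jpg and gt.png. Empty files stand
# in for the images saved via PIL's Image.save in the original.
import os
import tempfile

def prepare_layout(samples_dir, names):
    paths = []
    for name in names:
        dir_path = os.path.join(samples_dir, name)
        os.makedirs(dir_path, exist_ok=True)
        triple = [os.path.join(dir_path, fn)
                  for fn in ("rgb.jpg", "depth.jpg", "gt.png")]
        for p in triple:
            open(p, "w").close()  # placeholder for image.save(p)
        paths.append(triple)
    return paths

with tempfile.TemporaryDirectory() as root:
    out = prepare_layout(root, ["scene_001", "scene_002"])
    print(len(out), os.path.basename(out[0][2]))  # -> 2 gt.png
```

Each returned triple `(rgb_path, depth_path, gt_path)` is exactly what the Gradio examples list in the space expects to consume.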
diff --git a/spaces/RMXK/RVC_HFF/infer/modules/train/train.py b/spaces/RMXK/RVC_HFF/infer/modules/train/train.py
deleted file mode 100644
index 550bef391444c9b6c0d8c44ae3a3809b3ade4218..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/modules/train/train.py
+++ /dev/null
@@ -1,723 +0,0 @@
-import os
-import sys
-import logging
-
-logger = logging.getLogger(__name__)
-
-now_dir = os.getcwd()
-sys.path.append(os.path.join(now_dir))
-
-import datetime
-
-from infer.lib.train import utils
-
-hps = utils.get_hparams()
-os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",")
-n_gpus = len(hps.gpus.split("-"))
-from random import randint, shuffle
-
-import torch
-try:
- import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
- if torch.xpu.is_available():
- from infer.modules.ipex import ipex_init
- from infer.modules.ipex.gradscaler import gradscaler_init
- from torch.xpu.amp import autocast
- GradScaler = gradscaler_init()
- ipex_init()
- else:
- from torch.cuda.amp import GradScaler, autocast
-except Exception:
- from torch.cuda.amp import GradScaler, autocast
-
-torch.backends.cudnn.deterministic = False
-torch.backends.cudnn.benchmark = False
-from time import sleep
-from time import time as ttime
-
-import torch.distributed as dist
-import torch.multiprocessing as mp
-
-from torch.nn import functional as F
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-
-from infer.lib.infer_pack import commons
-from infer.lib.train.data_utils import (
- DistributedBucketSampler,
- TextAudioCollate,
- TextAudioCollateMultiNSFsid,
- TextAudioLoader,
- TextAudioLoaderMultiNSFsid,
-)
-
-if hps.version == "v1":
- from infer.lib.infer_pack.models import MultiPeriodDiscriminator
- from infer.lib.infer_pack.models import SynthesizerTrnMs256NSFsid as RVC_Model_f0
- from infer.lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid_nono as RVC_Model_nof0,
- )
-else:
- from infer.lib.infer_pack.models import (
- SynthesizerTrnMs768NSFsid as RVC_Model_f0,
- SynthesizerTrnMs768NSFsid_nono as RVC_Model_nof0,
- MultiPeriodDiscriminatorV2 as MultiPeriodDiscriminator,
- )
-
-from infer.lib.train.losses import (
- discriminator_loss,
- feature_loss,
- generator_loss,
- kl_loss,
-)
-from infer.lib.train.mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from infer.lib.train.process_ckpt import savee
-
-global_step = 0
-import csv
-
-class EpochRecorder:
- def __init__(self):
- self.last_time = ttime()
-
- def record(self):
- now_time = ttime()
- elapsed_time = now_time - self.last_time
- self.last_time = now_time
- elapsed_time_str = str(datetime.timedelta(seconds=elapsed_time))
- current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- return f"[{current_time}] | ({elapsed_time_str})"
-
-def reset_stop_flag():
- with open("csvdb/stop.csv", "w+", newline="") as STOPCSVwrite:
- csv_writer = csv.writer(STOPCSVwrite, delimiter=",")
- csv_writer.writerow(["False"])
-
-def create_model(hps, model_f0, model_nof0):
- filter_length_adjusted = hps.data.filter_length // 2 + 1
- segment_size_adjusted = hps.train.segment_size // hps.data.hop_length
- is_half = hps.train.fp16_run
- sr = hps.sample_rate
-
- model = model_f0 if hps.if_f0 == 1 else model_nof0
-
- return model(
- filter_length_adjusted,
- segment_size_adjusted,
- **hps.model,
- is_half=is_half,
- sr=sr
- )
-
-def move_model_to_cuda_if_available(model, rank):
- if torch.cuda.is_available():
- return model.cuda(rank)
- else:
- return model
-
-def create_optimizer(model, hps):
- return torch.optim.AdamW(
- model.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
-
-def create_ddp_model(model, rank):
- if torch.cuda.is_available():
- return DDP(model, device_ids=[rank])
- else:
- return DDP(model)
-
-def create_dataset(hps, if_f0=True):
- return TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) if if_f0 else TextAudioLoader(hps.data.training_files, hps.data)
-
-def create_sampler(dataset, batch_size, n_gpus, rank):
- return DistributedBucketSampler(
- dataset,
- batch_size * n_gpus,
- # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s
- [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True,
- )
-
-def set_collate_fn(if_f0=True):
- return TextAudioCollateMultiNSFsid() if if_f0 else TextAudioCollate()
-
-
-def main():
- n_gpus = torch.cuda.device_count()
-
- if not torch.cuda.is_available() and torch.backends.mps.is_available():
- n_gpus = 1
- if n_gpus < 1:
- # Patch to unblock people without GPUs; there is probably a better way.
- logger.warning("NO GPU DETECTED: falling back to CPU - this may take a while")
- n_gpus = 1
- os.environ["MASTER_ADDR"] = "localhost"
- os.environ["MASTER_PORT"] = str(randint(20000, 55555))
- children = []
- for i in range(n_gpus):
- subproc = mp.Process(
- target=run,
- args=(
- i,
- n_gpus,
- hps,
- ),
- )
- children.append(subproc)
- subproc.start()
-
- for i in range(n_gpus):
- children[i].join()
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- # utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(
- backend="gloo", init_method="env://", world_size=n_gpus, rank=rank
- )
- torch.manual_seed(hps.train.seed)
- if torch.cuda.is_available():
- torch.cuda.set_device(rank)
-
- if hps.if_f0 == 1:
- train_dataset = TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data)
- else:
- train_dataset = TextAudioLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size * n_gpus,
- # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s
- [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True,
- )
- # The dataloader's workers may run out of shared memory; try raising your shared-memory limit.
- # num_workers=8 -> num_workers=4
- if hps.if_f0 == 1:
- collate_fn = TextAudioCollateMultiNSFsid()
- else:
- collate_fn = TextAudioCollate()
- train_loader = DataLoader(
- train_dataset,
- num_workers=4,
- shuffle=False,
- pin_memory=True,
- collate_fn=collate_fn,
- batch_sampler=train_sampler,
- persistent_workers=True,
- prefetch_factor=8,
- )
- if hps.if_f0 == 1:
- net_g = RVC_Model_f0(
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model,
- is_half=hps.train.fp16_run,
- sr=hps.sample_rate,
- )
- else:
- net_g = RVC_Model_nof0(
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model,
- is_half=hps.train.fp16_run,
- )
- if torch.cuda.is_available():
- net_g = net_g.cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm)
- if torch.cuda.is_available():
- net_d = net_d.cuda(rank)
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- # net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- # net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if hasattr(torch, "xpu") and torch.xpu.is_available():
- pass
- elif torch.cuda.is_available():
- net_g = DDP(net_g, device_ids=[rank])
- net_d = DDP(net_d, device_ids=[rank])
- else:
- net_g = DDP(net_g)
- net_d = DDP(net_d)
-
- try: # resume automatically if a checkpoint can be loaded
- _, _, _, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d
- ) # loading D usually succeeds
- if rank == 0:
- logger.info("loaded D")
- # _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g,load_opt=0)
- _, _, _, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g
- )
- global_step = (epoch_str - 1) * len(train_loader)
- # epoch_str = 1
- # global_step = 0
- except Exception: # if loading fails on the first run, fall back to the pretrained weights
- # traceback.print_exc()
- epoch_str = 1
- global_step = 0
- if hps.pretrainG != "":
- if rank == 0:
- logger.info("loaded pretrained %s" % (hps.pretrainG))
- if hasattr(net_g, "module"):
- logger.info(
- net_g.module.load_state_dict(
- torch.load(hps.pretrainG, map_location="cpu")["model"]
- )
- ) ## for testing, the optimizer state is not loaded
- else:
- logger.info(
- net_g.load_state_dict(
- torch.load(hps.pretrainG, map_location="cpu")["model"]
- )
- ) ## for testing, the optimizer state is not loaded
- if hps.pretrainD != "":
- if rank == 0:
- logger.info("loaded pretrained %s" % (hps.pretrainD))
- if hasattr(net_d, "module"):
- logger.info(
- net_d.module.load_state_dict(
- torch.load(hps.pretrainD, map_location="cpu")["model"]
- )
- )
- else:
- logger.info(
- net_d.load_state_dict(
- torch.load(hps.pretrainD, map_location="cpu")["model"]
- )
- )
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(
- optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(
- optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- cache = []
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(
- rank,
- epoch,
- hps,
- [net_g, net_d],
- [optim_g, optim_d],
- [scheduler_g, scheduler_d],
- scaler,
- [train_loader, None],
- logger,
- [writer, writer_eval],
- cache,
- )
- else:
- train_and_evaluate(
- rank,
- epoch,
- hps,
- [net_g, net_d],
- [optim_g, optim_d],
- [scheduler_g, scheduler_d],
- scaler,
- [train_loader, None],
- None,
- None,
- cache,
- )
- scheduler_g.step()
- scheduler_d.step()
-
-
-def train_and_evaluate(
- rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers, cache
-):
- net_g, net_d = nets
- optim_g, optim_d = optims
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
-
- # Prepare data iterator
- if hps.if_cache_data_in_gpu:
- # Use cache
- data_iterator = cache
- if not cache:
- # Make new cache
- for batch_idx, info in enumerate(train_loader):
- # Unpack
- if hps.if_f0 == 1:
- (
- phone,
- phone_lengths,
- pitch,
- pitchf,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ) = info
- else:
- (
- phone,
- phone_lengths,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ) = info
- # Load on CUDA
- if torch.cuda.is_available():
- phone = phone.cuda(rank, non_blocking=True)
- phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
- if hps.if_f0 == 1:
- pitch = pitch.cuda(rank, non_blocking=True)
- pitchf = pitchf.cuda(rank, non_blocking=True)
- sid = sid.cuda(rank, non_blocking=True)
- spec = spec.cuda(rank, non_blocking=True)
- spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
- wave = wave.cuda(rank, non_blocking=True)
- wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
- # Cache on list
- if hps.if_f0 == 1:
- cache.append(
- (
- batch_idx,
- (
- phone,
- phone_lengths,
- pitch,
- pitchf,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ),
- )
- )
- else:
- cache.append(
- (
- batch_idx,
- (
- phone,
- phone_lengths,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ),
- )
- )
- else:
- # Load shuffled cache
- shuffle(cache)
- else:
- # Loader
- data_iterator = enumerate(train_loader)
-
- # Run steps
- epoch_recorder = EpochRecorder()
- for batch_idx, info in data_iterator:
- # Data
- ## Unpack
- if hps.if_f0 == 1:
- (
- phone,
- phone_lengths,
- pitch,
- pitchf,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ) = info
- else:
- phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info
- ## Load on CUDA
- if not hps.if_cache_data_in_gpu and torch.cuda.is_available():
- phone = phone.cuda(rank, non_blocking=True)
- phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
- if hps.if_f0 == 1:
- pitch = pitch.cuda(rank, non_blocking=True)
- pitchf = pitchf.cuda(rank, non_blocking=True)
- sid = sid.cuda(rank, non_blocking=True)
- spec = spec.cuda(rank, non_blocking=True)
- spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
- wave = wave.cuda(rank, non_blocking=True)
- # wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
-
- # Calculate
- with autocast(enabled=hps.train.fp16_run):
- if hps.if_f0 == 1:
- (
- y_hat,
- ids_slice,
- x_mask,
- z_mask,
- (z, z_p, m_p, logs_p, m_q, logs_q),
- ) = net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid)
- else:
- (
- y_hat,
- ids_slice,
- x_mask,
- z_mask,
- (z, z_p, m_p, logs_p, m_q, logs_q),
- ) = net_g(phone, phone_lengths, spec, spec_lengths, sid)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- y_mel = commons.slice_segments(
- mel, ids_slice, hps.train.segment_size // hps.data.hop_length
- )
- with autocast(enabled=False):
- y_hat_mel = mel_spectrogram_torch(
- y_hat.float().squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- if hps.train.fp16_run:
- y_hat_mel = y_hat_mel.half()
- wave = commons.slice_segments(
- wave, ids_slice * hps.data.hop_length, hps.train.segment_size
- ) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
- y_d_hat_r, y_d_hat_g
- )
- optim_d.zero_grad()
- scaler.scale(loss_disc).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
- with autocast(enabled=False):
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]["lr"]
- logger.info(
- "Train Epoch: {} [{:.0f}%]".format(
- epoch, 100.0 * batch_idx / len(train_loader)
- )
- )
-                # Clamp loss outliers so the TensorBoard plots stay readable
-                if loss_mel > 75:
-                    loss_mel = 75
-                if loss_kl > 9:
-                    loss_kl = 9
-
- logger.info([global_step, lr])
-                logger.info(
-                    f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f}, loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}"
-                )
- scalar_dict = {
- "loss/g/total": loss_gen_all,
- "loss/d/total": loss_disc,
- "learning_rate": lr,
- "grad_norm_d": grad_norm_d,
- "grad_norm_g": grad_norm_g,
- }
- scalar_dict.update(
- {
- "loss/g/fm": loss_fm,
- "loss/g/mel": loss_mel,
- "loss/g/kl": loss_kl,
- }
- )
-
- scalar_dict.update(
- {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
- )
- scalar_dict.update(
- {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}
- )
- scalar_dict.update(
- {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}
- )
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(
- y_mel[0].data.cpu().numpy()
- ),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(
- y_hat_mel[0].data.cpu().numpy()
- ),
- "all/mel": utils.plot_spectrogram_to_numpy(
- mel[0].data.cpu().numpy()
- ),
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict,
- )
- global_step += 1
- # /Run steps
-
- if epoch % hps.save_every_epoch == 0 and rank == 0:
- if hps.if_latest == 0:
- utils.save_checkpoint(
- net_g,
- optim_g,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
- )
- utils.save_checkpoint(
- net_d,
- optim_d,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
- )
- else:
- utils.save_checkpoint(
- net_g,
- optim_g,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(2333333)),
- )
- utils.save_checkpoint(
- net_d,
- optim_d,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(2333333)),
- )
- if rank == 0 and hps.save_every_weights == "1":
- if hasattr(net_g, "module"):
- ckpt = net_g.module.state_dict()
- else:
- ckpt = net_g.state_dict()
- logger.info(
- "saving ckpt %s_e%s:%s"
- % (
- hps.name,
- epoch,
- savee(
- ckpt,
- hps.sample_rate,
- hps.if_f0,
- hps.name + "_e%s_s%s" % (epoch, global_step),
- epoch,
- hps.version,
- hps,
- ),
- )
- )
-
- stopbtn = False
- try:
- with open("csvdb/stop.csv", 'r') as csv_file:
- stopbtn_str = next(csv.reader(csv_file), [None])[0]
-            if stopbtn_str is not None:
-                stopbtn = stopbtn_str.lower() == "true"
- except (ValueError, TypeError, FileNotFoundError, IndexError) as e:
- print(f"Handling exception: {e}")
- stopbtn = False
-
- if stopbtn:
- logger.info("Stop Button was pressed. The program is closed.")
- ckpt = net_g.module.state_dict() if hasattr(net_g, "module") else net_g.state_dict()
- logger.info(
- "saving final ckpt:%s"
- % (
- savee(
- ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps
- )
- )
- )
- sleep(1)
- reset_stop_flag()
- os._exit(2333333)
-
- if rank == 0:
- logger.info("====> Epoch: {} {}".format(epoch, epoch_recorder.record()))
- if epoch >= hps.total_epoch and rank == 0:
- logger.info("Training is done. The program is closed.")
-
- if hasattr(net_g, "module"):
- ckpt = net_g.module.state_dict()
- else:
- ckpt = net_g.state_dict()
- logger.info(
- "saving final ckpt:%s"
- % (
- savee(
- ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps
- )
- )
- )
- sleep(1)
- os._exit(2333333)
-
-
-if __name__ == "__main__":
- torch.multiprocessing.set_start_method("spawn")
- main()
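The training step above repeatedly crops the target mel and waveform with `commons.slice_segments` so losses are only computed on the generated segment. `commons` itself is not part of this diff, so the following is a minimal standalone sketch of the assumed slicing behavior, not the project's implementation:

```python
def slice_segments(batch, ids_start, segment_size):
    """Gather one fixed-length segment per batch item, starting at the
    per-item index in ids_start (assumed semantics of commons.slice_segments)."""
    return [seq[s:s + segment_size] for seq, s in zip(batch, ids_start)]

# e.g. a waveform batch sliced at ids_slice * hop_length, segment_size samples long
segments = slice_segments([[0, 1, 2, 3, 4], [10, 11, 12, 13, 14]], [1, 2], 3)
```

In the real code the inputs are tensors and the start indices come from the generator's random slicing, but the indexing logic is the same.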
diff --git a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/utils.py b/spaces/RMXK/RVC_HFF/lib/uvr5_pack/utils.py
deleted file mode 100644
index 0fafe8793b0d539fa58dd024342250b24b6187a9..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/utils.py
+++ /dev/null
@@ -1,120 +0,0 @@
-import torch
-import numpy as np
-from tqdm import tqdm
-import json
-
-
-def load_data(file_name: str = "./lib/uvr5_pack/name_params.json") -> dict:
- with open(file_name, "r") as f:
- data = json.load(f)
-
- return data
-
-
-def make_padding(width, cropsize, offset):
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
-def inference(X_spec, device, model, aggressiveness, data):
-    """
-    data: dict of configuration values (window_size, tta, ...)
-    """
-
- def _execute(
- X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True
- ):
- model.eval()
- with torch.no_grad():
- preds = []
-
-            for i in tqdm(range(n_window)):
- start = i * roi_size
- X_mag_window = X_mag_pad[
- None, :, :, start : start + data["window_size"]
- ]
- X_mag_window = torch.from_numpy(X_mag_window)
- if is_half:
- X_mag_window = X_mag_window.half()
- X_mag_window = X_mag_window.to(device)
-
- pred = model.predict(X_mag_window, aggressiveness)
-
- pred = pred.detach().cpu().numpy()
- preds.append(pred[0])
-
- pred = np.concatenate(preds, axis=2)
- return pred
-
- def preprocess(X_spec):
- X_mag = np.abs(X_spec)
- X_phase = np.angle(X_spec)
-
- return X_mag, X_phase
-
- X_mag, X_phase = preprocess(X_spec)
-
- coef = X_mag.max()
- X_mag_pre = X_mag / coef
-
- n_frame = X_mag_pre.shape[2]
- pad_l, pad_r, roi_size = make_padding(n_frame, data["window_size"], model.offset)
- n_window = int(np.ceil(n_frame / roi_size))
-
- X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant")
-
-    is_half = next(iter(model.state_dict().values())).dtype == torch.float16
- pred = _execute(
- X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half
- )
- pred = pred[:, :, :n_frame]
-
- if data["tta"]:
- pad_l += roi_size // 2
- pad_r += roi_size // 2
- n_window += 1
-
- X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant")
-
- pred_tta = _execute(
- X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half
- )
- pred_tta = pred_tta[:, :, roi_size // 2 :]
- pred_tta = pred_tta[:, :, :n_frame]
-
- return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase)
- else:
- return pred * coef, X_mag, np.exp(1.0j * X_phase)
-
-
-def _get_name_params(model_path, model_hash):
-    data = load_data()
-    flag = False
-    ModelName = model_path
-    for model_type in list(data):  # avoid shadowing the builtin `type`
-        for model in list(data[model_type][0]):
-            for i in range(len(data[model_type][0][model])):
-                if str(data[model_type][0][model][i]["hash_name"]) == model_hash:
-                    flag = True
-                elif str(data[model_type][0][model][i]["hash_name"]) in ModelName:
-                    flag = True
-
-                if flag:
-                    model_params_auto = data[model_type][0][model][i]["model_params"]
-                    param_name_auto = data[model_type][0][model][i]["param_name"]
-                    if model_type == "equivalent":
-                        return param_name_auto, model_params_auto
-                    else:
-                        flag = False
-    return param_name_auto, model_params_auto  # NameError if no hash ever matched
diff --git a/spaces/RMXK/RVC_HFF/train/utils.py b/spaces/RMXK/RVC_HFF/train/utils.py
deleted file mode 100644
index aae833b08acc24b848aa70114fd9b7aad8b1a6ad..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/train/utils.py
+++ /dev/null
@@ -1,500 +0,0 @@
-import os, traceback
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
- ##################
- def go(model, bkey):
- saved_state_dict = checkpoint_dict[bkey]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
-        for k, v in state_dict.items():  # shapes the model expects
-            try:
-                new_state_dict[k] = saved_state_dict[k]
-                if saved_state_dict[k].shape != state_dict[k].shape:
-                    print(
-                        "shape-%s-mismatch|need-%s|get-%s"
-                        % (k, state_dict[k].shape, saved_state_dict[k].shape)
-                    )
-                    raise KeyError
-            except Exception:
-                logger.info("%s is not in the checkpoint" % k)  # missing from the pretrained weights
-                new_state_dict[k] = v  # keep the model's randomly initialized value
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
-
- go(combd, "combd")
- go(sbd, "sbd")
- #############
- logger.info("Loaded model weights")
-
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
-    if optimizer is not None and load_opt == 1:
-        # If the optimizer state cannot be loaded (e.g. it is empty), it is
-        # re-initialized; that can also desync the LR schedule, so failures
-        # are caught at the outermost level of the train script.
-        optimizer.load_state_dict(checkpoint_dict["optimizer"])
-    logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
-    # `model` was undefined here in the original code; return the combined
-    # discriminator instead so callers still receive a 4-tuple.
-    return combd, optimizer, learning_rate, iteration
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
-    for k, v in state_dict.items():  # shapes the model expects
-        try:
-            new_state_dict[k] = saved_state_dict[k]
-            if saved_state_dict[k].shape != state_dict[k].shape:
-                print(
-                    "shape-%s-mismatch|need-%s|get-%s"
-                    % (k, state_dict[k].shape, saved_state_dict[k].shape)
-                )
-                raise KeyError
-        except Exception:
-            logger.info("%s is not in the checkpoint" % k)  # missing from the pretrained weights
-            new_state_dict[k] = v  # keep the model's randomly initialized value
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
- logger.info("Loaded model weights")
-
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
-    if optimizer is not None and load_opt == 1:
-        # If the optimizer state cannot be loaded (e.g. it is empty), it is
-        # re-initialized; that can also desync the LR schedule, so failures
-        # are caught at the outermost level of the train script.
-        optimizer.load_state_dict(checkpoint_dict["optimizer"])
- logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at epoch {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save(
- {
- "model": state_dict,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at epoch {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(combd, "module"):
- state_dict_combd = combd.module.state_dict()
- else:
- state_dict_combd = combd.state_dict()
- if hasattr(sbd, "module"):
- state_dict_sbd = sbd.module.state_dict()
- else:
- state_dict_sbd = sbd.state_dict()
- torch.save(
- {
- "combd": state_dict_combd,
- "sbd": state_dict_sbd,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def summarize(
- writer,
- global_step,
- scalars={},
- histograms={},
- images={},
- audios={},
- audio_sampling_rate=22050,
-):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats="HWC")
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
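`latest_checkpoint_path` picks the "newest" file by concatenating every digit in each path and comparing the result numerically. That works for flat `G_<step>.pth` names but is fragile when the directory itself contains digits; a small illustration with made-up paths:

```python
def checkpoint_sort_key(path):
    # same key as latest_checkpoint_path above: every digit in the path, as one int
    return int("".join(filter(str.isdigit, path)))

paths = ["logs/exp/G_100.pth", "logs/exp/G_2000.pth", "logs/exp/G_900.pth"]
latest = max(paths, key=checkpoint_sort_key)

# fragility: a digit in the directory name leaks into the key
leaky_key = checkpoint_sort_key("logs/exp2/G_100.pth")
```

Parsing the step out of the basename (e.g. with a regex on `G_(\d+)\.pth`) would be the more robust choice.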
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(
- alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
- )
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
- if info is not None:
- xlabel += "\n\n" + info
- plt.xlabel(xlabel)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- filepaths_and_text = [item for item in filepaths_and_text if len(item) == 5] # ensure there are 5 items.
- return filepaths_and_text
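`load_filepaths_and_text` keeps only rows with exactly five pipe-separated fields, silently dropping malformed lines. The same filtering on an in-memory list (the field names below are illustrative, not the project's schema):

```python
def parse_filelist(lines, split="|", n_fields=5):
    """Standalone sketch of the filtering in load_filepaths_and_text above,
    operating on a list of lines instead of a file."""
    rows = [line.strip().split(split) for line in lines]
    return [row for row in rows if len(row) == n_fields]  # drop malformed rows

rows = parse_filelist([
    "wav/0.wav|feat/0.npy|f0/0.npy|f0nsf/0.npy|0",  # well-formed: kept
    "wav/1.wav|feat/1.npy",                          # too few fields: dropped
])
```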
-
-
-def get_hparams(init=True):
-    """
-    todo (the final seven CLI args):
-        save frequency / total epochs                      done
-        batch size                                         done
-        pretrainG / pretrainD                              done
-        GPU ids via os.environ["CUDA_VISIBLE_DEVICES"]     done
-        if_latest                                          done
-        model: if_f0                                       done
-        sample rate: pick the matching config automatically   done
-        whether to cache the dataset in GPU memory: if_cache_data_in_gpu   done
-
-    -m:
-        derive the training_files path automatically and replace
-        hps.data.training_files in train_nsf_load_pretrain.py   done
-        drop the -c flag
-    """
- parser = argparse.ArgumentParser()
- # parser.add_argument('-c', '--config', type=str, default="configs/40k.json",help='JSON file for configuration')
- parser.add_argument(
- "-se",
- "--save_every_epoch",
- type=int,
- required=True,
- help="checkpoint save frequency (epoch)",
- )
- parser.add_argument(
- "-te", "--total_epoch", type=int, required=True, help="total_epoch"
- )
-    parser.add_argument(
-        "-pg", "--pretrainG", type=str, default="", help="Pretrained Generator path"
-    )
-    parser.add_argument(
-        "-pd", "--pretrainD", type=str, default="", help="Pretrained Discriminator path"
-    )
- parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -")
- parser.add_argument(
- "-bs", "--batch_size", type=int, required=True, help="batch size"
- )
- parser.add_argument(
- "-e", "--experiment_dir", type=str, required=True, help="experiment dir"
- ) # -m
- parser.add_argument(
- "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k"
- )
- parser.add_argument(
- "-sw",
- "--save_every_weights",
- type=str,
- default="0",
- help="save the extracted model in weights directory when saving checkpoints",
- )
- parser.add_argument(
- "-v", "--version", type=str, required=True, help="model version"
- )
- parser.add_argument(
- "-f0",
- "--if_f0",
- type=int,
- required=True,
- help="use f0 as one of the inputs of the model, 1 or 0",
- )
- parser.add_argument(
- "-l",
- "--if_latest",
- type=int,
- required=True,
- help="if only save the latest G/D pth file, 1 or 0",
- )
- parser.add_argument(
- "-c",
- "--if_cache_data_in_gpu",
- type=int,
- required=True,
- help="if caching the dataset in GPU memory, 1 or 0",
- )
- parser.add_argument(
- "-li", "--log_interval", type=int, required=True, help="log interval"
- )
-
- args = parser.parse_args()
- name = args.experiment_dir
- experiment_dir = os.path.join("./logs", args.experiment_dir)
-
- if not os.path.exists(experiment_dir):
- os.makedirs(experiment_dir)
-
- if args.version == "v1" or args.sample_rate == "40k":
- config_path = "configs/%s.json" % args.sample_rate
- else:
- config_path = "configs/%s_v2.json" % args.sample_rate
- config_save_path = os.path.join(experiment_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = hparams.experiment_dir = experiment_dir
- hparams.save_every_epoch = args.save_every_epoch
- hparams.name = name
- hparams.total_epoch = args.total_epoch
- hparams.pretrainG = args.pretrainG
- hparams.pretrainD = args.pretrainD
- hparams.version = args.version
- hparams.gpus = args.gpus
- hparams.train.batch_size = args.batch_size
- hparams.sample_rate = args.sample_rate
- hparams.if_f0 = args.if_f0
- hparams.if_latest = args.if_latest
- hparams.save_every_weights = args.save_every_weights
- hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu
- hparams.data.training_files = "%s/filelist.txt" % experiment_dir
-
- hparams.train.log_interval = args.log_interval
-
- # Update log_interval in the 'train' section of the config dictionary
- config["train"]["log_interval"] = args.log_interval
-
- # Save the updated config back to the config_save_path
- with open(config_save_path, "w") as f:
- json.dump(config, f, indent=4)
-
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-        logger.warning(
- "{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- )
- )
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-            logger.warning(
- "git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]
- )
- )
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
-            if isinstance(v, dict):
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
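`HParams` wraps nested config dicts so values are reachable both as attributes (`hps.train.batch_size`) and by key (`hps["sample_rate"]`). A trimmed copy, enough for a standalone demo of the recursive wrapping:

```python
class MiniHParams:
    # trimmed copy of the HParams class above, just the nested wrapping
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            if isinstance(v, dict):
                v = MiniHParams(**v)  # recurse so hps.train.batch_size works
            setattr(self, k, v)

    def __getitem__(self, key):
        return getattr(self, key)

hps = MiniHParams(train={"batch_size": 4, "fp16_run": True}, sample_rate="40k")
```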
diff --git a/spaces/Rebskii/rvc-models-test/vc_infer_pipeline.py b/spaces/Rebskii/rvc-models-test/vc_infer_pipeline.py
deleted file mode 100644
index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000
--- a/spaces/Rebskii/rvc-models-test/vc_infer_pipeline.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import numpy as np, parselmouth, torch
-from time import time as ttime
-import torch.nn.functional as F
-from config import x_pad, x_query, x_center, x_max
-import pyworld, os, traceback, faiss
-from scipy import signal
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-
-class VC(object):
-    def __init__(self, tgt_sr, device, is_half):
-        self.sr = 16000  # HuBERT input sampling rate
-        self.window = 160  # samples per frame (hop size)
-        self.t_pad = self.sr * x_pad  # padding (samples) before/after each chunk
-        self.t_pad_tgt = tgt_sr * x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * x_query  # search radius (samples) around a candidate cut point
-        self.t_center = self.sr * x_center  # spacing (samples) between candidate cut points
-        self.t_max = self.sr * x_max  # skip cut-point search for audio shorter than this
-        self.device = device
-        self.is_half = is_half
-
- def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
-        tf0 = self.sr // self.window  # f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(np.int64)  # np.int was removed in NumPy 1.24
- return f0_coarse, f0bak # 1-0
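`get_f0` quantizes the pitch curve onto 255 mel-scaled coarse bins for the network while keeping the raw Hz curve as `f0bak`. The bin mapping in isolation, using the same constants as above (plain `round` stands in for `np.rint`, so half-integer edge cases may differ by one bin):

```python
import math

F0_MIN, F0_MAX = 50.0, 1100.0
MEL_MIN = 1127 * math.log(1 + F0_MIN / 700)
MEL_MAX = 1127 * math.log(1 + F0_MAX / 700)

def f0_to_coarse(f0_hz):
    """Map one f0 value (Hz) onto the 1..255 coarse bins used above;
    unvoiced frames (f0 <= 0) land in bin 1."""
    if f0_hz <= 0:
        return 1
    mel = 1127 * math.log(1 + f0_hz / 700)
    scaled = (mel - MEL_MIN) * 254 / (MEL_MAX - MEL_MIN) + 1
    return int(round(min(255.0, max(1.0, scaled))))
```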
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9, # layer 9
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])
-
-        if index is not None and big_npy is not None and index_rate != 0:
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
- _, I = index.search(npy, 1)
- npy = big_npy[I.squeeze()]
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
-        if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
-            if pitch is not None and pitchf is not None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- f0_file=None,
- ):
-        if (
-            file_big_npy != ""
-            and file_index != ""
-            and os.path.exists(file_big_npy)
-            and os.path.exists(file_index)
-            and index_rate != 0
-        ):
- try:
- index = faiss.read_index(file_index)
- big_npy = np.load(file_big_npy)
-            except Exception:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- print("Feature retrieval library doesn't exist or ratio is 0")
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
-        if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
-            except Exception:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
-        """
-        Interpolate f0 across unvoiced frames and build a voiced/unvoiced mask.
-        """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # possibly an unnecessary copy (ip_data aliases data)
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
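`interpolate_f0` fills unvoiced gaps so downstream consumers always see a continuous pitch contour, returning the voiced/unvoiced mask separately. A plain-Python sketch of the intended behavior (linear fill between voiced neighbours, edge frames held flat); note this mirrors the intent, not every edge case of the loop above, which behaves differently for gaps that reach the final frame:

```python
def fill_unvoiced(f0):
    """Fill unvoiced frames (f0 <= 0) by linear interpolation between the
    nearest voiced neighbours; return (filled contour, voiced/unvoiced mask)."""
    vuv = [1.0 if v > 0 else 0.0 for v in f0]
    voiced = [i for i, v in enumerate(f0) if v > 0]
    if not voiced:
        return list(f0), vuv  # nothing voiced: nothing to interpolate
    out = list(f0)
    for i, v in enumerate(f0):
        if v > 0:
            continue
        prev = max((j for j in voiced if j < i), default=None)
        nxt = min((j for j in voiced if j > i), default=None)
        if prev is None:
            out[i] = float(f0[nxt])   # leading gap: hold first voiced value
        elif nxt is None:
            out[i] = float(f0[prev])  # trailing gap: hold last voiced value
        else:
            w = (i - prev) / (nxt - prev)
            out[i] = f0[prev] + w * (f0[nxt] - f0[prev])
    return out, vuv
```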
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
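The `interpolate_f0` loop removed above linearly fills unvoiced (zero) F0 frames between voiced neighbours and returns a voiced/unvoiced mask. A minimal numpy sketch of the same idea, using `np.interp` instead of the explicit loop (edge behaviour matches the loop's "extend first/last voiced value" branches, but this is an illustration, not the repo's implementation; `interpolate_f0_sketch` is a hypothetical name):

```python
import numpy as np

def interpolate_f0_sketch(f0):
    """Fill zero (unvoiced) frames by linear interpolation between voiced frames."""
    f0 = np.asarray(f0, dtype=np.float64)
    vuv = (f0 > 0.0).astype(np.float32)   # voiced/unvoiced flag per frame
    voiced = np.flatnonzero(f0 > 0.0)
    if voiced.size == 0:                  # no voiced frames: nothing to anchor on
        return f0, vuv
    # np.interp holds the first/last voiced value at the edges,
    # analogous to the loop's leading/trailing fill branches
    filled = np.interp(np.arange(f0.size), voiced, f0[voiced])
    return filled, vuv

f0_filled, vuv = interpolate_f0_sketch([0.0, 100.0, 0.0, 0.0, 130.0, 0.0])
```

Interior zeros are filled on a straight line between their voiced neighbours, while the mask records which frames were originally voiced.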
diff --git a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_new.py b/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_new.py
deleted file mode 100644
index bfaf72e48b31cc1130f2892b0973c9aa06f195a3..0000000000000000000000000000000000000000
--- a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/nets_new.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-from . import layers_new
-
-
-class BaseNet(nn.Module):
- def __init__(
- self, nin, nout, nin_lstm, nout_lstm, dilations=((4, 2), (8, 4), (12, 6))
- ):
- super(BaseNet, self).__init__()
- self.enc1 = layers_new.Conv2DBNActiv(nin, nout, 3, 1, 1)
- self.enc2 = layers_new.Encoder(nout, nout * 2, 3, 2, 1)
- self.enc3 = layers_new.Encoder(nout * 2, nout * 4, 3, 2, 1)
- self.enc4 = layers_new.Encoder(nout * 4, nout * 6, 3, 2, 1)
- self.enc5 = layers_new.Encoder(nout * 6, nout * 8, 3, 2, 1)
-
- self.aspp = layers_new.ASPPModule(nout * 8, nout * 8, dilations, dropout=True)
-
- self.dec4 = layers_new.Decoder(nout * (6 + 8), nout * 6, 3, 1, 1)
- self.dec3 = layers_new.Decoder(nout * (4 + 6), nout * 4, 3, 1, 1)
- self.dec2 = layers_new.Decoder(nout * (2 + 4), nout * 2, 3, 1, 1)
- self.lstm_dec2 = layers_new.LSTMModule(nout * 2, nin_lstm, nout_lstm)
- self.dec1 = layers_new.Decoder(nout * (1 + 2) + 1, nout * 1, 3, 1, 1)
-
-    def forward(self, x):
- e1 = self.enc1(x)
- e2 = self.enc2(e1)
- e3 = self.enc3(e2)
- e4 = self.enc4(e3)
- e5 = self.enc5(e4)
-
- h = self.aspp(e5)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = torch.cat([h, self.lstm_dec2(h)], dim=1)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedNet(nn.Module):
- def __init__(self, n_fft, nout=32, nout_lstm=128):
- super(CascadedNet, self).__init__()
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
- self.nin_lstm = self.max_bin // 2
- self.offset = 64
-
- self.stg1_low_band_net = nn.Sequential(
- BaseNet(2, nout // 2, self.nin_lstm // 2, nout_lstm),
- layers_new.Conv2DBNActiv(nout // 2, nout // 4, 1, 1, 0),
- )
-
- self.stg1_high_band_net = BaseNet(
- 2, nout // 4, self.nin_lstm // 2, nout_lstm // 2
- )
-
- self.stg2_low_band_net = nn.Sequential(
- BaseNet(nout // 4 + 2, nout, self.nin_lstm // 2, nout_lstm),
- layers_new.Conv2DBNActiv(nout, nout // 2, 1, 1, 0),
- )
- self.stg2_high_band_net = BaseNet(
- nout // 4 + 2, nout // 2, self.nin_lstm // 2, nout_lstm // 2
- )
-
- self.stg3_full_band_net = BaseNet(
- 3 * nout // 4 + 2, nout, self.nin_lstm, nout_lstm
- )
-
- self.out = nn.Conv2d(nout, 2, 1, bias=False)
- self.aux_out = nn.Conv2d(3 * nout // 4, 2, 1, bias=False)
-
- def forward(self, x):
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- l1_in = x[:, :, :bandw]
- h1_in = x[:, :, bandw:]
- l1 = self.stg1_low_band_net(l1_in)
- h1 = self.stg1_high_band_net(h1_in)
- aux1 = torch.cat([l1, h1], dim=2)
-
- l2_in = torch.cat([l1_in, l1], dim=1)
- h2_in = torch.cat([h1_in, h1], dim=1)
- l2 = self.stg2_low_band_net(l2_in)
- h2 = self.stg2_high_band_net(h2_in)
- aux2 = torch.cat([l2, h2], dim=2)
-
- f3_in = torch.cat([x, aux1, aux2], dim=1)
- f3 = self.stg3_full_band_net(f3_in)
-
- mask = torch.sigmoid(self.out(f3))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux = torch.cat([aux1, aux2], dim=1)
- aux = torch.sigmoid(self.aux_out(aux))
- aux = F.pad(
- input=aux,
- pad=(0, 0, 0, self.output_bin - aux.size()[2]),
- mode="replicate",
- )
- return mask, aux
- else:
- return mask
-
- def predict_mask(self, x):
- mask = self.forward(x)
-
- if self.offset > 0:
- mask = mask[:, :, :, self.offset : -self.offset]
- assert mask.size()[3] > 0
-
- return mask
-
- def predict(self, x, aggressiveness=None):
- mask = self.forward(x)
- pred_mag = x * mask
-
- if self.offset > 0:
- pred_mag = pred_mag[:, :, :, self.offset : -self.offset]
- assert pred_mag.size()[3] > 0
-
- return pred_mag
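In `CascadedNet.__init__` above, the spectral sizes are plain integer arithmetic on `n_fft`: `max_bin = n_fft // 2`, `output_bin = n_fft // 2 + 1` (the number of rFFT bins), and the LSTM input width is half of `max_bin`. A quick check of those relations (the helper name is illustrative):

```python
def cascaded_bins(n_fft):
    """Return (max_bin, output_bin, nin_lstm) as computed in CascadedNet.__init__."""
    max_bin = n_fft // 2
    output_bin = n_fft // 2 + 1  # rFFT of n_fft samples yields n_fft // 2 + 1 bins
    nin_lstm = max_bin // 2
    return max_bin, output_bin, nin_lstm
```

For a typical `n_fft=2048` this gives 1024 processed bins, 1025 output bins, and a 512-wide LSTM input.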
diff --git a/spaces/Ritori/TTS_Yui/logger.py b/spaces/Ritori/TTS_Yui/logger.py
deleted file mode 100644
index ad327383a24484476801ea7f6d840b9fdb49786b..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/logger.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import random
-import torch
-from torch.utils.tensorboard import SummaryWriter
-from plotting_utils import plot_alignment_to_numpy, plot_spectrogram_to_numpy
-from plotting_utils import plot_gate_outputs_to_numpy
-
-
-class Tacotron2Logger(SummaryWriter):
- def __init__(self, logdir):
- super(Tacotron2Logger, self).__init__(logdir)
-
- def log_training(self, reduced_loss, grad_norm, learning_rate, duration,
- iteration):
- self.add_scalar("training.loss", reduced_loss, iteration)
- self.add_scalar("grad.norm", grad_norm, iteration)
- self.add_scalar("learning.rate", learning_rate, iteration)
- self.add_scalar("duration", duration, iteration)
-
- def log_validation(self, reduced_loss, model, y, y_pred, iteration):
- self.add_scalar("validation.loss", reduced_loss, iteration)
- _, mel_outputs, gate_outputs, alignments = y_pred
- mel_targets, gate_targets = y
-
- # plot distribution of parameters
- for tag, value in model.named_parameters():
- tag = tag.replace('.', '/')
- self.add_histogram(tag, value.data.cpu().numpy(), iteration)
-
- # plot alignment, mel target and predicted, gate target and predicted
- idx = random.randint(0, alignments.size(0) - 1)
- self.add_image(
- "alignment",
- plot_alignment_to_numpy(alignments[idx].data.cpu().numpy().T),
- iteration, dataformats='HWC')
- self.add_image(
- "mel_target",
- plot_spectrogram_to_numpy(mel_targets[idx].data.cpu().numpy()),
- iteration, dataformats='HWC')
- self.add_image(
- "mel_predicted",
- plot_spectrogram_to_numpy(mel_outputs[idx].data.cpu().numpy()),
- iteration, dataformats='HWC')
- self.add_image(
- "gate",
- plot_gate_outputs_to_numpy(
- gate_targets[idx].data.cpu().numpy(),
- torch.sigmoid(gate_outputs[idx]).data.cpu().numpy()),
- iteration, dataformats='HWC')
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/roi_align_rotated.py
deleted file mode 100644
index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/ops/roi_align_rotated.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward'])
-
-
-class RoIAlignRotatedFunction(Function):
-
- @staticmethod
- def symbolic(g, features, rois, out_size, spatial_scale, sample_num,
- aligned, clockwise):
- if isinstance(out_size, int):
- out_h = out_size
- out_w = out_size
- elif isinstance(out_size, tuple):
- assert len(out_size) == 2
- assert isinstance(out_size[0], int)
- assert isinstance(out_size[1], int)
- out_h, out_w = out_size
- else:
- raise TypeError(
- '"out_size" must be an integer or tuple of integers')
- return g.op(
- 'mmcv::MMCVRoIAlignRotated',
- features,
- rois,
- output_height_i=out_h,
-            output_width_i=out_w,
- spatial_scale_f=spatial_scale,
- sampling_ratio_i=sample_num,
- aligned_i=aligned,
- clockwise_i=clockwise)
-
- @staticmethod
- def forward(ctx,
- features,
- rois,
- out_size,
- spatial_scale,
- sample_num=0,
- aligned=True,
- clockwise=False):
- if isinstance(out_size, int):
- out_h = out_size
- out_w = out_size
- elif isinstance(out_size, tuple):
- assert len(out_size) == 2
- assert isinstance(out_size[0], int)
- assert isinstance(out_size[1], int)
- out_h, out_w = out_size
- else:
- raise TypeError(
- '"out_size" must be an integer or tuple of integers')
- ctx.spatial_scale = spatial_scale
- ctx.sample_num = sample_num
- ctx.aligned = aligned
- ctx.clockwise = clockwise
- ctx.save_for_backward(rois)
- ctx.feature_size = features.size()
-
- batch_size, num_channels, data_height, data_width = features.size()
- num_rois = rois.size(0)
-
- output = features.new_zeros(num_rois, num_channels, out_h, out_w)
- ext_module.roi_align_rotated_forward(
- features,
- rois,
- output,
- pooled_height=out_h,
- pooled_width=out_w,
- spatial_scale=spatial_scale,
- sample_num=sample_num,
- aligned=aligned,
- clockwise=clockwise)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- feature_size = ctx.feature_size
- spatial_scale = ctx.spatial_scale
- aligned = ctx.aligned
- clockwise = ctx.clockwise
- sample_num = ctx.sample_num
- rois = ctx.saved_tensors[0]
- assert feature_size is not None
- batch_size, num_channels, data_height, data_width = feature_size
-
- out_w = grad_output.size(3)
- out_h = grad_output.size(2)
-
- grad_input = grad_rois = None
-
- if ctx.needs_input_grad[0]:
- grad_input = rois.new_zeros(batch_size, num_channels, data_height,
- data_width)
- ext_module.roi_align_rotated_backward(
- grad_output.contiguous(),
- rois,
- grad_input,
- pooled_height=out_h,
- pooled_width=out_w,
- spatial_scale=spatial_scale,
- sample_num=sample_num,
- aligned=aligned,
- clockwise=clockwise)
- return grad_input, grad_rois, None, None, None, None, None
-
-
-roi_align_rotated = RoIAlignRotatedFunction.apply
-
-
-class RoIAlignRotated(nn.Module):
- """RoI align pooling layer for rotated proposals.
-
- It accepts a feature map of shape (N, C, H, W) and rois with shape
- (n, 6) with each roi decoded as (batch_index, center_x, center_y,
-    w, h, angle). The angle is in radians.
-
- Args:
- out_size (tuple): h, w
- spatial_scale (float): scale the input boxes by this number
-        sample_num (int): number of input samples to take for each
- output sample. 0 to take samples densely for current models.
- aligned (bool): if False, use the legacy implementation in
- MMDetection. If True, align the results more perfectly.
- Default: True.
- clockwise (bool): If True, the angle in each proposal follows a
- clockwise fashion in image space, otherwise, the angle is
- counterclockwise. Default: False.
-
- Note:
- The implementation of RoIAlign when aligned=True is modified from
- https://github.com/facebookresearch/detectron2/
-
- The meaning of aligned=True:
-
- Given a continuous coordinate c, its two neighboring pixel
- indices (in our pixel model) are computed by floor(c - 0.5) and
- ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete
- indices [0] and [1] (which are sampled from the underlying signal
- at continuous coordinates 0.5 and 1.5). But the original roi_align
- (aligned=False) does not subtract the 0.5 when computing
- neighboring pixel indices and therefore it uses pixels with a
- slightly incorrect alignment (relative to our pixel model) when
- performing bilinear interpolation.
-
- With `aligned=True`,
- we first appropriately scale the ROI and then shift it by -0.5
- prior to calling roi_align. This produces the correct neighbors;
-
-        In practice this difference does not affect the model's
-        performance when ROIAlign is used together with conv layers.
- """
-
- def __init__(self,
- out_size,
- spatial_scale,
- sample_num=0,
- aligned=True,
- clockwise=False):
- super(RoIAlignRotated, self).__init__()
-
- self.out_size = out_size
- self.spatial_scale = float(spatial_scale)
- self.sample_num = int(sample_num)
- self.aligned = aligned
- self.clockwise = clockwise
-
- def forward(self, features, rois):
- return RoIAlignRotatedFunction.apply(features, rois, self.out_size,
- self.spatial_scale,
- self.sample_num, self.aligned,
- self.clockwise)
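The `aligned=True` note in the docstring above can be verified with a couple of lines: under the half-pixel model, a continuous coordinate `c` samples the discrete neighbours `floor(c - 0.5)` and `ceil(c - 0.5)`, while the legacy behaviour omits the `-0.5` shift. A small sketch (hypothetical helper, not part of mmcv):

```python
import math

def pixel_neighbors(c, aligned=True):
    """Discrete pixel indices used for bilinear sampling at continuous coord c."""
    if aligned:
        # half-pixel model: pixel i covers [i, i+1) with its centre at i + 0.5
        return math.floor(c - 0.5), math.ceil(c - 0.5)
    # legacy behaviour: no -0.5 shift, slightly misaligned neighbours
    return math.floor(c), math.ceil(c)

# c = 1.3 sits between pixel centres 0.5 and 1.5, i.e. neighbours [0] and [1]
```

This is exactly the `c=1.3` example from the docstring: aligned sampling picks pixels 0 and 1, while the legacy mode would pick 1 and 2.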
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/closure.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/closure.py
deleted file mode 100644
index b955f81f425be4ac3e6bb3f4aac653887989e872..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/closure.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class ClosureHook(Hook):
-
- def __init__(self, fn_name, fn):
- assert hasattr(self, fn_name)
- assert callable(fn)
- setattr(self, fn_name, fn)
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/rpn_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/rpn_head.py
deleted file mode 100644
index a888cb8c188ca6fe63045b6230266553fbe8c996..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/rpn_head.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import copy
-import warnings
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv import ConfigDict
-from mmcv.cnn import normal_init
-from mmcv.ops import batched_nms
-
-from ..builder import HEADS
-from .anchor_head import AnchorHead
-from .rpn_test_mixin import RPNTestMixin
-
-
-@HEADS.register_module()
-class RPNHead(RPNTestMixin, AnchorHead):
- """RPN head.
-
- Args:
- in_channels (int): Number of channels in the input feature map.
- """ # noqa: W605
-
- def __init__(self, in_channels, **kwargs):
- super(RPNHead, self).__init__(1, in_channels, **kwargs)
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self.rpn_conv = nn.Conv2d(
- self.in_channels, self.feat_channels, 3, padding=1)
- self.rpn_cls = nn.Conv2d(self.feat_channels,
- self.num_anchors * self.cls_out_channels, 1)
- self.rpn_reg = nn.Conv2d(self.feat_channels, self.num_anchors * 4, 1)
-
- def init_weights(self):
- """Initialize weights of the head."""
- normal_init(self.rpn_conv, std=0.01)
- normal_init(self.rpn_cls, std=0.01)
- normal_init(self.rpn_reg, std=0.01)
-
- def forward_single(self, x):
- """Forward feature map of a single scale level."""
- x = self.rpn_conv(x)
- x = F.relu(x, inplace=True)
- rpn_cls_score = self.rpn_cls(x)
- rpn_bbox_pred = self.rpn_reg(x)
- return rpn_cls_score, rpn_bbox_pred
-
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- losses = super(RPNHead, self).loss(
- cls_scores,
- bbox_preds,
- gt_bboxes,
- None,
- img_metas,
- gt_bboxes_ignore=gt_bboxes_ignore)
- return dict(
- loss_rpn_cls=losses['loss_cls'], loss_rpn_bbox=losses['loss_bbox'])
-
- def _get_bboxes(self,
- cls_scores,
- bbox_preds,
- mlvl_anchors,
- img_shapes,
- scale_factors,
- cfg,
- rescale=False):
- """Transform outputs for a single batch item into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W).
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W).
- mlvl_anchors (list[Tensor]): Box reference for each scale level
- with shape (num_total_anchors, 4).
- img_shapes (list[tuple[int]]): Shape of the input image,
- (height, width, 3).
-            scale_factors (list[ndarray]): Scale factor of the image arranged as
- (w_scale, h_scale, w_scale, h_scale).
- cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
-
- Returns:
- list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple.
- The first item is an (n, 5) tensor, where the first 4 columns
- are bounding box positions (tl_x, tl_y, br_x, br_y) and the
- 5-th column is a score between 0 and 1. The second item is a
-                (n,) tensor where each item is the predicted class label of the
- corresponding box.
- """
- cfg = self.test_cfg if cfg is None else cfg
- cfg = copy.deepcopy(cfg)
- # bboxes from different level should be independent during NMS,
- # level_ids are used as labels for batched NMS to separate them
- level_ids = []
- mlvl_scores = []
- mlvl_bbox_preds = []
- mlvl_valid_anchors = []
- batch_size = cls_scores[0].shape[0]
- nms_pre_tensor = torch.tensor(
- cfg.nms_pre, device=cls_scores[0].device, dtype=torch.long)
- for idx in range(len(cls_scores)):
- rpn_cls_score = cls_scores[idx]
- rpn_bbox_pred = bbox_preds[idx]
- assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:]
- rpn_cls_score = rpn_cls_score.permute(0, 2, 3, 1)
- if self.use_sigmoid_cls:
- rpn_cls_score = rpn_cls_score.reshape(batch_size, -1)
- scores = rpn_cls_score.sigmoid()
- else:
- rpn_cls_score = rpn_cls_score.reshape(batch_size, -1, 2)
- # We set FG labels to [0, num_class-1] and BG label to
- # num_class in RPN head since mmdet v2.5, which is unified to
- # be consistent with other head since mmdet v2.0. In mmdet v2.0
- # to v2.4 we keep BG label as 0 and FG label as 1 in rpn head.
- scores = rpn_cls_score.softmax(-1)[..., 0]
- rpn_bbox_pred = rpn_bbox_pred.permute(0, 2, 3, 1).reshape(
- batch_size, -1, 4)
- anchors = mlvl_anchors[idx]
- anchors = anchors.expand_as(rpn_bbox_pred)
- if nms_pre_tensor > 0:
- # sort is faster than topk
- # _, topk_inds = scores.topk(cfg.nms_pre)
- # keep topk op for dynamic k in onnx model
- if torch.onnx.is_in_onnx_export():
- # sort op will be converted to TopK in onnx
- # and k<=3480 in TensorRT
- scores_shape = torch._shape_as_tensor(scores)
- nms_pre = torch.where(scores_shape[1] < nms_pre_tensor,
- scores_shape[1], nms_pre_tensor)
- _, topk_inds = scores.topk(nms_pre)
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds)
- scores = scores[batch_inds, topk_inds]
- rpn_bbox_pred = rpn_bbox_pred[batch_inds, topk_inds, :]
- anchors = anchors[batch_inds, topk_inds, :]
-
- elif scores.shape[-1] > cfg.nms_pre:
- ranked_scores, rank_inds = scores.sort(descending=True)
- topk_inds = rank_inds[:, :cfg.nms_pre]
- scores = ranked_scores[:, :cfg.nms_pre]
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds)
- rpn_bbox_pred = rpn_bbox_pred[batch_inds, topk_inds, :]
- anchors = anchors[batch_inds, topk_inds, :]
-
- mlvl_scores.append(scores)
- mlvl_bbox_preds.append(rpn_bbox_pred)
- mlvl_valid_anchors.append(anchors)
- level_ids.append(
- scores.new_full((
- batch_size,
- scores.size(1),
- ),
- idx,
- dtype=torch.long))
-
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
- batch_mlvl_anchors = torch.cat(mlvl_valid_anchors, dim=1)
- batch_mlvl_rpn_bbox_pred = torch.cat(mlvl_bbox_preds, dim=1)
- batch_mlvl_proposals = self.bbox_coder.decode(
- batch_mlvl_anchors, batch_mlvl_rpn_bbox_pred, max_shape=img_shapes)
- batch_mlvl_ids = torch.cat(level_ids, dim=1)
-
- # deprecate arguments warning
- if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg:
- warnings.warn(
- 'In rpn_proposal or test_cfg, '
- 'nms_thr has been moved to a dict named nms as '
- 'iou_threshold, max_num has been renamed as max_per_img, '
- 'name of original arguments and the way to specify '
- 'iou_threshold of NMS will be deprecated.')
- if 'nms' not in cfg:
- cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr))
- if 'max_num' in cfg:
- if 'max_per_img' in cfg:
- assert cfg.max_num == cfg.max_per_img, f'You ' \
- f'set max_num and ' \
- f'max_per_img at the same time, but get {cfg.max_num} ' \
-                    f'and {cfg.max_per_img} respectively. ' \
-                    'Please delete max_num, which will be deprecated.'
- else:
- cfg.max_per_img = cfg.max_num
- if 'nms_thr' in cfg:
- assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set' \
- f' iou_threshold in nms and ' \
- f'nms_thr at the same time, but get' \
- f' {cfg.nms.iou_threshold} and {cfg.nms_thr}' \
- f' respectively. Please delete the nms_thr ' \
- f'which will be deprecated.'
-
- result_list = []
- for (mlvl_proposals, mlvl_scores,
- mlvl_ids) in zip(batch_mlvl_proposals, batch_mlvl_scores,
- batch_mlvl_ids):
- # Skip nonzero op while exporting to ONNX
- if cfg.min_bbox_size > 0 and (not torch.onnx.is_in_onnx_export()):
- w = mlvl_proposals[:, 2] - mlvl_proposals[:, 0]
- h = mlvl_proposals[:, 3] - mlvl_proposals[:, 1]
- valid_ind = torch.nonzero(
- (w >= cfg.min_bbox_size)
- & (h >= cfg.min_bbox_size),
- as_tuple=False).squeeze()
- if valid_ind.sum().item() != len(mlvl_proposals):
- mlvl_proposals = mlvl_proposals[valid_ind, :]
- mlvl_scores = mlvl_scores[valid_ind]
- mlvl_ids = mlvl_ids[valid_ind]
-
- dets, keep = batched_nms(mlvl_proposals, mlvl_scores, mlvl_ids,
- cfg.nms)
- result_list.append(dets[:cfg.max_per_img])
- return result_list
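The comment above `level_ids` explains that `batched_nms` keeps proposals from different levels independent by treating the level index as a class label: boxes are offset by their label so boxes with different labels can never overlap. A minimal pure-Python sketch of that offset trick (with a simple greedy NMS for completeness; this is an illustration of the idea, not mmcv's implementation):

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def batched_nms_sketch(boxes, scores, ids, iou_thr=0.5):
    """Greedy NMS where boxes with different ids never suppress each other."""
    max_coord = max(max(b) for b in boxes) + 1.0
    # shift every box by id * max_coord so different ids occupy disjoint regions
    shifted = [[c + i * max_coord for c in b] for b, i in zip(boxes, ids)]
    order = sorted(range(len(boxes)), key=lambda k: -scores[k])
    keep = []
    for k in order:
        if all(iou(shifted[k], shifted[j]) <= iou_thr for j in keep):
            keep.append(k)
    return keep

# two identical boxes: same id -> one is suppressed; different ids -> both kept
boxes = [[0.0, 0.0, 10.0, 10.0], [0.0, 0.0, 10.0, 10.0]]
```

With identical boxes, `ids=[0, 0]` keeps only the higher-scoring one, while `ids=[0, 1]` keeps both, mirroring per-level NMS in `_get_bboxes`.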
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/utils/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/utils/__init__.py
deleted file mode 100644
index a263e31c1e3977712827ca229bbc04910b4e928e..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/utils/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .flops_counter import get_model_complexity_info
-from .fuse_conv_bn import fuse_conv_bn
-from .sync_bn import revert_sync_batchnorm
-from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit,
- KaimingInit, NormalInit, PretrainedInit,
- TruncNormalInit, UniformInit, XavierInit,
- bias_init_with_prob, caffe2_xavier_init,
- constant_init, initialize, kaiming_init, normal_init,
- trunc_normal_init, uniform_init, xavier_init)
-
-__all__ = [
- 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init',
- 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init',
- 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize',
- 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit',
- 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit',
- 'Caffe2XavierInit', 'revert_sync_batchnorm'
-]
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/ema.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/ema.py
deleted file mode 100644
index 15c7e68088f019802a59e7ae41cc1fe0c7f28f96..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/ema.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...parallel import is_module_wrapper
-from ..hooks.hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class EMAHook(Hook):
- r"""Exponential Moving Average Hook.
-
-    Apply an exponential moving average to all model parameters during
-    training. Every parameter has an EMA backup, updated by the formula
-    below. EMAHook takes priority over EvalHook and CheckpointSaverHook.
-
-    .. math::
-
-        \text{Xema}_{t+1} = (1 - \text{momentum}) \times
-        \text{Xema}_{t} + \text{momentum} \times X_t
-
- Args:
- momentum (float): The momentum used for updating ema parameter.
- Defaults to 0.0002.
- interval (int): Update ema parameter every interval iteration.
- Defaults to 1.
- warm_up (int): During first warm_up steps, we may use smaller momentum
- to update ema parameters more slowly. Defaults to 100.
- resume_from (str): The checkpoint path. Defaults to None.
- """
-
- def __init__(self,
- momentum=0.0002,
- interval=1,
- warm_up=100,
- resume_from=None):
- assert isinstance(interval, int) and interval > 0
- self.warm_up = warm_up
- self.interval = interval
- assert momentum > 0 and momentum < 1
- self.momentum = momentum**interval
- self.checkpoint = resume_from
-
- def before_run(self, runner):
-        """Make it easier to resume a model together with its EMA parameters.
-
-        Register each EMA parameter as a ``named_buffer`` on the model.
-        """
- model = runner.model
- if is_module_wrapper(model):
- model = model.module
- self.param_ema_buffer = {}
- self.model_parameters = dict(model.named_parameters(recurse=True))
- for name, value in self.model_parameters.items():
- # "." is not allowed in module's buffer name
- buffer_name = f"ema_{name.replace('.', '_')}"
- self.param_ema_buffer[name] = buffer_name
- model.register_buffer(buffer_name, value.data.clone())
- self.model_buffers = dict(model.named_buffers(recurse=True))
- if self.checkpoint is not None:
- runner.resume(self.checkpoint)
-
- def after_train_iter(self, runner):
- """Update ema parameter every self.interval iterations."""
- curr_step = runner.iter
- # We warm up the momentum considering the instability at beginning
- momentum = min(self.momentum,
- (1 + curr_step) / (self.warm_up + curr_step))
- if curr_step % self.interval != 0:
- return
- for name, parameter in self.model_parameters.items():
- buffer_name = self.param_ema_buffer[name]
- buffer_parameter = self.model_buffers[buffer_name]
-            buffer_parameter.mul_(1 - momentum).add_(parameter.data, alpha=momentum)
-
- def after_train_epoch(self, runner):
-        """Swap in the EMA parameter values before the EvalHook runs."""
- self._swap_ema_parameters()
-
- def before_train_epoch(self, runner):
-        """Swap the original parameter values back in after the previous
-        epoch's EvalHook."""
- self._swap_ema_parameters()
-
- def _swap_ema_parameters(self):
- """Swap the parameter of model with parameter in ema_buffer."""
- for name, value in self.model_parameters.items():
- temp = value.data.clone()
- ema_buffer = self.model_buffers[self.param_ema_buffer[name]]
- value.data.copy_(ema_buffer.data)
- ema_buffer.data.copy_(temp)
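The update in `after_train_iter` above is the standard exponential moving average `ema = (1 - m) * ema + m * x`. A small numpy sketch of the same recurrence (function and variable names are illustrative):

```python
import numpy as np

def ema_update(ema, value, momentum):
    """One EMA step, mirroring buffer.mul_(1 - m).add_(value, alpha=m)."""
    return (1.0 - momentum) * ema + momentum * value

ema = np.zeros(3)
target = np.array([1.0, 2.0, 3.0])
for _ in range(2000):
    ema = ema_update(ema, target, momentum=0.01)
# with a constant input, the EMA converges to that input
```

Small momentum means each step moves the backup only slightly toward the current parameters, which is why the hook also warms the momentum up over the first `warm_up` iterations.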
diff --git a/spaces/Rongjiehuang/ProDiff/modules/FastDiff/module/modules.py b/spaces/Rongjiehuang/ProDiff/modules/FastDiff/module/modules.py
deleted file mode 100644
index 29b0f42123b10a0518093c23592277b9622b5266..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/modules/FastDiff/module/modules.py
+++ /dev/null
@@ -1,343 +0,0 @@
-import math
-import torch
-import numpy as np
-import torch.nn as nn
-import torch.nn.functional as F
-
-from torch.nn import Conv1d
-
-LRELU_SLOPE = 0.1
-
-
-
-def get_sinusoid_encoding_table(n_position, d_hid, padding_idx=None):
- ''' Sinusoid position encoding table '''
-
- def cal_angle(position, hid_idx):
- return position / np.power(10000, 2 * (hid_idx // 2) / d_hid)
-
- def get_posi_angle_vec(position):
- return [cal_angle(position, hid_j) for hid_j in range(d_hid)]
-
- sinusoid_table = np.array([get_posi_angle_vec(pos_i)
- for pos_i in range(n_position)])
-
- sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i
- sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1
-
- if padding_idx is not None:
- # zero vector for padding dimension
- sinusoid_table[padding_idx] = 0.
-
- return torch.FloatTensor(sinusoid_table)
-
-
-def overlap_and_add(signal, frame_step):
- """Reconstructs a signal from a framed representation.
-
- Adds potentially overlapping frames of a signal with shape
- `[..., frames, frame_length]`, offsetting subsequent frames by `frame_step`.
- The resulting tensor has shape `[..., output_size]` where
-
- output_size = (frames - 1) * frame_step + frame_length
-
- Args:
- signal: A [..., frames, frame_length] Tensor. All dimensions may be unknown, and rank must be at least 2.
- frame_step: An integer denoting overlap offsets. Must be less than or equal to frame_length.
-
- Returns:
- A Tensor with shape [..., output_size] containing the overlap-added frames of signal's inner-most two dimensions.
- output_size = (frames - 1) * frame_step + frame_length
-
- Based on https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/contrib/signal/python/ops/reconstruction_ops.py
- """
- outer_dimensions = signal.size()[:-2]
- frames, frame_length = signal.size()[-2:]
-
- # gcd=Greatest Common Divisor
- subframe_length = math.gcd(frame_length, frame_step)
- subframe_step = frame_step // subframe_length
- subframes_per_frame = frame_length // subframe_length
- output_size = frame_step * (frames - 1) + frame_length
- output_subframes = output_size // subframe_length
-
- subframe_signal = signal.view(*outer_dimensions, -1, subframe_length)
-
- frame = torch.arange(0, output_subframes).unfold(0, subframes_per_frame, subframe_step)
-    frame = signal.new_tensor(frame).long()  # signal may be on GPU or CPU
- frame = frame.contiguous().view(-1)
-
- result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length)
- device_of_result = result.device
- result.index_add_(-2, frame.to(device_of_result), subframe_signal)
- result = result.view(*outer_dimensions, -1)
- return result
-
-
-class LastLayer(nn.Module):
- def __init__(self, in_channels, out_channels,
- nonlinear_activation, nonlinear_activation_params,
- pad, kernel_size, pad_params, bias):
- super(LastLayer, self).__init__()
- self.activation = getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)
- self.pad = getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params)
- self.conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, bias=bias)
-
- def forward(self, x):
- x = self.activation(x)
- x = self.pad(x)
- x = self.conv(x)
- return x
-
-
-class WeightConv1d(Conv1d):
- """Conv1d module with customized initialization."""
-
- def __init__(self, *args, **kwargs):
- """Initialize Conv1d module."""
-        super(WeightConv1d, self).__init__(*args, **kwargs)
-
- def reset_parameters(self):
- """Reset parameters."""
- torch.nn.init.kaiming_normal_(self.weight, nonlinearity="relu")
- if self.bias is not None:
- torch.nn.init.constant_(self.bias, 0.0)
-
-
-class Conv1d1x1(Conv1d):
- """1x1 Conv1d with customized initialization."""
-
- def __init__(self, in_channels, out_channels, bias):
- """Initialize 1x1 Conv1d module."""
- super(Conv1d1x1, self).__init__(in_channels, out_channels,
- kernel_size=1, padding=0,
- dilation=1, bias=bias)
-
-class DiffusionDBlock(nn.Module):
- def __init__(self, input_size, hidden_size, factor):
- super().__init__()
- self.factor = factor
- self.residual_dense = Conv1d(input_size, hidden_size, 1)
- self.conv = nn.ModuleList([
- Conv1d(input_size, hidden_size, 3, dilation=1, padding=1),
- Conv1d(hidden_size, hidden_size, 3, dilation=2, padding=2),
- Conv1d(hidden_size, hidden_size, 3, dilation=4, padding=4),
- ])
-
- def forward(self, x):
- size = x.shape[-1] // self.factor
-
- residual = self.residual_dense(x)
- residual = F.interpolate(residual, size=size)
-
- x = F.interpolate(x, size=size)
- for layer in self.conv:
- x = F.leaky_relu(x, 0.2)
- x = layer(x)
-
- return x + residual
-
-
-class TimeAware_LVCBlock(torch.nn.Module):
- ''' time-aware location-variable convolutions
- '''
- def __init__(self,
- in_channels,
- cond_channels,
- upsample_ratio,
- conv_layers=4,
- conv_kernel_size=3,
- cond_hop_length=256,
- kpnet_hidden_channels=64,
- kpnet_conv_size=3,
- kpnet_dropout=0.0,
- noise_scale_embed_dim_out=512
- ):
- super().__init__()
-
- self.cond_hop_length = cond_hop_length
- self.conv_layers = conv_layers
- self.conv_kernel_size = conv_kernel_size
- self.convs = torch.nn.ModuleList()
-
- self.upsample = torch.nn.ConvTranspose1d(in_channels, in_channels,
- kernel_size=upsample_ratio*2, stride=upsample_ratio,
- padding=upsample_ratio // 2 + upsample_ratio % 2,
- output_padding=upsample_ratio % 2)
-
-
- self.kernel_predictor = KernelPredictor(
- cond_channels=cond_channels,
- conv_in_channels=in_channels,
- conv_out_channels=2 * in_channels,
- conv_layers=conv_layers,
- conv_kernel_size=conv_kernel_size,
- kpnet_hidden_channels=kpnet_hidden_channels,
- kpnet_conv_size=kpnet_conv_size,
- kpnet_dropout=kpnet_dropout
- )
-
- # the layer-specific fc for noise scale embedding
- self.fc_t = torch.nn.Linear(noise_scale_embed_dim_out, cond_channels)
-
- for i in range(conv_layers):
- padding = (3 ** i) * int((conv_kernel_size - 1) / 2)
- conv = torch.nn.Conv1d(in_channels, in_channels, kernel_size=conv_kernel_size, padding=padding, dilation=3 ** i)
-
- self.convs.append(conv)
-
-
- def forward(self, data):
- ''' forward propagation of the time-aware location-variable convolutions.
- Args:
- data (tuple): (x, audio_down, c, noise_embedding), where
- x (Tensor): the input sequence (batch, in_channels, in_length)
- c (Tensor): the conditioning sequence (batch, cond_channels, cond_length)
-
- Returns:
- Tensor: the output sequence (batch, in_channels, in_length)
- '''
- x, audio_down, c, noise_embedding = data
- batch, in_channels, in_length = x.shape
-
- noise = (self.fc_t(noise_embedding)).unsqueeze(-1) # (B, cond_channels, 1)
- condition = c + noise # (B, 80, T)
- kernels, bias = self.kernel_predictor(condition)
- x = F.leaky_relu(x, 0.2)
- x = self.upsample(x)
-
- for i in range(self.conv_layers):
- x += audio_down
- y = F.leaky_relu(x, 0.2)
- y = self.convs[i](y)
- y = F.leaky_relu(y, 0.2)
-
- k = kernels[:, i, :, :, :, :]
- b = bias[:, i, :, :]
- y = self.location_variable_convolution(y, k, b, 1, self.cond_hop_length)
- x = x + torch.sigmoid(y[:, :in_channels, :]) * torch.tanh(y[:, in_channels:, :])
- return x
-
- def location_variable_convolution(self, x, kernel, bias, dilation, hop_size):
- ''' perform location-variable convolution operation on the input sequence (x) using the local convolution kernel.
- Time: 414 μs ± 309 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each), test on NVIDIA V100.
- Args:
- x (Tensor): the input sequence (batch, in_channels, in_length).
- kernel (Tensor): the local convolution kernel (batch, in_channel, out_channels, kernel_size, kernel_length)
- bias (Tensor): the bias for the local convolution (batch, out_channels, kernel_length)
- dilation (int): the dilation of convolution.
- hop_size (int): the hop_size of the conditioning sequence.
- Returns:
- (Tensor): the output sequence after performing local convolution. (batch, out_channels, in_length).
- '''
- batch, in_channels, in_length = x.shape
- batch, in_channels, out_channels, kernel_size, kernel_length = kernel.shape
-
-
- assert in_length == (kernel_length * hop_size), "lengths of x and kernel are mismatched"
-
- padding = dilation * int((kernel_size - 1) / 2)
- x = F.pad(x, (padding, padding), 'constant', 0) # (batch, in_channels, in_length + 2*padding)
- x = x.unfold(2, hop_size + 2 * padding, hop_size) # (batch, in_channels, kernel_length, hop_size + 2*padding)
-
- if hop_size < dilation:
- x = F.pad(x, (0, dilation), 'constant', 0)
- x = x.unfold(3, dilation,
- dilation) # (batch, in_channels, kernel_length, (hop_size + 2*padding)/dilation, dilation)
- x = x[:, :, :, :, :hop_size]
- x = x.transpose(3, 4) # (batch, in_channels, kernel_length, dilation, (hop_size + 2*padding)/dilation)
- x = x.unfold(4, kernel_size, 1) # (batch, in_channels, kernel_length, dilation, _, kernel_size)
-
- o = torch.einsum('bildsk,biokl->bolsd', x, kernel)
- o = o + bias.unsqueeze(-1).unsqueeze(-1)
- o = o.contiguous().view(batch, out_channels, -1)
- return o
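The two `unfold` calls above do the heavy lifting; their shape bookkeeping can be sketched in pure Python (shapes only, with dilation fixed to 1 as in `forward`, and small hypothetical sizes):

```python
# Pure-Python sketch of the unfold steps: pad, cut hop-sized segments
# (one per local kernel), then slide a kernel-sized window inside each
# segment. Mirrors x.unfold(2, ...) followed by x.unfold(4, ...).
hop_size, kernel_size, kernel_length = 4, 3, 5
padding = (kernel_size - 1) // 2
in_length = hop_size * kernel_length            # matches the assert above
x = list(range(in_length))
x = [0] * padding + x + [0] * padding           # F.pad(x, (padding, padding))
# x.unfold(2, hop_size + 2*padding, hop_size): overlapping segments
segments = [x[i * hop_size : i * hop_size + hop_size + 2 * padding]
            for i in range(kernel_length)]
# x.unfold(4, kernel_size, 1): kernel-sized windows inside each segment
windows = [[seg[j : j + kernel_size] for j in range(hop_size)]
           for seg in segments]
print(len(windows), len(windows[0]), len(windows[0][0]))  # 5 4 3
```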
-
-
-
-class KernelPredictor(torch.nn.Module):
- ''' Kernel predictor for the time-aware location-variable convolutions
- '''
-
- def __init__(self,
- cond_channels,
- conv_in_channels,
- conv_out_channels,
- conv_layers,
- conv_kernel_size=3,
- kpnet_hidden_channels=64,
- kpnet_conv_size=3,
- kpnet_dropout=0.0,
- kpnet_nonlinear_activation="LeakyReLU",
- kpnet_nonlinear_activation_params={"negative_slope": 0.1}
- ):
- '''
- Args:
- cond_channels (int): number of channels for the conditioning sequence
- conv_in_channels (int): number of channels for the input sequence
- conv_out_channels (int): number of channels for the output sequence
- conv_layers (int): number of location-variable convolution layers
- kpnet_* : hyperparameters of the kernel-predictor network
- '''
- super().__init__()
-
- self.conv_in_channels = conv_in_channels
- self.conv_out_channels = conv_out_channels
- self.conv_kernel_size = conv_kernel_size
- self.conv_layers = conv_layers
-
- l_w = conv_in_channels * conv_out_channels * conv_kernel_size * conv_layers
- l_b = conv_out_channels * conv_layers
-
- padding = (kpnet_conv_size - 1) // 2
- self.input_conv = torch.nn.Sequential(
- torch.nn.Conv1d(cond_channels, kpnet_hidden_channels, 5, padding=(5 - 1) // 2, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- )
-
- self.residual_conv = torch.nn.Sequential(
- torch.nn.Dropout(kpnet_dropout),
- torch.nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- torch.nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- torch.nn.Dropout(kpnet_dropout),
- torch.nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- torch.nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- torch.nn.Dropout(kpnet_dropout),
- torch.nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- torch.nn.Conv1d(kpnet_hidden_channels, kpnet_hidden_channels, kpnet_conv_size, padding=padding, bias=True),
- getattr(torch.nn, kpnet_nonlinear_activation)(**kpnet_nonlinear_activation_params),
- )
-
- self.kernel_conv = torch.nn.Conv1d(kpnet_hidden_channels, l_w, kpnet_conv_size,
- padding=padding, bias=True)
- self.bias_conv = torch.nn.Conv1d(kpnet_hidden_channels, l_b, kpnet_conv_size, padding=padding,
- bias=True)
-
- def forward(self, c):
- '''
- Args:
- c (Tensor): the conditioning sequence (batch, cond_channels, cond_length)
- Returns:
- kernels (Tensor): (batch, conv_layers, conv_in_channels, conv_out_channels, conv_kernel_size, cond_length)
- bias (Tensor): (batch, conv_layers, conv_out_channels, cond_length)
- '''
- batch, cond_channels, cond_length = c.shape
-
- c = self.input_conv(c)
- c = c + self.residual_conv(c)
- k = self.kernel_conv(c)
- b = self.bias_conv(c)
-
- kernels = k.contiguous().view(batch,
- self.conv_layers,
- self.conv_in_channels,
- self.conv_out_channels,
- self.conv_kernel_size,
- cond_length)
- bias = b.contiguous().view(batch,
- self.conv_layers,
- self.conv_out_channels,
- cond_length)
- return kernels, bias
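The `l_w` and `l_b` channel counts above are exactly what the final `view` calls consume. A sketch of that bookkeeping with small, hypothetical sizes:

```python
# Channel bookkeeping for KernelPredictor: kernel_conv must emit
# conv_layers * in * out * kernel_size channels so that
# view(batch, conv_layers, in, out, k, cond_length) is valid.
conv_layers, conv_in, conv_out, k = 4, 8, 16, 3   # hypothetical sizes
l_w = conv_in * conv_out * k * conv_layers        # kernel channels
l_b = conv_out * conv_layers                      # bias channels
print(l_w, l_b)  # 1536 64
```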
diff --git a/spaces/SIGGRAPH2022/DCT-Net/README.md b/spaces/SIGGRAPH2022/DCT-Net/README.md
deleted file mode 100644
index ef6ddbc185afe0b8238589d308a929dbd9d71e04..0000000000000000000000000000000000000000
--- a/spaces/SIGGRAPH2022/DCT-Net/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: DCT Net
-emoji: 📉
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Salesforce/EDICT/my_diffusers/utils/dummy_transformers_objects.py b/spaces/Salesforce/EDICT/my_diffusers/utils/dummy_transformers_objects.py
deleted file mode 100644
index e05eb814d17b3a49eb550a89dfd13ee24fdda134..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/utils/dummy_transformers_objects.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# This file is autogenerated by the command `make fix-copies`, do not edit.
-# flake8: noqa
-
-from ..utils import DummyObject, requires_backends
-
-
-class LDMTextToImagePipeline(metaclass=DummyObject):
- _backends = ["transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["transformers"])
-
-
-class StableDiffusionImg2ImgPipeline(metaclass=DummyObject):
- _backends = ["transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["transformers"])
-
-
-class StableDiffusionInpaintPipeline(metaclass=DummyObject):
- _backends = ["transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["transformers"])
-
-
-class StableDiffusionPipeline(metaclass=DummyObject):
- _backends = ["transformers"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["transformers"])
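All the dummy classes above follow one pattern: construction fails with a backend hint instead of the whole package failing at import time. A self-contained stand-in (the `requires_backends` below is a hypothetical sketch, not diffusers' real helper) illustrates it:

```python
def requires_backends(obj, backends):
    # Hypothetical stand-in for diffusers' requires_backends helper:
    # raise at construction time, naming the missing backends.
    raise ImportError(f"{type(obj).__name__} requires the backends {backends}")

class FakePipeline:
    _backends = ["transformers"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, self._backends)

try:
    FakePipeline()
except ImportError as exc:
    print(type(exc).__name__)  # ImportError
```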
diff --git a/spaces/SankarSrin/image-matting-app/ppmatting/utils/__init__.py b/spaces/SankarSrin/image-matting-app/ppmatting/utils/__init__.py
deleted file mode 100644
index 79717c71036b5b730cce8548bc27f6fef7222c21..0000000000000000000000000000000000000000
--- a/spaces/SankarSrin/image-matting-app/ppmatting/utils/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .estimate_foreground_ml import estimate_foreground_ml
-from .utils import get_files, get_image_list, mkdir
diff --git a/spaces/Shad0ws/Voice_Cloning/app.py b/spaces/Shad0ws/Voice_Cloning/app.py
deleted file mode 100644
index 18e84fe677fd23a472fd4ef71be564a5cbc94929..0000000000000000000000000000000000000000
--- a/spaces/Shad0ws/Voice_Cloning/app.py
+++ /dev/null
@@ -1,165 +0,0 @@
-import gradio as gr
-
-import os
-os.system('git clone https://github.com/Edresson/Coqui-TTS -b multilingual-torchaudio-SE TTS')
-os.system('pip install -q -e TTS/')
-os.system('pip install -q torchaudio==0.9.0')
-
-import sys
-TTS_PATH = "TTS/"
-
-# add libraries into environment
-sys.path.append(TTS_PATH) # set this if TTS is not installed globally
-
-import os
-import string
-import time
-import argparse
-import json
-
-import numpy as np
-import IPython
-from IPython.display import Audio
-
-
-import torch
-
-from TTS.tts.utils.synthesis import synthesis
-from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols
-from TTS.utils.audio import AudioProcessor
-
-
-from TTS.tts.models import setup_model
-from TTS.config import load_config
-from TTS.tts.models.vits import *
-
-OUT_PATH = 'out/'
-
-# create output path
-os.makedirs(OUT_PATH, exist_ok=True)
-
-# model vars
-MODEL_PATH = '/home/user/app/best_model_latest.pth.tar'
-CONFIG_PATH = '/home/user/app/config.json'
-TTS_LANGUAGES = "/home/user/app/language_ids.json"
-TTS_SPEAKERS = "/home/user/app/speakers.json"
-USE_CUDA = torch.cuda.is_available()
-
-# load the config
-C = load_config(CONFIG_PATH)
-
-
-# load the audio processor
-ap = AudioProcessor(**C.audio)
-
-speaker_embedding = None
-
-C.model_args['d_vector_file'] = TTS_SPEAKERS
-C.model_args['use_speaker_encoder_as_loss'] = False
-
-model = setup_model(C)
-model.language_manager.set_language_ids_from_file(TTS_LANGUAGES)
-# print(model.language_manager.num_languages, model.embedded_language_dim)
-# print(model.emb_l)
-cp = torch.load(MODEL_PATH, map_location=torch.device('cpu'))
-# remove speaker encoder
-model_weights = cp['model'].copy()
-for key in list(model_weights.keys()):
- if "speaker_encoder" in key:
- del model_weights[key]
-
-model.load_state_dict(model_weights)
-
-
-model.eval()
-
-if USE_CUDA:
- model = model.cuda()
-
-# synthesize voice
-use_griffin_lim = False
-
-os.system('pip install -q pydub ffmpeg-normalize')
-
-CONFIG_SE_PATH = "config_se.json"
-CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar"
-
-from TTS.tts.utils.speakers import SpeakerManager
-from pydub import AudioSegment
-import librosa
-
-SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA)
-
-def compute_spec(ref_file):
- y, sr = librosa.load(ref_file, sr=ap.sample_rate)
- spec = ap.spectrogram(y)
- spec = torch.FloatTensor(spec).unsqueeze(0)
- return spec
-
-
-
-def greet(Text,Voicetoclone,VoiceMicrophone):
- text= "%s" % (Text)
- if Voicetoclone is not None:
- reference_files= "%s" % (Voicetoclone)
- print("path url")
- print(Voicetoclone)
- sample= str(Voicetoclone)
- else:
- reference_files= "%s" % (VoiceMicrophone)
- print("path url")
- print(VoiceMicrophone)
- sample= str(VoiceMicrophone)
- size_mb = os.path.getsize(reference_files) / 1_000_000
- if size_mb > 30 or len(text) > 2000:
- message = "File is greater than 30mb or text inserted is longer than 2000 characters. Please re-try with smaller sizes."
- print(message)
- raise SystemExit(message)
- else:
- os.system(f'ffmpeg-normalize "{sample}" -nt rms -t=-27 -o "{sample}" -ar 16000 -f')
- reference_emb = SE_speaker_manager.compute_d_vector_from_clip(reference_files)
- model.length_scale = 1 # scaler for the duration predictor. The larger it is, the slower the speech.
- model.inference_noise_scale = 0.3 # defines the noise variance applied to the random z vector at inference.
- model.inference_noise_scale_dp = 0.3 # defines the noise variance applied to the duration predictor z vector at inference.
- language_id = 0
-
- print(" > text: {}".format(text))
- wav, alignment, _, _ = synthesis(
- model,
- text,
- C,
- "cuda" in str(next(model.parameters()).device),
- ap,
- speaker_id=None,
- d_vector=reference_emb,
- style_wav=None,
- language_id=language_id,
- enable_eos_bos_chars=C.enable_eos_bos_chars,
- use_griffin_lim=True,
- do_trim_silence=False,
- ).values()
- print("Generated Audio")
- IPython.display.display(Audio(wav, rate=ap.sample_rate))
- #file_name = text.replace(" ", "_")
- #file_name = file_name.translate(str.maketrans('', '', string.punctuation.replace('_', ''))) + '.wav'
- file_name="Audio.wav"
- out_path = os.path.join(OUT_PATH, file_name)
- print(" > Saving output to {}".format(out_path))
- ap.save_wav(wav, out_path)
- return out_path
-
-demo = gr.Interface(
- fn=greet,
- inputs=[gr.inputs.Textbox(label='What would you like the voice to say? (max. 2000 characters per request)'),gr.Audio(type="filepath",source="upload",label='Please upload a voice to clone (max. 30mb)'),gr.Audio(source="microphone", type="filepath", streaming=True)],
- outputs="audio",
- title="Cloning Interface"
- )
-demo.launch()
\ No newline at end of file
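The size guard in `greet()` can be isolated into a small helper; a hedged sketch (the `check_upload` name and 1 KB stand-in file are illustrative, not part of the app):

```python
import os
import tempfile

# Sketch of the guard in greet(): reject reference clips larger than
# 30 MB or prompts longer than 2000 characters before synthesis runs.
def check_upload(path, text, max_mb=30, max_chars=2000):
    size_mb = os.path.getsize(path) / 1_000_000
    return size_mb <= max_mb and len(text) <= max_chars

with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
    f.write(b"\x00" * 1024)  # 1 KB stand-in for a reference clip
print(check_upload(f.name, "hello"))  # True
```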
diff --git a/spaces/Smotto/Vocal-Isolator/src/models/MDX_net/mdx_net.py b/spaces/Smotto/Vocal-Isolator/src/models/MDX_net/mdx_net.py
deleted file mode 100644
index 793b9e58ef474c10c9fd9e3034063d970d4900a7..0000000000000000000000000000000000000000
--- a/spaces/Smotto/Vocal-Isolator/src/models/MDX_net/mdx_net.py
+++ /dev/null
@@ -1,275 +0,0 @@
-# Third-party
-import torch
-import torch.nn as nn
-
-# Local
-from src.Sound_Feature_Extraction.short_time_fourier_transform import STFT
-
-COMPUTATION_DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
-
-
-class Conv_TDF(nn.Module):
- """
- Convolutional Time-Domain Filter (TDF) Module.
-
- Args:
- c (int): The number of input and output channels for the convolutional layers.
- l (int): The number of convolutional layers within the module.
- f (int): The number of features (or units) in the time-domain filter.
- k (int): The size of the convolutional kernels (filters).
- bn (int or None): Batch normalization factor (controls TDF behavior). If None, TDF is not used.
- bias (bool): A boolean flag indicating whether bias terms are included in the linear layers.
-
- Attributes:
- use_tdf (bool): Flag indicating whether TDF is used.
-
- Methods:
- forward(x): Forward pass through the TDF module.
- """
-
- def __init__(self, c, l, f, k, bn, bias=True):
- super(Conv_TDF, self).__init__()
-
- # Determine whether to use TDF (Time-Domain Filter)
- self.use_tdf = bn is not None
-
- # Define a list of convolutional layers within the module
- self.H = nn.ModuleList()
- for i in range(l):
- self.H.append(
- nn.Sequential(
- nn.Conv2d(
- in_channels=c,
- out_channels=c,
- kernel_size=k,
- stride=1,
- padding=k // 2,
- ),
- nn.GroupNorm(2, c),
- nn.ReLU(),
- )
- )
-
- # Define the Time-Domain Filter (TDF) layers if enabled
- if self.use_tdf:
- if bn == 0:
- self.tdf = nn.Sequential(
- nn.Linear(f, f, bias=bias), nn.GroupNorm(2, c), nn.ReLU()
- )
- else:
- self.tdf = nn.Sequential(
- nn.Linear(f, f // bn, bias=bias),
- nn.GroupNorm(2, c),
- nn.ReLU(),
- nn.Linear(f // bn, f, bias=bias),
- nn.GroupNorm(2, c),
- nn.ReLU(),
- )
-
- def forward(self, x):
- # Apply the convolutional layers sequentially
- for h in self.H:
- x = h(x)
-
- # Apply the Time-Domain Filter (TDF) if enabled, and add the result to the original input
- return x + self.tdf(x) if self.use_tdf else x
-
-
-class Conv_TDF_net_trimm(nn.Module):
- """
- Convolutional Time-Domain Filter (TDF) Network with Trimming.
-
- Args:
- L (int): This parameter controls the number of down-sampling (DS) blocks in the network.
- It's divided by 2 to determine how many DS blocks should be created.
- l (int): This parameter represents the number of convolutional layers (or filters) within each dense (fully connected) block.
- g (int): This parameter specifies the number of output channels for the first convolutional layer and is also used to determine the number of channels for subsequent layers in the network.
- dim_f (int): This parameter represents the number of frequency bins (spectrogram columns) in the input audio data.
- dim_t (int): This parameter represents the number of time frames (spectrogram rows) in the input audio data.
- k (int): This parameter specifies the size of convolutional kernels (filters) used in the network's convolutional layers.
- bn (int or None): This parameter controls whether batch normalization is used in the network.
- If it's None, batch normalization may or may not be used based on other conditions in the code.
- bias (bool): This parameter is a boolean flag that controls whether bias terms are included in the convolutional layers.
- overlap (int): This parameter specifies the amount of overlap between consecutive chunks of audio data during processing.
-
- Attributes:
- n (int): The calculated number of down-sampling (DS) blocks.
- dim_f (int): The number of frequency bins (spectrogram columns) in the input audio data.
- dim_t (int): The number of time frames (spectrogram rows) in the input audio data.
- n_fft (int): The size of the Fast Fourier Transform (FFT) window.
- hop (int): The hop size used in the STFT calculations.
- n_bins (int): The number of bins in the frequency domain.
- chunk_size (int): The size of each chunk of audio data.
- target_name (str): The name of the target instrument being separated.
- overlap (int): The amount of overlap between consecutive chunks of audio data during processing.
-
- Methods:
- forward(x): Forward pass through the Conv_TDF_net_trimm network.
- """
-
- def __init__(
- self,
- model_path,
- use_onnx,
- target_name,
- L,
- l,
- g,
- dim_f,
- dim_t,
- k=3,
- hop=1024,
- bn=None,
- mid_tdf=False,
- bias=True,
- overlap=1500,
- ):
- super(Conv_TDF_net_trimm, self).__init__()
- # Dictionary specifying the scale for the number of FFT bins for different target names
- n_fft_scale = {"vocals": 3, "*": 2}
-
- # Number of input and output channels for the initial and final convolutional layers
- out_c = in_c = 4
-
- # Number of down-sampling (DS) blocks
- self.n = L // 2
-
- # Dimensions of the frequency and time axes of the input data
- self.dim_f = 3072
- self.dim_t = 256
-
- # Number of FFT bins (frequencies) and hop size for the Short-Time Fourier Transform (STFT)
- self.n_fft = 7680
- self.hop = hop
- self.n_bins = self.n_fft // 2 + 1
-
- # Chunk size used for processing
- self.chunk_size = hop * (self.dim_t - 1)
-
- # Target name for the model
- self.target_name = target_name
-
- # Overlap between consecutive chunks of audio data during processing
- self.overlap = overlap
-
- # STFT module for audio processing
- self.stft = STFT(self.n_fft, self.hop, self.dim_f)
-
- # Check if ONNX representation of the model should be used
- if not use_onnx:
- # First convolutional layer
- self.first_conv = nn.Sequential(
- nn.Conv2d(in_channels=in_c, out_channels=g, kernel_size=1, stride=1),
- nn.BatchNorm2d(g),
- nn.ReLU(),
- )
-
- # Initialize variables for dense (fully connected) blocks and downsampling (DS) blocks
- f = self.dim_f
- c = g
- self.ds_dense = nn.ModuleList()
- self.ds = nn.ModuleList()
-
- # Loop through down-sampling (DS) blocks
- for i in range(self.n):
- # Create dense (fully connected) block for down-sampling
- self.ds_dense.append(Conv_TDF(c, l, f, k, bn, bias=bias))
-
- # Create down-sampling (DS) block
- scale = (2, 2)
- self.ds.append(
- nn.Sequential(
- nn.Conv2d(
- in_channels=c,
- out_channels=c + g,
- kernel_size=scale,
- stride=scale,
- ),
- nn.BatchNorm2d(c + g),
- nn.ReLU(),
- )
- )
- f = f // 2
- c += g
-
- # Middle dense (fully connected block)
- self.mid_dense = Conv_TDF(c, l, f, k, bn, bias=bias)
-
- # If batch normalization is not specified and mid_tdf is True, use Conv_TDF with bn=0 and bias=False
- if bn is None and mid_tdf:
- self.mid_dense = Conv_TDF(c, l, f, k, bn=0, bias=False)
-
- # Initialize variables for up-sampling (US) blocks
- self.us_dense = nn.ModuleList()
- self.us = nn.ModuleList()
-
- # Loop through up-sampling (US) blocks
- for i in range(self.n):
- scale = (2, 2)
- # Create up-sampling (US) block
- self.us.append(
- nn.Sequential(
- nn.ConvTranspose2d(
- in_channels=c,
- out_channels=c - g,
- kernel_size=scale,
- stride=scale,
- ),
- nn.BatchNorm2d(c - g),
- nn.ReLU(),
- )
- )
- f = f * 2
- c -= g
-
- # Create dense (fully connected) block for up-sampling
- self.us_dense.append(Conv_TDF(c, l, f, k, bn, bias=bias))
-
- # Final convolutional layer
- self.final_conv = nn.Sequential(
- nn.Conv2d(in_channels=c, out_channels=out_c, kernel_size=1, stride=1),
- )
-
- try:
- # Load model state from a file
- self.load_state_dict(
- torch.load(
- f"{model_path}/{target_name}.pt",
- map_location=COMPUTATION_DEVICE,
- )
- )
- print(f"Loading model ({target_name})")
- except FileNotFoundError:
- print(f"Random init ({target_name})")
-
- def forward(self, x):
- """
- Forward pass through the Conv_TDF_net_trimm network.
-
- Args:
- x (torch.Tensor): Input tensor.
-
- Returns:
- torch.Tensor: Output tensor after passing through the network.
- """
- x = self.first_conv(x)
-
- x = x.transpose(-1, -2)
-
- ds_outputs = []
- for i in range(self.n):
- x = self.ds_dense[i](x)
- ds_outputs.append(x)
- x = self.ds[i](x)
-
- x = self.mid_dense(x)
-
- for i in range(self.n):
- x = self.us[i](x)
- x *= ds_outputs[-i - 1]
- x = self.us_dense[i](x)
-
- x = x.transpose(-1, -2)
-
- x = self.final_conv(x)
-
- return x
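The hard-coded STFT numbers in `__init__` above fit together: the chunk size is chosen so one chunk yields exactly `dim_t` STFT frames at the given hop. A sketch of that arithmetic with the module's own defaults:

```python
# STFT geometry used by Conv_TDF_net_trimm: n_fft // 2 + 1 bins before
# trimming to dim_f, and hop * (dim_t - 1) samples per processing chunk.
n_fft, hop, dim_t = 7680, 1024, 256
n_bins = n_fft // 2 + 1          # frequency bins before trimming to dim_f
chunk_size = hop * (dim_t - 1)   # samples per processing chunk
print(n_bins, chunk_size)  # 3841 261120
```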
diff --git a/spaces/Sumit7864/Image-Enhancer/docs/anime_video_model.md b/spaces/Sumit7864/Image-Enhancer/docs/anime_video_model.md
deleted file mode 100644
index 0ad5c85804c1f8636c3720a652b40bbd9df0fe2e..0000000000000000000000000000000000000000
--- a/spaces/Sumit7864/Image-Enhancer/docs/anime_video_model.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# Anime Video Models
-
-:white_check_mark: We add small models that are optimized for anime videos :-)
-More comparisons can be found in [anime_comparisons.md](anime_comparisons.md)
-
-- [How to Use](#how-to-use)
-- [PyTorch Inference](#pytorch-inference)
-- [ncnn Executable File](#ncnn-executable-file)
- - [Step 1: Use ffmpeg to extract frames from video](#step-1-use-ffmpeg-to-extract-frames-from-video)
- - [Step 2: Inference with Real-ESRGAN executable file](#step-2-inference-with-real-esrgan-executable-file)
- - [Step 3: Merge the enhanced frames back into a video](#step-3-merge-the-enhanced-frames-back-into-a-video)
-- [More Demos](#more-demos)
-
-| Models | Scale | Description |
-| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
-| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4 <sup>1</sup> | Anime video model with XS size |
-
-Note:
-<sup>1</sup> This model can also be used for X1, X2, X3.
-
----
-
-The following are some demos (best viewed in full-screen mode).
-
-
-
-
-
-
-
-## How to Use
-
-### PyTorch Inference
-
-```bash
-# download model
-wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P weights
-# single gpu and single process inference
-CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2
-# single gpu and multi process inference (you can use multi-processing to improve GPU utilization)
-CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2
-# multi gpu and multi process inference
-CUDA_VISIBLE_DEVICES=0,1,2,3 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2
-```
-
-```console
-Usage:
---num_process_per_gpu The total number of processes is num_gpu * num_process_per_gpu. The bottleneck of
- the program lies in the IO, so the GPUs are usually not fully utilized. To alleviate
- this issue, you can use multi-processing by setting this parameter, as long as it
- does not exceed the CUDA memory.
---extract_frame_first If you encounter an ffmpeg error when using multi-processing, you can turn this option on.
-```
-
-### NCNN Executable File
-
-#### Step 1: Use ffmpeg to extract frames from video
-
-```bash
-ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
-```
-
-- Remember to create the folder `tmp_frames` ahead of time
-
-#### Step 2: Inference with Real-ESRGAN executable file
-
-1. Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**
-
-1. Taking Windows as an example, run:
-
- ```bash
- ./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg
- ```
-
- - Remember to create the folder `out_frames` ahead of time
-
-#### Step 3: Merge the enhanced frames back into a video
-
-1. First, obtain the fps of the input video by running:
-
- ```bash
- ffmpeg -i onepiece_demo.mp4
- ```
-
- ```console
- Usage:
- -i input video path
- ```
-
- You will get output similar to the following screenshot.
-
-
-
-
-
-2. Merge frames
-
- ```bash
- ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4
- ```
-
- ```console
- Usage:
- -i input video path
- -c:v video encoder (usually we use libx264)
- -r fps, remember to modify it to meet your needs
- -pix_fmt pixel format in video
- ```
-
- If you also want to copy audio from the input videos, run:
-
- ```bash
- ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4
- ```
-
- ```console
- Usage:
- -i input video path, here we use two input streams
- -c:v video encoder (usually we use libx264)
- -r fps, remember to modify it to meet your needs
- -pix_fmt pixel format in video
- ```
-
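The fps probed in step 1 has to appear in both `-r` flags of the step-3 merge. A small sketch (the `merge_cmd` helper and the 23.98 value are illustrative, not part of Real-ESRGAN) that builds the command string:

```python
# Build the step-3 merge command from the fps probed in step 1.
# 23.98 is an example value read off `ffmpeg -i onepiece_demo.mp4`.
def merge_cmd(fps, frames="out_frames/frame%08d.jpg", out="output.mp4"):
    return (f"ffmpeg -r {fps} -i {frames} "
            f"-c:v libx264 -r {fps} -pix_fmt yuv420p {out}")

print(merge_cmd(23.98))
```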
-## More Demos
-
-- Input video for One Piece:
-
-
-
-- Output video for One Piece
-
-
-
-**More comparisons**
-
-
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/config.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/config.py
deleted file mode 100644
index 9e1cb38c254f412cae88890bdd2da92da5232908..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/magics/config.py
+++ /dev/null
@@ -1,140 +0,0 @@
-"""Implementation of configuration-related magic functions.
-"""
-#-----------------------------------------------------------------------------
-# Copyright (c) 2012 The IPython Development Team.
-#
-# Distributed under the terms of the Modified BSD License.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-
-#-----------------------------------------------------------------------------
-# Imports
-#-----------------------------------------------------------------------------
-
-# Stdlib
-import re
-
-# Our own packages
-from IPython.core.error import UsageError
-from IPython.core.magic import Magics, magics_class, line_magic
-from logging import error
-
-#-----------------------------------------------------------------------------
-# Magic implementation classes
-#-----------------------------------------------------------------------------
-
-reg = re.compile(r'^\w+\.\w+$')
-@magics_class
-class ConfigMagics(Magics):
-
- def __init__(self, shell):
- super(ConfigMagics, self).__init__(shell)
- self.configurables = []
-
- @line_magic
- def config(self, s):
- """configure IPython
-
- %config Class[.trait=value]
-
- This magic exposes most of the IPython config system. Any
- Configurable class should be able to be configured with the simple
- line::
-
- %config Class.trait=value
-
- Where `value` will be resolved in the user's namespace, if it is an
- expression or variable name.
-
- Examples
- --------
-
- To see what classes are available for config, pass no arguments::
-
- In [1]: %config
- Available objects for config:
- AliasManager
- DisplayFormatter
- HistoryManager
- IPCompleter
- LoggingMagics
- MagicsManager
- OSMagics
- PrefilterManager
- ScriptMagics
- TerminalInteractiveShell
-
- To view what is configurable on a given class, just pass the class
- name::
-
- In [2]: %config LoggingMagics
- LoggingMagics(Magics) options
- ---------------------------
- LoggingMagics.quiet=
- Suppress output of log state when logging is enabled
- Current: False
-
- but the real use is in setting values::
-
- In [3]: %config LoggingMagics.quiet = True
-
- and these values are read from the user_ns if they are variables::
-
- In [4]: feeling_quiet=False
-
- In [5]: %config LoggingMagics.quiet = feeling_quiet
-
- """
- from traitlets.config.loader import Config
- # some IPython objects are Configurable, but do not yet have
- # any configurable traits. Exclude them from the effects of
- # this magic, as their presence is just noise:
- configurables = sorted(set([ c for c in self.shell.configurables
- if c.__class__.class_traits(config=True)
- ]), key=lambda x: x.__class__.__name__)
- classnames = [ c.__class__.__name__ for c in configurables ]
-
- line = s.strip()
- if not line:
- # print available configurable names
- print("Available objects for config:")
- for name in classnames:
- print(" ", name)
- return
- elif line in classnames:
- # `%config TerminalInteractiveShell` will print trait info for
- # TerminalInteractiveShell
- c = configurables[classnames.index(line)]
- cls = c.__class__
- help = cls.class_get_help(c)
- # strip leading '--' from cl-args:
- help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
- print(help)
- return
- elif reg.match(line):
- cls, attr = line.split('.')
- return getattr(configurables[classnames.index(cls)],attr)
- elif '=' not in line:
- msg = "Invalid config statement: %r, "\
- "should be `Class.trait = value`."
-
- ll = line.lower()
- for classname in classnames:
- if ll == classname.lower():
- msg = msg + '\nDid you mean %s (note the case)?' % classname
- break
-
- raise UsageError( msg % line)
-
- # otherwise, assume we are setting configurables.
- # leave quotes on args when splitting, because we want
- # unquoted args to eval in user_ns
- cfg = Config()
- exec("cfg."+line, self.shell.user_ns, locals())
-
- for configurable in configurables:
- try:
- configurable.update_config(cfg)
- except Exception as e:
- error(e)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_autocall.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_autocall.py
deleted file mode 100644
index 925a1ccae3758683dcee9e8235ae8b9d8969057d..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_autocall.py
+++ /dev/null
@@ -1,67 +0,0 @@
-"""These kinds of tests are less than ideal, but at least they run.
-
-This was an old test that was being run interactively in the top-level tests/
-directory, which we are removing. For now putting this here ensures at least
-we do run the test, though ultimately this functionality should all be tested
-with better-isolated tests that don't rely on the global instance in iptest.
-"""
-from IPython.core.splitinput import LineInfo
-from IPython.core.prefilter import AutocallChecker
-
-
-def doctest_autocall():
- """
- In [1]: def f1(a,b,c):
- ...: return a+b+c
- ...:
-
- In [2]: def f2(a):
- ...: return a + a
- ...:
-
- In [3]: def r(x):
- ...: return True
- ...:
-
- In [4]: ;f2 a b c
- Out[4]: 'a b ca b c'
-
- In [5]: assert _ == "a b ca b c"
-
- In [6]: ,f1 a b c
- Out[6]: 'abc'
-
- In [7]: assert _ == 'abc'
-
- In [8]: print(_)
- abc
-
- In [9]: /f1 1,2,3
- Out[9]: 6
-
- In [10]: assert _ == 6
-
- In [11]: /f2 4
- Out[11]: 8
-
- In [12]: assert _ == 8
-
- In [12]: del f1, f2
-
- In [13]: ,r a
- Out[13]: True
-
- In [14]: assert _ == True
-
- In [15]: r'a'
- Out[15]: 'a'
-
- In [16]: assert _ == 'a'
- """
-
-
-def test_autocall_should_ignore_raw_strings():
- line_info = LineInfo("r'a'")
- pm = ip.prefilter_manager
- ac = AutocallChecker(shell=pm.shell, prefilter_manager=pm, config=pm.config)
- assert ac.check(line_info) is None
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/v5/theme.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/v5/theme.py
deleted file mode 100644
index b536a1ddebe6c311672e6ce2757853ecffa6fb1e..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/vegalite/v5/theme.py
+++ /dev/null
@@ -1,59 +0,0 @@
-"""Tools for enabling and registering chart themes"""
-
-from ...utils.theme import ThemeRegistry
-
-VEGA_THEMES = [
- "ggplot2",
- "quartz",
- "vox",
- "fivethirtyeight",
- "dark",
- "latimes",
- "urbaninstitute",
- "excel",
- "googlecharts",
- "powerbi",
-]
-
-
-class VegaTheme:
- """Implementation of a builtin vega theme."""
-
- def __init__(self, theme):
- self.theme = theme
-
- def __call__(self):
- return {
- "usermeta": {"embedOptions": {"theme": self.theme}},
- "config": {"view": {"continuousWidth": 300, "continuousHeight": 300}},
- }
-
- def __repr__(self):
- return "VegaTheme({!r})".format(self.theme)
-
-
-# The entry point group that can be used by other packages to declare other
-# themes that will be auto-detected. Explicit registration is also
-# allowed by the PluginRegistry API.
-ENTRY_POINT_GROUP = "altair.vegalite.v5.theme" # type: str
-themes = ThemeRegistry(entry_point_group=ENTRY_POINT_GROUP)
-
-themes.register(
- "default",
- lambda: {"config": {"view": {"continuousWidth": 300, "continuousHeight": 300}}},
-)
-themes.register(
- "opaque",
- lambda: {
- "config": {
- "background": "white",
- "view": {"continuousWidth": 300, "continuousHeight": 300},
- }
- },
-)
-themes.register("none", lambda: {})
-
-for theme in VEGA_THEMES:
- themes.register(theme, VegaTheme(theme))
-
-themes.enable("default")
diff --git a/spaces/Suniilkumaar/MusicGen-updated/CHANGELOG.md b/spaces/Suniilkumaar/MusicGen-updated/CHANGELOG.md
deleted file mode 100644
index 24fc214df236b40efead4b1585b01632d9658e9b..0000000000000000000000000000000000000000
--- a/spaces/Suniilkumaar/MusicGen-updated/CHANGELOG.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Changelog
-
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
-
-## [0.0.2a] - TBD
-
-Improved demo, fixed top p (thanks @jnordberg).
-
-Compressor tanh on output to avoid clipping with some styles (especially piano).
-Now repeating the conditioning periodically if it is too short.
-
-More options when launching Gradio app locally (thanks @ashleykleynhans).
-
-Testing out PyTorch 2.0 memory efficient attention.
-
-Added extended generation (infinite length) by slowly moving the windows.
-Note that other implementations exist: https://github.com/camenduru/MusicGen-colab.
-
-## [0.0.1] - 2023-06-09
-
-Initial release, with model evaluation only.
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/utils/env.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/utils/env.py
deleted file mode 100644
index e3f0d92529e193e6d8339419bcd9bed7901a7769..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/utils/env.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-"""This file holding some environment constant for sharing by other files."""
-
-import os.path as osp
-import subprocess
-import sys
-from collections import defaultdict
-
-import cv2
-import torch
-
-import annotator.uniformer.mmcv as mmcv
-from .parrots_wrapper import get_build_config
-
-
-def collect_env():
- """Collect the information of the running environments.
-
- Returns:
- dict: The environment information. The following fields are contained.
-
- - sys.platform: The variable of ``sys.platform``.
- - Python: Python version.
- - CUDA available: Bool, indicating if CUDA is available.
- - GPU devices: Device type of each GPU.
- - CUDA_HOME (optional): The env var ``CUDA_HOME``.
- - NVCC (optional): NVCC version.
- - GCC: GCC version, "n/a" if GCC is not installed.
- - PyTorch: PyTorch version.
- - PyTorch compiling details: The output of \
- ``torch.__config__.show()``.
- - TorchVision (optional): TorchVision version.
- - OpenCV: OpenCV version.
- - MMCV: MMCV version.
- - MMCV Compiler: The GCC version for compiling MMCV ops.
- - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops.
- """
- env_info = {}
- env_info['sys.platform'] = sys.platform
- env_info['Python'] = sys.version.replace('\n', '')
-
- cuda_available = torch.cuda.is_available()
- env_info['CUDA available'] = cuda_available
-
- if cuda_available:
- devices = defaultdict(list)
- for k in range(torch.cuda.device_count()):
- devices[torch.cuda.get_device_name(k)].append(str(k))
- for name, device_ids in devices.items():
- env_info['GPU ' + ','.join(device_ids)] = name
-
- from annotator.uniformer.mmcv.utils.parrots_wrapper import _get_cuda_home
- CUDA_HOME = _get_cuda_home()
- env_info['CUDA_HOME'] = CUDA_HOME
-
- if CUDA_HOME is not None and osp.isdir(CUDA_HOME):
- try:
- nvcc = osp.join(CUDA_HOME, 'bin/nvcc')
- nvcc = subprocess.check_output(
- f'"{nvcc}" -V | tail -n1', shell=True)
- nvcc = nvcc.decode('utf-8').strip()
- except subprocess.SubprocessError:
- nvcc = 'Not Available'
- env_info['NVCC'] = nvcc
-
- try:
- gcc = subprocess.check_output('gcc --version | head -n1', shell=True)
- gcc = gcc.decode('utf-8').strip()
- env_info['GCC'] = gcc
- except subprocess.CalledProcessError: # gcc is unavailable
- env_info['GCC'] = 'n/a'
-
- env_info['PyTorch'] = torch.__version__
- env_info['PyTorch compiling details'] = get_build_config()
-
- try:
- import torchvision
- env_info['TorchVision'] = torchvision.__version__
- except ModuleNotFoundError:
- pass
-
- env_info['OpenCV'] = cv2.__version__
-
- env_info['MMCV'] = mmcv.__version__
-
- try:
- from annotator.uniformer.mmcv.ops import get_compiler_version, get_compiling_cuda_version
- except ModuleNotFoundError:
- env_info['MMCV Compiler'] = 'n/a'
- env_info['MMCV CUDA Compiler'] = 'n/a'
- else:
- env_info['MMCV Compiler'] = get_compiler_version()
- env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version()
-
- return env_info
diff --git a/spaces/TH5314/newbing/src/app/layout.tsx b/spaces/TH5314/newbing/src/app/layout.tsx
deleted file mode 100644
index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/app/layout.tsx
+++ /dev/null
@@ -1,47 +0,0 @@
-import { Metadata } from 'next'
-import { Toaster } from 'react-hot-toast'
-import { TailwindIndicator } from '@/components/tailwind-indicator'
-import { Providers } from '@/components/providers'
-import { Header } from '@/components/header'
-
-import '@/app/globals.scss'
-
-
-export const metadata: Metadata = {
- title: {
- default: 'Bing AI Chatbot',
- template: `%s - Bing AI Chatbot`
- },
- description: 'Bing AI Chatbot Web App.',
- themeColor: [
- { media: '(prefers-color-scheme: light)', color: 'white' },
- { media: '(prefers-color-scheme: dark)', color: 'dark' }
- ],
- icons: {
- icon: '/favicon.ico',
- shortcut: '../assets/images/logo.svg',
- apple: '../assets/images/logo.svg'
- }
-}
-
-interface RootLayoutProps {
- children: React.ReactNode
-}
-
-export default function RootLayout({ children }: RootLayoutProps) {
- return (
-    <html lang="en" suppressHydrationWarning>
-      <body>
-        <Toaster />
-        {/* @ts-ignore */}
-        <Providers>
-          <Header />
-          {children}
-        </Providers>
-        <TailwindIndicator />
-      </body>
-    </html>
- )
-}
diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/models/test_nn_interaction.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/models/test_nn_interaction.py
deleted file mode 100644
index a8ee8792846ba367e76e6606a44c2512fd1f4bba..0000000000000000000000000000000000000000
--- a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/models/test_nn_interaction.py
+++ /dev/null
@@ -1,113 +0,0 @@
-from cmath import isnan
-import pytest
-
-import torch
-from mmcv import Config
-
-from risk_biased.models.nn_blocks import (
- SequenceDecoderLSTM,
- SequenceDecoderMLP,
- SequenceEncoderMaskedLSTM,
- SequenceEncoderMLP,
- AttentionBlock,
-)
-
-
-@pytest.fixture(scope="module")
-def params():
- torch.manual_seed(0)
- cfg = Config()
- cfg.batch_size = 4
- cfg.input_dim = 10
- cfg.output_dim = 15
- cfg.latent_dim = 3
- cfg.h_dim = 32
- cfg.num_attention_heads = 4
- cfg.num_h_layers = 2
- cfg.device = "cpu"
- return cfg
-
-
-def test_AttentionBlock(params):
- attention = AttentionBlock(params.h_dim, params.num_attention_heads)
- num_agents = 4
- num_map_objects = 8
- encoded_agents = torch.rand(params.batch_size, num_agents, params.h_dim)
- mask_agents = torch.rand(params.batch_size, num_agents) > 0.1
- encoded_absolute_agents = torch.rand(params.batch_size, num_agents, params.h_dim)
- encoded_map = torch.rand(params.batch_size, num_map_objects, params.h_dim)
- mask_map = torch.rand(params.batch_size, num_map_objects) > 0.1
- output = attention(
- encoded_agents, mask_agents, encoded_absolute_agents, encoded_map, mask_map
- )
- # check shape
- assert output.shape == (params.batch_size, num_agents, params.h_dim)
- assert not torch.isnan(output).any()
-
-
-def test_SequenceDecoder(params):
- decoder = SequenceDecoderLSTM(params.h_dim)
- num_agents = 8
- sequence_length = 16
-
- input = torch.rand(params.batch_size, num_agents, params.h_dim)
-
- output = decoder(input, sequence_length)
-
- assert output.shape == (
- params.batch_size,
- num_agents,
- sequence_length,
- params.h_dim,
- )
- assert not torch.isnan(output).any()
-
-
-def test_SequenceDecoderMLP(params):
- sequence_length = 16
- decoder = SequenceDecoderMLP(
- params.h_dim, params.num_h_layers, sequence_length, True
- )
- num_agents = 8
-
- input = torch.rand(params.batch_size, num_agents, params.h_dim)
-
- output = decoder(input, sequence_length)
-
- assert output.shape == (
- params.batch_size,
- num_agents,
- sequence_length,
- params.h_dim,
- )
- assert not torch.isnan(output).any()
-
-
-def test_SequenceEncoder(params):
- encoder = SequenceEncoderMaskedLSTM(params.input_dim, params.h_dim)
- num_agents = 8
- sequence_length = 16
-
- input = torch.rand(params.batch_size, num_agents, sequence_length, params.input_dim)
- mask_input = torch.rand(params.batch_size, num_agents, sequence_length) > 0.1
-
- output = encoder(input, mask_input)
-
- assert output.shape == (params.batch_size, num_agents, params.h_dim)
- assert not torch.isnan(output).any()
-
-
-def test_SequenceEncoderMLP(params):
- sequence_length = 16
- num_agents = 8
- encoder = SequenceEncoderMLP(
- params.input_dim, params.h_dim, params.num_h_layers, sequence_length, True
- )
-
- input = torch.rand(params.batch_size, num_agents, sequence_length, params.input_dim)
- mask_input = torch.rand(params.batch_size, num_agents, sequence_length) > 0.1
-
- output = encoder(input, mask_input)
-
- assert output.shape == (params.batch_size, num_agents, params.h_dim)
- assert not torch.isnan(output).any()
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/styles/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/styles/__init__.py
deleted file mode 100644
index 7401cf5d3a372da67d241dafe83ba756e015eafa..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/styles/__init__.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""
- pygments.styles
- ~~~~~~~~~~~~~~~
-
- Contains built-in styles.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.plugin import find_plugin_styles
-from pip._vendor.pygments.util import ClassNotFound
-
-#: A dictionary of built-in styles, mapping style names to
-#: ``'submodule::classname'`` strings.
-STYLE_MAP = {
- 'default': 'default::DefaultStyle',
- 'emacs': 'emacs::EmacsStyle',
- 'friendly': 'friendly::FriendlyStyle',
- 'friendly_grayscale': 'friendly_grayscale::FriendlyGrayscaleStyle',
- 'colorful': 'colorful::ColorfulStyle',
- 'autumn': 'autumn::AutumnStyle',
- 'murphy': 'murphy::MurphyStyle',
- 'manni': 'manni::ManniStyle',
- 'material': 'material::MaterialStyle',
- 'monokai': 'monokai::MonokaiStyle',
- 'perldoc': 'perldoc::PerldocStyle',
- 'pastie': 'pastie::PastieStyle',
- 'borland': 'borland::BorlandStyle',
- 'trac': 'trac::TracStyle',
- 'native': 'native::NativeStyle',
- 'fruity': 'fruity::FruityStyle',
- 'bw': 'bw::BlackWhiteStyle',
- 'vim': 'vim::VimStyle',
- 'vs': 'vs::VisualStudioStyle',
- 'tango': 'tango::TangoStyle',
- 'rrt': 'rrt::RrtStyle',
- 'xcode': 'xcode::XcodeStyle',
- 'igor': 'igor::IgorStyle',
- 'paraiso-light': 'paraiso_light::ParaisoLightStyle',
- 'paraiso-dark': 'paraiso_dark::ParaisoDarkStyle',
- 'lovelace': 'lovelace::LovelaceStyle',
- 'algol': 'algol::AlgolStyle',
- 'algol_nu': 'algol_nu::Algol_NuStyle',
- 'arduino': 'arduino::ArduinoStyle',
- 'rainbow_dash': 'rainbow_dash::RainbowDashStyle',
- 'abap': 'abap::AbapStyle',
- 'solarized-dark': 'solarized::SolarizedDarkStyle',
- 'solarized-light': 'solarized::SolarizedLightStyle',
- 'sas': 'sas::SasStyle',
- 'staroffice' : 'staroffice::StarofficeStyle',
- 'stata': 'stata_light::StataLightStyle',
- 'stata-light': 'stata_light::StataLightStyle',
- 'stata-dark': 'stata_dark::StataDarkStyle',
- 'inkpot': 'inkpot::InkPotStyle',
- 'zenburn': 'zenburn::ZenburnStyle',
- 'gruvbox-dark': 'gruvbox::GruvboxDarkStyle',
- 'gruvbox-light': 'gruvbox::GruvboxLightStyle',
- 'dracula': 'dracula::DraculaStyle',
- 'one-dark': 'onedark::OneDarkStyle',
- 'lilypond' : 'lilypond::LilyPondStyle',
- 'nord': 'nord::NordStyle',
- 'nord-darker': 'nord::NordDarkerStyle',
- 'github-dark': 'gh_dark::GhDarkStyle'
-}
-
-
-def get_style_by_name(name):
- """
- Return a style class by its short name. The names of the builtin styles
- are listed in :data:`pygments.styles.STYLE_MAP`.
-
- Will raise :exc:`pygments.util.ClassNotFound` if no style of that name is
- found.
- """
- if name in STYLE_MAP:
- mod, cls = STYLE_MAP[name].split('::')
- builtin = "yes"
- else:
- for found_name, style in find_plugin_styles():
- if name == found_name:
- return style
- # perhaps it got dropped into our styles package
- builtin = ""
- mod = name
- cls = name.title() + "Style"
-
- try:
- mod = __import__('pygments.styles.' + mod, None, None, [cls])
- except ImportError:
- raise ClassNotFound("Could not find style module %r" % mod +
- (builtin and ", though it should be builtin") + ".")
- try:
- return getattr(mod, cls)
- except AttributeError:
- raise ClassNotFound("Could not find style class %r in style module." % cls)
-
-
-def get_all_styles():
- """Return a generator for all styles by name, both builtin and plugin."""
- yield from STYLE_MAP
- for name, _ in find_plugin_styles():
- yield name
diff --git a/spaces/ThirdEyeData/TagDiciphering/static_shape.py b/spaces/ThirdEyeData/TagDiciphering/static_shape.py
deleted file mode 100644
index 4f91608328db22a63523db58dc6531b388c3c309..0000000000000000000000000000000000000000
--- a/spaces/ThirdEyeData/TagDiciphering/static_shape.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Helper functions to access TensorShape values.
-
-The rank 4 tensor_shape must be of the form [batch_size, height, width, depth].
-"""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-
-def get_dim_as_int(dim):
- """Utility to get v1 or v2 TensorShape dim as an int.
-
- Args:
- dim: The TensorShape dimension to get as an int
-
- Returns:
- None or an int.
- """
- try:
- return dim.value
- except AttributeError:
- return dim
-
-
-def get_batch_size(tensor_shape):
- """Returns batch size from the tensor shape.
-
- Args:
- tensor_shape: A rank 4 TensorShape.
-
- Returns:
- An integer representing the batch size of the tensor.
- """
- tensor_shape.assert_has_rank(rank=4)
- return get_dim_as_int(tensor_shape[0])
-
-
-def get_height(tensor_shape):
- """Returns height from the tensor shape.
-
- Args:
- tensor_shape: A rank 4 TensorShape.
-
- Returns:
- An integer representing the height of the tensor.
- """
- tensor_shape.assert_has_rank(rank=4)
- return get_dim_as_int(tensor_shape[1])
-
-
-def get_width(tensor_shape):
- """Returns width from the tensor shape.
-
- Args:
- tensor_shape: A rank 4 TensorShape.
-
- Returns:
- An integer representing the width of the tensor.
- """
- tensor_shape.assert_has_rank(rank=4)
- return get_dim_as_int(tensor_shape[2])
-
-
-def get_depth(tensor_shape):
- """Returns depth from the tensor shape.
-
- Args:
- tensor_shape: A rank 4 TensorShape.
-
- Returns:
- An integer representing the depth of the tensor.
- """
- tensor_shape.assert_has_rank(rank=4)
- return get_dim_as_int(tensor_shape[3])
diff --git a/spaces/TwoCH4/White-box-Cartoonization/wbc/network.py b/spaces/TwoCH4/White-box-Cartoonization/wbc/network.py
deleted file mode 100644
index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000
--- a/spaces/TwoCH4/White-box-Cartoonization/wbc/network.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import tensorflow as tf
-import numpy as np
-import tensorflow.contrib.slim as slim
-
-
-
-def resblock(inputs, out_channel=32, name='resblock'):
-
- with tf.variable_scope(name):
-
- x = slim.convolution2d(inputs, out_channel, [3, 3],
- activation_fn=None, scope='conv1')
- x = tf.nn.leaky_relu(x)
- x = slim.convolution2d(x, out_channel, [3, 3],
- activation_fn=None, scope='conv2')
-
- return x + inputs
-
-
-
-
-def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False):
- with tf.variable_scope(name, reuse=reuse):
-
- x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None)
- x0 = tf.nn.leaky_relu(x0)
-
- x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None)
- x1 = tf.nn.leaky_relu(x1)
- x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None)
- x1 = tf.nn.leaky_relu(x1)
-
- x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None)
- x2 = tf.nn.leaky_relu(x2)
- x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None)
- x2 = tf.nn.leaky_relu(x2)
-
- for idx in range(num_blocks):
- x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx))
-
- x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None)
- x2 = tf.nn.leaky_relu(x2)
-
- h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2]
- x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2))
- x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None)
- x3 = tf.nn.leaky_relu(x3)
- x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None)
- x3 = tf.nn.leaky_relu(x3)
-
- h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2]
- x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2))
- x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None)
- x4 = tf.nn.leaky_relu(x4)
- x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None)
-
- return x4
-
-if __name__ == '__main__':
-
-
- pass
\ No newline at end of file
diff --git a/spaces/User1342/WatchTower/Pinpoint/Sanitizer.py b/spaces/User1342/WatchTower/Pinpoint/Sanitizer.py
deleted file mode 100644
index f025934fb42a20c8fcfb9d640f9077264c7f8190..0000000000000000000000000000000000000000
--- a/spaces/User1342/WatchTower/Pinpoint/Sanitizer.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import os.path
-
-from nltk import *
-from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
-
-from Pinpoint.Logger import *
-
-# If NLTK data doesn't exist, downloads it
-try:
- tagged = pos_tag(["test"])
-except LookupError:
- download()
-
-
-# nltk.download() #todo how to get this to run once?
-
-class sanitization():
-    """
-    This class is used to sanitize a given corpus of data: removing stop words, stemming words, removing short
-    words, removing non-alphabetic words, and lower-casing words. To save time on repeat runs, a local copy of the
-    serialised corpus is saved and reused unless this behaviour is overridden.
-    """
-
- def sanitize(self, text, output_folder, force_new_data_and_dont_persisit=False):
-        """
-        Entry function for sanitizing text.
-        :param text: the text to sanitize
-        :param output_folder: folder in which the sanitized text file is persisted
-        :param force_new_data_and_dont_persisit: if True, re-sanitize even if a cached file exists and do not persist the result
-        :return: sanitized text
-        """
- sanitize_file_name = os.path.join(output_folder, "{}-sanitized_text.txt".format(uuid.uuid4()))
- final_text = ""
-
- # If a file exists don't sanitize given text
- if os.path.isfile(sanitize_file_name) and not force_new_data_and_dont_persisit:
- logger.print_message("Sanitized file exists. Using data")
-
- with open(sanitize_file_name, 'r', encoding="utf8") as file_to_write:
- final_text = file_to_write.read()
-
- else:
- total_words = len(text.split(" "))
- number = 0
- logger.print_message("Starting sanitization... {} words to go".format(total_words))
- for word in text.split(" "):
- number = number + 1
- word = self.remove_non_alpha(word)
- word = self.lower(word)
- word = self.stemmer(word)
- word = self.remove_stop_words(word)
- word = self.remove_small_words(word)
-
- if word is None:
- continue
-
- final_text = final_text + word + " "
- logger.print_message("Completed {} of {} sanitized words".format(number, total_words))
-
-            final_text = final_text.replace("  ", " ")
-
- if not force_new_data_and_dont_persisit:
- with open(sanitize_file_name, 'w', encoding="utf8") as file_to_write:
- file_to_write.write(final_text)
-
- final_text = final_text.strip()
- return final_text
-
- def stemmer(self, word):
- """
-        Get the stems of words.
-        :param word:
-        :return: the stemmed word, produced by the Porter stemmer
- """
-
- porter = PorterStemmer()
-
-        # TODO: should another stemmer be assessed?
- # lancaster = LancasterStemmer()
- # stemmed_word = lancaster.stem(word)
- stemmed_word = porter.stem(word)
-
- return stemmed_word
-
- def lower(self, word):
- """
-        Get the lower-case representation of a word.
- :param word:
- :return: the lowercase representation of the word
- """
- return word.lower()
-
- def remove_stop_words(self, text):
- """
- Remove stop words
- :param text:
- :return: the word without stop words
- """
-
- text_without_stopwords = [word for word in text.split() if word not in ENGLISH_STOP_WORDS]
-
- final_string = ""
-
- for word in text_without_stopwords:
- final_string = final_string + word + " "
-
- return final_string
-
- def remove_non_alpha(self, word):
- """
- Removes non alphabet characters (Excluding spaces)
- :param word:
- :return: the word with non-alpha characters removed
- """
-        word = word.replace("\n", " ").replace("\t", " ").replace("  ", " ")
- regex = re.compile('[^a-zA-Z ]')
-
- return regex.sub('', word)
-
- def remove_small_words(self, word, length_to_remove_if_not_equal=4):
- """
-        Removes words that are too short; by default, words of 3 characters or fewer are removed.
-        :param word:
-        :param length_to_remove_if_not_equal: minimum length a word must have to be kept (default 4)
-        :return: "" if the word is shorter than the minimum length, otherwise the word
- """
-
- new_word = ""
- if len(word) >= length_to_remove_if_not_equal:
- new_word = word
-
- return new_word
diff --git a/spaces/VinayDBhagat/GenerateCustomerInsights/README.md b/spaces/VinayDBhagat/GenerateCustomerInsights/README.md
deleted file mode 100644
index b50081365d75e2a4f8ef55a6815a1730e1881da0..0000000000000000000000000000000000000000
--- a/spaces/VinayDBhagat/GenerateCustomerInsights/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GenerateCustomerInsights
-emoji: 💻
-colorFrom: purple
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Violetmae14/images-to-audio/index.html b/spaces/Violetmae14/images-to-audio/index.html
deleted file mode 100644
index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000
--- a/spaces/Violetmae14/images-to-audio/index.html
+++ /dev/null
@@ -1,19 +0,0 @@
-<!doctype html>
-<html>
-	<head>
-		<meta charset="utf-8" />
-		<meta name="viewport" content="width=device-width" />
-		<title>My static Space</title>
-		<link rel="stylesheet" href="style.css" />
-	</head>
-	<body>
-		<div class="card">
-			<h1>Welcome to your static Space!</h1>
-			<p>You can modify this app directly by editing <i>index.html</i> in the <b>Files</b> and <b>versions</b> tab.</p>
-		</div>
-	</body>
-</html>
diff --git a/spaces/VoiceHero69/changer/setup_tools/__init__.py b/spaces/VoiceHero69/changer/setup_tools/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/__init__.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/loggers/wandb/__init__.py b/spaces/YONG627/456123/yolov5-code-main/utils/loggers/wandb/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/backbone/utils.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/backbone/utils.py
deleted file mode 100644
index e71db21f1223c87cceeb422a70888f7bac42bb18..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/grit/modeling/backbone/utils.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# This code is from https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/utils.py
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-__all__ = [
- "window_partition",
- "window_unpartition",
- "add_decomposed_rel_pos",
- "get_abs_pos",
- "PatchEmbed",
-]
-
-def window_partition(x, window_size):
- """
- Partition into non-overlapping windows with padding if needed.
- Args:
- x (tensor): input tokens with [B, H, W, C].
- window_size (int): window size.
-
- Returns:
- windows: windows after partition with [B * num_windows, window_size, window_size, C].
- (Hp, Wp): padded height and width before partition
- """
- B, H, W, C = x.shape
-
- pad_h = (window_size - H % window_size) % window_size
- pad_w = (window_size - W % window_size) % window_size
- if pad_h > 0 or pad_w > 0:
- x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h))
- Hp, Wp = H + pad_h, W + pad_w
-
- x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows, (Hp, Wp)
-
-
-def window_unpartition(windows, window_size, pad_hw, hw):
- """
-    Window unpartition into original sequences and remove padding.
-    Args:
-        windows (tensor): input tokens with [B * num_windows, window_size, window_size, C].
- window_size (int): window size.
- pad_hw (Tuple): padded height and width (Hp, Wp).
- hw (Tuple): original height and width (H, W) before padding.
-
- Returns:
- x: unpartitioned sequences with [B, H, W, C].
- """
- Hp, Wp = pad_hw
- H, W = hw
- B = windows.shape[0] // (Hp * Wp // window_size // window_size)
- x = windows.view(B, Hp // window_size, Wp // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1)
-
- if Hp > H or Wp > W:
- x = x[:, :H, :W, :].contiguous()
- return x
-
-
-def get_rel_pos(q_size, k_size, rel_pos):
- """
- Get relative positional embeddings according to the relative positions of
- query and key sizes.
- Args:
- q_size (int): size of query q.
- k_size (int): size of key k.
- rel_pos (Tensor): relative position embeddings (L, C).
-
- Returns:
- Extracted positional embeddings according to relative positions.
- """
- max_rel_dist = int(2 * max(q_size, k_size) - 1)
- # Interpolate rel pos if needed.
- if rel_pos.shape[0] != max_rel_dist:
- # Interpolate rel pos.
- rel_pos_resized = F.interpolate(
- rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1),
- size=max_rel_dist,
- mode="linear",
- )
- rel_pos_resized = rel_pos_resized.reshape(-1, max_rel_dist).permute(1, 0)
- else:
- rel_pos_resized = rel_pos
-
- # Scale the coords with short length if shapes for q and k are different.
- q_coords = torch.arange(q_size)[:, None] * max(k_size / q_size, 1.0)
- k_coords = torch.arange(k_size)[None, :] * max(q_size / k_size, 1.0)
- relative_coords = (q_coords - k_coords) + (k_size - 1) * max(q_size / k_size, 1.0)
-
- return rel_pos_resized[relative_coords.long()]
-
-
-def add_decomposed_rel_pos(attn, q, rel_pos_h, rel_pos_w, q_size, k_size):
- """
- Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`.
- https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py # noqa B950
- Args:
- attn (Tensor): attention map.
- q (Tensor): query q in the attention layer with shape (B, q_h * q_w, C).
- rel_pos_h (Tensor): relative position embeddings (Lh, C) for height axis.
- rel_pos_w (Tensor): relative position embeddings (Lw, C) for width axis.
- q_size (Tuple): spatial sequence size of query q with (q_h, q_w).
- k_size (Tuple): spatial sequence size of key k with (k_h, k_w).
-
- Returns:
- attn (Tensor): attention map with added relative positional embeddings.
- """
- q_h, q_w = q_size
- k_h, k_w = k_size
- Rh = get_rel_pos(q_h, k_h, rel_pos_h)
- Rw = get_rel_pos(q_w, k_w, rel_pos_w)
-
- B, _, dim = q.shape
- r_q = q.reshape(B, q_h, q_w, dim)
- rel_h = torch.einsum("bhwc,hkc->bhwk", r_q, Rh)
- rel_w = torch.einsum("bhwc,wkc->bhwk", r_q, Rw)
-
- attn = (
- attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
- ).view(B, q_h * q_w, k_h * k_w)
-
- return attn
-
-
-def get_abs_pos(abs_pos, has_cls_token, hw):
- """
- Calculate absolute positional embeddings. If needed, resize embeddings and remove cls_token
- dimension for the original embeddings.
- Args:
- abs_pos (Tensor): absolute positional embeddings with (1, num_position, C).
- has_cls_token (bool): If true, has 1 embedding in abs_pos for cls token.
- hw (Tuple): size of input image tokens.
-
- Returns:
- Absolute positional embeddings after processing with shape (1, H, W, C)
- """
- h, w = hw
- if has_cls_token:
- abs_pos = abs_pos[:, 1:]
- xy_num = abs_pos.shape[1]
- size = int(math.sqrt(xy_num))
- assert size * size == xy_num
-
- if size != h or size != w:
- new_abs_pos = F.interpolate(
- abs_pos.reshape(1, size, size, -1).permute(0, 3, 1, 2),
- size=(h, w),
- mode="bicubic",
- align_corners=False,
- )
-
- return new_abs_pos.permute(0, 2, 3, 1)
- else:
- return abs_pos.reshape(1, h, w, -1)
-
-
-class PatchEmbed(nn.Module):
- """
- Image to Patch Embedding.
- """
-
- def __init__(
- self, kernel_size=(16, 16), stride=(16, 16), padding=(0, 0), in_chans=3, embed_dim=768
- ):
- """
- Args:
- kernel_size (Tuple): kernel size of the projection layer.
- stride (Tuple): stride of the projection layer.
- padding (Tuple): padding size of the projection layer.
- in_chans (int): Number of input image channels.
- embed_dim (int): Patch embedding dimension.
- """
- super().__init__()
-
- self.proj = nn.Conv2d(
- in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding
- )
-
- def forward(self, x):
- x = self.proj(x)
- # B C H W -> B H W C
- x = x.permute(0, 2, 3, 1)
- return x
diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/modules/activations.py b/spaces/Yudha515/Rvc-Models/audiocraft/modules/activations.py
deleted file mode 100644
index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/audiocraft/modules/activations.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from typing import Union, Callable
-
-
-class CustomGLU(nn.Module):
- """Custom Gated Linear Unit activation.
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half
- of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation
- function (i.e. sigmoid, swish, etc.).
-
- Args:
- activation (nn.Module): The custom activation to apply in the Gated Linear Unit
- dim (int): the dimension on which to split the input. Default: -1
-
- Shape:
- - Input: :math:`(\ast_1, N, \ast_2)` where `*` means any number of additional
- dimensions
- - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2`
-
- Examples::
- >>> m = CustomGLU(nn.Sigmoid())
- >>> input = torch.randn(4, 2)
- >>> output = m(input)
- """
- def __init__(self, activation: nn.Module, dim: int = -1):
- super(CustomGLU, self).__init__()
- self.dim = dim
- self.activation = activation
-
- def forward(self, x: Tensor):
- assert x.shape[self.dim] % 2 == 0 # M = N / 2
- a, b = torch.chunk(x, 2, dim=self.dim)
- return a * self.activation(b)
-
-
-class SwiGLU(CustomGLU):
- """SiLU Gated Linear Unit activation.
- Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(SwiGLU, self).__init__(nn.SiLU(), dim)
-
-
-class GeGLU(CustomGLU):
- """GeLU Gated Linear Unit activation.
- Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(GeGLU, self).__init__(nn.GELU(), dim)
-
-
-class ReGLU(CustomGLU):
- """ReLU Gated Linear Unit activation.
- Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is
- the first half of the input matrices, :math:`b` is the second half.
-
- Args:
- dim (int): the dimension on which to split the input. Default: -1
- """
- def __init__(self, dim: int = -1):
- super(ReGLU, self).__init__(nn.ReLU(), dim)
-
-
-def get_activation_fn(
- activation: Union[str, Callable[[Tensor], Tensor]]
-) -> Union[str, Callable[[Tensor], Tensor]]:
- """Helper function to map an activation string to the activation class.
- If the supplied activation is not a string that is recognized, the activation is passed back.
-
- Args:
- activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check
- """
- if isinstance(activation, str):
- if activation == "reglu":
- return ReGLU()
- elif activation == "geglu":
- return GeGLU()
- elif activation == "swiglu":
- return SwiGLU()
- return activation
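The GLU variants above all reduce to the same move: split the input in half along one dimension and gate the first half with an activation of the second. A dependency-free sketch of that logic (plain Python lists standing in for tensors; `custom_glu` and `sigmoid` here are illustrative helpers, not part of this module):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def custom_glu(x, activation):
    # Split x in half along its (only) dimension: the first half gates
    # the activated second half, i.e. a * f(b).
    assert len(x) % 2 == 0
    mid = len(x) // 2
    a, b = x[:mid], x[mid:]
    return [ai * activation(bi) for ai, bi in zip(a, b)]

# Sigmoid-gated GLU on a 4-element "tensor" -> 2-element output.
out = custom_glu([1.0, 2.0, 0.0, 0.0], sigmoid)
# sigmoid(0) == 0.5, so out == [0.5, 1.0]
```

Swapping `sigmoid` for SiLU, GELU, or ReLU gives the SwiGLU, GeGLU, and ReGLU flavors defined above.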
diff --git a/spaces/Ziqi/ReVersion/style.css b/spaces/Ziqi/ReVersion/style.css
deleted file mode 100644
index af4e23927a03e13fd16ebc7b4eb6eb434c42f65b..0000000000000000000000000000000000000000
--- a/spaces/Ziqi/ReVersion/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
\ No newline at end of file
diff --git a/spaces/Zwicky18/vits-models/transforms.py b/spaces/Zwicky18/vits-models/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/Zwicky18/vits-models/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
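The `searchsorted` helper above finds each input's bin by counting how many bin boundaries it meets or exceeds, with a small eps added to the last boundary so inputs exactly at the right edge stay inside the final bin. A scalar sketch of the same counting trick (a hypothetical stand-in for the tensorized version):

```python
def search_bin(bin_locations, x, eps=1e-6):
    # Count boundaries <= x, minus one, to get the containing bin index.
    # The eps on the last boundary keeps x == right edge inside the last bin.
    locs = list(bin_locations)
    locs[-1] += eps
    return sum(1 for b in locs if x >= b) - 1

# Boundaries [0, 0.25, 0.5, 1.0] define three bins; 0.3 lies in bin 1.
idx = search_bin([0.0, 0.25, 0.5, 1.0], 0.3)
```

Without the eps, an input exactly equal to the last boundary would be counted past the final bin and index out of range in the gather that follows.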
diff --git a/spaces/aadnk/faster-whisper-webui/src/hooks/subTaskProgressListener.py b/spaces/aadnk/faster-whisper-webui/src/hooks/subTaskProgressListener.py
deleted file mode 100644
index 9a8eaa876fcd18032875d67535e0558494842c60..0000000000000000000000000000000000000000
--- a/spaces/aadnk/faster-whisper-webui/src/hooks/subTaskProgressListener.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from src.hooks.progressListener import ProgressListener
-
-from typing import Union
-
-class SubTaskProgressListener(ProgressListener):
- """
- A sub task listener that reports the progress of a sub task to a base task listener
- Parameters
- ----------
- base_task_listener : ProgressListener
- The base progress listener to accumulate overall progress in.
- base_task_total : float
- The maximum total progress that will be reported to the base progress listener.
- sub_task_start : float
- The starting progress of a sub task, with respect to the base progress listener.
- sub_task_total : float
- The total amount of progress a sub task will report to the base progress listener.
- """
- def __init__(
- self,
- base_task_listener: ProgressListener,
- base_task_total: float,
- sub_task_start: float,
- sub_task_total: float,
- ):
- self.base_task_listener = base_task_listener
- self.base_task_total = base_task_total
- self.sub_task_start = sub_task_start
- self.sub_task_total = sub_task_total
-
- def on_progress(self, current: Union[int, float], total: Union[int, float]):
- sub_task_progress_frac = current / total
- sub_task_progress = self.sub_task_start + self.sub_task_total * sub_task_progress_frac
- self.base_task_listener.on_progress(sub_task_progress, self.base_task_total)
-
- def on_finished(self):
- self.base_task_listener.on_progress(self.sub_task_start + self.sub_task_total, self.base_task_total)
\ No newline at end of file
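The `on_progress` mapping in the listener above is plain linear scaling of the sub-task's completed fraction into the base task's progress range; a standalone sketch (hypothetical function name, not part of the module):

```python
def scale_progress(current, total, sub_task_start, sub_task_total):
    # Map a sub-task's fractional progress into the base task's progress range.
    return sub_task_start + sub_task_total * (current / total)

# A sub-task covering base-progress [30, 70) of a 100-unit task, half done:
p = scale_progress(5, 10, sub_task_start=30.0, sub_task_total=40.0)
# -> 50.0
```

`on_finished` is the `current == total` case of the same formula, which is why it reports `sub_task_start + sub_task_total`.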
diff --git a/spaces/aaronb/Anything2Image/README.md b/spaces/aaronb/Anything2Image/README.md
deleted file mode 100644
index f4c1e88e38f9653d29e6dd512a751a09111eff80..0000000000000000000000000000000000000000
--- a/spaces/aaronb/Anything2Image/README.md
+++ /dev/null
@@ -1,83 +0,0 @@
----
-title: Anything2Image
-emoji: 🏃
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-# Anything To Image
-
-Generate image from anything with [ImageBind](https://github.com/facebookresearch/ImageBind)'s unified latent space and [stable-diffusion-2-1-unclip](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip).
-
-- No training is needed.
-- Integration with 🤗 [Diffusers](https://github.com/huggingface/diffusers).
-- `imagebind` is copied directly from the [official repo](https://github.com/facebookresearch/ImageBind) with modifications.
-- Gradio Demo.
-
-## Audio to Image
-
-| `assets/wav/bird_audio.wav` | `assets/wav/dog_audio.wav` | `assets/wav/cattle.wav`
-| --- | --- | --- |
-|  |  | |
-
-```python
-import imagebind
-import torch
-from diffusers import StableUnCLIPImg2ImgPipeline
-
-# construct models
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
-)
-pipe = pipe.to(device)
-
-model = imagebind.imagebind_huge(pretrained=True)
-model.eval()
-model.to(device)
-
-# generate image
-with torch.no_grad():
- audio_paths=["assets/wav/bird_audio.wav"]
- embeddings = model.forward({
- imagebind.ModalityType.AUDIO: imagebind.load_and_transform_audio_data(audio_paths, device),
- })
- embeddings = embeddings[imagebind.ModalityType.AUDIO]
- images = pipe(image_embeds=embeddings.half()).images
- images[0].save("bird_audio.png")
-```
-
-## More
-
-Under construction
-
-
-## Citation
-
-Latent Diffusion
-
-```bibtex
-@InProceedings{Rombach_2022_CVPR,
- author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
- title = {High-Resolution Image Synthesis With Latent Diffusion Models},
- booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
- month = {June},
- year = {2022},
- pages = {10684-10695}
-}
-```
-
-ImageBind
-```bibtex
-@inproceedings{girdhar2023imagebind,
- title={ImageBind: One Embedding Space To Bind Them All},
- author={Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang
-and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan},
- booktitle={CVPR},
- year={2023}
-}
-```
\ No newline at end of file
diff --git a/spaces/aarontanzb/Langchain_query_app/README.md b/spaces/aarontanzb/Langchain_query_app/README.md
deleted file mode 100644
index 7c1459a6ba21ffee2031c3a3c19413c11139a61c..0000000000000000000000000000000000000000
--- a/spaces/aarontanzb/Langchain_query_app/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Langchain Query App
-emoji: 😻
-colorFrom: gray
-colorTo: pink
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/memory.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/memory.py
deleted file mode 100644
index 70cf9a838fb314e3bd3c07aadbc00921a81e83ed..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/memory.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-
-from .hook import HOOKS, Hook
-
-
-@HOOKS.register_module()
-class EmptyCacheHook(Hook):
-
- def __init__(self, before_epoch=False, after_epoch=True, after_iter=False):
- self._before_epoch = before_epoch
- self._after_epoch = after_epoch
- self._after_iter = after_iter
-
- def after_iter(self, runner):
- if self._after_iter:
- torch.cuda.empty_cache()
-
- def before_epoch(self, runner):
- if self._before_epoch:
- torch.cuda.empty_cache()
-
- def after_epoch(self, runner):
- if self._after_epoch:
- torch.cuda.empty_cache()
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/match_costs/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/match_costs/builder.py
deleted file mode 100644
index 92f0869ed4993167c504175d14315f7e9e8411f1..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/match_costs/builder.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from annotator.uniformer.mmcv.utils import Registry, build_from_cfg
-
-MATCH_COST = Registry('Match Cost')
-
-
-def build_match_cost(cfg, default_args=None):
- """Build a match cost instance from config."""
- return build_from_cfg(cfg, MATCH_COST, default_args)
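`build_match_cost` delegates to mmcv's config-driven registry: a config dict's `type` key selects a registered class, and the remaining keys become its constructor arguments. A minimal stand-in illustrating the pattern (not mmcv's actual `Registry` API; all names here are illustrative):

```python
class Registry:
    # Minimal stand-in: maps a class name to the class, and builds an
    # instance from a config dict whose 'type' key selects the class.
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        self._modules[cls.__name__] = cls
        return cls

    def build_from_cfg(self, cfg):
        cfg = dict(cfg)  # copy so the caller's dict is not mutated
        return self._modules[cfg.pop('type')](**cfg)

MATCH_COST = Registry('Match Cost')

@MATCH_COST.register_module
class IoUCost:
    def __init__(self, weight=1.0):
        self.weight = weight

cost = MATCH_COST.build_from_cfg({'type': 'IoUCost', 'weight': 2.0})
# -> an IoUCost instance with weight 2.0
```

The appeal of the pattern is that new cost classes become selectable from config files by name alone, with no changes to the builder.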
diff --git a/spaces/abidlabs/cinemascope/app.py b/spaces/abidlabs/cinemascope/app.py
deleted file mode 100644
index 5a671a58c75488031a715dab61df366131636353..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/cinemascope/app.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-import random
-import tempfile
-
-import gradio as gr
-import imageio
-import numpy as np
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-
-DESCRIPTION = '# [ModelScope Text to Video Synthesis](https://modelscope.cn/models/damo/text-to-video-synthesis/summary)'
-DESCRIPTION += '\n<p>For Colab usage, you can view this webpage. (the latest update on 2023.03.21)</p>'
-DESCRIPTION += '\n<p>This model can only be used for non-commercial purposes. To learn more about the model, take a look at the model card.</p>'
-if (SPACE_ID := os.getenv('SPACE_ID')) is not None:
-    DESCRIPTION += f'\n<p>For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.</p>'
-
-MAX_NUM_FRAMES = int(os.getenv('MAX_NUM_FRAMES', '200'))
-DEFAULT_NUM_FRAMES = min(MAX_NUM_FRAMES,
- int(os.getenv('DEFAULT_NUM_FRAMES', '16')))
-
-pipe = DiffusionPipeline.from_pretrained('damo-vilab/text-to-video-ms-1.7b',
- torch_dtype=torch.float16,
- variant='fp16')
-pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-pipe.enable_model_cpu_offload()
-pipe.enable_vae_slicing()
-
-
-def to_video(frames: list[np.ndarray], fps: int) -> str:
- out_file = tempfile.NamedTemporaryFile(suffix='.mp4', delete=False)
- writer = imageio.get_writer(out_file.name, format='FFMPEG', fps=fps)
- for frame in frames:
- writer.append_data(frame)
- writer.close()
- return out_file.name
-
-
-def generate(prompt: str) -> str:
- seed = int(0)
- num_frames = int(16)
- num_inference_steps = int(25)
- if seed == -1:
- seed = random.randint(0, 1000000)
- generator = torch.Generator().manual_seed(seed)
- frames = pipe(prompt,
- num_inference_steps=num_inference_steps,
- num_frames=num_frames,
- generator=generator).frames
- return to_video(frames, 8)
-
-
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
- with gr.Group():
- with gr.Box():
- with gr.Row(elem_id='prompt-container').style(equal_height=True):
- prompt = gr.Text(
- label='Prompt',
- show_label=False,
- max_lines=1,
- placeholder='Enter your prompt',
- elem_id='prompt-text-input').style(container=False)
- run_button = gr.Button('Generate video').style(
- full_width=False)
- result = gr.Video(label='Result', show_label=False, elem_id='gallery')
- with gr.Accordion('Advanced options', open=False):
- seed = gr.Slider(
- label='Seed',
- minimum=-1,
- maximum=1000000,
- step=1,
- value=-1,
- info='If set to -1, a different seed will be used each time.')
- num_frames = gr.Slider(
- label='Number of frames',
- minimum=16,
- maximum=MAX_NUM_FRAMES,
- step=1,
- value=16,
- info=
- 'Note that the content of the video also changes when you change the number of frames.'
- )
- num_inference_steps = gr.Slider(label='Number of inference steps',
- minimum=10,
- maximum=50,
- step=1,
- value=25)
-
- inputs = [
- prompt,
- ]
- prompt.submit(fn=generate, inputs=inputs, outputs=result, api_name="predict")
- run_button.click(fn=generate, inputs=inputs, outputs=result)
-
-
- with gr.Accordion(label='Biases and content acknowledgment', open=False):
-    gr.HTML("""
-        <h2>Biases and content acknowledgment</h2>
-        <p>
-        Despite how impressive being able to turn text into video is, be aware that this model may output content that reinforces or exacerbates societal biases. The training data includes LAION5B, ImageNet, WebVid and other public datasets. The model was not trained to realistically represent people or events, so using it to generate such content is beyond the model's capabilities.
-        </p>
-        <p>
-        It is not intended to generate content that is demeaning or harmful to people or their environment, culture, religion, etc. Similarly, it is not allowed to generate pornographic, violent and bloody content. The model is meant for research purposes.
-        </p>
-        <p>
-        To learn more about the model, head to its model card.
-        </p>
-    """)
diff --git a/spaces/bioriAsaeru/text-to-voice/Lame V3.98.3 For Audacity On Windows.md b/spaces/bioriAsaeru/text-to-voice/Lame V3.98.3 For Audacity On Windows.md
deleted file mode 100644
index 51eb28fa9be1f7df808004ef92ed0c0a48785d9c..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Lame V3.98.3 For Audacity On Windows.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-Select lame3.exe and enter `-q23 input_wave.wav output_wave.wav`, then press Enter to continue. Repeat for each input file you wish to encode and each desired output, and save it to the desktop. See the examples in the LAME manual for additional information.
-
-The `-lh` switch uses the new wave format from ITU-T recommendations G.726 and G.721; the `-v` switch sets the sample rate to 16 bit (native) and the channels to 2 (mono); the `-y` switch uses the `-_01 -3` format flags, i.e. a bitrate of 1 kbit and a frame size of 3200; the `-z` switch sets the quality ("normal") to the default settings.
-
-Note that you can also use the `-vv` parameter. If the original source file does not start with a multibyte character (e.g. WAV files encoded in 8-bit mode), you need to specify the `-b` switch too. The audio is then encoded in 16-bit linear PCM.
-
-The `-i` switch specifies that the input file is in WAV/AIFF/AU/SVX file format. If there is no `-i` switch, the input file is assumed to be in PCM format. The AIFF and SVX structures are similar, the binary format being the only difference.
-
-A common task is to make a copy of an existing audio file as a test file. On my machine the output duration is correct but the sample rate is wrong. The problem is that a copy of an existing WAV file has its sample rate set to 44.1 kHz by default. To solve the problem, you must specify the new sample rate for the output audio stream with the `-r` switch.
-
-
\ No newline at end of file
diff --git a/spaces/bla/tranny/App/Embedding/EmbeddingRoutes.py b/spaces/bla/tranny/App/Embedding/EmbeddingRoutes.py
deleted file mode 100644
index 5d038d9f001d027060bc20b66435685cc23b3a88..0000000000000000000000000000000000000000
--- a/spaces/bla/tranny/App/Embedding/EmbeddingRoutes.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from fastapi import APIRouter
-from App.Transcription.Model import Transcriptions
-from .utils.Initialize import generateChunks, encode, search
-from .Schemas import SearchRequest
-
-embeddigs_router = APIRouter(tags=["embeddings"])
-
-
-# create
-@embeddigs_router.get("/create_embeddings")
-async def create_embeddings(task_id):
- item = await Transcriptions.objects.filter(task_id=task_id).first()
- temp = item.content
- chunks = generateChunks(temp, task_id)
- encode(chunks)
-
- return
-
-
-@embeddigs_router.get("/create_summary")
-async def create_summary(task_id):
- item = await Transcriptions.objects.filter(task_id=task_id).first()
- temp = item.content
- chunks = generateChunks(temp, task_id)
- encode(chunks)
-
- return
-
-
-# search
-# update?
-@embeddigs_router.post("/search_embeddings")
-async def search_embeddings(req: SearchRequest):
- return search(query=req.query, task_id=req.taskId)
diff --git a/spaces/bortle/moon-detector/app.py b/spaces/bortle/moon-detector/app.py
deleted file mode 100644
index 3f4cdcb7ab2be42b74b54e81eaae7d1c85c06e20..0000000000000000000000000000000000000000
--- a/spaces/bortle/moon-detector/app.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-pipeline = pipeline(task="image-classification", model="bortle/moon-detector-v5.a")
-
-def predict(image):
- predictions = pipeline(image)
- return {p["label"]: p["score"] for p in predictions}
-
-gr.Interface(
- predict,
- inputs=gr.Image(shape=(1080, None), type="pil", label="Upload image"),
- outputs=gr.Label(num_top_classes=5),
- title="Moon Detector",
- allow_flagging="manual",
-).launch()
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_packaging.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_packaging.py
deleted file mode 100644
index a5b1661e8f341fe66a6e02c59fe172bce445782b..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_packaging.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import unittest
-
-from detectron2.utils.collect_env import collect_env_info
-
-
-class TestProjects(unittest.TestCase):
- def test_import(self):
- from detectron2.projects import point_rend
-
- _ = point_rend.add_pointrend_config
-
- import detectron2.projects.deeplab as deeplab
-
- _ = deeplab.add_deeplab_config
-
- # import detectron2.projects.panoptic_deeplab as panoptic_deeplab
-
- # _ = panoptic_deeplab.add_panoptic_deeplab_config
-
-
-class TestCollectEnv(unittest.TestCase):
- def test(self):
- _ = collect_env_info()
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/augmentations.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/augmentations.py
deleted file mode 100644
index 3f764c06ae3b366496230bcba63c5e8621ce1c95..0000000000000000000000000000000000000000
--- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/augmentations.py
+++ /dev/null
@@ -1,284 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Image augmentation functions
-"""
-
-import math
-import random
-
-import cv2
-import numpy as np
-
-from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box
-from utils.metrics import bbox_ioa
-
-
-class Albumentations:
- # YOLOv5 Albumentations class (optional, only used if package is installed)
- def __init__(self):
- self.transform = None
- try:
- import albumentations as A
- check_version(A.__version__, '1.0.3', hard=True) # version requirement
-
- T = [
- A.Blur(p=0.01),
- A.MedianBlur(p=0.01),
- A.ToGray(p=0.01),
- A.CLAHE(p=0.01),
- A.RandomBrightnessContrast(p=0.0),
- A.RandomGamma(p=0.0),
- A.ImageCompression(quality_lower=75, p=0.0)] # transforms
- self.transform = A.Compose(T, bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))
-
- LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p))
- except ImportError: # package not installed, skip
- pass
- except Exception as e:
- LOGGER.info(colorstr('albumentations: ') + f'{e}')
-
- def __call__(self, im, labels, p=1.0):
- if self.transform and random.random() < p:
- new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed
- im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
- return im, labels
-
-
-def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
- # HSV color-space augmentation
- if hgain or sgain or vgain:
- r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains
- hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
- dtype = im.dtype # uint8
-
- x = np.arange(0, 256, dtype=r.dtype)
- lut_hue = ((x * r[0]) % 180).astype(dtype)
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
-
- im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
- cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed
-
-
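`augment_hsv` above builds one lookup table per channel; for hue, each of the 256 possible values is scaled by the random gain and wrapped into OpenCV's 0-179 hue range. A list-based sketch of that hue LUT (hypothetical helper name, standing in for the NumPy version):

```python
def hue_lut(gain):
    # Per-value hue lookup table: scale, then wrap into OpenCV's 0-179 range.
    return [int(x * gain) % 180 for x in range(256)]

lut = hue_lut(1.5)
# hue 100 -> 150; hue 130 -> 195 % 180 == 15 (wraps around the hue circle)
```

The wrap-around is the point: hue is circular, so over-scaled values fold back to valid hues rather than saturating, unlike the clipped saturation and value LUTs.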
-def hist_equalize(im, clahe=True, bgr=False):
- # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255
- yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
- if clahe:
- c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
- yuv[:, :, 0] = c.apply(yuv[:, :, 0])
- else:
- yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram
- return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB
-
-
-def replicate(im, labels):
- # Replicate labels
- h, w = im.shape[:2]
- boxes = labels[:, 1:].astype(int)
- x1, y1, x2, y2 = boxes.T
- s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
- for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices
- x1b, y1b, x2b, y2b = boxes[i]
- bh, bw = y2b - y1b, x2b - x1b
- yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y
- x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
- im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # im4[ymin:ymax, xmin:xmax]
- labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0)
-
- return im, labels
-
-
-def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
- # Resize and pad image while meeting stride-multiple constraints
- shape = im.shape[:2] # current shape [height, width]
- if isinstance(new_shape, int):
- new_shape = (new_shape, new_shape)
-
- # Scale ratio (new / old)
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
- if not scaleup: # only scale down, do not scale up (for better val mAP)
- r = min(r, 1.0)
-
- # Compute padding
- ratio = r, r # width, height ratios
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
- dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding
- if auto: # minimum rectangle
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
- elif scaleFill: # stretch
- dw, dh = 0.0, 0.0
- new_unpad = (new_shape[1], new_shape[0])
- ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios
-
- dw /= 2 # divide padding into 2 sides
- dh /= 2
-
- if shape[::-1] != new_unpad: # resize
- im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
- im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border
- return im, ratio, (dw, dh)
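The geometry in letterbox can be checked by hand. A minimal sketch (assumption: this mirrors only the scale and padding arithmetic above, omitting the cv2 resize and border calls), applied to a 720x1280 frame letterboxed to 640 with auto=True:

```python
def letterbox_geometry(shape, new_shape=(640, 640), stride=32, auto=True):
    # shape is (height, width); mirrors the scale/pad arithmetic of letterbox
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    new_unpad = (int(round(shape[1] * r)), int(round(shape[0] * r)))  # (w, h)
    dw = new_shape[1] - new_unpad[0]
    dh = new_shape[0] - new_unpad[1]
    if auto:  # pad only up to the next stride multiple, not the full target size
        dw, dh = dw % stride, dh % stride
    return r, new_unpad, (dw / 2, dh / 2)

r, new_unpad, (dw, dh) = letterbox_geometry((720, 1280))
# 720x1280 scales by 0.5 to 360x640; the 360-px height needs 24 px of padding
# to reach 384 (a multiple of 32), split as 12 px top and bottom; width needs none.
```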
-
-
-def random_perspective(im,
- targets=(),
- segments=(),
- degrees=10,
- translate=.1,
- scale=.1,
- shear=10,
- perspective=0.0,
- border=(0, 0)):
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10))
- # targets = [cls, xyxy]
-
- height = im.shape[0] + border[0] * 2 # shape(h,w,c)
- width = im.shape[1] + border[1] * 2
-
- # Center
- C = np.eye(3)
- C[0, 2] = -im.shape[1] / 2 # x translation (pixels)
- C[1, 2] = -im.shape[0] / 2 # y translation (pixels)
-
- # Perspective
- P = np.eye(3)
- P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y)
- P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x)
-
- # Rotation and Scale
- R = np.eye(3)
- a = random.uniform(-degrees, degrees)
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
- s = random.uniform(1 - scale, 1 + scale)
- # s = 2 ** random.uniform(-scale, scale)
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
-
- # Shear
- S = np.eye(3)
- S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg)
- S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg)
-
- # Translation
- T = np.eye(3)
- T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels)
- T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels)
-
- # Combined rotation matrix
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
- if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed
- if perspective:
- im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114))
- else: # affine
- im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114))
-
- # Visualize
- # import matplotlib.pyplot as plt
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
- # ax[0].imshow(im[:, :, ::-1]) # base
- # ax[1].imshow(im2[:, :, ::-1]) # warped
-
- # Transform label coordinates
- n = len(targets)
- if n:
- use_segments = any(x.any() for x in segments)
- new = np.zeros((n, 4))
- if use_segments: # warp segments
- segments = resample_segments(segments) # upsample
- for i, segment in enumerate(segments):
- xy = np.ones((len(segment), 3))
- xy[:, :2] = segment
- xy = xy @ M.T # transform
- xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine
-
- # clip
- new[i] = segment2box(xy, width, height)
-
- else: # warp boxes
- xy = np.ones((n * 4, 3))
- xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1
- xy = xy @ M.T # transform
- xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine
-
- # create new boxes
- x = xy[:, [0, 2, 4, 6]]
- y = xy[:, [1, 3, 5, 7]]
- new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
-
- # clip
- new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
- new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
-
- # filter candidates
- i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10)
- targets = targets[i]
- targets[:, 1:5] = new[i]
-
- return im, targets
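One way to sanity-check the matrix composition order in random_perspective: with all random magnitudes at zero, the re-centering translation T must exactly cancel the centering C, leaving the identity. A minimal numpy check using a hypothetical 640x480 image:

```python
import numpy as np

C = np.eye(3); C[0, 2], C[1, 2] = -640 / 2, -480 / 2     # center the image at the origin
P = np.eye(3)                                            # perspective = 0
R = np.eye(3)                                            # degrees = 0, scale s = 1
S = np.eye(3)                                            # shear = 0
T = np.eye(3); T[0, 2], T[1, 2] = 0.5 * 640, 0.5 * 480   # translate = 0 -> move back to center
M = T @ S @ R @ P @ C  # same right-to-left order as above
```

Because M equals the identity, the `(M != np.eye(3)).any()` guard correctly skips the warp when no augmentation was drawn.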
-
-
-def copy_paste(im, labels, segments, p=0.5):
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
- n = len(segments)
- if p and n:
- h, w, c = im.shape # height, width, channels
- im_new = np.zeros(im.shape, np.uint8)
- for j in random.sample(range(n), k=round(p * n)):
- l, s = labels[j], segments[j]
- box = w - l[3], l[2], w - l[1], l[4]
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
- labels = np.concatenate((labels, [[l[0], *box]]), 0)
- segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
- cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED)
-
- result = cv2.bitwise_and(src1=im, src2=im_new)
- result = cv2.flip(result, 1) # augment segments (flip left-right)
- i = result > 0 # pixels to replace
- # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch
- im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug
-
- return im, labels, segments
-
-
-def cutout(im, labels, p=0.5):
- # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
- if random.random() < p:
- h, w = im.shape[:2]
- scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction
- for s in scales:
- mask_h = random.randint(1, int(h * s)) # create random masks
- mask_w = random.randint(1, int(w * s))
-
- # box
- xmin = max(0, random.randint(0, w) - mask_w // 2)
- ymin = max(0, random.randint(0, h) - mask_h // 2)
- xmax = min(w, xmin + mask_w)
- ymax = min(h, ymin + mask_h)
-
- # apply random color mask
- im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)]
-
- # return unobscured labels
- if len(labels) and s > 0.03:
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
- labels = labels[ioa < 0.60] # remove >60% obscured labels
-
- return labels
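The box arithmetic in cutout keeps every mask inside the image regardless of the random draw: the min/max clamps bound the corners, and mask_w, mask_h >= 1 keeps the box non-degenerate. The same arithmetic isolated, with a hypothetical image size and one scale entry:

```python
import random

random.seed(3)  # hypothetical draw; the clamping holds for any seed
h, w = 480, 640
s = 0.25  # one entry of the `scales` list above
mask_h = random.randint(1, int(h * s))
mask_w = random.randint(1, int(w * s))
xmin = max(0, random.randint(0, w) - mask_w // 2)  # clamp left/top at 0
ymin = max(0, random.randint(0, h) - mask_h // 2)
xmax = min(w, xmin + mask_w)                       # clamp right/bottom at the image edge
ymax = min(h, ymin + mask_h)
```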
-
-
-def mixup(im, labels, im2, labels2):
- # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf
- r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
- im = (im * r + im2 * (1 - r)).astype(np.uint8)
- labels = np.concatenate((labels, labels2), 0)
- return im, labels
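With alpha = beta = 32, the Beta distribution concentrates the mixup ratio tightly around 0.5, so the blend is close to an even average of the two images while the label sets are simply pooled. A self-contained numpy sketch of the same blend on two constant images with hypothetical labels:

```python
import numpy as np

rng = np.random.default_rng(0)
im1 = np.full((2, 2, 3), 200, np.uint8)
im2 = np.full((2, 2, 3), 40, np.uint8)
labels1 = np.array([[0, 1, 1, 5, 5]], np.float32)  # hypothetical (cls, xyxy) rows
labels2 = np.array([[1, 2, 2, 6, 6]], np.float32)

r = rng.beta(32.0, 32.0)  # mixup ratio, concentrated near 0.5
mixed = (im1 * r + im2 * (1 - r)).astype(np.uint8)
labels = np.concatenate((labels1, labels2), 0)  # labels are pooled, not blended
```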
-
-
-def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n)
- # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
- ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
- return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates
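box_candidates drops labels that the geometric augmentations have degraded too far. A standalone copy of the same filter, checked on two toy boxes: one mildly shrunk (kept) and one collapsed to a sliver (dropped by the width threshold):

```python
import numpy as np

def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16):
    # box1/box2 are (4, n) arrays of x1, y1, x2, y2 before/after augmentation
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps))  # aspect ratio after augment
    return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr)

before = np.array([[0, 0], [0, 0], [10, 10], [10, 10]], float)  # two 10x10 boxes
after = np.array([[0, 0], [0, 0], [9, 1], [9, 1]], float)       # 9x9 box and a 1x1 sliver
keep = box_candidates(before, after)
```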
diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ddim.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ddim.py
deleted file mode 100644
index 57ee8d302c77cb09bd73ef803ef9e715098feafc..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/ddim.py
+++ /dev/null
@@ -1,377 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-
-from audioldm.latent_diffusion.util import (
- make_ddim_sampling_parameters,
- make_ddim_timesteps,
- noise_like,
- extract_into_tensor,
-)
-import gradio as gr
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(
- self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0.0, verbose=True
- ):
- self.ddim_timesteps = make_ddim_timesteps(
- ddim_discr_method=ddim_discretize,
- num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,
- verbose=verbose,
- )
- alphas_cumprod = self.model.alphas_cumprod
- assert (
- alphas_cumprod.shape[0] == self.ddpm_num_timesteps
- ), "alphas have to be defined for each timestep"
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer("betas", to_torch(self.model.betas))
- self.register_buffer("alphas_cumprod", to_torch(alphas_cumprod))
- self.register_buffer(
- "alphas_cumprod_prev", to_torch(self.model.alphas_cumprod_prev)
- )
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer(
- "sqrt_alphas_cumprod", to_torch(np.sqrt(alphas_cumprod.cpu()))
- )
- self.register_buffer(
- "sqrt_one_minus_alphas_cumprod",
- to_torch(np.sqrt(1.0 - alphas_cumprod.cpu())),
- )
- self.register_buffer(
- "log_one_minus_alphas_cumprod", to_torch(np.log(1.0 - alphas_cumprod.cpu()))
- )
- self.register_buffer(
- "sqrt_recip_alphas_cumprod", to_torch(np.sqrt(1.0 / alphas_cumprod.cpu()))
- )
- self.register_buffer(
- "sqrt_recipm1_alphas_cumprod",
- to_torch(np.sqrt(1.0 / alphas_cumprod.cpu() - 1)),
- )
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(
- alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,
- verbose=verbose,
- )
- self.register_buffer("ddim_sigmas", ddim_sigmas)
- self.register_buffer("ddim_alphas", ddim_alphas)
- self.register_buffer("ddim_alphas_prev", ddim_alphas_prev)
- self.register_buffer("ddim_sqrt_one_minus_alphas", np.sqrt(1.0 - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev)
- / (1 - self.alphas_cumprod)
- * (1 - self.alphas_cumprod / self.alphas_cumprod_prev)
- )
- self.register_buffer(
- "ddim_sigmas_for_original_num_steps", sigmas_for_original_sampling_steps
- )
-
- @torch.no_grad()
- def sample(
- self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.0,
- mask=None,
- x0=None,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- # this has to come in the same format as the conditioning, e.g. as encoded tokens, ...
- **kwargs,
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- cbs = conditioning[list(conditioning.keys())[0]].shape[0]
- if cbs != batch_size:
- print(
- f"Warning: Got {cbs} conditionings but batch-size is {batch_size}"
- )
- else:
- if conditioning.shape[0] != batch_size:
- print(
- f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}"
- )
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- samples, intermediates = self.ddim_sampling(
- conditioning,
- size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask,
- x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(
- self,
- cond,
- shape,
- x_T=None,
- ddim_use_original_steps=False,
- callback=None,
- timesteps=None,
- quantize_denoised=False,
- mask=None,
- x0=None,
- img_callback=None,
- log_every_t=100,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- ):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = (
- self.ddpm_num_timesteps
- if ddim_use_original_steps
- else self.ddim_timesteps
- )
- elif not ddim_use_original_steps:  # timesteps is not None in this branch
- subset_end = (
- int(
- min(timesteps / self.ddim_timesteps.shape[0], 1)
- * self.ddim_timesteps.shape[0]
- )
- - 1
- )
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {"x_inter": [img], "pred_x0": [img]}
- time_range = (
- reversed(range(0, timesteps))
- if ddim_use_original_steps
- else np.flip(timesteps)
- )
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- # print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- # iterator = gr.Progress().tqdm(time_range, desc="DDIM Sampler", total=total_steps)
- iterator = tqdm(time_range, desc="DDIM Sampler", total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(
- x0, ts
- ) # TODO deterministic forward pass?
- img = (
- img_orig * mask + (1.0 - mask) * img
- ) # In the first sampling step, img is pure gaussian noise
-
- outs = self.p_sample_ddim(
- img,
- cond,
- ts,
- index=index,
- use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised,
- temperature=temperature,
- noise_dropout=noise_dropout,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- img, pred_x0 = outs
- if callback:
- callback(i)
- if img_callback:
- img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates["x_inter"].append(img)
- intermediates["pred_x0"].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
-
- return (
- extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0
- + extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise
- )
-
- @torch.no_grad()
- def decode(
- self,
- x_latent,
- cond,
- t_start,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- use_original_steps=False,
- ):
-
- timesteps = (
- np.arange(self.ddpm_num_timesteps)
- if use_original_steps
- else self.ddim_timesteps
- )
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- # print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- # iterator = gr.Progress().tqdm(time_range, desc="Decoding image", total=total_steps)
- iterator = tqdm(time_range, desc="Decoding image", total=total_steps)
- x_dec = x_latent
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full(
- (x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long
- )
- x_dec, _ = self.p_sample_ddim(
- x_dec,
- cond,
- ts,
- index=index,
- use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return x_dec
-
- @torch.no_grad()
- def p_sample_ddim(
- self,
- x,
- c,
- t,
- index,
- repeat_noise=False,
- use_original_steps=False,
- quantize_denoised=False,
- temperature=1.0,
- noise_dropout=0.0,
- score_corrector=None,
- corrector_kwargs=None,
- unconditional_guidance_scale=1.0,
- unconditional_conditioning=None,
- ):
- b, *_, device = *x.shape, x.device
-
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.0:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- c_in = torch.cat([unconditional_conditioning, c])
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- # When unconditional_guidance_scale == 1: conditional prediction e_t only
- # When unconditional_guidance_scale == 0: unconditional prediction only
- # When unconditional_guidance_scale > 1: extrapolate further toward the conditional prediction
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(
- self.model, e_t, x, t, c, **corrector_kwargs
- )
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = (
- self.model.alphas_cumprod_prev
- if use_original_steps
- else self.ddim_alphas_prev
- )
- sqrt_one_minus_alphas = (
- self.model.sqrt_one_minus_alphas_cumprod
- if use_original_steps
- else self.ddim_sqrt_one_minus_alphas
- )
- sigmas = (
- self.model.ddim_sigmas_for_original_num_steps
- if use_original_steps
- else self.ddim_sigmas
- )
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full(
- (b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device
- )
-
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1.0 - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.0:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise # TODO
- return x_prev, pred_x0
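The last few lines of p_sample_ddim implement the standard DDIM update. A scalar sketch of the same step (assumption: eta = 0, so sigma_t = 0 and the step is deterministic), showing that a step targeting a_prev = 1 recovers x0 exactly when e_t matches the noise actually used:

```python
import math

def ddim_step(x, e_t, a_t, a_prev, sigma_t=0.0, noise=0.0):
    # eps-parameterization: recover pred_x0, then step toward a_prev
    pred_x0 = (x - math.sqrt(1.0 - a_t) * e_t) / math.sqrt(a_t)
    dir_xt = math.sqrt(1.0 - a_prev - sigma_t ** 2) * e_t  # direction pointing to x_t
    return math.sqrt(a_prev) * pred_x0 + dir_xt + sigma_t * noise

x0, e_t, a_t = 0.7, -0.2, 0.5
x_t = math.sqrt(a_t) * x0 + math.sqrt(1.0 - a_t) * e_t  # forward-noise x0 once
x_prev = ddim_step(x_t, e_t, a_t, a_prev=1.0)            # deterministic step to t = 0
```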
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/__init__.py
deleted file mode 100644
index 259f669b78bd05815cb8d3351fd6c5fc9a1b85a1..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from . import transforms # isort:skip
-
-from .build import (
- build_batch_data_loader,
- build_detection_test_loader,
- build_detection_train_loader,
- get_detection_dataset_dicts,
- load_proposals_into_dataset,
- print_instances_class_histogram,
-)
-from .catalog import DatasetCatalog, MetadataCatalog, Metadata
-from .common import DatasetFromList, MapDataset, ToIterableDataset
-from .dataset_mapper import DatasetMapper
-
-# ensure the builtin datasets are registered
-from . import datasets, samplers # isort:skip
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/structures/test_imagelist.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/structures/test_imagelist.py
deleted file mode 100644
index e446e44a37f5d8f9a68362e4b93a291d314d5d68..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/structures/test_imagelist.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import unittest
-from typing import List, Sequence, Tuple
-import torch
-
-from detectron2.structures import ImageList
-
-
-class TestImageList(unittest.TestCase):
- def test_imagelist_padding_tracing(self):
- # test that the trace does not contain hard-coded constant sizes
- def to_imagelist(tensors: Sequence[torch.Tensor]):
- image_list = ImageList.from_tensors(tensors, 4)
- return image_list.tensor, image_list.image_sizes
-
- def _tensor(*shape):
- return torch.ones(shape, dtype=torch.float32)
-
- # test CHW (inputs needs padding vs. no padding)
- for shape in [(3, 10, 10), (3, 12, 12)]:
- func = torch.jit.trace(to_imagelist, ([_tensor(*shape)],))
- tensor, image_sizes = func([_tensor(3, 15, 20)])
- self.assertEqual(tensor.shape, (1, 3, 16, 20), tensor.shape)
- self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0])
-
- # test HW
- func = torch.jit.trace(to_imagelist, ([_tensor(10, 10)],))
- tensor, image_sizes = func([_tensor(15, 20)])
- self.assertEqual(tensor.shape, (1, 16, 20), tensor.shape)
- self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0])
-
- # test 2x CHW
- func = torch.jit.trace(
- to_imagelist,
- ([_tensor(3, 16, 10), _tensor(3, 13, 11)],),
- )
- tensor, image_sizes = func([_tensor(3, 25, 20), _tensor(3, 10, 10)])
- self.assertEqual(tensor.shape, (2, 3, 28, 20), tensor.shape)
- self.assertEqual(image_sizes[0].tolist(), [25, 20], image_sizes[0])
- self.assertEqual(image_sizes[1].tolist(), [10, 10], image_sizes[1])
- # support calling with different spatial sizes, but not with different #images
-
- def test_imagelist_scriptability(self):
- image_nums = 2
- image_tensor = torch.randn((image_nums, 10, 20), dtype=torch.float32)
- image_shape = [(10, 20)] * image_nums
-
- def f(image_tensor, image_shape: List[Tuple[int, int]]):
- return ImageList(image_tensor, image_shape)
-
- ret = f(image_tensor, image_shape)
- ret_script = torch.jit.script(f)(image_tensor, image_shape)
-
- self.assertEqual(len(ret), len(ret_script))
- for i in range(image_nums):
- self.assertTrue(torch.equal(ret[i], ret_script[i]))
-
- def test_imagelist_from_tensors_scriptability(self):
- image_tensor_0 = torch.randn(10, 20, dtype=torch.float32)
- image_tensor_1 = torch.randn(12, 22, dtype=torch.float32)
- inputs = [image_tensor_0, image_tensor_1]
-
- def f(image_tensor: List[torch.Tensor]):
- return ImageList.from_tensors(image_tensor, 10)
-
- ret = f(inputs)
- ret_script = torch.jit.script(f)(inputs)
-
- self.assertEqual(len(ret), len(ret_script))
- self.assertTrue(torch.equal(ret.tensor, ret_script.tensor))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/ccolas/TastyPiano/src/music/pipeline/processed2handcodedrep.py b/spaces/ccolas/TastyPiano/src/music/pipeline/processed2handcodedrep.py
deleted file mode 100644
index 0f752d4d36988d3734127fdd8519cd24a039e4ad..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music/pipeline/processed2handcodedrep.py
+++ /dev/null
@@ -1,343 +0,0 @@
-import numpy as np
-from music21 import *
-from music21.features import native, jSymbolic, DataSet
-import pretty_midi as pm
-from src.music.utils import get_out_path
-from src.music.utilities.handcoded_rep_utilities.tht import tactus_hypothesis_tracker, tracker_analysis
-from src.music.utilities.handcoded_rep_utilities.loudness import get_loudness, compute_total_loudness, amplitude2db, velocity2amplitude, get_db_of_equivalent_loudness_at_440hz, pitch2freq
-import json
-import os
-environment.set('musicxmlPath', '/home/cedric/Desktop/test/')
-midi_path = "/home/cedric/Documents/pianocktail/data/music/processed/doug_mckenzie_processed/allthethings_reharmonized_processed.mid"
-
-FEATURES_DICT_SCORE = dict(
- # strongest pulse: measures how fast the melody is
- # stronger_pulse=jSymbolic.StrongestRhythmicPulseFeature,
- # weights of the two strongest pulse, measures rhythmic consistency: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#combinedstrengthoftwostrongestrhythmicpulsesfeature
- pulse_strength_two=jSymbolic.CombinedStrengthOfTwoStrongestRhythmicPulsesFeature,
- # weights of the strongest pulse, measures rhythmic consistency: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#combinedstrengthoftwostrongestrhythmicpulsesfeature
- pulse_strength = jSymbolic.StrengthOfStrongestRhythmicPulseFeature,
- # variability of attacks: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#variabilityoftimebetweenattacksfeature
-
-)
-FEATURES_DICT = dict(
- # bass register importance: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#importanceofbassregisterfeature
- # bass_register=jSymbolic.ImportanceOfBassRegisterFeature,
- # high register importance: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#importanceofbassregisterfeature
- # high_register=jSymbolic.ImportanceOfHighRegisterFeature,
- # medium register importance: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#importanceofbassregisterfeature
- # medium_register=jSymbolic.ImportanceOfMiddleRegisterFeature,
- # number of common pitches (at least 9% of all): https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#numberofcommonmelodicintervalsfeature
- # common_pitches=jSymbolic.NumberOfCommonPitchesFeature,
- # pitch class variety (used at least once): https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#pitchvarietyfeature
- # pitch_variety=jSymbolic.PitchVarietyFeature,
- # attack_variability = jSymbolic.VariabilityOfTimeBetweenAttacksFeature,
- # staccato fraction: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#staccatoincidencefeature
- # staccato_score = jSymbolic.StaccatoIncidenceFeature,
- # mode analysis: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesNative.html
- av_melodic_interval = jSymbolic.AverageMelodicIntervalFeature,
- # chromatic motion: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#chromaticmotionfeature
- chromatic_motion = jSymbolic.ChromaticMotionFeature,
- # direction of motion (fraction of rising intervals: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#directionofmotionfeature
- motion_direction = jSymbolic.DirectionOfMotionFeature,
- # duration of melodic arcs: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#durationofmelodicarcsfeature
- melodic_arcs_duration = jSymbolic.DurationOfMelodicArcsFeature,
- # melodic arcs size: https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#sizeofmelodicarcsfeature
- melodic_arcs_size = jSymbolic.SizeOfMelodicArcsFeature,
- # number of common melodic interval (at least 9% of all): https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#numberofcommonmelodicintervalsfeature
- # common_melodic_intervals = jSymbolic.NumberOfCommonMelodicIntervalsFeature,
- # https://web.mit.edu/music21/doc/moduleReference/moduleFeaturesJSymbolic.html#amountofarpeggiationfeature
- # arpeggiato=jSymbolic.AmountOfArpeggiationFeature,
-)
-
-
-def compute_beat_info(onsets):
- onsets_in_ms = np.array(onsets) * 1000
-
- tht = tactus_hypothesis_tracker.default_tht()
- trackers = tht(onsets_in_ms)
- top_hts = tracker_analysis.top_hypothesis(trackers, len(onsets_in_ms))
- beats = tracker_analysis.produce_beats_information(onsets_in_ms, top_hts, adapt_period=True,
- adapt_phase=tht.eval_f, max_delta_bpm=250, avoid_quickturns=None)
- tempo = 1 / (np.mean(np.diff(beats)) / 1000) * 60 # in bpm
- conf_values = tracker_analysis.tht_tracking_confs(trackers, len(onsets_in_ms))
- pulse_clarity = np.mean(np.array(conf_values), axis=0)[1]
- return tempo, pulse_clarity
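The tempo line in compute_beat_info converts the mean inter-beat interval in milliseconds to beats per minute. A quick check of just that conversion on hypothetical beat times 500 ms apart:

```python
import numpy as np

beats_ms = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])  # hypothetical beat times (ms)
tempo = 1 / (np.mean(np.diff(beats_ms)) / 1000) * 60  # same conversion as above -> bpm
```

Beats every 500 ms correspond to 120 bpm.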
-
-def dissonance_score(A):
- """
- Given a piano-roll indicator matrix representation of a musical work (128 pitches x beats),
- return the dissonance as a function of beats.
- Input:
- A - 128 x beats indicator matrix of MIDI pitch number
-
- """
- freq_rats = np.arange(1, 7) # Harmonic series ratios
- amps = np.exp(-.5 * freq_rats) # Partial amplitudes
- F0 = 8.1757989156 # base frequency for MIDI (note 0)
- diss = [] # List for dissonance values
- thresh = 1e-3
- for beat in A.T:
- idx = np.where(beat>thresh)[0]
- if len(idx):
- freqs, mags = [], [] # lists for frequencies, mags
- for i in idx:
- freqs.extend(F0*2**(i/12.0)*freq_rats)
- mags.extend(amps)
- freqs = np.array(freqs)
- mags = np.array(mags)
- sortIdx = freqs.argsort()
- d = compute_dissonance(freqs[sortIdx],mags[sortIdx])
- diss.extend([d])
- else:
- diss.extend([-1]) # Null value
- diss = np.array(diss)
- return diss[np.where(diss != -1)]
-
-def compute_dissonance(freqs, amps):
- """
- From https://notebook.community/soundspotter/consonance/week1_consonance
- Compute dissonance between partials with center frequencies in freqs and amplitudes in amps,
- using a model of critical bandwidth. Based on Sethares "Tuning, Timbre, Spectrum, Scale" (1998) after Plomp and Levelt (1965)
-
- inputs:
- freqs - list of partial frequencies
- amps - list of corresponding amplitudes [default, uniformly 1]
- """
- b1, b2, s1, s2, c1, c2, Dstar = (-3.51, -5.75, 0.0207, 19.96, 5, -5, 0.24)
- f = np.array(freqs)
- a = np.array(amps)
- idx = np.argsort(f)
- f = f[idx]
- a = a[idx]
- N = f.size
- D = 0
- for i in range(1, N):
- Fmin = f[ 0 : N - i ]
- S = Dstar / ( s1 * Fmin + s2)
- Fdif = f[ i : N ] - f[ 0 : N - i ]
- am = a[ i : N ] * a[ 0 : N - i ]
- Dnew = am * (c1 * np.exp (b1 * S * Fdif) + c2 * np.exp(b2 * S * Fdif))
- D += Dnew.sum()
- return D
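Sethares' roughness model above returns zero for perfectly coinciding partials (since c1 + c2 = 0 at zero frequency difference) and a positive value once partials fall within a critical band. A self-contained copy of the same computation (same constants), checked on a unison versus a semitone:

```python
import numpy as np

def compute_dissonance(freqs, amps):
    # Sethares (1998) roughness summed over sorted partial pairs, as in the function above
    b1, b2, s1, s2, c1, c2, Dstar = (-3.51, -5.75, 0.0207, 19.96, 5, -5, 0.24)
    f, a = np.array(freqs, float), np.array(amps, float)
    idx = np.argsort(f)
    f, a = f[idx], a[idx]
    N = f.size
    D = 0.0
    for i in range(1, N):
        Fmin = f[0:N - i]
        S = Dstar / (s1 * Fmin + s2)          # critical-bandwidth scaling
        Fdif = f[i:N] - f[0:N - i]            # frequency differences at lag i
        am = a[i:N] * a[0:N - i]
        D += (am * (c1 * np.exp(b1 * S * Fdif) + c2 * np.exp(b2 * S * Fdif))).sum()
    return D

unison = compute_dissonance([440.0, 440.0], [1.0, 1.0])    # Fdif = 0 -> c1 + c2 = 0
semitone = compute_dissonance([440.0, 466.16], [1.0, 1.0]) # inside the critical band
```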
-
-
-def store_new_midi(notes, out_path):
- midi = pm.PrettyMIDI()
- midi.instruments.append(pm.Instrument(program=0, is_drum=False))
- midi.instruments[0].notes = notes
- midi.write(out_path)
- return midi
-
-
-def processed2handcodedrep(midi_path, handcoded_rep_path=None, crop=30, verbose=False, save=True, return_rep=False, level=0):
- try:
- if not handcoded_rep_path:
- handcoded_rep_path, _, _ = get_out_path(in_path=midi_path, in_word='processed', out_word='handcoded_reps', out_extension='.mid')
- features = dict()
- if verbose: print(' ' * level + 'Computing handcoded representations')
- if os.path.exists(handcoded_rep_path):
- with open(handcoded_rep_path.replace('.mid', '.json'), 'r') as f:
- features = json.load(f)
- rep = np.array([features[k] for k in sorted(features.keys())])
- if rep.size == 49:
- os.remove(handcoded_rep_path)
- else:
- if verbose: print(' ' * (level + 2) + 'Already computed.')
- if return_rep:
- return handcoded_rep_path, np.array([features[k] for k in sorted(features.keys())]), ''
- else:
- return handcoded_rep_path, ''
- midi = pm.PrettyMIDI(midi_path) # load midi with pretty midi
- notes = midi.instruments[0].notes # get notes
- notes.sort(key=lambda x: (x.start, x.pitch)) # sort notes per start and pitch
- onsets, offsets, pitches, durations, velocities = [], [], [], [], []
- n_notes_cropped = len(notes)
- for i_n, n in enumerate(notes):
- onsets.append(n.start)
- offsets.append(n.end)
- durations.append(n.end-n.start)
- pitches.append(n.pitch)
- velocities.append(n.velocity)
- if crop is not None: # find how many notes to keep
- if n.start > crop and n_notes_cropped == len(notes):
- n_notes_cropped = i_n
- break
- notes = notes[:n_notes_cropped]
- midi = store_new_midi(notes, handcoded_rep_path)
- # pianoroll = midi.get_piano_roll() # extract piano roll representation
-
- # compute loudness
- amplitudes = velocity2amplitude(np.array(velocities))
- power_dbs = amplitude2db(amplitudes)
- frequencies = pitch2freq(np.array(pitches))
- loudness_values = get_loudness(power_dbs, frequencies)
- # compute average perceived loudness
- # for each power, compute loudness, then compute power such that the loudness at 440 Hz would be equivalent.
- # equivalent_powers_dbs = get_db_of_equivalent_loudness_at_440hz(frequencies, power_dbs)
- # then get the corresponding amplitudes
- # equivalent_amplitudes = 10 ** (equivalent_powers_dbs / 20)
-        # now use an amplitude model across the sample to compute the instantaneous amplitude, turn it back to dBs, then to perceived loudness at a single reference frequency of 440 Hz
- # av_total_loudness, std_total_loudness = compute_total_loudness(equivalent_amplitudes, onsets, offsets)
-
- end_time = np.max(offsets)
- start_time = notes[0].start
-
-
- score = converter.parse(handcoded_rep_path)
-        score.chordify()  # chordify() returns a new stream; the original score is kept for the feature extractors below
- notes_without_chords = stream.Stream(score.flatten().getElementsByClass('Note'))
-
- velocities_wo_chords, pitches_wo_chords, amplitudes_wo_chords, dbs_wo_chords = [], [], [], []
- frequencies_wo_chords, loudness_values_wo_chords, onsets_wo_chords, offsets_wo_chords, durations_wo_chords = [], [], [], [], []
- for i_n in range(len(notes_without_chords)):
- n = notes_without_chords[i_n]
- velocities_wo_chords.append(n.volume.velocity)
- pitches_wo_chords.append(n.pitch.midi)
- onsets_wo_chords.append(n.offset)
- offsets_wo_chords.append(onsets_wo_chords[-1] + n.seconds)
- durations_wo_chords.append(n.seconds)
-
- amplitudes_wo_chords = velocity2amplitude(np.array(velocities_wo_chords))
- power_dbs_wo_chords = amplitude2db(amplitudes_wo_chords)
- frequencies_wo_chords = pitch2freq(np.array(pitches_wo_chords))
- loudness_values_wo_chords = get_loudness(power_dbs_wo_chords, frequencies_wo_chords)
- # compute average perceived loudness
- # for each power, compute loudness, then compute power such that the loudness at 440 Hz would be equivalent.
- # equivalent_powers_dbs_wo_chords = get_db_of_equivalent_loudness_at_440hz(frequencies_wo_chords, power_dbs_wo_chords)
- # then get the corresponding amplitudes
- # equivalent_amplitudes_wo_chords = 10 ** (equivalent_powers_dbs_wo_chords / 20)
-        # now use an amplitude model across the sample to compute the instantaneous amplitude, turn it back to dBs, then to perceived loudness at a single reference frequency of 440 Hz
- # av_total_loudness_wo_chords, std_total_loudness_wo_chords = compute_total_loudness(equivalent_amplitudes_wo_chords, onsets_wo_chords, offsets_wo_chords)
-
- ds = DataSet(classLabel='test')
- f = list(FEATURES_DICT.values())
- ds.addFeatureExtractors(f)
- ds.addData(notes_without_chords)
- ds.process()
- for k, f in zip(FEATURES_DICT.keys(), ds.getFeaturesAsList()[0][1:-1]):
- features[k] = f
-
- ds = DataSet(classLabel='test')
- f = list(FEATURES_DICT_SCORE.values())
- ds.addFeatureExtractors(f)
- ds.addData(score)
- ds.process()
- for k, f in zip(FEATURES_DICT_SCORE.keys(), ds.getFeaturesAsList()[0][1:-1]):
- features[k] = f
-
- # # # # #
- # Register features
- # # # # #
-
- # features['av_pitch'] = np.mean(pitches)
- # features['std_pitch'] = np.std(pitches)
- # features['range_pitch'] = np.max(pitches) - np.min(pitches) # aka ambitus
-
- # # # # #
- # Rhythmic features
- # # # # #
-
- # tempo, pulse_clarity = compute_beat_info(onsets[:n_notes_cropped])
- # features['pulse_clarity'] = pulse_clarity
- # features['tempo'] = tempo
- features['tempo_pm'] = midi.estimate_tempo()
-
- # # # # #
- # Temporal features
- # # # # #
-
- features['av_duration'] = np.mean(durations)
- # features['std_duration'] = np.std(durations)
- features['note_density'] = len(notes) / (end_time - start_time)
- # intervals_wo_chords = np.diff(onsets_wo_chords)
- # articulations = [max((i-d)/i, 0) for d, i in zip(durations_wo_chords, intervals_wo_chords) if i != 0]
- # features['articulation'] = np.mean(articulations)
- # features['av_duration_wo_chords'] = np.mean(durations_wo_chords)
- # features['std_duration_wo_chords'] = np.std(durations_wo_chords)
-
- # # # # #
- # Dynamics features
- # # # # #
- features['av_velocity'] = np.mean(velocities)
- features['std_velocity'] = np.std(velocities)
- features['av_loudness'] = np.mean(loudness_values)
- # features['std_loudness'] = np.std(loudness_values)
- features['range_loudness'] = np.max(loudness_values) - np.min(loudness_values)
- # features['av_integrated_loudness'] = av_total_loudness
- # features['std_integrated_loudness'] = std_total_loudness
- # features['av_velocity_wo_chords'] = np.mean(velocities_wo_chords)
- # features['std_velocity_wo_chords'] = np.std(velocities_wo_chords)
- # features['av_loudness_wo_chords'] = np.mean(loudness_values_wo_chords)
- # features['std_loudness_wo_chords'] = np.std(loudness_values_wo_chords)
- features['range_loudness_wo_chords'] = np.max(loudness_values_wo_chords) - np.min(loudness_values_wo_chords)
- # features['av_integrated_loudness'] = av_total_loudness_wo_chords
- # features['std_integrated_loudness'] = std_total_loudness_wo_chords
- # indices_with_intervals = np.where(intervals_wo_chords > 0.01)
- # features['av_loudness_change'] = np.mean(np.abs(np.diff(np.array(loudness_values_wo_chords)[indices_with_intervals]))) # accentuation
- # features['av_velocity_change'] = np.mean(np.abs(np.diff(np.array(velocities_wo_chords)[indices_with_intervals]))) # accentuation
-
- # # # # #
- # Harmony features
- # # # # #
-
- # get major_minor score: https://web.mit.edu/music21/doc/moduleReference/moduleAnalysisDiscrete.html
- music_analysis = score.analyze('key')
- major_score = None
- minor_score = None
- for a in [music_analysis] + music_analysis.alternateInterpretations:
- if 'major' in a.__str__() and a.correlationCoefficient > 0:
- major_score = a.correlationCoefficient
- elif 'minor' in a.__str__() and a.correlationCoefficient > 0:
- minor_score = a.correlationCoefficient
- if major_score is not None and minor_score is not None:
- break
- features['major_minor'] = major_score / (major_score + minor_score)
- features['tonal_certainty'] = music_analysis.tonalCertainty()
- # features['av_sensory_dissonance'] = np.mean(dissonance_score(pianoroll))
- #TODO only works for chords, do something with melodic intervals: like proportion that is not third, fifth or sevenths?
-
- # # # # #
- # Interval features
- # # # # #
- #https://web.mit.edu/music21/doc/moduleReference/moduleAnalysisPatel.html
- # features['melodic_interval_variability'] = analysis.patel.melodicIntervalVariability(notes_without_chords)
-
- # # # # #
-        # Surprise features
- # # # # #
- # https://web.mit.edu/music21/doc/moduleReference/moduleAnalysisMetrical.html
- # analysis.metrical.thomassenMelodicAccent(notes_without_chords)
- # melodic_accents = [n.melodicAccent for n in notes_without_chords]
- # features['melodic_accent'] = np.mean(melodic_accents)
-
- if save:
- for k, v in features.items():
- features[k] = float(features[k])
- with open(handcoded_rep_path.replace('.mid', '.json'), 'w') as f:
- json.dump(features, f)
- else:
- print(features)
- if os.path.exists(handcoded_rep_path):
- os.remove(handcoded_rep_path)
- if verbose: print(' ' * (level + 2) + 'Success.')
- if return_rep:
- return handcoded_rep_path, np.array([features[k] for k in sorted(features.keys())]), ''
- else:
- return handcoded_rep_path, ''
-    except Exception:  # broad catch so one bad file does not abort a batch, but without swallowing KeyboardInterrupt
- if verbose: print(' ' * (level + 2) + 'Failed.')
- if return_rep:
- return None, None, 'error'
- else:
- return None, 'error'
-
-
-if __name__ == '__main__':
-    import sys
-    # take the input MIDI path from the command line
-    midi_path = sys.argv[1]
-    processed2handcodedrep(midi_path, '/home/cedric/Desktop/test.mid', save=False)
\ No newline at end of file
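The feature script above leans on helpers (`velocity2amplitude`, `amplitude2db`, `pitch2freq`) that are not part of this diff. A minimal sketch of plausible implementations, assuming a squared velocity-to-amplitude law and standard MIDI tuning (A4 = note 69 = 440 Hz); the project's actual mappings may differ:

```python
import numpy as np

def velocity2amplitude(velocities):
    # Map MIDI velocity (0-127) onto a linear amplitude in [0, 1];
    # a squared law is a common perceptual approximation (an assumption here).
    return (np.asarray(velocities, dtype=float) / 127.0) ** 2

def amplitude2db(amplitudes):
    # Linear amplitude -> decibels; clamp to avoid log10(0).
    return 20.0 * np.log10(np.maximum(np.asarray(amplitudes, dtype=float), 1e-12))

def pitch2freq(pitches):
    # MIDI pitch number -> frequency in Hz (equal temperament, A4 = 69 = 440 Hz).
    return 440.0 * 2.0 ** ((np.asarray(pitches, dtype=float) - 69) / 12.0)

print(float(pitch2freq(81)))  # 880.0
```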
diff --git a/spaces/chatpdfdemo/chatpdfdemo/app.py b/spaces/chatpdfdemo/chatpdfdemo/app.py
deleted file mode 100644
index 22e5ec520fd3b3381de0fb39968e822086aa51b0..0000000000000000000000000000000000000000
--- a/spaces/chatpdfdemo/chatpdfdemo/app.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import streamlit as st
-from dotenv import load_dotenv
-import pickle
-from PyPDF2 import PdfReader
-from streamlit_extras.add_vertical_space import add_vertical_space
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.embeddings.openai import OpenAIEmbeddings
-from langchain.vectorstores import FAISS
-from langchain.llms import OpenAI
-from langchain.chains.question_answering import load_qa_chain
-from langchain.callbacks import get_openai_callback
-import os
-
-# Sidebar contents
-with st.sidebar:
- st.title('LLM Chat App')
- st.markdown('''
- ## About
- This app is an LLM-powered chatbot built using:
- - [Streamlit](https://streamlit.io/)
- - [LangChain](https://python.langchain.com/)
- - [OpenAI](https://platform.openai.com/docs/models) LLM model
-
- ''')
- add_vertical_space(5)
-
-
-load_dotenv()
-
-def main():
- st.header("Chat with PDF 💬")
-
-
- # upload a PDF file
- pdf = st.file_uploader("Upload your PDF", type='pdf')
-
- # st.write(pdf)
- if pdf is not None:
- pdf_reader = PdfReader(pdf)
-
- text = ""
- for page in pdf_reader.pages:
-            text += page.extract_text() or ""  # extract_text() can return None for image-only pages
-
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=1000,
- chunk_overlap=200,
- length_function=len
- )
- chunks = text_splitter.split_text(text=text)
-
- # # embeddings
- store_name = pdf.name[:-4]
- st.write(f'{store_name}')
- # st.write(chunks)
-
- if os.path.exists(f"{store_name}.pkl"):
- with open(f"{store_name}.pkl", "rb") as f:
- VectorStore = pickle.load(f)
-            # st.write('Embeddings Loaded from the Disk')
- else:
- embeddings = OpenAIEmbeddings()
- VectorStore = FAISS.from_texts(chunks, embedding=embeddings)
- with open(f"{store_name}.pkl", "wb") as f:
- pickle.dump(VectorStore, f)
-
- # embeddings = OpenAIEmbeddings()
- # VectorStore = FAISS.from_texts(chunks, embedding=embeddings)
-
- # Accept user questions/query
- query = st.text_input("Ask questions about your PDF file:")
- # st.write(query)
-
- if query:
- docs = VectorStore.similarity_search(query=query, k=3)
-
- llm = OpenAI(model_name='gpt-3.5-turbo')
- chain = load_qa_chain(llm=llm, chain_type="stuff")
- with get_openai_callback() as cb:
- response = chain.run(input_documents=docs, question=query)
- print(cb)
- st.write(response)
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
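`RecursiveCharacterTextSplitter` above splits on a hierarchy of separators; the effect of its `chunk_size`/`chunk_overlap` parameters can be sketched with a plain character window (a simplification, not LangChain's actual algorithm):

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    # Slide a window of chunk_size characters, advancing by
    # chunk_size - chunk_overlap so consecutive chunks share context.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = split_text("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij']
```

Each chunk begins with the last `chunk_overlap` characters of its predecessor, which is what keeps retrieved chunks coherent at their boundaries.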
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiofiles/base.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiofiles/base.py
deleted file mode 100644
index 6201d95b4fec039a6a9bfe59ad1de722c4688c9a..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiofiles/base.py
+++ /dev/null
@@ -1,111 +0,0 @@
-"""Various base classes."""
-from types import coroutine
-from collections.abc import Coroutine
-from asyncio import get_running_loop
-
-
-class AsyncBase:
- def __init__(self, file, loop, executor):
- self._file = file
- self._executor = executor
- self._ref_loop = loop
-
- @property
- def _loop(self):
- return self._ref_loop or get_running_loop()
-
- def __aiter__(self):
- """We are our own iterator."""
- return self
-
- def __repr__(self):
- return super().__repr__() + " wrapping " + repr(self._file)
-
- async def __anext__(self):
- """Simulate normal file iteration."""
- line = await self.readline()
- if line:
- return line
- else:
- raise StopAsyncIteration
-
-
-class AsyncIndirectBase(AsyncBase):
- def __init__(self, name, loop, executor, indirect):
- self._indirect = indirect
- self._name = name
- super().__init__(None, loop, executor)
-
- @property
- def _file(self):
- return self._indirect()
-
- @_file.setter
- def _file(self, v):
- pass # discard writes
-
-
-class _ContextManager(Coroutine):
- __slots__ = ("_coro", "_obj")
-
- def __init__(self, coro):
- self._coro = coro
- self._obj = None
-
- def send(self, value):
- return self._coro.send(value)
-
- def throw(self, typ, val=None, tb=None):
- if val is None:
- return self._coro.throw(typ)
- elif tb is None:
- return self._coro.throw(typ, val)
- else:
- return self._coro.throw(typ, val, tb)
-
- def close(self):
- return self._coro.close()
-
- @property
- def gi_frame(self):
- return self._coro.gi_frame
-
- @property
- def gi_running(self):
- return self._coro.gi_running
-
- @property
- def gi_code(self):
- return self._coro.gi_code
-
- def __next__(self):
- return self.send(None)
-
- @coroutine
- def __iter__(self):
- resp = yield from self._coro
- return resp
-
- def __await__(self):
- resp = yield from self._coro
- return resp
-
- async def __anext__(self):
- resp = await self._coro
- return resp
-
- async def __aenter__(self):
- self._obj = await self._coro
- return self._obj
-
- async def __aexit__(self, exc_type, exc, tb):
- self._obj.close()
- self._obj = None
-
-
-class AiofilesContextManager(_ContextManager):
- """An adjusted async context manager for aiofiles."""
-
- async def __aexit__(self, exc_type, exc_val, exc_tb):
- await self._obj.close()
- self._obj = None
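The `_ContextManager` above lets one coroutine back both `await open(...)` and `async with open(...) as f`. A stripped-down sketch of that dual-use pattern (illustrative names, not aiofiles' API):

```python
import asyncio

class AwaitableCM:
    # Wrap a coroutine so callers may either await it directly
    # or enter it with `async with`, mirroring _ContextManager above.
    def __init__(self, coro):
        self._coro = coro
        self._obj = None

    def __await__(self):
        return self._coro.__await__()

    async def __aenter__(self):
        self._obj = await self._coro
        return self._obj

    async def __aexit__(self, exc_type, exc, tb):
        self._obj = None  # a real wrapper would close the resource here

async def open_resource():
    return "resource"

async def main():
    direct = await AwaitableCM(open_resource())           # awaited directly
    async with AwaitableCM(open_resource()) as managed:   # used as a context manager
        pass
    return direct, managed

print(asyncio.run(main()))  # ('resource', 'resource')
```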
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/padding.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/padding.py
deleted file mode 100644
index fde3094b00ae4f118d81a2b15c18acb80702cdba..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/padding.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-from __future__ import annotations
-
-import abc
-import typing
-
-from cryptography import utils
-from cryptography.exceptions import AlreadyFinalized
-from cryptography.hazmat.bindings._rust import (
- check_ansix923_padding,
- check_pkcs7_padding,
-)
-
-
-class PaddingContext(metaclass=abc.ABCMeta):
- @abc.abstractmethod
- def update(self, data: bytes) -> bytes:
- """
- Pads the provided bytes and returns any available data as bytes.
- """
-
- @abc.abstractmethod
- def finalize(self) -> bytes:
- """
- Finalize the padding, returns bytes.
- """
-
-
-def _byte_padding_check(block_size: int) -> None:
- if not (0 <= block_size <= 2040):
- raise ValueError("block_size must be in range(0, 2041).")
-
- if block_size % 8 != 0:
- raise ValueError("block_size must be a multiple of 8.")
-
-
-def _byte_padding_update(
- buffer_: typing.Optional[bytes], data: bytes, block_size: int
-) -> typing.Tuple[bytes, bytes]:
- if buffer_ is None:
- raise AlreadyFinalized("Context was already finalized.")
-
- utils._check_byteslike("data", data)
-
- buffer_ += bytes(data)
-
- finished_blocks = len(buffer_) // (block_size // 8)
-
- result = buffer_[: finished_blocks * (block_size // 8)]
- buffer_ = buffer_[finished_blocks * (block_size // 8) :]
-
- return buffer_, result
-
-
-def _byte_padding_pad(
- buffer_: typing.Optional[bytes],
- block_size: int,
- paddingfn: typing.Callable[[int], bytes],
-) -> bytes:
- if buffer_ is None:
- raise AlreadyFinalized("Context was already finalized.")
-
- pad_size = block_size // 8 - len(buffer_)
- return buffer_ + paddingfn(pad_size)
-
-
-def _byte_unpadding_update(
- buffer_: typing.Optional[bytes], data: bytes, block_size: int
-) -> typing.Tuple[bytes, bytes]:
- if buffer_ is None:
- raise AlreadyFinalized("Context was already finalized.")
-
- utils._check_byteslike("data", data)
-
- buffer_ += bytes(data)
-
- finished_blocks = max(len(buffer_) // (block_size // 8) - 1, 0)
-
- result = buffer_[: finished_blocks * (block_size // 8)]
- buffer_ = buffer_[finished_blocks * (block_size // 8) :]
-
- return buffer_, result
-
-
-def _byte_unpadding_check(
- buffer_: typing.Optional[bytes],
- block_size: int,
- checkfn: typing.Callable[[bytes], int],
-) -> bytes:
- if buffer_ is None:
- raise AlreadyFinalized("Context was already finalized.")
-
- if len(buffer_) != block_size // 8:
- raise ValueError("Invalid padding bytes.")
-
- valid = checkfn(buffer_)
-
- if not valid:
- raise ValueError("Invalid padding bytes.")
-
- pad_size = buffer_[-1]
- return buffer_[:-pad_size]
-
-
-class PKCS7:
- def __init__(self, block_size: int):
- _byte_padding_check(block_size)
- self.block_size = block_size
-
- def padder(self) -> PaddingContext:
- return _PKCS7PaddingContext(self.block_size)
-
- def unpadder(self) -> PaddingContext:
- return _PKCS7UnpaddingContext(self.block_size)
-
-
-class _PKCS7PaddingContext(PaddingContext):
- _buffer: typing.Optional[bytes]
-
- def __init__(self, block_size: int):
- self.block_size = block_size
- # TODO: more copies than necessary, we should use zero-buffer (#193)
- self._buffer = b""
-
- def update(self, data: bytes) -> bytes:
- self._buffer, result = _byte_padding_update(
- self._buffer, data, self.block_size
- )
- return result
-
- def _padding(self, size: int) -> bytes:
- return bytes([size]) * size
-
- def finalize(self) -> bytes:
- result = _byte_padding_pad(
- self._buffer, self.block_size, self._padding
- )
- self._buffer = None
- return result
-
-
-class _PKCS7UnpaddingContext(PaddingContext):
- _buffer: typing.Optional[bytes]
-
- def __init__(self, block_size: int):
- self.block_size = block_size
- # TODO: more copies than necessary, we should use zero-buffer (#193)
- self._buffer = b""
-
- def update(self, data: bytes) -> bytes:
- self._buffer, result = _byte_unpadding_update(
- self._buffer, data, self.block_size
- )
- return result
-
- def finalize(self) -> bytes:
- result = _byte_unpadding_check(
- self._buffer, self.block_size, check_pkcs7_padding
- )
- self._buffer = None
- return result
-
-
-class ANSIX923:
- def __init__(self, block_size: int):
- _byte_padding_check(block_size)
- self.block_size = block_size
-
- def padder(self) -> PaddingContext:
- return _ANSIX923PaddingContext(self.block_size)
-
- def unpadder(self) -> PaddingContext:
- return _ANSIX923UnpaddingContext(self.block_size)
-
-
-class _ANSIX923PaddingContext(PaddingContext):
- _buffer: typing.Optional[bytes]
-
- def __init__(self, block_size: int):
- self.block_size = block_size
- # TODO: more copies than necessary, we should use zero-buffer (#193)
- self._buffer = b""
-
- def update(self, data: bytes) -> bytes:
- self._buffer, result = _byte_padding_update(
- self._buffer, data, self.block_size
- )
- return result
-
- def _padding(self, size: int) -> bytes:
- return bytes([0]) * (size - 1) + bytes([size])
-
- def finalize(self) -> bytes:
- result = _byte_padding_pad(
- self._buffer, self.block_size, self._padding
- )
- self._buffer = None
- return result
-
-
-class _ANSIX923UnpaddingContext(PaddingContext):
- _buffer: typing.Optional[bytes]
-
- def __init__(self, block_size: int):
- self.block_size = block_size
- # TODO: more copies than necessary, we should use zero-buffer (#193)
- self._buffer = b""
-
- def update(self, data: bytes) -> bytes:
- self._buffer, result = _byte_unpadding_update(
- self._buffer, data, self.block_size
- )
- return result
-
- def finalize(self) -> bytes:
- result = _byte_unpadding_check(
- self._buffer,
- self.block_size,
- check_ansix923_padding,
- )
- self._buffer = None
- return result
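The PKCS#7 round trip implemented by the classes above can be shown standalone (note: the `cryptography` classes take `block_size` in *bits*; this sketch takes bytes for brevity):

```python
def pkcs7_pad(data: bytes, block_bytes: int = 16) -> bytes:
    # Append n bytes of value n so the result length is a multiple of block_bytes;
    # a full block of padding is added when data is already aligned.
    n = block_bytes - (len(data) % block_bytes)
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    # The final byte gives the pad length; every pad byte must match it.
    n = padded[-1] if padded else 0
    if n < 1 or n > len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("Invalid padding bytes.")
    return padded[:-n]

print(pkcs7_pad(b"hello", 8).hex())  # 68656c6c6f030303
```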
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_m_e_t_a.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_m_e_t_a.py
deleted file mode 100644
index 3af9e543049f89f0da3ceb15bb58135854fef002..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_m_e_t_a.py
+++ /dev/null
@@ -1,104 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import bytesjoin, strjoin, readHex
-from fontTools.ttLib import TTLibError
-from . import DefaultTable
-
-# Apple's documentation of 'meta':
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6meta.html
-
-META_HEADER_FORMAT = """
- > # big endian
- version: L
- flags: L
- dataOffset: L
- numDataMaps: L
-"""
-
-
-DATA_MAP_FORMAT = """
- > # big endian
- tag: 4s
- dataOffset: L
- dataLength: L
-"""
-
-
-class table__m_e_t_a(DefaultTable.DefaultTable):
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.data = {}
-
- def decompile(self, data, ttFont):
- headerSize = sstruct.calcsize(META_HEADER_FORMAT)
- header = sstruct.unpack(META_HEADER_FORMAT, data[0:headerSize])
- if header["version"] != 1:
- raise TTLibError("unsupported 'meta' version %d" % header["version"])
- dataMapSize = sstruct.calcsize(DATA_MAP_FORMAT)
- for i in range(header["numDataMaps"]):
- dataMapOffset = headerSize + i * dataMapSize
- dataMap = sstruct.unpack(
- DATA_MAP_FORMAT, data[dataMapOffset : dataMapOffset + dataMapSize]
- )
- tag = dataMap["tag"]
- offset = dataMap["dataOffset"]
- self.data[tag] = data[offset : offset + dataMap["dataLength"]]
- if tag in ["dlng", "slng"]:
- self.data[tag] = self.data[tag].decode("utf-8")
-
- def compile(self, ttFont):
- keys = sorted(self.data.keys())
- headerSize = sstruct.calcsize(META_HEADER_FORMAT)
- dataOffset = headerSize + len(keys) * sstruct.calcsize(DATA_MAP_FORMAT)
- header = sstruct.pack(
- META_HEADER_FORMAT,
- {
- "version": 1,
- "flags": 0,
- "dataOffset": dataOffset,
- "numDataMaps": len(keys),
- },
- )
- dataMaps = []
- dataBlocks = []
- for tag in keys:
- if tag in ["dlng", "slng"]:
- data = self.data[tag].encode("utf-8")
- else:
- data = self.data[tag]
- dataMaps.append(
- sstruct.pack(
- DATA_MAP_FORMAT,
- {"tag": tag, "dataOffset": dataOffset, "dataLength": len(data)},
- )
- )
- dataBlocks.append(data)
- dataOffset += len(data)
- return bytesjoin([header] + dataMaps + dataBlocks)
-
- def toXML(self, writer, ttFont):
- for tag in sorted(self.data.keys()):
- if tag in ["dlng", "slng"]:
- writer.begintag("text", tag=tag)
- writer.newline()
- writer.write(self.data[tag])
- writer.newline()
- writer.endtag("text")
- writer.newline()
- else:
- writer.begintag("hexdata", tag=tag)
- writer.newline()
- data = self.data[tag]
- if min(data) >= 0x20 and max(data) <= 0x7E:
- writer.comment("ascii: " + data.decode("ascii"))
- writer.newline()
- writer.dumphex(data)
- writer.endtag("hexdata")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "hexdata":
- self.data[attrs["tag"]] = readHex(content)
- elif name == "text" and attrs["tag"] in ["dlng", "slng"]:
- self.data[attrs["tag"]] = strjoin(content).strip()
- else:
- raise TTLibError("can't handle '%s' element" % name)
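The `meta` header laid out by `META_HEADER_FORMAT` above is four big-endian uint32 fields, and each data map is a 4-byte tag plus two uint32s; the stdlib `struct` module expresses the same wire format (a sketch independent of fontTools):

```python
import struct

# version, flags, dataOffset, numDataMaps — each a big-endian uint32,
# matching META_HEADER_FORMAT above (16 bytes total).
META_HEADER = struct.Struct(">LLLL")
DATA_MAP = struct.Struct(">4sLL")  # tag, dataOffset, dataLength (12 bytes)

# one data map: data starts right after header (16) + one map (12) = offset 28
header = META_HEADER.pack(1, 0, 28, 1)
data_map = DATA_MAP.pack(b"dlng", 28, 5)
version, flags, data_offset, num_maps = META_HEADER.unpack(header)
print(META_HEADER.size, DATA_MAP.size, version)  # 16 12 1
```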
diff --git a/spaces/cihyFjudo/fairness-paper-search/Autodesk Inventor 2009 Keygenl Download and Activate Your Software for Free.md b/spaces/cihyFjudo/fairness-paper-search/Autodesk Inventor 2009 Keygenl Download and Activate Your Software for Free.md
deleted file mode 100644
index 6e1d8c94e80a05d282bf3409efe024e5f9b161fa..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Autodesk Inventor 2009 Keygenl Download and Activate Your Software for Free.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/BlackaliciousNia LINK Full Album Zip.md b/spaces/cihyFjudo/fairness-paper-search/BlackaliciousNia LINK Full Album Zip.md
deleted file mode 100644
index 2448d5548ff98e4d5646f9d919ed253b1cfa1ab1..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/BlackaliciousNia LINK Full Album Zip.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Parbona Ami Charte Toke Full Movie Download HD 720p A Must-Watch for Fans of Bonny and Koushani.md b/spaces/cihyFjudo/fairness-paper-search/Parbona Ami Charte Toke Full Movie Download HD 720p A Must-Watch for Fans of Bonny and Koushani.md
deleted file mode 100644
index 839bfc8980187134c8b488c446320eacfd4dffd6..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Parbona Ami Charte Toke Full Movie Download HD 720p A Must-Watch for Fans of Bonny and Koushani.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
parbona ami charter toke full movie download hd 72015
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Platillosvolantes2003downloadtorrent Everything You Need to Know About the Cult Classic.md b/spaces/cihyFjudo/fairness-paper-search/Platillosvolantes2003downloadtorrent Everything You Need to Know About the Cult Classic.md
deleted file mode 100644
index a18d998b0120e6842f6e946e5e5029291fc140df..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Platillosvolantes2003downloadtorrent Everything You Need to Know About the Cult Classic.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Sonny With A Chance So Sketchy Game Tips and Tricks to Collect All the Props and Avoid the Obstacles.md b/spaces/cihyFjudo/fairness-paper-search/Sonny With A Chance So Sketchy Game Tips and Tricks to Collect All the Props and Avoid the Obstacles.md
deleted file mode 100644
index 394e3748c1f720c3e8b4f84fb13a0fdf33874176..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Sonny With A Chance So Sketchy Game Tips and Tricks to Collect All the Props and Avoid the Obstacles.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
Congratulations! You have just miraculously landed in a magical world! Here you find an exhaustive collection of point-and-click, arcade, 2D games to satisfy any taste. Meet your friends from Disney Channel and let them take you on the adventure of your lifetime. The world offers you plenty of opportunities to play with your favorite characters. Now you might be helping Zack and Cody to throw the best pizza party of the summer while playing the Pizza Party Pickup game, and in the few minutes, you race through a hot and dangerous desert as Lightning McQueen, trying to get in front of Hicks and Weathers.
Most of you are already familiar with the majority of these old Disney Channel Games. Some of you might even get nostalgic while playing the games since it might remind you about the golden era of childhood when you've first interacted with all those famous Disney Characters. Whichever should be the case, there's a guarantee that no day will be the same after your steps in this magic world.
-
So far, there are 547 online Disney Games, for both kids and adults, that you can play on Disney--Games.com for free right now! All the games are organized into 29 categories, the most popular being Hannah Montana Games, with 4585830 total plays, and the most recently updated is Lilo and Stitch Games on Sunday, February 05, 2023. The most rated game is Winnie the Pooh's Home Run Derby, with 9655 votes received!
-
Starting with January 2021, most browsers stopped supporting Flash Technology, making it impossible to run games like Sonny With a Chance: So Sketchy. However, you can still play this game using our custom Browser. Give it a try!
-
Nico and Grady first met each other 13 years ago when they threw up on the same tea cup ride and are currently still the best of friends. These two have done a lot of mischief and trouble in a hilarious way. They are both on So Random and enemies of Murphy. The two also don't like Sonny and Chads' Relationship just like the rest of their cast mates. Nico and Grady have also been in a lot of great sketches together. Nico likes to give Grady dating advice, though neither of them have really been in a real relationship. Nico helped Grady talk to Mel and overcome the "Seamus McGregor" inside of him. They both enjoy playing video games, Grady's player 1 while Nico's player 2. They tend to get along with each other very well and make a great team. The two have fought before in "Sonny in the Middle", however, Nico and Grady will always have each other's backs and they are a great example in showing what true friendship really is. Nico and Grady were the only boys on So Random! but now that Chad has joined, they are not the only ones.
-
Sonny manages to convince Chad to play musical chairs after barging into the MacKenzie Falls set and bawking at him, with the terms that if the cast of So Random! wins, they get the parking space back, the MacKenzie Falls table, Chad has to buy them a new toilet paper holder and he will have to say something nice about So Random! on MacKenzie Falls; but if the cast of MacKenzie Falls wins, they get to say that MacKenzie Falls is better than So Random! on their show. The So Random! cast is the first to arrive at the commissary and start practicing sitting and walking, until they have to begin due to the arrival of the cast of MacKenzie Falls. Eventually when there is only one chair left between Sonny and Chad, the music stops and Sonny pretends to break her ankle, only to pull Chad down when he attempts to help her up and rush up to sit in the last chair, therefore winning the game. Chad, impressed with Sonny's acting, invites her onto his show, but Sonny declines and decides to stay at So Random!. At the end, the So Random! gang is watching MacKenzie Falls. On the show, Chad then says So Random! is his favorite show. The So Random! gang cheer and are pleased to hear this.
-
-
If you've got a knack for fashion, you'll want to give the old Disney Channel game Design Hannah Montana. You'll help the famous teenage superstar achieve the perfect look with plenty of fun and glittery clothes. You can also change Hannah's hairstyle by choosing different hair colors and experimenting with them. When you're done with Hannah's final look, you have the option to print out your work and impress your pals with your fashion skills.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudstack/CSV-ChatBot/modules/sidebar.py b/spaces/cloudstack/CSV-ChatBot/modules/sidebar.py
deleted file mode 100644
index 85afcdace77244c0fc6bc983d7a0295f1e3033a2..0000000000000000000000000000000000000000
--- a/spaces/cloudstack/CSV-ChatBot/modules/sidebar.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import streamlit as st
-
-
-class Sidebar:
- MODEL_OPTIONS = ["gpt-3.5-turbo", "gpt-4"]
- TEMPERATURE_MIN_VALUE = 0.0
- TEMPERATURE_MAX_VALUE = 1.0
- TEMPERATURE_DEFAULT_VALUE = 0.0
- TEMPERATURE_STEP = 0.01
-
- @staticmethod
- def about():
- about = st.sidebar.expander("🤖Info")
- sections = [
-            "#### CSV-ChatBot is an AI chatbot with interactive memory features designed to help users discuss CSV data in a more intuitive way. 📄",
-            "#### It uses a large language model to provide users with seamless, contextual, natural language interactions to better understand CSV data. 🌐",
-            "#### [Langchain](https://github.com/hwchase17/langchain), [OpenAI](https://platform.openai.com/docs/models/gpt-3-5), [Streamlit](https://github.com/streamlit/streamlit) ⚡",
-            "#### Source code: [RustX/ChatBot-CSV](https://github.com/RustX2802/CSV-ChatBot)",
- ]
- for section in sections:
- about.write(section)
-
- @staticmethod
- def reset_chat_button():
- if st.button("Reset chat"):
- st.session_state["reset_chat"] = True
- st.session_state.setdefault("reset_chat", False)
-
- def model_selector(self):
- model = st.selectbox(label="Model ", options=self.MODEL_OPTIONS)
- st.session_state["model"] = model
-
- def temperature_slider(self):
- temperature = st.slider(
- label="Temperature ",
- min_value=self.TEMPERATURE_MIN_VALUE,
- max_value=self.TEMPERATURE_MAX_VALUE,
- value=self.TEMPERATURE_DEFAULT_VALUE,
- step=self.TEMPERATURE_STEP,
- )
- st.session_state["temperature"] = temperature
-
- def csv_agent_button(self):
- st.session_state.setdefault("show_csv_agent", False)
- if st.sidebar.button("CSV Agent"):
- st.session_state["show_csv_agent"] = not st.session_state["show_csv_agent"]
-
- def show_options(self):
- with st.sidebar.expander("🛠️ Tools ", expanded=False):
- self.reset_chat_button()
- self.csv_agent_button()
- self.model_selector()
- self.temperature_slider()
- st.session_state.setdefault("model", self.MODEL_OPTIONS[0])
- st.session_state.setdefault("temperature", self.TEMPERATURE_DEFAULT_VALUE)
\ No newline at end of file
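The sidebar above relies on `st.session_state.setdefault` to seed defaults without clobbering values the widgets have already written. The same behaviour can be sketched with a plain dict standing in for `st.session_state`:

```python
# Plain-dict sketch of the setdefault pattern used in show_options():
# setdefault only writes a key when it is absent, so a value chosen via
# a widget is never overwritten by the trailing defaults.
MODEL_OPTIONS = ["gpt-3.5-turbo", "gpt-4"]
TEMPERATURE_DEFAULT_VALUE = 0.0

session_state = {}
session_state["temperature"] = 0.7            # value set by the slider widget
session_state.setdefault("model", MODEL_OPTIONS[0])
session_state.setdefault("temperature", TEMPERATURE_DEFAULT_VALUE)

print(session_state["model"], session_state["temperature"])  # gpt-3.5-turbo 0.7
```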
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/decorator.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/decorator.py
deleted file mode 100644
index 2479b6f7ba723b933978d10a6f80e28f60c3c1c6..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/decorator.py
+++ /dev/null
@@ -1,451 +0,0 @@
-# ######################### LICENSE ############################ #
-
-# Copyright (c) 2005-2021, Michele Simionato
-# All rights reserved.
-
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are
-# met:
-
-# Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# Redistributions in bytecode form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
-# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
-# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
-# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
-# DAMAGE.
-
-"""
-Decorator module, see
-https://github.com/micheles/decorator/blob/master/docs/documentation.md
-for the documentation.
-"""
-import re
-import sys
-import inspect
-import operator
-import itertools
-from contextlib import _GeneratorContextManager
-from inspect import getfullargspec, iscoroutinefunction, isgeneratorfunction
-
-__version__ = '5.1.1'
-
-DEF = re.compile(r'\s*def\s*([_\w][_\w\d]*)\s*\(')
-POS = inspect.Parameter.POSITIONAL_OR_KEYWORD
-EMPTY = inspect.Parameter.empty
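The `DEF` pattern above is what `FunctionMaker.make` later uses to pull the function name out of a generated source template. A standalone check of that regex (re-declared here for the demo):

```python
import re

# same pattern as decorator.py's DEF: matches "def <name>(" with optional whitespace
DEF = re.compile(r'\s*def\s*([_\w][_\w\d]*)\s*\(')

# search() locates the def anywhere in the template and captures the name
mo = DEF.search("def spam(x, y=1):\n    return x + y")
print(mo.group(1))
```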
-
-
-# this is not used anymore in the core, but kept for backward compatibility
-class FunctionMaker(object):
- """
- An object with the ability to create functions with a given signature.
- It has attributes name, doc, module, signature, defaults, dict and
- methods update and make.
- """
-
- # Atomic get-and-increment provided by the GIL
- _compile_count = itertools.count()
-
- # make pylint happy
- args = varargs = varkw = defaults = kwonlyargs = kwonlydefaults = ()
-
- def __init__(self, func=None, name=None, signature=None,
- defaults=None, doc=None, module=None, funcdict=None):
- self.shortsignature = signature
- if func:
- # func can be a class or a callable, but not an instance method
- self.name = func.__name__
- if self.name == '<lambda>': # small hack for lambda functions
- self.name = '_lambda_'
- self.doc = func.__doc__
- self.module = func.__module__
- if inspect.isroutine(func):
- argspec = getfullargspec(func)
- self.annotations = getattr(func, '__annotations__', {})
- for a in ('args', 'varargs', 'varkw', 'defaults', 'kwonlyargs',
- 'kwonlydefaults'):
- setattr(self, a, getattr(argspec, a))
- for i, arg in enumerate(self.args):
- setattr(self, 'arg%d' % i, arg)
- allargs = list(self.args)
- allshortargs = list(self.args)
- if self.varargs:
- allargs.append('*' + self.varargs)
- allshortargs.append('*' + self.varargs)
- elif self.kwonlyargs:
- allargs.append('*') # single star syntax
- for a in self.kwonlyargs:
- allargs.append('%s=None' % a)
- allshortargs.append('%s=%s' % (a, a))
- if self.varkw:
- allargs.append('**' + self.varkw)
- allshortargs.append('**' + self.varkw)
- self.signature = ', '.join(allargs)
- self.shortsignature = ', '.join(allshortargs)
- self.dict = func.__dict__.copy()
- # func=None happens when decorating a caller
- if name:
- self.name = name
- if signature is not None:
- self.signature = signature
- if defaults:
- self.defaults = defaults
- if doc:
- self.doc = doc
- if module:
- self.module = module
- if funcdict:
- self.dict = funcdict
- # check existence of required attributes
- assert hasattr(self, 'name')
- if not hasattr(self, 'signature'):
- raise TypeError('You are decorating a non function: %s' % func)
-
- def update(self, func, **kw):
- """
- Update the signature of func with the data in self
- """
- func.__name__ = self.name
- func.__doc__ = getattr(self, 'doc', None)
- func.__dict__ = getattr(self, 'dict', {})
- func.__defaults__ = self.defaults
- func.__kwdefaults__ = self.kwonlydefaults or None
- func.__annotations__ = getattr(self, 'annotations', None)
- try:
- frame = sys._getframe(3)
- except AttributeError: # for IronPython and similar implementations
- callermodule = '?'
- else:
- callermodule = frame.f_globals.get('__name__', '?')
- func.__module__ = getattr(self, 'module', callermodule)
- func.__dict__.update(kw)
-
- def make(self, src_templ, evaldict=None, addsource=False, **attrs):
- """
- Make a new function from a given template and update the signature
- """
- src = src_templ % vars(self) # expand name and signature
- evaldict = evaldict or {}
- mo = DEF.search(src)
- if mo is None:
- raise SyntaxError('not a valid function template\n%s' % src)
- name = mo.group(1) # extract the function name
- names = set([name] + [arg.strip(' *') for arg in
- self.shortsignature.split(',')])
- for n in names:
- if n in ('_func_', '_call_'):
- raise NameError('%s is overridden in\n%s' % (n, src))
-
- if not src.endswith('\n'): # add a newline for old Pythons
- src += '\n'
-
- # Ensure each generated function has a unique filename for profilers
- # (such as cProfile) that depend on the tuple of (<filename>,
- # <function name>, <line number>) being unique.
- filename = '<decorator-gen-%d>' % next(self._compile_count)
- try:
- code = compile(src, filename, 'single')
- exec(code, evaldict)
- except Exception:
- print('Error in generated code:', file=sys.stderr)
- print(src, file=sys.stderr)
- raise
- func = evaldict[name]
- if addsource:
- attrs['__source__'] = src
- self.update(func, **attrs)
- return func
-
- @classmethod
- def create(cls, obj, body, evaldict, defaults=None,
- doc=None, module=None, addsource=True, **attrs):
- """
- Create a function from the strings name, signature and body.
- evaldict is the evaluation dictionary. If addsource is true an
- attribute __source__ is added to the result. The attributes attrs
- are added, if any.
- """
- if isinstance(obj, str): # "name(signature)"
- name, rest = obj.strip().split('(', 1)
- signature = rest[:-1] # strip a right parens
- func = None
- else: # a function
- name = None
- signature = None
- func = obj
- self = cls(func, name, signature, defaults, doc, module)
- ibody = '\n'.join(' ' + line for line in body.splitlines())
- caller = evaldict.get('_call_') # when called from `decorate`
- if caller and iscoroutinefunction(caller):
- body = ('async def %(name)s(%(signature)s):\n' + ibody).replace(
- 'return', 'return await')
- else:
- body = 'def %(name)s(%(signature)s):\n' + ibody
- return self.make(body, evaldict, addsource, **attrs)
-
-
-def fix(args, kwargs, sig):
- """
- Fix args and kwargs to be consistent with the signature
- """
- ba = sig.bind(*args, **kwargs)
- ba.apply_defaults() # needed for test_dan_schult
- return ba.args, ba.kwargs
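`fix` relies on `inspect.Signature.bind` plus `apply_defaults` to normalize however the caller passed the arguments. A self-contained sketch of the same normalization (the `greet` function is illustrative):

```python
import inspect

def greet(name, punct="!"):
    return name + punct

def fix(args, kwargs, sig):
    # bind() validates the call against the signature;
    # apply_defaults() fills in any parameters left at their defaults
    ba = sig.bind(*args, **kwargs)
    ba.apply_defaults()
    return ba.args, ba.kwargs

sig = inspect.signature(greet)
# a keyword-style call is normalized to positional form with defaults applied
norm_args, norm_kwargs = fix((), {"name": "hi"}, sig)
print(norm_args, norm_kwargs)
```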
-
-
-def decorate(func, caller, extras=(), kwsyntax=False):
- """
- Decorates a function/generator/coroutine using a caller.
- If kwsyntax is True calling the decorated functions with keyword
- syntax will pass the named arguments inside the ``kw`` dictionary,
- even if such arguments are positional, similarly to what functools.wraps
- does. By default kwsyntax is False and the arguments are untouched.
- """
- sig = inspect.signature(func)
- if iscoroutinefunction(caller):
- async def fun(*args, **kw):
- if not kwsyntax:
- args, kw = fix(args, kw, sig)
- return await caller(func, *(extras + args), **kw)
- elif isgeneratorfunction(caller):
- def fun(*args, **kw):
- if not kwsyntax:
- args, kw = fix(args, kw, sig)
- for res in caller(func, *(extras + args), **kw):
- yield res
- else:
- def fun(*args, **kw):
- if not kwsyntax:
- args, kw = fix(args, kw, sig)
- return caller(func, *(extras + args), **kw)
- fun.__name__ = func.__name__
- fun.__doc__ = func.__doc__
- fun.__wrapped__ = func
- fun.__signature__ = sig
- fun.__qualname__ = func.__qualname__
- # builtin functions like defaultdict.__setitem__ lack many attributes
- try:
- fun.__defaults__ = func.__defaults__
- except AttributeError:
- pass
- try:
- fun.__kwdefaults__ = func.__kwdefaults__
- except AttributeError:
- pass
- try:
- fun.__annotations__ = func.__annotations__
- except AttributeError:
- pass
- try:
- fun.__module__ = func.__module__
- except AttributeError:
- pass
- try:
- fun.__dict__.update(func.__dict__)
- except AttributeError:
- pass
- return fun
-
-
-def decoratorx(caller):
- """
- A version of "decorator" implemented via "exec" and not via the
- Signature object. Use this if you want to preserve the `.__code__`
- object properties (https://github.com/micheles/decorator/issues/129).
- """
- def dec(func):
- return FunctionMaker.create(
- func,
- "return _call_(_func_, %(shortsignature)s)",
- dict(_call_=caller, _func_=func),
- __wrapped__=func, __qualname__=func.__qualname__)
- return dec
-
-
-def decorator(caller, _func=None, kwsyntax=False):
- """
- decorator(caller) converts a caller function into a decorator
- """
- if _func is not None: # return a decorated function
- # this is obsolete behavior; you should use decorate instead
- return decorate(_func, caller, (), kwsyntax)
- # else return a decorator function
- sig = inspect.signature(caller)
- dec_params = [p for p in sig.parameters.values() if p.kind is POS]
-
- def dec(func=None, *args, **kw):
- na = len(args) + 1
- extras = args + tuple(kw.get(p.name, p.default)
- for p in dec_params[na:]
- if p.default is not EMPTY)
- if func is None:
- return lambda func: decorate(func, caller, extras, kwsyntax)
- else:
- return decorate(func, caller, extras, kwsyntax)
- dec.__signature__ = sig.replace(parameters=dec_params)
- dec.__name__ = caller.__name__
- dec.__doc__ = caller.__doc__
- dec.__wrapped__ = caller
- dec.__qualname__ = caller.__qualname__
- dec.__kwdefaults__ = getattr(caller, '__kwdefaults__', None)
- dec.__dict__.update(caller.__dict__)
- return dec
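`decorator(caller)` turns a caller function into a signature-preserving decorator. A stdlib-only sketch of the same idea, without the `extras`/`kwsyntax` machinery (`make_decorator` and `counting` are illustrative names, not part of the module):

```python
import inspect

def make_decorator(caller):
    # minimal analogue of decorator(): wrap func and copy its metadata
    def dec(func):
        sig = inspect.signature(func)
        def fun(*args, **kw):
            return caller(func, *args, **kw)
        fun.__name__ = func.__name__
        fun.__doc__ = func.__doc__
        fun.__wrapped__ = func
        fun.__signature__ = sig  # introspection sees the original signature
        return fun
    return dec

calls = []

def counting(func, *args, **kw):
    # a "caller": receives the wrapped function plus its arguments
    calls.append(func.__name__)
    return func(*args, **kw)

@make_decorator(counting)
def add(x, y=1):
    return x + y

print(add(2), inspect.signature(add))
```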
-
-
-# ####################### contextmanager ####################### #
-
-
-class ContextManager(_GeneratorContextManager):
- def __init__(self, g, *a, **k):
- _GeneratorContextManager.__init__(self, g, a, k)
-
- def __call__(self, func):
- def caller(f, *a, **k):
- with self.__class__(self.func, *self.args, **self.kwds):
- return f(*a, **k)
- return decorate(func, caller)
-
-
-_contextmanager = decorator(ContextManager)
-
-
-def contextmanager(func):
- # Enable Pylint config: contextmanager-decorators=decorator.contextmanager
- return _contextmanager(func)
-
-
-# ############################ dispatch_on ############################ #
-
-def append(a, vancestors):
- """
- Append ``a`` to the list of the virtual ancestors, unless it is already
- included.
- """
- add = True
- for j, va in enumerate(vancestors):
- if issubclass(va, a):
- add = False
- break
- if issubclass(a, va):
- vancestors[j] = a
- add = False
- if add:
- vancestors.append(a)
-
-
-# inspired from simplegeneric by P.J. Eby and functools.singledispatch
-def dispatch_on(*dispatch_args):
- """
- Factory of decorators turning a function into a generic function
- dispatching on the given arguments.
- """
- assert dispatch_args, 'No dispatch args passed'
- dispatch_str = '(%s,)' % ', '.join(dispatch_args)
-
- def check(arguments, wrong=operator.ne, msg=''):
- """Make sure one passes the expected number of arguments"""
- if wrong(len(arguments), len(dispatch_args)):
- raise TypeError('Expected %d arguments, got %d%s' %
- (len(dispatch_args), len(arguments), msg))
-
- def gen_func_dec(func):
- """Decorator turning a function into a generic function"""
-
- # first check the dispatch arguments
- argset = set(getfullargspec(func).args)
- if not set(dispatch_args) <= argset:
- raise NameError('Unknown dispatch arguments %s' % dispatch_str)
-
- typemap = {}
-
- def vancestors(*types):
- """
- Get a list of sets of virtual ancestors for the given types
- """
- check(types)
- ras = [[] for _ in range(len(dispatch_args))]
- for types_ in typemap:
- for t, type_, ra in zip(types, types_, ras):
- if issubclass(t, type_) and type_ not in t.mro():
- append(type_, ra)
- return [set(ra) for ra in ras]
-
- def ancestors(*types):
- """
- Get a list of virtual MROs, one for each type
- """
- check(types)
- lists = []
- for t, vas in zip(types, vancestors(*types)):
- n_vas = len(vas)
- if n_vas > 1:
- raise RuntimeError(
- 'Ambiguous dispatch for %s: %s' % (t, vas))
- elif n_vas == 1:
- va, = vas
- mro = type('t', (t, va), {}).mro()[1:]
- else:
- mro = t.mro()
- lists.append(mro[:-1]) # discard t and object
- return lists
-
- def register(*types):
- """
- Decorator to register an implementation for the given types
- """
- check(types)
-
- def dec(f):
- check(getfullargspec(f).args, operator.lt, ' in ' + f.__name__)
- typemap[types] = f
- return f
- return dec
-
- def dispatch_info(*types):
- """
- A utility to introspect the dispatch algorithm
- """
- check(types)
- lst = []
- for anc in itertools.product(*ancestors(*types)):
- lst.append(tuple(a.__name__ for a in anc))
- return lst
-
- def _dispatch(dispatch_args, *args, **kw):
- types = tuple(type(arg) for arg in dispatch_args)
- try: # fast path
- f = typemap[types]
- except KeyError:
- pass
- else:
- return f(*args, **kw)
- combinations = itertools.product(*ancestors(*types))
- next(combinations) # the first one has been already tried
- for types_ in combinations:
- f = typemap.get(types_)
- if f is not None:
- return f(*args, **kw)
-
- # else call the default implementation
- return func(*args, **kw)
-
- return FunctionMaker.create(
- func, 'return _f_(%s, %%(shortsignature)s)' % dispatch_str,
- dict(_f_=_dispatch), register=register, default=func,
- typemap=typemap, vancestors=vancestors, ancestors=ancestors,
- dispatch_info=dispatch_info, __wrapped__=func)
-
- gen_func_dec.__name__ = 'dispatch_on' + dispatch_str
- return gen_func_dec
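As the comment above notes, `dispatch_on` generalizes `functools.singledispatch` to several arguments. For the single-argument case, the stdlib equivalent of registering implementations with a default fallback looks like:

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    # default implementation, used when no registered type matches
    return "object"

@describe.register
def _(obj: int):
    return "int"

@describe.register
def _(obj: list):
    return "list"

print(describe(3), describe([1]), describe("x"))
```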
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/otBase.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/otBase.py
deleted file mode 100644
index 9c80400e9420577f0d9d6f747e15b83e49f68e49..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/otBase.py
+++ /dev/null
@@ -1,1458 +0,0 @@
-from fontTools.config import OPTIONS
-from fontTools.misc.textTools import Tag, bytesjoin
-from .DefaultTable import DefaultTable
-from enum import IntEnum
-import sys
-import array
-import struct
-import logging
-from functools import lru_cache
-from typing import Iterator, NamedTuple, Optional, Tuple
-
-log = logging.getLogger(__name__)
-
-have_uharfbuzz = False
-try:
- import uharfbuzz as hb
-
- # repack method added in uharfbuzz >= 0.23; if uharfbuzz *can* be
- # imported but repack method is missing, behave as if uharfbuzz
- # is not available (fallback to the slower Python implementation)
- have_uharfbuzz = callable(getattr(hb, "repack", None))
-except ImportError:
- pass
-
-USE_HARFBUZZ_REPACKER = OPTIONS[f"{__name__}:USE_HARFBUZZ_REPACKER"]
-
-
-class OverflowErrorRecord(object):
- def __init__(self, overflowTuple):
- self.tableType = overflowTuple[0]
- self.LookupListIndex = overflowTuple[1]
- self.SubTableIndex = overflowTuple[2]
- self.itemName = overflowTuple[3]
- self.itemIndex = overflowTuple[4]
-
- def __repr__(self):
- return str(
- (
- self.tableType,
- "LookupIndex:",
- self.LookupListIndex,
- "SubTableIndex:",
- self.SubTableIndex,
- "ItemName:",
- self.itemName,
- "ItemIndex:",
- self.itemIndex,
- )
- )
-
-
-class OTLOffsetOverflowError(Exception):
- def __init__(self, overflowErrorRecord):
- self.value = overflowErrorRecord
-
- def __str__(self):
- return repr(self.value)
-
-
-class RepackerState(IntEnum):
- # Repacking control flow is implemented using a state machine. The state machine table:
- #
- # State | Packing Success | Packing Failed | Exception Raised |
- # ------------+-----------------+----------------+------------------+
- # PURE_FT | Return result | PURE_FT | Return failure |
- # HB_FT | Return result | HB_FT | FT_FALLBACK |
- # FT_FALLBACK | HB_FT | FT_FALLBACK | Return failure |
-
- # Pack only with fontTools, don't allow sharing between extensions.
- PURE_FT = 1
-
- # Attempt to pack with harfbuzz (allowing sharing between extensions)
- # use fontTools to attempt overflow resolution.
- HB_FT = 2
-
- # Fallback if HB/FT packing gets stuck. Pack only with fontTools, don't allow sharing between
- # extensions.
- FT_FALLBACK = 3
-
-
-class BaseTTXConverter(DefaultTable):
-
- """Generic base class for TTX table converters. It functions as an
- adapter between the TTX (ttLib actually) table model and the model
- we use for OpenType tables, which is necessarily subtly different.
- """
-
- def decompile(self, data, font):
- """Create an object from the binary data. Called automatically on access."""
- from . import otTables
-
- reader = OTTableReader(data, tableTag=self.tableTag)
- tableClass = getattr(otTables, self.tableTag)
- self.table = tableClass()
- self.table.decompile(reader, font)
-
- def compile(self, font):
- """Compiles the table into binary. Called automatically on save."""
-
- # General outline:
- # Create a top-level OTTableWriter for the GPOS/GSUB table.
- # Call the compile method for the table
- # for each 'converter' record in the table converter list
- # call converter's write method for each item in the value.
- # - For simple items, the write method adds a string to the
- # writer's self.items list.
- # - For Struct/Table/Subtable items, it first adds a new writer to
- # the writer's self.items, then calls the item's compile method.
- # This creates a tree of writers, rooted at the GSUB/GPOS writer, with
- # each writer representing a table, and the writer.items list containing
- # the child data strings and writers.
- # call the getAllData method
- # call _doneWriting, which removes duplicates
- # call _gatherTables. This traverses the tables, adding unique occurrences to a flat list of tables
- # Traverse the flat list of tables, calling getDataLength on each to update their position
- # Traverse the flat list of tables again, calling getData on each to get the data in the table, now that
- # positions and offsets are known.
-
- # If a lookup subtable overflows an offset, we have to start all over.
- overflowRecord = None
- # this is 3-state option: default (None) means automatically use hb.repack or
- # silently fall back if it fails; True, use it and raise error if not possible
- # or it errors out; False, don't use it, even if you can.
- use_hb_repack = font.cfg[USE_HARFBUZZ_REPACKER]
- if self.tableTag in ("GSUB", "GPOS"):
- if use_hb_repack is False:
- log.debug(
- "hb.repack disabled, compiling '%s' with pure-python serializer",
- self.tableTag,
- )
- elif not have_uharfbuzz:
- if use_hb_repack is True:
- raise ImportError("No module named 'uharfbuzz'")
- else:
- assert use_hb_repack is None
- log.debug(
- "uharfbuzz not found, compiling '%s' with pure-python serializer",
- self.tableTag,
- )
-
- if (
- use_hb_repack in (None, True)
- and have_uharfbuzz
- and self.tableTag in ("GSUB", "GPOS")
- ):
- state = RepackerState.HB_FT
- else:
- state = RepackerState.PURE_FT
-
- hb_first_error_logged = False
- lastOverflowRecord = None
- while True:
- try:
- writer = OTTableWriter(tableTag=self.tableTag)
- self.table.compile(writer, font)
- if state == RepackerState.HB_FT:
- return self.tryPackingHarfbuzz(writer, hb_first_error_logged)
- elif state == RepackerState.PURE_FT:
- return self.tryPackingFontTools(writer)
- elif state == RepackerState.FT_FALLBACK:
- # Run packing with FontTools only, but don't return the result as it will
- # not be optimally packed. Once a successful packing has been found, state is
- # changed back to harfbuzz packing to produce the final, optimal, packing.
- self.tryPackingFontTools(writer)
- log.debug(
- "Re-enabling sharing between extensions and switching back to "
- "harfbuzz+fontTools packing."
- )
- state = RepackerState.HB_FT
-
- except OTLOffsetOverflowError as e:
- hb_first_error_logged = True
- ok = self.tryResolveOverflow(font, e, lastOverflowRecord)
- lastOverflowRecord = e.value
-
- if ok:
- continue
-
- if state is RepackerState.HB_FT:
- log.debug(
- "Harfbuzz packing out of resolutions, disabling sharing between extensions and "
- "switching to fontTools only packing."
- )
- state = RepackerState.FT_FALLBACK
- else:
- raise
-
- def tryPackingHarfbuzz(self, writer, hb_first_error_logged):
- try:
- log.debug("serializing '%s' with hb.repack", self.tableTag)
- return writer.getAllDataUsingHarfbuzz(self.tableTag)
- except (ValueError, MemoryError, hb.RepackerError) as e:
- # Only log hb repacker errors the first time they occur in
- # the offset-overflow resolution loop, they are just noisy.
- # Maybe we can revisit this if/when uharfbuzz actually gives
- # us more info as to why hb.repack failed...
- if not hb_first_error_logged:
- error_msg = f"{type(e).__name__}"
- if str(e) != "":
- error_msg += f": {e}"
- log.warning(
- "hb.repack failed to serialize '%s', attempting fonttools resolutions "
- "; the error message was: %s",
- self.tableTag,
- error_msg,
- )
- hb_first_error_logged = True
- return writer.getAllData(remove_duplicate=False)
-
- def tryPackingFontTools(self, writer):
- return writer.getAllData()
-
- def tryResolveOverflow(self, font, e, lastOverflowRecord):
- ok = 0
- if lastOverflowRecord == e.value:
- # Oh well...
- return ok
-
- overflowRecord = e.value
- log.info("Attempting to fix OTLOffsetOverflowError %s", e)
-
- if overflowRecord.itemName is None:
- from .otTables import fixLookupOverFlows
-
- ok = fixLookupOverFlows(font, overflowRecord)
- else:
- from .otTables import fixSubTableOverFlows
-
- ok = fixSubTableOverFlows(font, overflowRecord)
-
- if ok:
- return ok
-
- # Try upgrading lookup to Extension and hope
- # that cross-lookup sharing not happening would
- # fix overflow...
- from .otTables import fixLookupOverFlows
-
- return fixLookupOverFlows(font, overflowRecord)
-
- def toXML(self, writer, font):
- self.table.toXML2(writer, font)
-
- def fromXML(self, name, attrs, content, font):
- from . import otTables
-
- if not hasattr(self, "table"):
- tableClass = getattr(otTables, self.tableTag)
- self.table = tableClass()
- self.table.fromXML(name, attrs, content, font)
- self.table.populateDefaults()
-
- def ensureDecompiled(self, recurse=True):
- self.table.ensureDecompiled(recurse=recurse)
-
-
-# https://github.com/fonttools/fonttools/pull/2285#issuecomment-834652928
-assert len(struct.pack("i", 0)) == 4
-assert array.array("i").itemsize == 4, "Oops, file a bug against fonttools."
-
-
-class OTTableReader(object):
-
- """Helper class to retrieve data from an OpenType table."""
-
- __slots__ = ("data", "offset", "pos", "localState", "tableTag")
-
- def __init__(self, data, localState=None, offset=0, tableTag=None):
- self.data = data
- self.offset = offset
- self.pos = offset
- self.localState = localState
- self.tableTag = tableTag
-
- def advance(self, count):
- self.pos += count
-
- def seek(self, pos):
- self.pos = pos
-
- def copy(self):
- other = self.__class__(self.data, self.localState, self.offset, self.tableTag)
- other.pos = self.pos
- return other
-
- def getSubReader(self, offset):
- offset = self.offset + offset
- return self.__class__(self.data, self.localState, offset, self.tableTag)
-
- def readValue(self, typecode, staticSize):
- pos = self.pos
- newpos = pos + staticSize
- (value,) = struct.unpack(f">{typecode}", self.data[pos:newpos])
- self.pos = newpos
- return value
-
- def readArray(self, typecode, staticSize, count):
- pos = self.pos
- newpos = pos + count * staticSize
- value = array.array(typecode, self.data[pos:newpos])
- if sys.byteorder != "big":
- value.byteswap()
- self.pos = newpos
- return value.tolist()
-
- def readInt8(self):
- return self.readValue("b", staticSize=1)
-
- def readInt8Array(self, count):
- return self.readArray("b", staticSize=1, count=count)
-
- def readShort(self):
- return self.readValue("h", staticSize=2)
-
- def readShortArray(self, count):
- return self.readArray("h", staticSize=2, count=count)
-
- def readLong(self):
- return self.readValue("i", staticSize=4)
-
- def readLongArray(self, count):
- return self.readArray("i", staticSize=4, count=count)
-
- def readUInt8(self):
- return self.readValue("B", staticSize=1)
-
- def readUInt8Array(self, count):
- return self.readArray("B", staticSize=1, count=count)
-
- def readUShort(self):
- return self.readValue("H", staticSize=2)
-
- def readUShortArray(self, count):
- return self.readArray("H", staticSize=2, count=count)
-
- def readULong(self):
- return self.readValue("I", staticSize=4)
-
- def readULongArray(self, count):
- return self.readArray("I", staticSize=4, count=count)
-
- def readUInt24(self):
- pos = self.pos
- newpos = pos + 3
- (value,) = struct.unpack(">l", b"\0" + self.data[pos:newpos])
- self.pos = newpos
- return value
-
- def readUInt24Array(self, count):
- return [self.readUInt24() for _ in range(count)]
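`readUInt24` widens the 3 raw bytes to 4 by prepending a zero byte, so `struct` can unpack them as a big-endian 32-bit integer. The same trick in isolation:

```python
import struct

data = b"\x01\x02\x03"
# prepend b"\0" so the 3 bytes can be unpacked with the 4-byte ">l" format
(value,) = struct.unpack(">l", b"\0" + data)
print(hex(value))
```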
-
- def readTag(self):
- pos = self.pos
- newpos = pos + 4
- value = Tag(self.data[pos:newpos])
- assert len(value) == 4, value
- self.pos = newpos
- return value
-
- def readData(self, count):
- pos = self.pos
- newpos = pos + count
- value = self.data[pos:newpos]
- self.pos = newpos
- return value
-
- def __setitem__(self, name, value):
- state = self.localState.copy() if self.localState else dict()
- state[name] = value
- self.localState = state
-
- def __getitem__(self, name):
- return self.localState and self.localState[name]
-
- def __contains__(self, name):
- return self.localState and name in self.localState
-
-
-class OTTableWriter(object):
-
- """Helper class to gather and assemble data for OpenType tables."""
-
- def __init__(self, localState=None, tableTag=None, offsetSize=2):
- self.items = []
- self.pos = None
- self.localState = localState
- self.tableTag = tableTag
- self.offsetSize = offsetSize
- self.parent = None
-
- # DEPRECATED: 'longOffset' is kept as a property for backward compat with old code.
- # You should use 'offsetSize' instead (2, 3 or 4 bytes).
- @property
- def longOffset(self):
- return self.offsetSize == 4
-
- @longOffset.setter
- def longOffset(self, value):
- self.offsetSize = 4 if value else 2
-
- def __setitem__(self, name, value):
- state = self.localState.copy() if self.localState else dict()
- state[name] = value
- self.localState = state
-
- def __getitem__(self, name):
- return self.localState[name]
-
- def __delitem__(self, name):
- del self.localState[name]
-
- # assembler interface
-
- def getDataLength(self):
- """Return the length of this table in bytes, without subtables."""
- l = 0
- for item in self.items:
- if hasattr(item, "getCountData"):
- l += item.size
- elif hasattr(item, "getData"):
- l += item.offsetSize
- else:
- l = l + len(item)
- return l
-
- def getData(self):
- """Assemble the data for this writer/table, without subtables."""
- items = list(self.items) # make a shallow copy
- pos = self.pos
- numItems = len(items)
- for i in range(numItems):
- item = items[i]
-
- if hasattr(item, "getData"):
- if item.offsetSize == 4:
- items[i] = packULong(item.pos - pos)
- elif item.offsetSize == 2:
- try:
- items[i] = packUShort(item.pos - pos)
- except struct.error:
- # provide data to fix overflow problem.
- overflowErrorRecord = self.getOverflowErrorRecord(item)
-
- raise OTLOffsetOverflowError(overflowErrorRecord)
- elif item.offsetSize == 3:
- items[i] = packUInt24(item.pos - pos)
- else:
- raise ValueError(item.offsetSize)
-
- return bytesjoin(items)
-
- def getDataForHarfbuzz(self):
- """Assemble the data for this writer/table with all offset field set to 0"""
- items = list(self.items)
- packFuncs = {2: packUShort, 3: packUInt24, 4: packULong}
- for i, item in enumerate(items):
- if hasattr(item, "getData"):
- # Offset values are not needed by the harfbuzz repacker, so offsets are set to 0 to avoid overflow here
- if item.offsetSize in packFuncs:
- items[i] = packFuncs[item.offsetSize](0)
- else:
- raise ValueError(item.offsetSize)
-
- return bytesjoin(items)
-
- def __hash__(self):
- # only works after self._doneWriting() has been called
- return hash(self.items)
-
- def __ne__(self, other):
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
- def __eq__(self, other):
- if type(self) != type(other):
- return NotImplemented
- return self.offsetSize == other.offsetSize and self.items == other.items
-
- def _doneWriting(self, internedTables, shareExtension=False):
- # Convert CountData references to data string items
- # collapse duplicate table references to a unique entry
- # "tables" are OTTableWriter objects.
-
- # For Extension Lookup types, we can
- # eliminate duplicates only within the tree under the Extension Lookup,
- # as offsets may exceed 64K even between Extension LookupTable subtables.
- isExtension = hasattr(self, "Extension")
-
- # Certain versions of Uniscribe reject the font if the GSUB/GPOS top-level
- # arrays (ScriptList, FeatureList, LookupList) point to the same, possibly
- # empty, array. So, we don't share those.
- # See: https://github.com/fonttools/fonttools/issues/518
- dontShare = hasattr(self, "DontShare")
-
- if isExtension and not shareExtension:
- internedTables = {}
-
- items = self.items
- for i in range(len(items)):
- item = items[i]
- if hasattr(item, "getCountData"):
- items[i] = item.getCountData()
- elif hasattr(item, "getData"):
- item._doneWriting(internedTables, shareExtension=shareExtension)
- # At this point, all subwriters are hashable based on their items.
- # (See hash and comparison magic methods above.) So the ``setdefault``
- # call here will return the first writer object we've seen with
- # equal content, or store it in the dictionary if it's not been
- # seen yet. We therefore replace the subwriter object with an equivalent
- # object, which deduplicates the tree.
- if not dontShare:
- items[i] = item = internedTables.setdefault(item, item)
- self.items = tuple(items)
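The `internedTables.setdefault(item, item)` call above is an interning pattern: once subwriters hash and compare by content, equal subtrees collapse to a single shared object. The pattern in isolation, using tuples as stand-ins for hashable writers:

```python
# intern equal-but-distinct objects so later references share one instance
interned = {}

a = tuple([1, 2, 3])  # built at runtime: equal values,
b = tuple([1, 2, 3])  # but two distinct objects

x = interned.setdefault(a, a)  # first occurrence is stored
y = interned.setdefault(b, b)  # equal key: the stored instance is returned
print(x is y, x is a)
```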
-
- def _gatherTables(self, tables, extTables, done):
- # Convert table references in self.items tree to a flat
- # list of tables in depth-first traversal order.
- # "tables" are OTTableWriter objects.
- # We do the traversal in reverse order at each level, in order to
- # resolve duplicate references to be the last reference in the list of tables.
- # For extension lookups, duplicate references can be merged only within the
- # writer tree under the extension lookup.
-
- done[id(self)] = True
-
- numItems = len(self.items)
- iRange = list(range(numItems))
- iRange.reverse()
-
- isExtension = hasattr(self, "Extension")
-
- selfTables = tables
-
- if isExtension:
- assert (
- extTables is not None
- ), "Program or XML editing error. Extension subtables cannot contain extensions subtables"
- tables, extTables, done = extTables, None, {}
-
- # add Coverage table if it is sorted last.
- sortCoverageLast = False
- if hasattr(self, "sortCoverageLast"):
- # Find coverage table
- for i in range(numItems):
- item = self.items[i]
- if getattr(item, "name", None) == "Coverage":
- sortCoverageLast = True
- break
- if id(item) not in done:
- item._gatherTables(tables, extTables, done)
- else:
- # We're a new parent of item
- pass
-
- for i in iRange:
- item = self.items[i]
- if not hasattr(item, "getData"):
- continue
-
- if (
- sortCoverageLast
- and (i == 1)
- and getattr(item, "name", None) == "Coverage"
- ):
- # we've already 'gathered' it above
- continue
-
- if id(item) not in done:
- item._gatherTables(tables, extTables, done)
- else:
- # Item is already written out by other parent
- pass
-
- selfTables.append(self)
-
- def _gatherGraphForHarfbuzz(self, tables, obj_list, done, objidx, virtual_edges):
- real_links = []
- virtual_links = []
- item_idx = objidx
-
- # Merge virtual_links from parent
- for idx in virtual_edges:
- virtual_links.append((0, 0, idx))
-
- sortCoverageLast = False
- coverage_idx = 0
- if hasattr(self, "sortCoverageLast"):
- # Find coverage table
- for i, item in enumerate(self.items):
- if getattr(item, "name", None) == "Coverage":
- sortCoverageLast = True
- if id(item) not in done:
- coverage_idx = item_idx = item._gatherGraphForHarfbuzz(
- tables, obj_list, done, item_idx, virtual_edges
- )
- else:
- coverage_idx = done[id(item)]
- virtual_edges.append(coverage_idx)
- break
-
- child_idx = 0
- offset_pos = 0
- for i, item in enumerate(self.items):
- if hasattr(item, "getData"):
- pos = offset_pos
- elif hasattr(item, "getCountData"):
- offset_pos += item.size
- continue
- else:
- offset_pos = offset_pos + len(item)
- continue
-
- if id(item) not in done:
- child_idx = item_idx = item._gatherGraphForHarfbuzz(
- tables, obj_list, done, item_idx, virtual_edges
- )
- else:
- child_idx = done[id(item)]
-
- real_edge = (pos, item.offsetSize, child_idx)
- real_links.append(real_edge)
- offset_pos += item.offsetSize
-
- tables.append(self)
- obj_list.append((real_links, virtual_links))
- item_idx += 1
- done[id(self)] = item_idx
- if sortCoverageLast:
- virtual_edges.pop()
-
- return item_idx
-
- def getAllDataUsingHarfbuzz(self, tableTag):
- """The Whole table is represented as a Graph.
- Assemble graph data and call Harfbuzz repacker to pack the table.
- Harfbuzz repacker is faster and retain as much sub-table sharing as possible, see also:
- https://github.com/harfbuzz/harfbuzz/blob/main/docs/repacker.md
- The input format for hb.repack() method is explained here:
- https://github.com/harfbuzz/uharfbuzz/blob/main/src/uharfbuzz/_harfbuzz.pyx#L1149
- """
- internedTables = {}
- self._doneWriting(internedTables, shareExtension=True)
- tables = []
- obj_list = []
- done = {}
- objidx = 0
- virtual_edges = []
- self._gatherGraphForHarfbuzz(tables, obj_list, done, objidx, virtual_edges)
- # Gather all data in two passes: the absolute positions of all
-        # subtables are needed before the actual data can be assembled.
- pos = 0
- for table in tables:
- table.pos = pos
- pos = pos + table.getDataLength()
-
- data = []
- for table in tables:
- tableData = table.getDataForHarfbuzz()
- data.append(tableData)
-
- if hasattr(hb, "repack_with_tag"):
- return hb.repack_with_tag(str(tableTag), data, obj_list)
- else:
- return hb.repack(data, obj_list)
-
- def getAllData(self, remove_duplicate=True):
- """Assemble all data, including all subtables."""
- if remove_duplicate:
- internedTables = {}
- self._doneWriting(internedTables)
- tables = []
- extTables = []
- done = {}
- self._gatherTables(tables, extTables, done)
- tables.reverse()
- extTables.reverse()
- # Gather all data in two passes: the absolute positions of all
-            # subtables are needed before the actual data can be assembled.
- pos = 0
- for table in tables:
- table.pos = pos
- pos = pos + table.getDataLength()
-
- for table in extTables:
- table.pos = pos
- pos = pos + table.getDataLength()
-
- data = []
- for table in tables:
- tableData = table.getData()
- data.append(tableData)
-
- for table in extTables:
- tableData = table.getData()
- data.append(tableData)
-
- return bytesjoin(data)
-
- # interface for gathering data, as used by table.compile()
-
- def getSubWriter(self, offsetSize=2):
- subwriter = self.__class__(
- self.localState, self.tableTag, offsetSize=offsetSize
- )
- subwriter.parent = (
-            self  # because some subtables have identical values, we discard
- )
- # the duplicates under the getAllData method. Hence some
- # subtable writers can have more than one parent writer.
- # But we just care about first one right now.
- return subwriter
-
- def writeValue(self, typecode, value):
- self.items.append(struct.pack(f">{typecode}", value))
-
- def writeArray(self, typecode, values):
- a = array.array(typecode, values)
- if sys.byteorder != "big":
- a.byteswap()
- self.items.append(a.tobytes())
-
- def writeInt8(self, value):
- assert -128 <= value < 128, value
- self.items.append(struct.pack(">b", value))
-
- def writeInt8Array(self, values):
- self.writeArray("b", values)
-
- def writeShort(self, value):
- assert -32768 <= value < 32768, value
- self.items.append(struct.pack(">h", value))
-
- def writeShortArray(self, values):
- self.writeArray("h", values)
-
- def writeLong(self, value):
- self.items.append(struct.pack(">i", value))
-
- def writeLongArray(self, values):
- self.writeArray("i", values)
-
- def writeUInt8(self, value):
- assert 0 <= value < 256, value
- self.items.append(struct.pack(">B", value))
-
- def writeUInt8Array(self, values):
- self.writeArray("B", values)
-
- def writeUShort(self, value):
- assert 0 <= value < 0x10000, value
- self.items.append(struct.pack(">H", value))
-
- def writeUShortArray(self, values):
- self.writeArray("H", values)
-
- def writeULong(self, value):
- self.items.append(struct.pack(">I", value))
-
- def writeULongArray(self, values):
- self.writeArray("I", values)
-
- def writeUInt24(self, value):
- assert 0 <= value < 0x1000000, value
- b = struct.pack(">L", value)
- self.items.append(b[1:])
-
- def writeUInt24Array(self, values):
- for value in values:
- self.writeUInt24(value)
-
- def writeTag(self, tag):
- tag = Tag(tag).tobytes()
- assert len(tag) == 4, tag
- self.items.append(tag)
-
- def writeSubTable(self, subWriter):
- self.items.append(subWriter)
-
- def writeCountReference(self, table, name, size=2, value=None):
- ref = CountReference(table, name, size=size, value=value)
- self.items.append(ref)
- return ref
-
- def writeStruct(self, format, values):
- data = struct.pack(*(format,) + values)
- self.items.append(data)
-
- def writeData(self, data):
- self.items.append(data)
-
- def getOverflowErrorRecord(self, item):
- LookupListIndex = SubTableIndex = itemName = itemIndex = None
- if self.name == "LookupList":
- LookupListIndex = item.repeatIndex
- elif self.name == "Lookup":
- LookupListIndex = self.repeatIndex
- SubTableIndex = item.repeatIndex
- else:
- itemName = getattr(item, "name", "")
- if hasattr(item, "repeatIndex"):
- itemIndex = item.repeatIndex
- if self.name == "SubTable":
- LookupListIndex = self.parent.repeatIndex
- SubTableIndex = self.repeatIndex
- elif self.name == "ExtSubTable":
- LookupListIndex = self.parent.parent.repeatIndex
- SubTableIndex = self.parent.repeatIndex
- else: # who knows how far below the SubTable level we are! Climb back up to the nearest subtable.
- itemName = ".".join([self.name, itemName])
- p1 = self.parent
- while p1 and p1.name not in ["ExtSubTable", "SubTable"]:
- itemName = ".".join([p1.name, itemName])
- p1 = p1.parent
- if p1:
- if p1.name == "ExtSubTable":
- LookupListIndex = p1.parent.parent.repeatIndex
- SubTableIndex = p1.parent.repeatIndex
- else:
- LookupListIndex = p1.parent.repeatIndex
- SubTableIndex = p1.repeatIndex
-
- return OverflowErrorRecord(
- (self.tableTag, LookupListIndex, SubTableIndex, itemName, itemIndex)
- )
-
-
-class CountReference(object):
- """A reference to a Count value, not a count of references."""
-
- def __init__(self, table, name, size=None, value=None):
- self.table = table
- self.name = name
- self.size = size
- if value is not None:
- self.setValue(value)
-
- def setValue(self, value):
- table = self.table
- name = self.name
- if table[name] is None:
- table[name] = value
- else:
- assert table[name] == value, (name, table[name], value)
-
- def getValue(self):
- return self.table[self.name]
-
- def getCountData(self):
- v = self.table[self.name]
- if v is None:
- v = 0
- return {1: packUInt8, 2: packUShort, 4: packULong}[self.size](v)
-
-
-def packUInt8(value):
- return struct.pack(">B", value)
-
-
-def packUShort(value):
- return struct.pack(">H", value)
-
-
-def packULong(value):
- assert 0 <= value < 0x100000000, value
- return struct.pack(">I", value)
-
-
-def packUInt24(value):
- assert 0 <= value < 0x1000000, value
- return struct.pack(">I", value)[1:]
-
-
-class BaseTable(object):
-
- """Generic base class for all OpenType (sub)tables."""
-
- def __getattr__(self, attr):
- reader = self.__dict__.get("reader")
- if reader:
- del self.reader
- font = self.font
- del self.font
- self.decompile(reader, font)
- return getattr(self, attr)
-
- raise AttributeError(attr)
-
- def ensureDecompiled(self, recurse=False):
- reader = self.__dict__.get("reader")
- if reader:
- del self.reader
- font = self.font
- del self.font
- self.decompile(reader, font)
- if recurse:
- for subtable in self.iterSubTables():
- subtable.value.ensureDecompiled(recurse)
-
- def __getstate__(self):
- # before copying/pickling 'lazy' objects, make a shallow copy of OTTableReader
- # https://github.com/fonttools/fonttools/issues/2965
- if "reader" in self.__dict__:
- state = self.__dict__.copy()
- state["reader"] = self.__dict__["reader"].copy()
- return state
- return self.__dict__
-
- @classmethod
- def getRecordSize(cls, reader):
- totalSize = 0
- for conv in cls.converters:
- size = conv.getRecordSize(reader)
- if size is NotImplemented:
- return NotImplemented
- countValue = 1
- if conv.repeat:
- if conv.repeat in reader:
- countValue = reader[conv.repeat] + conv.aux
- else:
- return NotImplemented
- totalSize += size * countValue
- return totalSize
-
- def getConverters(self):
- return self.converters
-
- def getConverterByName(self, name):
- return self.convertersByName[name]
-
- def populateDefaults(self, propagator=None):
- for conv in self.getConverters():
- if conv.repeat:
- if not hasattr(self, conv.name):
- setattr(self, conv.name, [])
- countValue = len(getattr(self, conv.name)) - conv.aux
- try:
- count_conv = self.getConverterByName(conv.repeat)
- setattr(self, conv.repeat, countValue)
- except KeyError:
- # conv.repeat is a propagated count
- if propagator and conv.repeat in propagator:
- propagator[conv.repeat].setValue(countValue)
- else:
- if conv.aux and not eval(conv.aux, None, self.__dict__):
- continue
- if hasattr(self, conv.name):
- continue # Warn if it should NOT be present?!
- if hasattr(conv, "writeNullOffset"):
- setattr(self, conv.name, None) # Warn?
- # elif not conv.isCount:
- # # Warn?
- # pass
- if hasattr(conv, "DEFAULT"):
- # OptionalValue converters (e.g. VarIndex)
- setattr(self, conv.name, conv.DEFAULT)
-
- def decompile(self, reader, font):
- self.readFormat(reader)
- table = {}
- self.__rawTable = table # for debugging
- for conv in self.getConverters():
- if conv.name == "SubTable":
- conv = conv.getConverter(reader.tableTag, table["LookupType"])
- if conv.name == "ExtSubTable":
- conv = conv.getConverter(reader.tableTag, table["ExtensionLookupType"])
- if conv.name == "FeatureParams":
- conv = conv.getConverter(reader["FeatureTag"])
- if conv.name == "SubStruct":
- conv = conv.getConverter(reader.tableTag, table["MorphType"])
- try:
- if conv.repeat:
- if isinstance(conv.repeat, int):
- countValue = conv.repeat
- elif conv.repeat in table:
- countValue = table[conv.repeat]
- else:
- # conv.repeat is a propagated count
- countValue = reader[conv.repeat]
- countValue += conv.aux
- table[conv.name] = conv.readArray(reader, font, table, countValue)
- else:
- if conv.aux and not eval(conv.aux, None, table):
- continue
- table[conv.name] = conv.read(reader, font, table)
- if conv.isPropagated:
- reader[conv.name] = table[conv.name]
- except Exception as e:
- name = conv.name
- e.args = e.args + (name,)
- raise
-
- if hasattr(self, "postRead"):
- self.postRead(table, font)
- else:
- self.__dict__.update(table)
-
- del self.__rawTable # succeeded, get rid of debugging info
-
- def compile(self, writer, font):
- self.ensureDecompiled()
- # TODO Following hack to be removed by rewriting how FormatSwitching tables
- # are handled.
- # https://github.com/fonttools/fonttools/pull/2238#issuecomment-805192631
- if hasattr(self, "preWrite"):
- deleteFormat = not hasattr(self, "Format")
- table = self.preWrite(font)
- deleteFormat = deleteFormat and hasattr(self, "Format")
- else:
- deleteFormat = False
- table = self.__dict__.copy()
-
- # some count references may have been initialized in a custom preWrite; we set
- # these in the writer's state beforehand (instead of sequentially) so they will
- # be propagated to all nested subtables even if the count appears in the current
- # table only *after* the offset to the subtable that it is counting.
- for conv in self.getConverters():
- if conv.isCount and conv.isPropagated:
- value = table.get(conv.name)
- if isinstance(value, CountReference):
- writer[conv.name] = value
-
- if hasattr(self, "sortCoverageLast"):
- writer.sortCoverageLast = 1
-
- if hasattr(self, "DontShare"):
- writer.DontShare = True
-
- if hasattr(self.__class__, "LookupType"):
- writer["LookupType"].setValue(self.__class__.LookupType)
-
- self.writeFormat(writer)
- for conv in self.getConverters():
- value = table.get(
- conv.name
- ) # TODO Handle defaults instead of defaulting to None!
- if conv.repeat:
- if value is None:
- value = []
- countValue = len(value) - conv.aux
- if isinstance(conv.repeat, int):
- assert len(value) == conv.repeat, "expected %d values, got %d" % (
- conv.repeat,
- len(value),
- )
- elif conv.repeat in table:
- CountReference(table, conv.repeat, value=countValue)
- else:
- # conv.repeat is a propagated count
- writer[conv.repeat].setValue(countValue)
- try:
- conv.writeArray(writer, font, table, value)
- except Exception as e:
- e.args = e.args + (conv.name + "[]",)
- raise
- elif conv.isCount:
- # Special-case Count values.
- # Assumption: a Count field will *always* precede
- # the actual array(s).
- # We need a default value, as it may be set later by a nested
- # table. We will later store it here.
- # We add a reference: by the time the data is assembled
- # the Count value will be filled in.
- # We ignore the current count value since it will be recomputed,
- # unless it's a CountReference that was already initialized in a custom preWrite.
- if isinstance(value, CountReference):
- ref = value
- ref.size = conv.staticSize
- writer.writeData(ref)
- table[conv.name] = ref.getValue()
- else:
- ref = writer.writeCountReference(table, conv.name, conv.staticSize)
- table[conv.name] = None
- if conv.isPropagated:
- writer[conv.name] = ref
- elif conv.isLookupType:
- # We make sure that subtables have the same lookup type,
- # and that the type is the same as the one set on the
- # Lookup object, if any is set.
- if conv.name not in table:
- table[conv.name] = None
- ref = writer.writeCountReference(
- table, conv.name, conv.staticSize, table[conv.name]
- )
- writer["LookupType"] = ref
- else:
- if conv.aux and not eval(conv.aux, None, table):
- continue
- try:
- conv.write(writer, font, table, value)
- except Exception as e:
- name = value.__class__.__name__ if value is not None else conv.name
- e.args = e.args + (name,)
- raise
- if conv.isPropagated:
- writer[conv.name] = value
-
- if deleteFormat:
- del self.Format
-
- def readFormat(self, reader):
- pass
-
- def writeFormat(self, writer):
- pass
-
- def toXML(self, xmlWriter, font, attrs=None, name=None):
- tableName = name if name else self.__class__.__name__
- if attrs is None:
- attrs = []
- if hasattr(self, "Format"):
- attrs = attrs + [("Format", self.Format)]
- xmlWriter.begintag(tableName, attrs)
- xmlWriter.newline()
- self.toXML2(xmlWriter, font)
- xmlWriter.endtag(tableName)
- xmlWriter.newline()
-
- def toXML2(self, xmlWriter, font):
- # Simpler variant of toXML, *only* for the top level tables (like GPOS, GSUB).
- # This is because in TTX our parent writes our main tag, and in otBase.py we
- # do it ourselves. I think I'm getting schizophrenic...
- for conv in self.getConverters():
- if conv.repeat:
- value = getattr(self, conv.name, [])
- for i in range(len(value)):
- item = value[i]
- conv.xmlWrite(xmlWriter, font, item, conv.name, [("index", i)])
- else:
- if conv.aux and not eval(conv.aux, None, vars(self)):
- continue
- value = getattr(
- self, conv.name, None
- ) # TODO Handle defaults instead of defaulting to None!
- conv.xmlWrite(xmlWriter, font, value, conv.name, [])
-
- def fromXML(self, name, attrs, content, font):
- try:
- conv = self.getConverterByName(name)
- except KeyError:
- raise # XXX on KeyError, raise nice error
- value = conv.xmlRead(attrs, content, font)
- if conv.repeat:
- seq = getattr(self, conv.name, None)
- if seq is None:
- seq = []
- setattr(self, conv.name, seq)
- seq.append(value)
- else:
- setattr(self, conv.name, value)
-
- def __ne__(self, other):
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
- def __eq__(self, other):
- if type(self) != type(other):
- return NotImplemented
-
- self.ensureDecompiled()
- other.ensureDecompiled()
-
- return self.__dict__ == other.__dict__
-
- class SubTableEntry(NamedTuple):
- """See BaseTable.iterSubTables()"""
-
- name: str
- value: "BaseTable"
- index: Optional[int] = None # index into given array, None for single values
-
- def iterSubTables(self) -> Iterator[SubTableEntry]:
- """Yield (name, value, index) namedtuples for all subtables of current table.
-
- A sub-table is an instance of BaseTable (or subclass thereof) that is a child
- of self, the current parent table.
-        The tuples also contain the attribute name (str) of the parent table used to
-        get the subtable, and optionally, for lists of subtables (i.e. attributes
-        associated with a converter that has a 'repeat'), an index into the list
-        containing the given subtable value.
- This method can be useful to traverse trees of otTables.
- """
- for conv in self.getConverters():
- name = conv.name
- value = getattr(self, name, None)
- if value is None:
- continue
- if isinstance(value, BaseTable):
- yield self.SubTableEntry(name, value)
- elif isinstance(value, list):
- yield from (
- self.SubTableEntry(name, v, index=i)
- for i, v in enumerate(value)
- if isinstance(v, BaseTable)
- )
-
- # instance (not @class)method for consistency with FormatSwitchingBaseTable
- def getVariableAttrs(self):
- return getVariableAttrs(self.__class__)
-
-
-class FormatSwitchingBaseTable(BaseTable):
-
- """Minor specialization of BaseTable, for tables that have multiple
-    formats, e.g. CoverageFormat1 vs. CoverageFormat2."""
-
- @classmethod
- def getRecordSize(cls, reader):
- return NotImplemented
-
- def getConverters(self):
- try:
- fmt = self.Format
- except AttributeError:
- # some FormatSwitchingBaseTables (e.g. Coverage) no longer have 'Format'
- # attribute after fully decompiled, only gain one in preWrite before being
- # recompiled. In the decompiled state, these hand-coded classes defined in
- # otTables.py lose their format-specific nature and gain more high-level
- # attributes that are not tied to converters.
- return []
- return self.converters.get(self.Format, [])
-
- def getConverterByName(self, name):
- return self.convertersByName[self.Format][name]
-
- def readFormat(self, reader):
- self.Format = reader.readUShort()
-
- def writeFormat(self, writer):
- writer.writeUShort(self.Format)
-
- def toXML(self, xmlWriter, font, attrs=None, name=None):
- BaseTable.toXML(self, xmlWriter, font, attrs, name)
-
- def getVariableAttrs(self):
- return getVariableAttrs(self.__class__, self.Format)
-
-
-class UInt8FormatSwitchingBaseTable(FormatSwitchingBaseTable):
- def readFormat(self, reader):
- self.Format = reader.readUInt8()
-
- def writeFormat(self, writer):
- writer.writeUInt8(self.Format)
-
-
-formatSwitchingBaseTables = {
- "uint16": FormatSwitchingBaseTable,
- "uint8": UInt8FormatSwitchingBaseTable,
-}
-
-
-def getFormatSwitchingBaseTableClass(formatType):
- try:
- return formatSwitchingBaseTables[formatType]
- except KeyError:
- raise TypeError(f"Unsupported format type: {formatType!r}")
-
-
-# memoize since these are parsed from otData.py, thus stay constant
-@lru_cache()
-def getVariableAttrs(cls: BaseTable, fmt: Optional[int] = None) -> Tuple[str]:
- """Return sequence of variable table field names (can be empty).
-
-    Attributes are deemed "variable" when their otData.py description contains
- 'VarIndexBase + {offset}', e.g. COLRv1 PaintVar* tables.
- """
- if not issubclass(cls, BaseTable):
- raise TypeError(cls)
- if issubclass(cls, FormatSwitchingBaseTable):
- if fmt is None:
- raise TypeError(f"'fmt' is required for format-switching {cls.__name__}")
- converters = cls.convertersByName[fmt]
- else:
- converters = cls.convertersByName
- # assume if no 'VarIndexBase' field is present, table has no variable fields
- if "VarIndexBase" not in converters:
- return ()
- varAttrs = {}
- for name, conv in converters.items():
- offset = conv.getVarIndexOffset()
- if offset is not None:
- varAttrs[name] = offset
- return tuple(sorted(varAttrs, key=varAttrs.__getitem__))
-
-
-#
-# Support for ValueRecords
-#
-# This data type is so different from all other OpenType data types that
-# it requires quite a bit of code for itself. It even has special support
-# in OTTableReader and OTTableWriter...
-#
-
-valueRecordFormat = [
- # Mask Name isDevice signed
- (0x0001, "XPlacement", 0, 1),
- (0x0002, "YPlacement", 0, 1),
- (0x0004, "XAdvance", 0, 1),
- (0x0008, "YAdvance", 0, 1),
- (0x0010, "XPlaDevice", 1, 0),
- (0x0020, "YPlaDevice", 1, 0),
- (0x0040, "XAdvDevice", 1, 0),
- (0x0080, "YAdvDevice", 1, 0),
- # reserved:
- (0x0100, "Reserved1", 0, 0),
- (0x0200, "Reserved2", 0, 0),
- (0x0400, "Reserved3", 0, 0),
- (0x0800, "Reserved4", 0, 0),
- (0x1000, "Reserved5", 0, 0),
- (0x2000, "Reserved6", 0, 0),
- (0x4000, "Reserved7", 0, 0),
- (0x8000, "Reserved8", 0, 0),
-]
-
-
-def _buildDict():
- d = {}
- for mask, name, isDevice, signed in valueRecordFormat:
- d[name] = mask, isDevice, signed
- return d
-
-
-valueRecordFormatDict = _buildDict()
-
-
-class ValueRecordFactory(object):
-
- """Given a format code, this object convert ValueRecords."""
-
- def __init__(self, valueFormat):
- format = []
- for mask, name, isDevice, signed in valueRecordFormat:
- if valueFormat & mask:
- format.append((name, isDevice, signed))
- self.format = format
-
- def __len__(self):
- return len(self.format)
-
- def readValueRecord(self, reader, font):
- format = self.format
- if not format:
- return None
- valueRecord = ValueRecord()
- for name, isDevice, signed in format:
- if signed:
- value = reader.readShort()
- else:
- value = reader.readUShort()
- if isDevice:
- if value:
- from . import otTables
-
- subReader = reader.getSubReader(value)
- value = getattr(otTables, name)()
- value.decompile(subReader, font)
- else:
- value = None
- setattr(valueRecord, name, value)
- return valueRecord
-
- def writeValueRecord(self, writer, font, valueRecord):
- for name, isDevice, signed in self.format:
- value = getattr(valueRecord, name, 0)
- if isDevice:
- if value:
- subWriter = writer.getSubWriter()
- writer.writeSubTable(subWriter)
- value.compile(subWriter, font)
- else:
- writer.writeUShort(0)
- elif signed:
- writer.writeShort(value)
- else:
- writer.writeUShort(value)
-
-
-class ValueRecord(object):
-
- # see ValueRecordFactory
-
- def __init__(self, valueFormat=None, src=None):
- if valueFormat is not None:
- for mask, name, isDevice, signed in valueRecordFormat:
- if valueFormat & mask:
- setattr(self, name, None if isDevice else 0)
- if src is not None:
- for key, val in src.__dict__.items():
- if not hasattr(self, key):
- continue
- setattr(self, key, val)
- elif src is not None:
- self.__dict__ = src.__dict__.copy()
-
- def getFormat(self):
- format = 0
- for name in self.__dict__.keys():
- format = format | valueRecordFormatDict[name][0]
- return format
-
- def getEffectiveFormat(self):
- format = 0
- for name, value in self.__dict__.items():
- if value:
- format = format | valueRecordFormatDict[name][0]
- return format
-
- def toXML(self, xmlWriter, font, valueName, attrs=None):
- if attrs is None:
- simpleItems = []
- else:
- simpleItems = list(attrs)
- for mask, name, isDevice, format in valueRecordFormat[:4]: # "simple" values
- if hasattr(self, name):
- simpleItems.append((name, getattr(self, name)))
- deviceItems = []
- for mask, name, isDevice, format in valueRecordFormat[4:8]: # device records
- if hasattr(self, name):
- device = getattr(self, name)
- if device is not None:
- deviceItems.append((name, device))
- if deviceItems:
- xmlWriter.begintag(valueName, simpleItems)
- xmlWriter.newline()
- for name, deviceRecord in deviceItems:
- if deviceRecord is not None:
- deviceRecord.toXML(xmlWriter, font, name=name)
- xmlWriter.endtag(valueName)
- xmlWriter.newline()
- else:
- xmlWriter.simpletag(valueName, simpleItems)
- xmlWriter.newline()
-
- def fromXML(self, name, attrs, content, font):
- from . import otTables
-
- for k, v in attrs.items():
- setattr(self, k, int(v))
- for element in content:
- if not isinstance(element, tuple):
- continue
- name, attrs, content = element
- value = getattr(otTables, name)()
- for elem2 in content:
- if not isinstance(elem2, tuple):
- continue
- name2, attrs2, content2 = elem2
- value.fromXML(name2, attrs2, content2, font)
- setattr(self, name, value)
-
- def __ne__(self, other):
- result = self.__eq__(other)
- return result if result is NotImplemented else not result
-
- def __eq__(self, other):
- if type(self) != type(other):
- return NotImplemented
- return self.__dict__ == other.__dict__
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_internal.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_internal.h
deleted file mode 100644
index e585c779341fc970fff23783f3a4545554beadac..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_internal.h
+++ /dev/null
@@ -1,253 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_CBS_INTERNAL_H
-#define AVCODEC_CBS_INTERNAL_H
-
-#include <stdint.h>
-
-#include "libavutil/buffer.h"
-#include "libavutil/log.h"
-
-#include "cbs.h"
-#include "codec_id.h"
-#include "get_bits.h"
-#include "put_bits.h"
-
-
-enum CBSContentType {
- // Unit content may contain some references to other structures, but all
- // managed via buffer reference counting. The descriptor defines the
- // structure offsets of every buffer reference.
- CBS_CONTENT_TYPE_INTERNAL_REFS,
- // Unit content is something more complex. The descriptor defines
- // special functions to manage the content.
- CBS_CONTENT_TYPE_COMPLEX,
-};
-
-enum {
- // Maximum number of unit types described by the same non-range
- // unit type descriptor.
- CBS_MAX_LIST_UNIT_TYPES = 3,
- // Maximum number of reference buffer offsets in any one unit.
- CBS_MAX_REF_OFFSETS = 2,
- // Special value used in a unit type descriptor to indicate that it
- // applies to a large range of types rather than a set of discrete
- // values.
- CBS_UNIT_TYPE_RANGE = -1,
-};
-
-typedef const struct CodedBitstreamUnitTypeDescriptor {
- // Number of entries in the unit_types array, or the special value
- // CBS_UNIT_TYPE_RANGE to indicate that the range fields should be
- // used instead.
- int nb_unit_types;
-
- union {
- // Array of unit types that this entry describes.
- CodedBitstreamUnitType list[CBS_MAX_LIST_UNIT_TYPES];
- // Start and end of unit type range, used if nb_unit_types is
- // CBS_UNIT_TYPE_RANGE.
- struct {
- CodedBitstreamUnitType start;
- CodedBitstreamUnitType end;
- } range;
- } unit_type;
-
- // The type of content described.
- enum CBSContentType content_type;
- // The size of the structure which should be allocated to contain
- // the decomposed content of this type of unit.
- size_t content_size;
-
- union {
- // This union's state is determined by content_type:
- // ref for CBS_CONTENT_TYPE_INTERNAL_REFS,
- // complex for CBS_CONTENT_TYPE_COMPLEX.
- struct {
- // Number of entries in the ref_offsets array.
- // May be zero, then the structure is POD-like.
- int nb_offsets;
- // The structure must contain two adjacent elements:
- // type *field;
- // AVBufferRef *field_ref;
- // where field points to something in the buffer referred to by
- // field_ref. This offset is then set to offsetof(struct, field).
- size_t offsets[CBS_MAX_REF_OFFSETS];
- } ref;
-
- struct {
- void (*content_free)(void *opaque, uint8_t *data);
- int (*content_clone)(AVBufferRef **ref, CodedBitstreamUnit *unit);
- } complex;
- } type;
-} CodedBitstreamUnitTypeDescriptor;
-
-typedef struct CodedBitstreamType {
- enum AVCodecID codec_id;
-
- // A class for the private data, used to declare private AVOptions.
- // This field is NULL for types that do not declare any options.
- // If this field is non-NULL, the first member of the filter private data
- // must be a pointer to AVClass.
- const AVClass *priv_class;
-
- size_t priv_data_size;
-
- // List of unit type descriptors for this codec.
- // Terminated by a descriptor with nb_unit_types equal to zero.
- const CodedBitstreamUnitTypeDescriptor *unit_types;
-
- // Split frag->data into coded bitstream units, creating the
- // frag->units array. Fill data but not content on each unit.
- // The header argument should be set if the fragment came from
- // a header block, which may require different parsing for some
- // codecs (e.g. the AVCC header in H.264).
- int (*split_fragment)(CodedBitstreamContext *ctx,
- CodedBitstreamFragment *frag,
- int header);
-
- // Read the unit->data bitstream and decompose it, creating
- // unit->content.
- int (*read_unit)(CodedBitstreamContext *ctx,
- CodedBitstreamUnit *unit);
-
- // Write the data bitstream from unit->content into pbc.
- // Return value AVERROR(ENOSPC) indicates that pbc was too small.
- int (*write_unit)(CodedBitstreamContext *ctx,
- CodedBitstreamUnit *unit,
- PutBitContext *pbc);
-
- // Read the data from all of frag->units and assemble it into
- // a bitstream for the whole fragment.
- int (*assemble_fragment)(CodedBitstreamContext *ctx,
- CodedBitstreamFragment *frag);
-
- // Reset the codec internal state.
- void (*flush)(CodedBitstreamContext *ctx);
-
- // Free the codec internal state.
- void (*close)(CodedBitstreamContext *ctx);
-} CodedBitstreamType;
-
-
-// Helper functions for trace output.
-
-void ff_cbs_trace_header(CodedBitstreamContext *ctx,
- const char *name);
-
-void ff_cbs_trace_syntax_element(CodedBitstreamContext *ctx, int position,
- const char *name, const int *subscripts,
- const char *bitstring, int64_t value);
-
-
-// Helper functions for read/write of common bitstream elements, including
-// generation of trace output.
-
-int ff_cbs_read_unsigned(CodedBitstreamContext *ctx, GetBitContext *gbc,
- int width, const char *name,
- const int *subscripts, uint32_t *write_to,
- uint32_t range_min, uint32_t range_max);
-
-int ff_cbs_write_unsigned(CodedBitstreamContext *ctx, PutBitContext *pbc,
- int width, const char *name,
- const int *subscripts, uint32_t value,
- uint32_t range_min, uint32_t range_max);
-
-int ff_cbs_read_signed(CodedBitstreamContext *ctx, GetBitContext *gbc,
- int width, const char *name,
- const int *subscripts, int32_t *write_to,
- int32_t range_min, int32_t range_max);
-
-int ff_cbs_write_signed(CodedBitstreamContext *ctx, PutBitContext *pbc,
- int width, const char *name,
- const int *subscripts, int32_t value,
- int32_t range_min, int32_t range_max);
-
-// The largest unsigned value representable in N bits, suitable for use as
-// range_max in the above functions.
-#define MAX_UINT_BITS(length) ((UINT64_C(1) << (length)) - 1)
-
-// The largest signed value representable in N bits, suitable for use as
-// range_max in the above functions.
-#define MAX_INT_BITS(length) ((INT64_C(1) << ((length) - 1)) - 1)
-
-// The smallest signed value representable in N bits, suitable for use as
-// range_min in the above functions.
-#define MIN_INT_BITS(length) (-(INT64_C(1) << ((length) - 1)))
-
-#define TYPE_LIST(...) { __VA_ARGS__ }
-#define CBS_UNIT_TYPE_POD(type_, structure) { \
- .nb_unit_types = 1, \
- .unit_type.list = { type_ }, \
- .content_type = CBS_CONTENT_TYPE_INTERNAL_REFS, \
- .content_size = sizeof(structure), \
- .type.ref = { .nb_offsets = 0 }, \
- }
-#define CBS_UNIT_RANGE_POD(range_start, range_end, structure) { \
- .nb_unit_types = CBS_UNIT_TYPE_RANGE, \
- .unit_type.range.start = range_start, \
- .unit_type.range.end = range_end, \
- .content_type = CBS_CONTENT_TYPE_INTERNAL_REFS, \
- .content_size = sizeof(structure), \
- .type.ref = { .nb_offsets = 0 }, \
- }
-
-#define CBS_UNIT_TYPES_INTERNAL_REF(types, structure, ref_field) { \
- .nb_unit_types = FF_ARRAY_ELEMS((CodedBitstreamUnitType[])TYPE_LIST types), \
- .unit_type.list = TYPE_LIST types, \
- .content_type = CBS_CONTENT_TYPE_INTERNAL_REFS, \
- .content_size = sizeof(structure), \
- .type.ref = { .nb_offsets = 1, \
- .offsets = { offsetof(structure, ref_field) } }, \
- }
-#define CBS_UNIT_TYPE_INTERNAL_REF(type, structure, ref_field) \
- CBS_UNIT_TYPES_INTERNAL_REF((type), structure, ref_field)
-
-#define CBS_UNIT_RANGE_INTERNAL_REF(range_start, range_end, structure, ref_field) { \
- .nb_unit_types = CBS_UNIT_TYPE_RANGE, \
- .unit_type.range.start = range_start, \
- .unit_type.range.end = range_end, \
- .content_type = CBS_CONTENT_TYPE_INTERNAL_REFS, \
- .content_size = sizeof(structure), \
- .type.ref = { .nb_offsets = 1, \
- .offsets = { offsetof(structure, ref_field) } }, \
- }
-
-#define CBS_UNIT_TYPES_COMPLEX(types, structure, free_func) { \
- .nb_unit_types = FF_ARRAY_ELEMS((CodedBitstreamUnitType[])TYPE_LIST types), \
- .unit_type.list = TYPE_LIST types, \
- .content_type = CBS_CONTENT_TYPE_COMPLEX, \
- .content_size = sizeof(structure), \
- .type.complex = { .content_free = free_func }, \
- }
-#define CBS_UNIT_TYPE_COMPLEX(type, structure, free_func) \
- CBS_UNIT_TYPES_COMPLEX((type), structure, free_func)
-
-#define CBS_UNIT_TYPE_END_OF_LIST { .nb_unit_types = 0 }
-
-
-extern const CodedBitstreamType ff_cbs_type_av1;
-extern const CodedBitstreamType ff_cbs_type_h264;
-extern const CodedBitstreamType ff_cbs_type_h265;
-extern const CodedBitstreamType ff_cbs_type_jpeg;
-extern const CodedBitstreamType ff_cbs_type_mpeg2;
-extern const CodedBitstreamType ff_cbs_type_vp9;
-
-
-#endif /* AVCODEC_CBS_INTERNAL_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/videodsp_init.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/videodsp_init.c
deleted file mode 100644
index 89409fc8fd2055c5cb806210930d8d44331f6f8c..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/videodsp_init.c
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * Copyright (c) 2017 Kaustubh Raste (kaustubh.raste@imgtec.com)
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/mips/cpu.h"
-#include "config.h"
-#include "libavutil/attributes.h"
-#include "libavutil/mips/asmdefs.h"
-#include "libavcodec/videodsp.h"
-
-static void prefetch_mips(const uint8_t *mem, ptrdiff_t stride, int h)
-{
- register const uint8_t *p = mem;
-
- __asm__ volatile (
- "1: \n\t"
- "pref 4, 0(%[p]) \n\t"
- "pref 4, 32(%[p]) \n\t"
- PTR_ADDIU" %[h], %[h], -1 \n\t"
- PTR_ADDU " %[p], %[p], %[stride] \n\t"
-
- "bnez %[h], 1b \n\t"
-
- : [p] "+r" (p), [h] "+r" (h)
- : [stride] "r" (stride)
- );
-}
-
-av_cold void ff_videodsp_init_mips(VideoDSPContext *ctx, int bpc)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_msa(cpu_flags))
- ctx->prefetch = prefetch_mips;
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Apkmonk Download and Play Together with Millions of Players.md b/spaces/congsaPfin/Manga-OCR/logs/Apkmonk Download and Play Together with Millions of Players.md
deleted file mode 100644
index 30474426a13a20c591182839b1b60a1e071285ea..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Apkmonk Download and Play Together with Millions of Players.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
-Play Together APKMonk: A Fun and Social Game for Everyone
-
-Do you love playing games with your friends online? Do you want to experience a virtual world where you can do anything you want? Do you like cute and colorful graphics and characters? If you answered yes to any of these questions, then you should try Play Together APKMonk, a fun and social game for everyone.
-
-What is Play Together APKMonk?
-
-Play Together APKMonk is a game developed by Haegin Co., Ltd., a Korean company that specializes in casual and social games. It lets you create your own character and explore a vast island with other players. You can chat, make friends, join clubs, play mini-games, go fishing, shopping, camping, cooking, and more. You can also customize your character and your home with various items and outfits that you can buy or earn in the game.