diff --git "a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/? Equalizer Bass Booster Pro V1.2.6\302\240Apk.md" "b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/? Equalizer Bass Booster Pro V1.2.6\302\240Apk.md"
deleted file mode 100644
index 6f3c7af1a4dd08c501abf2bc474eb6d7d477547d..0000000000000000000000000000000000000000
--- "a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/? Equalizer Bass Booster Pro V1.2.6\302\240Apk.md"
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
How to Enhance Your Music Experience with Equalizer Bass Booster Pro v1.2.6 Apk
-If you are looking for a way to improve the sound quality of your music on your Android device, you might want to try Equalizer Bass Booster Pro v1.2.6 Apk. This is a powerful app that lets you adjust the sound level and frequency of your music with a five band equalizer, a bass boost effect, and a virtualizer effect. You can also choose from 22 equalizer presets or customize your own effects.
-Equalizer Bass Booster Pro v1.2.6 Apk download: https://byltly.com/2uKwn0
-Equalizer Bass Booster Pro v1.2.6 Apk works with all music and video players, as well as streaming music services. You can easily turn the equalizer effect on and off with a notification shortcut or a desktop widget. You don't need to root your device to use this app.
-With Equalizer Bass Booster Pro v1.2.6 Apk, you can enjoy a more immersive and realistic music experience with 3D surround sound and enhanced bass. Whether you are listening to music with headphones or speakers, you will notice the difference in sound quality and clarity.
-To download Equalizer Bass Booster Pro v1.2.6 Apk, you can visit the link below and follow the instructions to install it on your device. You will need Android 4.4 or higher to run this app.
-Download Equalizer Bass Booster Pro v1.2.6 Apk
-
-How to Use Equalizer Bass Booster Pro v1.2.6 Apk
-Once you have installed Equalizer Bass Booster Pro v1.2.6 Apk on your device, you can start using it to enhance your music experience. Here are some steps to follow:
-
-
-Open the app and grant the necessary permissions.
-Select the music or video player that you want to use with the app.
-Play your music or video and adjust the volume level with the slider on the app.
-Tap on the equalizer icon to open the five band equalizer. You can drag the sliders to change the sound frequency or tap on the presets to choose from different sound effects.
-Tap on the bass boost icon to activate the bass boost effect. You can adjust the intensity of the bass with the slider.
-Tap on the virtualizer icon to activate the virtualizer effect. This will create a 3D surround sound effect for your music or video.
-You can also access the app settings by tapping on the menu icon. Here you can customize the notification shortcut, the desktop widget, and other options.
-
-Enjoy your enhanced music experience with Equalizer Bass Booster Pro v1.2.6 Apk!
-
-Why Choose Equalizer Bass Booster Pro v1.2.6 Apk
-There are many reasons why you should choose Equalizer Bass Booster Pro v1.2.6 Apk over other similar apps. Here are some of them:
-
-It is easy to use and compatible with all music and video players.
-It has a powerful five band equalizer that lets you adjust the sound frequency to your preference.
-It has a bass boost effect that enhances the low frequencies and makes your music more punchy and dynamic.
-It has a virtualizer effect that creates a 3D surround sound effect and makes your music more immersive and realistic.
-It has 22 equalizer presets that you can choose from, or you can customize your own effects.
-It has a volume booster that increases the sound level and clarity of your music or video.
-It has a notification shortcut and a desktop widget that let you access the app quickly and easily.
-It does not require root access and does not affect your device performance or battery life.
-
-With Equalizer Bass Booster Pro v1.2.6 Apk, you can enjoy a better music experience on your Android device. Download it now and see the difference for yourself!
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Cracking City A Simple and Effective Solution to Get a Free License Code for Your IDM Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Cracking City A Simple and Effective Solution to Get a Free License Code for Your IDM Software.md
deleted file mode 100644
index 3735bdf73857e725b45e308a35cfc5360fd7feb4..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download IDM Cracking City A Simple and Effective Solution to Get a Free License Code for Your IDM Software.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-How to Download IDM Cracking City for Free and Enjoy Fast Downloads
-If you are looking for a way to download IDM Cracking City for free, you are in the right place. IDM Cracking City is a website that provides you with the latest IDM crack or patch that can activate your Internet Download Manager (IDM) software and let you enjoy fast and reliable downloads. In this article, we will show you how to download IDM Cracking City for free and how to use it to activate your IDM software.
-What is IDM and Why You Need It
-IDM is a popular download manager that can increase your download speed up to five times, resume and schedule downloads, and handle various types of files and protocols. IDM can also integrate with most browsers and support proxy servers, firewalls, redirects, cookies, authorization, audio, and video content processing. With IDM, you can download anything from the internet with ease and efficiency.
-IDM Cracking City download: https://byltly.com/2uKvpW
-However, IDM is not free software, and you need a license code to activate it. If you do not have a license code, you will get a fake serial number or key error message and IDM will stop working. That is why you need IDM Cracking City to get a free license code for your IDM software.
-What is IDM Cracking City and How It Works
-IDM Cracking City is a website that provides you with the best IDM crack or patch that can bypass the original registration of the software and generate a valid license code for your IDM software. The IDM crack or patch is a small program that modifies the original files of the software and makes it think that it is registered with a genuine license code. This way, you can use all the features and modules of IDM without any limitations or errors.
-To use IDM Cracking City, you need to follow these steps:
-
-Download the latest version of IDM from the official website or from here .
-Install IDM on your computer and close it if it is running.
-Go to IDM Cracking City and download the latest IDM crack or patch from there.
-Extract the downloaded file and run the IDM crack or patch as administrator.
-Select "1" to activate or "2" to reset your IDM software.
-Wait for the process to complete and enjoy your activated IDM software.
-
-Tips and Warnings
-Here are some tips and warnings that you should keep in mind when using IDM Cracking City:
-
-Before downloading the IDM crack or patch, make sure to disable your antivirus or firewall as they may detect it as a virus or trojan. This is because the IDM crack or patch modifies the original files of the software, which may trigger some security alerts. However, the IDM crack or patch is safe to use and does not harm your computer.
-After activating your IDM software, do not update it as it may revert back to the trial version. If you want to update your IDM software, you need to download and apply the latest IDM crack or patch again.
-If you have any problems with using IDM Cracking City, you can contact them through their website or leave a comment on their posts. They will try to help you as soon as possible.
-
-Conclusion
-In this article, we have shown you how to download IDM Cracking City for free and how to use it to activate your IDM software. With IDM Cracking City, you can enjoy fast and reliable downloads without paying anything. However, we recommend that you support the developers of IDM by purchasing a genuine license code if you can afford it. This way, you can get regular updates and support from them.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/2pm Tokyo Dome Concert Full 11 Watch the Legendary Performance of the K-pop Icons.md b/spaces/1gistliPinn/ChatGPT4/Examples/2pm Tokyo Dome Concert Full 11 Watch the Legendary Performance of the K-pop Icons.md
deleted file mode 100644
index 9da7fe204f898ecd4acf823ed8cd863de96428c1..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/2pm Tokyo Dome Concert Full 11 Watch the Legendary Performance of the K-pop Icons.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-No one can deny the popularity of TVXQ, as they are considered one of the longest-standing K-pop idol groups and have been constantly filling the dome with their fans. 2012 marked the year when they first held a concert at the Tokyo Dome.
-2pm Tokyo Dome Concert Full 11 download: https://imgfil.com/2uy05J
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Flash Builder 4.6 Premium Crack [TOP] Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Flash Builder 4.6 Premium Crack [TOP] Download.md
deleted file mode 100644
index db43106fcb926ed53e0820becf5a555104742b62..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Flash Builder 4.6 Premium Crack [TOP] Download.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
- Download Full Ultimate 2019 Latest Version 'Adobe Flash Builder Premium' Crack/License Key: This tool is specially designed to help developers quickly and easily create fully working mobile games and applications for Android, iOS and BlackBerry devices. This solution provides developers with full support for Adobe Flash Builder 4.7, including the creation of ActionScript code, increased automation and customization of mobile games and applications, mobile security, debugging and application testing capabilities, and access to the Flash Builder Marketplace.
- Adobe Flash Builder 4.6.1 Premium keygen codes include FFT and FMOD. The application includes a powerful set of tools that can be used to work with both your games and mobile applications.
-Adobe Flash Builder 4.6 Premium crack download: https://imgfil.com/2uy1Hv
- Adobe Flash Builder 4.6 Premium 3.0.1.667 Serial Key combines the Adobe Flex Builder environment with a robust code editor. The package places the Flex Builder environment within Eclipse and provides an unparalleled set of tools for creating advanced mobile applications. You can use the IDE to edit your code, debug, profile, manage the build process, and run your application on iOS or Android. This environment includes a mobile emulator that allows users to simulate the operation of the application on the target device. In addition, the application supports a local connection to a test mobile device.
- Adobe Flash Builder 4.6.1 License Key (Latest): Adobe Flash Builder 4.6.1 is a powerful environment that is used to develop highly compatible mobile applications and games. It also provides an intuitive interface in the form of a dashboard. When you develop with this application, all the features and functions can be used from a single place. As a result, the work can be done more easily and quickly.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Buddha.dll Hitman Sniper Challenge.rar VERIFIED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Buddha.dll Hitman Sniper Challenge.rar VERIFIED.md
deleted file mode 100644
index 2d257cd03ba7499a44798a48daf52515701c6cc8..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Buddha.dll Hitman Sniper Challenge.rar VERIFIED.md
+++ /dev/null
@@ -1,8 +0,0 @@
-buddha.dll Hitman Sniper Challenge.rar download: https://imgfil.com/2uxXUB
-
-
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Crack !EXCLUSIVE!.Nitro.Pro.8.0.4.6.x86x64.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Crack !EXCLUSIVE!.Nitro.Pro.8.0.4.6.x86x64.rar.md
deleted file mode 100644
index c6130aa7e64da16e1870cb3d24a8e071e66c9b2e..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Crack !EXCLUSIVE!.Nitro.Pro.8.0.4.6.x86x64.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Crack.Nitro.Pro.8.0.4.6.x86x64.rar download: https://imgfil.com/2uy0h4
-
-Crack.Nitro.Pro.8.0.4.6.x86x64.rar · Bhabhi Pedia movie in hindi torrent download · mathrubhumi malayalam calendar 1994 with stars
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Evanescence My Immortal Mp3 Torrent Download [BETTER].md b/spaces/1gistliPinn/ChatGPT4/Examples/Evanescence My Immortal Mp3 Torrent Download [BETTER].md
deleted file mode 100644
index 0bff50e8f7e895f4a9c2ae0343e95352dea93d55..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Evanescence My Immortal Mp3 Torrent Download [BETTER].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Evanescence "My Immortal" mp3 torrent download: https://imgfil.com/2uy0Ei
-
-evanescence my immortal free mp3 download - music mp3 mp4 download songs and music.
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/4 Images 1 Mot The Ultimate Word Game for French Speakers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/4 Images 1 Mot The Ultimate Word Game for French Speakers.md
deleted file mode 100644
index cad775226c75f6243c1d7c40310e38e7a68fc6f6..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/4 Images 1 Mot The Ultimate Word Game for French Speakers.md
+++ /dev/null
@@ -1,104 +0,0 @@
-
-4 Images 1 Mot: A Fun and Challenging Word Game
-Do you love word games? Do you enjoy solving puzzles and testing your brain? If you answered yes, then you should try 4 Images 1 Mot , a game that will keep you entertained and challenged for hours. In this game, you have to guess the word that connects four images that have something in common. Sounds easy, right? Well, not always. Some puzzles are easy, but some are very tricky and require a lot of thinking. Are you ready to take on the challenge?
-What is 4 Images 1 Mot?
-4 Images 1 Mot is a game developed by Lotum GmbH, a German company that specializes in creating mobile games. The game was first released in 2013 and has since become a worldwide phenomenon, with more than 250 million players in 9 languages. The game is also known as 4 Pics 1 Word in English, 4 Fotos 1 Palabra in Spanish, 4 Immagini 1 Parola in Italian, and so on.
-4 Images 1 Mot download: https://urlin.us/2uSS93
-How to play 4 Images 1 Mot
-The game is very simple to play. You will see four images on the screen and, below them, a set of letters. Your task is to use those letters to form the word that links the four images. For example, if you see a picture of a dog, a bone, a tooth, and a dentist, the word would be DENT (French for "tooth"). If you get stuck, you can use hints to reveal some letters or remove some letters that are not part of the word. You can also skip the puzzle and move on to the next one.
-Why you should download 4 Images 1 Mot
-If you are looking for a fun and challenging word game, then you should definitely download 4 Images 1 Mot. Here are some reasons why:
-
-The game is free to download and play. You can enjoy it without spending any money.
-The game is suitable for all ages and levels. You can play it with your family, friends, or by yourself.
-The game is offline-friendly. You can play it without an internet connection.
-The game is educational. You can learn new words and improve your vocabulary.
-The game is addictive. You will never get bored with the endless puzzles and levels.
-
-Features of 4 Images 1 Mot
-4 Images 1 Mot has many features that make it one of the best word games on the market. Here are some of them:
-French puzzles made to measure just for you
-If you are playing the French version of the game, you will be happy to know that the puzzles are specially designed for you. The developers have created puzzles that are relevant to your culture, language, and interests. You will see images of famous landmarks, celebrities, food, animals, and more.
-Endless fun with new puzzles
-The game has thousands of puzzles for you to solve, and new ones are added regularly. You will never run out of words to guess and images to discover. The game also has different modes and categories for you to choose from, such as daily puzzles, seasonal puzzles, themed puzzles, and more.
-Simple and addictive gameplay
- The game is very easy to play, but hard to master. You just need to look at the four images and find the word that connects them. The game will test your logic, creativity, and intuition. You will also have fun trying to guess the words and seeing the images. The game is so addictive that you will want to play it every day.
-One of the most popular brain games in the world
-4 Images 1 Mot is not only a game, but also a brain exercise. The game will challenge your mental skills and improve your cognitive abilities. You will have to think fast, use your memory, and expand your vocabulary. The game will also stimulate your curiosity and imagination. You will learn new things and discover new perspectives. 4 Images 1 Mot is one of the most popular brain games in the world, with millions of fans and positive reviews.
-How to download 4 Images 1 Mot
-If you are interested in playing 4 Images 1 Mot, you can download it easily from different sources. Here are some of them:
-Download from Google Play Store
-The easiest way to download 4 Images 1 Mot is from the Google Play Store. You just need to open the Play Store app on your Android device and search for 4 Images 1 Mot. You will see the game icon with a blue background and four white squares. Tap on it and then tap on Install. The game will be downloaded and installed on your device in a few minutes. You can then open it and start playing.
-Download from APKCombo
-If you want to download 4 Images 1 Mot from a third-party source, you can use APKCombo. This is a website that offers free APK files for Android apps and games. You can visit the website at https://apkcombo.com/ and search for 4 Images 1 Mot . You will see the game icon with a blue background and four white squares. Tap on it and then tap on Download APK . You will be asked to choose a version and a server. Choose the latest version and a fast server. The APK file will be downloaded to your device. You can then open it and install it manually.
-Download from Poki.com
-If you want to play 4 Images 1 Mot on your computer, you can use Poki.com. This is a website that offers free online games for different platforms. You can visit the website at https://poki.com/ and search for 4 Pics 1 Word . You will see the game icon with a blue background and four white squares. Click on it and then click on Play . The game will load on your browser and you can start playing.
-Tips and tricks for 4 Images 1 Mot
-If you want to improve your performance and enjoy the game more, here are some tips and tricks for you:
-Use hints wisely
-The game offers you two types of hints: reveal a letter or remove letters. You can use them when you are stuck or unsure of the word. However, you should use them wisely, as they cost coins that you earn by solving puzzles or watching ads. You should save your coins for harder puzzles or when you really need them.
-Ask your friends for help
-The game also allows you to ask your friends for help when you are stuck or unsure of the word. You can do this by tapping on the share button at the bottom of the screen. You can then choose to send the puzzle to your friends via Facebook, WhatsApp, Messenger, or other apps. Your friends can then reply with their guesses or hints.
-Learn new words and improve your vocabulary
-The game is not only fun, but also educational. You can learn new words and improve your vocabulary by playing it regularly. You can also use a dictionary or an online translator to look up the meaning of unfamiliar words or check their spelling. You can also try to guess the word before looking at the letters or using hints.
-Conclusion
-4 Images 1 Mot is a fun and challenging word game that will keep you entertained and challenged for hours. You have to guess the word that connects four images that have something in common. The game has thousands of puzzles for you to solve, with different modes and categories. The game is also simple, addictive, educational, and popular among millions of players around the world. You can download the game from different sources, such as Google Play Store, APKCombo, or Poki.com. You can also use hints, ask your friends, or learn new words to help you with the game. 4 Images 1 Mot is a game that you should not miss if you love word games. Download it now and have fun!
-FAQs
-Here are some frequently asked questions about 4 Images 1 Mot:
-
-What is the difference between 4 Images 1 Mot and 4 Pics 1 Word?
-4 Images 1 Mot and 4 Pics 1 Word are the same game, but with different names depending on the language. 4 Images 1 Mot is the French version, while 4 Pics 1 Word is the English version. The game has other versions in other languages, such as Spanish, Italian, German, and more.
-How many levels are there in 4 Images 1 Mot?
-There are thousands of levels in 4 Images 1 Mot, and new ones are added regularly. The game also has different modes and categories for you to choose from, such as daily puzzles, seasonal puzzles, themed puzzles, and more.
-How can I get more coins in 4 Images 1 Mot?
-You can get more coins in 4 Images 1 Mot by solving puzzles, watching ads, or buying them with real money. You can use coins to buy hints or skip puzzles.
-How can I contact the developers of 4 Images 1 Mot?
-You can contact the developers of 4 Images 1 Mot by sending an email to games@lotum.de . You can also visit their website at https://www.lotum.de/ or follow them on Facebook at https://www.facebook.com/4pics1word/ .
-Is 4 Images 1 Mot safe to download and play?
-Yes, 4 Images 1 Mot is safe to download and play. The game does not contain any viruses, malware, or inappropriate content. However, you should always download the game from official sources, such as Google Play Store, APKCombo, or Poki.com. You should also avoid downloading any modded or hacked versions of the game, as they may harm your device or compromise your privacy.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Caa Palavras Brasileiro Um jogo de palavras divertido e educativo.md b/spaces/1phancelerku/anime-remove-background/Caa Palavras Brasileiro Um jogo de palavras divertido e educativo.md
deleted file mode 100644
index 14dc1595cf35ed7f623888a46da65c0a1e7b2398..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Caa Palavras Brasileiro Um jogo de palavras divertido e educativo.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-Caça Palavras Brasileiro APK: A Fun and Educational Game for All Ages
-If you are looking for a fun and educational game that can keep you entertained for hours, you should try Caça Palavras Brasileiro APK. This is a word search game that will challenge your brain, improve your vocabulary, and teach you Portuguese. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, how to play it, and what are the benefits of playing it.
- What is Caça Palavras Brasileiro APK?
-Caça Palavras Brasileiro APK is a word search game developed by Berni Mobile. It is available for Android devices and can be downloaded for free from various sources. The game has five levels of difficulty: Easy, Normal, Hard, Big, and Huge. It also has different categories of words, such as animals, fruits, colors, countries, sports, etc. The game is in Portuguese, so it is ideal for native speakers or learners of this language.
-Caça Palavras Brasileiro APK download: https://jinyurl.com/2uNPcd
- A word search game with different levels of difficulty
-A word search game is a type of puzzle game where you have to find hidden words in a grid of letters. The words can be horizontal, vertical, diagonal, or backwards. The game has different levels of difficulty depending on the size of the grid and the number of words to find. The easy level has a 9x9 grid with 10 words, while the huge level has a 20x20 grid with 40 words. You can choose the level that suits your preference and skill.
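To make the grid-scanning idea above concrete, here is a minimal, illustrative sketch in Python of how a hidden word can be located by checking every starting cell in all eight directions (horizontal, vertical, diagonal, and backwards). The grid, the sample words, and the `find_word` function are invented for this example; they are not taken from the app's actual code.

```python
# Illustrative word-search scan: look for a word starting at every cell,
# trying all eight directions. Grid and words are made-up examples.

GRID = [
    "GATOX",
    "GRUAP",
    "CEUQO",
    "ASOLN",
    "OVELT",
]

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),           (0, 1),
              (1, -1),  (1, 0),  (1, 1)]

def find_word(grid, word):
    """Return ((row, col), (drow, dcol)) for the first match of `word`, or None."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in DIRECTIONS:
                rr, cc = r, c
                for letter in word:
                    if not (0 <= rr < rows and 0 <= cc < cols) or grid[rr][cc] != letter:
                        break
                    rr, cc = rr + dr, cc + dc
                else:
                    # Every letter matched: the word starts at (r, c)
                    # and runs in direction (dr, dc).
                    return (r, c), (dr, dc)
    return None

print(find_word(GRID, "GATO"))  # ((0, 0), (0, 1)): read left to right in the top row
print(find_word(GRID, "SOL"))   # ((3, 1), (0, 1)): hidden inside the fourth row
```

A real app would simply repeat this search for every word in the chosen category and mark the matched cells in the grid.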
- A game that helps you learn Portuguese vocabulary and spelling
-One of the main benefits of playing Caça Palavras Brasileiro APK is that it helps you learn Portuguese vocabulary and spelling. The game has hundreds of words from different categories, so you can expand your knowledge of this language. You can also learn how to spell the words correctly by looking at the grid and checking if they match the letters. The game will also show you the meaning of each word when you select it, so you can understand what it means.
- A game that challenges your brain and improves your concentration
-Another benefit of playing Caça Palavras Brasileiro APK is that it challenges your brain and improves your concentration. The game requires you to scan the grid carefully and find the hidden words as fast as possible. You have to pay attention to every letter and avoid getting distracted by other words or letters. The game also tests your memory, as you have to remember where you saw a word or letter before. The game will keep your brain active and alert.
- How to download and install Caça Palavras Brasileiro APK?
-If you want to play Caça Palavras Brasileiro APK on your Android device, you have to download and install the APK file first. An APK file is an application package file that contains all the files needed to run an app on Android. Here are the steps to download and install Caça Pal avras Brasileiro APK:
- Download the APK file from a trusted source
-The first step is to download the APK file from a trusted source. You can find the official link to the game on the developer's website. You can also search for other sources that offer the APK file, but make sure they are safe and reliable. You can use a browser or a downloader app to download the file. The file size is about 13 MB, so it should not take long to download.
- Enable unknown sources in your device settings
-The second step is to enable unknown sources in your device settings. This is necessary because Android does not allow installing apps from sources other than the Google Play Store by default. To enable unknown sources, go to your device settings and look for the security or privacy option. Then, find the option that says "allow installation of apps from unknown sources" or something similar. Toggle it on and confirm your choice.
- Install the APK file and enjoy the game
-The third step is to install the APK file and enjoy the game. To install the file, locate it in your device storage and tap on it. You may see a warning message that says "this type of file can harm your device". Ignore it and tap on "install anyway". The installation process should take a few seconds. Once it is done, you can open the game and start playing.
- How to play Caça Palavras Brasileiro APK?
-Playing Caça Palavras Brasileiro APK is very easy and fun. Here are the basic steps to play the game:
- Choose a level of difficulty and a category
-When you open the game, you will see a menu with five options: Easy, Normal, Hard, Big, and Huge. These are the levels of difficulty that determine the size of the grid and the number of words to find. Tap on the level you want to play. Then, you will see another menu with different categories of words, such as animals, fruits, colors, countries, sports, etc. Tap on the category you want to play.
- Find the hidden words in the grid of letters
-After choosing a level and a category, you will see a grid of letters with some words hidden in it. The words can be horizontal, vertical, diagonal, or backwards. You have to find all the words in the grid as fast as possible. You can see the list of words on the bottom of the screen.
- Tap and drag your finger over the letters to select a word
-To select a word, tap and drag your finger over the letters that form the word. The word will be highlighted in green if it is correct or red if it is wrong. The word will also disappear from the list if it is correct. You can also tap on a word in the list to see its meaning on the top of the screen.
- Check your progress and score on the top of the screen
-You can check your progress and score on the top of the screen. The progress bar shows how many words you have found out of the total number of words in the grid. The score shows how many points you have earned based on the time and difficulty level. You can also pause or restart the game by tapping on the buttons on the top right corner of the screen.
- What are the benefits of playing Caça Palavras Brasileiro APK?
-Playing Caça Palavras Brasileiro APK is not only fun, but also beneficial for your brain and your language skills. Here are some of the benefits of playing this game:
- It improves your vocabulary and spelling skills
-By playing this game, you can learn new words and their meanings in Portuguese. You can also improve your spelling skills by checking if the words you select match the letters in the grid. The game will help you expand your vocabulary and enhance your spelling accuracy.
- It stimulates your brain and memory
-By playing this game, you can stimulate your brain and memory. You have to use your logic, observation, and analysis skills to find the hidden words in the grid. You also have to use your memory to remember where you saw a word or letter before. The game will help you sharpen your cognitive abilities and boost your memory power.
- It relaxes your mind and reduces stress
-By playing this game, you can relax your mind and reduce stress. The game has a soothing background music and a colorful design that create a pleasant atmosphere. The game also has no time limit or pressure, so you can play at your own pace and enjoy the process. The game will help you calm your nerves and relieve your tension.
- It entertains you and keeps you engaged
-By playing this game, you can entertain yourself and keep yourself engaged. The game has different levels of difficulty and categories of words that offer variety and challenge. The game also has a scoring system that motivates you to improve your performance and beat your own records. The game will help you have fun and stay focused.
- Conclusion
-Caça Palavras Brasileiro APK is a word search game that is fun and educational for all ages. It is a game that helps you learn Portuguese vocabulary and spelling, challenges your brain and memory, relaxes your mind and reduces stress, and entertains you and keeps you engaged. You can download and install the game for free on your Android device and start playing right away. If you are looking for a game that combines learning and entertainment, you should try Caça Palavras Brasileiro APK.
- FAQs
-Here are some frequently asked questions about Caça Palavras Brasileiro APK:
- Q: Is Caça Palavras Brasileiro APK safe to download and install?
-A: Yes, Caça Palavras Brasileiro APK is safe to download and install as long as you get it from a trusted source. You should always check the reputation and reviews of the source before downloading any APK file. You should also scan the file with an antivirus app before installing it.
- Q: Is Caça Palavras Brasileiro APK compatible with all Android devices?
-A: Caça Palavras Brasileiro APK is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may have different specifications or settings that may affect the performance or functionality of the game. You should always check the compatibility of the game with your device before installing it.
- Q: Can I play Caça Palavras Brasileiro APK offline?
-A: Yes, you can play Caça Palavras Brasileiro APK offline without an internet connection. The game does not require any online features or services to run. However, if you want to update the game or access some optional features, such as sharing your score or rating the game, you will need an internet connection.
- Q: Can I play Caça Palavras Brasileiro APK in other languages?
-A: No, Caça Palavras Brasileiro APK is only available in Portuguese. The game is designed for native speakers or learners of this language. If you want to play a word search game in another language, you will have to look for another app that offers that option.
- Q: How can I contact the developer of Caça Palavras Brasileiro APK?
-A: If you have any questions, suggestions, or feedback about Caça Palavras Brasileiro APK, you can contact the developer by sending an email to bernimobile@gmail.com. You can also visit their website at https://www.bernimobile.com/ or follow them on Facebook at https://www.facebook.com/bernimobile/.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Clash Royale Elixir Infinito APK Cmo conseguir recursos ilimitados en el juego de estrategia ms popular.md b/spaces/1phancelerku/anime-remove-background/Clash Royale Elixir Infinito APK Cmo conseguir recursos ilimitados en el juego de estrategia ms popular.md
deleted file mode 100644
index 9169294ba514b150faabe51f3edb92c8a82e416b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Clash Royale Elixir Infinito APK Cmo conseguir recursos ilimitados en el juego de estrategia ms popular.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-Clash Royale Elixir Infinito Apk: How to Get Unlimited Elixir, Gems and Gold in Clash Royale
-Clash Royale is one of the most popular and addictive mobile games in the world. It is a real-time strategy game where you collect cards, build decks, and battle other players online. However, to progress faster and unlock more cards, you need to spend elixir, gems, and gold, which are the in-game currencies. These resources are limited and hard to earn, unless you are willing to spend real money on them.
-Clash Royale Elixir Infinito Apk download: https://jinyurl.com/2uNS74
-But what if there was a way to get unlimited elixir, gems, and gold in Clash Royale without spending a dime? Well, that's what Clash Royale Elixir Infinito Apk is all about. In this article, we will tell you everything you need to know about this modded version of Clash Royale, including its features, how to download and install it, its pros and cons, and some tips and tricks for playing with it. Let's get started!
- What is Clash Royale Elixir Infinito Apk?
-Clash Royale Elixir Infinito Apk is a modified version of the original Clash Royale game that gives you unlimited elixir, gems, and gold. This means that you can play the game without any restrictions or limitations. You can upgrade your cards, unlock new ones, open chests, buy items, and more without worrying about running out of resources.
-Clash Royale Elixir Infinito Apk is not an official app from Supercell, the developer of Clash Royale. It is a third-party app that has been created by some fans of the game who wanted to enhance their gaming experience. Therefore, it is not available on the Google Play Store or the App Store. You have to download it from an external source, which we will explain later.
- Features of Clash Royale Elixir Infinito Apk
-Clash Royale Elixir Infinito Apk has many features that make it different from the original game. Here are some of them:
- Unlimited Elixir
-Elixir is the main resource that you use to play cards in Clash Royale. It is generated automatically during battles at a constant rate. However, with Clash Royale Elixir Infinito Apk, you have unlimited elixir at your disposal. This means that you can play any card at any time without waiting for your elixir bar to fill up. You can also spam your opponent with multiple cards and overwhelm them easily.
- Unlimited Gems
-Gems are the premium currency in Clash Royale. They are used to buy chests, cards, gold, and other items in the game. They are also used to speed up the unlocking process of chests and skip waiting times. However, gems are very rare and hard to obtain in the game. You can only get them by completing achievements, winning battles, or buying them with real money.
-But with Clash Royale Elixir Infinito Apk, you have unlimited gems at your disposal. This means that you can buy anything you want in the game without spending any money. You can also open as many chests as you want and get all the cards you need. You can also speed up your progress and reach higher levels faster.
- Unlimited Gold
-Gold is the basic currency in Clash Royale. It is used to upgrade your cards, buy cards from the shop, and create or join clans. You can earn gold by winning battles, opening chests, donating cards, or buying it with gems.
-But with Clash Royale Elixir Infinito Apk, you have unlimited gold at your disposal. This means that you can upgrade your cards to the maximum level without any cost. You can also buy any card you want from the shop and create or join any clan you like.
- How to Download and Install Clash Royale Elixir Infinito Apk
-As we mentioned earlier, Clash Royale Elixir Infinito Apk is not an official app from Supercell. It is a third-party app that has been modified by some fans of the game. Therefore, you cannot download it from the Google Play Store or the App Store. You have to download it from an external source, such as a website or a file-sharing platform.
-However, before you download and install Clash Royale Elixir Infinito Apk, you need to make sure that your device meets the following requirements:
-
-Your device must have Android 4.1 or higher.
-Your device must have at least 100 MB of free storage space.
-Your device must have a stable internet connection.
-Your device must allow installation of apps from unknown sources.
-
-If your device meets these requirements, then you can follow these steps to download and install Clash Royale Elixir Infinito Apk:
- Step 1: Enable Unknown Sources
-The first step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store or the App Store. To do this, go to your device settings and look for the security option. Then, find the unknown sources option and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device. Ignore this message and tap OK.
- Step 2: Download the Apk File
-The next step is to download the apk file of Clash Royale Elixir Infinito Apk. You can find many websites and platforms that offer this file for free. However, be careful and choose a reliable and trustworthy source. Some sources may contain viruses or malware that can harm your device or steal your personal information.
-One of the best sources to download Clash Royale Elixir Infinito Apk is [this website]. It is safe, secure, and fast. All you have to do is click on the download button and wait for the file to be downloaded on your device.
- Step 3: Install the Apk File
-The third step is to install the apk file of Clash Royale Elixir Infinito Apk. To do this, go to your file manager and locate the downloaded file. Then, tap on it and follow the instructions on the screen. You may see a pop-up message that says this app may harm your device or request certain permissions. Ignore this message and tap install.
- Step 4: Launch the Game and Enjoy
-The final step is to launch the game and enjoy it. To do this, go to your app drawer and look for the Clash Royale icon. Then, tap on it and wait for the game to load. You may see a loading screen that says "Clash Royale Elixir Infinito". This means that you have successfully installed the modded version of Clash Royale.
-Now, you can play the game with unlimited elixir, gems, and gold. You can also access all the features and modes of the game without any restrictions or limitations. Have fun!
- Pros and Cons of Clash Royale Elixir Infinito Apk
-Clash Royale Elixir Infinito Apk has many advantages and disadvantages that you should be aware of before using it. Here are some of them:
- Pros
-
-You can play Clash Royale with unlimited elixir, gems, and gold.
-You can upgrade your cards, unlock new ones, open chests, buy items, and more without any cost.
-You can speed up your progress and reach higher levels faster.
-You can experiment with different decks and strategies without any risk.
-You can enjoy all the features and modes of the game without any restrictions or limitations.
-
- Cons
-You may face some technical issues or bugs while playing the game.
-You may not be able to play online with other players who are using the original version of the game.
-You may get banned or suspended by Supercell for using a modded version of the game.
-You may lose your progress or data if you uninstall the app or switch to the original version of the game.
-You may miss out on the updates and new features that Supercell releases for the original version of the game.
-
- Tips and Tricks for Playing Clash Royale with Elixir Infinito Apk
- Clash Royale Elixir Infinito Apk can make your gaming experience more fun and exciting, but it can also make it more challenging and competitive. Here are some tips and tricks that can help you play better and win more battles with this modded version of Clash Royale:
- Use Your Elixir Wisely
-Even though you have unlimited elixir, you still need to use it wisely. Don't just spam your cards randomly and hope for the best. You need to have a strategy and a plan for each battle. You need to know when to attack, when to defend, and when to save your elixir. You also need to know which cards work well together and which ones counter your opponent's cards. You need to balance your elixir spending and income, and avoid wasting elixir on unnecessary or ineffective moves.
- Build a Balanced Deck
-Having unlimited elixir, gems, and gold means that you can build any deck you want in Clash Royale. However, that doesn't mean that you should build a random or unbalanced deck. You still need to have a balanced deck that can deal with different situations and threats. You need to have a mix of cards that can attack, defend, support, and counter. You also need to have cards that can target different types of units, such as air, ground, swarm, tank, etc. You need to have cards that can synergize with each other and create powerful combos. You also need to have cards that suit your playstyle and preferences.
- Learn from Your Opponents
-Playing with Clash Royale Elixir Infinito Apk can give you an edge over your opponents, but it can also make them more challenging and unpredictable. You may face opponents who are also using the modded version of the game, or who are using the original version but have more skills and experience than you. Therefore, you need to learn from your opponents and adapt to their strategies and tactics. You need to observe their moves and patterns, and find their weaknesses and strengths. You also need to analyze your own mistakes and improve your performance.
- Join a Clan and Share Cards
-Clash Royale is not only a solo game, but also a social game. You can join a clan and interact with other players who share your passion for the game. You can chat with them, share tips and tricks, request and donate cards, participate in clan wars, and more. Joining a clan can help you improve your skills, expand your card collection, earn more rewards, and have more fun.
- Conclusion and FAQs
-Clash Royale Elixir Infinito Apk is a modded version of Clash Royale that gives you unlimited elixir, gems, and gold. It is a great way to enjoy the game without any restrictions or limitations. However, it also has some drawbacks and risks that you should be aware of before using it. In this article, we have explained everything you need to know about this app, including its features, how to download and install it, its pros and cons, and some tips and tricks for playing with it.
-We hope that this article has been helpful and informative for you. If you have any questions or doubts about Clash Royale Elixir Infinito Apk, here are some FAQs that may answer them:
-
-Question Answer
-Is Clash Royale Elixir Infinito Apk safe to use? Clash Royale Elixir Infinito Apk is not an official app from Supercell. It is a third-party app that has been modified by some fans of the game. Therefore, it is not guaranteed to be safe or secure. It may contain viruses or malware that can harm your device or steal your personal information. It may also cause some technical issues or bugs while playing the game. Therefore, use it at your own risk.
-Is Clash Royale Elixir Infinito Apk legal to use? Clash Royale Elixir Infinito Apk is not legal to use. It violates the terms and conditions of Supercell and Clash Royale. It also infringes the intellectual property rights of Supercell and Clash Royale. It may also be considered cheating or hacking by other players and authorities. Therefore, using it may result in legal action or penalties from Supercell, such as banning or suspending your account.
-Can I play online with Clash Royale Elixir Infinito Apk? Clash Royale Elixir Infinito Apk may or may not allow you to play online with other players. It depends on the version of the app and the server of the game. Some versions of the app may connect you to the original server of Clash Royale, where you can play with other players who are using the original version of the game. However, this may also expose you to detection and banning by Supercell. Other versions of the app may connect you to a private server of Clash Royale, where you can play with other players who are using the modded version of the game. However, this may also limit your options and features in the game.
-Can I switch back to the original version of Clash Royale after using Clash Royale Elixir Infinito Apk? You can switch back to the original version of Clash Royale after using Clash Royale Elixir Infinito Apk, but you may lose your progress or data in the process. To switch back, you have to uninstall the modded version of the game and install the original version from the Google Play Store or the App Store. However, this may erase your account and data in the modded version of the game. You may also not be able to restore your account and data in the original version of the game if you have not linked it to a Supercell ID or a Google Play Games account.
-Is there any alternative to Clash Royale Elixir Infinito Apk? If you are looking for an alternative to Clash Royale Elixir Infinito Apk, you may try some other modded versions of Clash Royale that offer similar features and benefits. However, be careful and choose a reliable and trustworthy source for downloading them. Some of them are:
-
-
-Clash Royale Mod Apk: This is another modded version of Clash Royale that gives you unlimited resources and access to all features and modes of the game. You can download it from [this website].
-Clash Royale Hack Apk: This is a hacked version of Clash Royale that gives you unlimited resources and allows you to customize your game settings and preferences. You can download it from [this website].
-Clash Royale Private Server Apk: This is a private server version of Clash Royale that connects you to a different server where you can play with other players who are using the same version of the game. You can download it from [this website].
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dmod APK Download Everything You Need to Know About the New Action Game.md b/spaces/1phancelerku/anime-remove-background/Dmod APK Download Everything You Need to Know About the New Action Game.md
deleted file mode 100644
index 1488c1e0408b31c2e37fc4a6eb23b248062a29e1..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dmod APK Download Everything You Need to Know About the New Action Game.md
+++ /dev/null
@@ -1,152 +0,0 @@
-
-What is dmod apk and why you should download it
-If you are a fan of Android games, you may have heard of dmod apk . But what is it exactly and what can it do for you? In this article, we will tell you everything you need to know about this amazing app and why you should download it right away.
-Dmod apk is an app that allows you to download and install modified versions of your favorite Android games. These modified versions, also known as mods, can give you access to unlimited resources, premium features, custom skins, and more. With dmod apk, you can enjoy your games like never before and have more fun and excitement.
-dmod apk download: https://jinyurl.com/2uNTfg
-Here are some of the benefits of using dmod apk over other apps:
-
-It is free and easy to use. You don't need to pay anything or register an account to use dmod apk. You just need to download it from a reliable source and install it on your device.
-It is safe and secure. Dmod apk does not contain any viruses or malware that can harm your device or data. It also does not require root access or any special permissions to work.
-It is compatible and flexible. Dmod apk works with most Android devices and games. You can choose from a wide range of mods for different games and customize them according to your preferences.
-
-Here are some of the features of dmod apk that make it unique and useful:
-
-It has a user-friendly interface. Dmod apk has a simple and intuitive design that makes it easy to navigate and use. You can easily find the mods you want by browsing through categories or searching by keywords.
-It has a large collection of mods. Dmod apk has thousands of mods for hundreds of games, including popular ones like PUBG, Free Fire, Clash of Clans, Subway Surfers, Candy Crush, and more. You can always find something new and interesting to try.
-It has a fast download speed. Dmod apk downloads the mods directly from its servers, which are fast and stable. You don't have to wait for long or face any interruptions while downloading.
-
- How to download and install dmod apk on your Android device
-Now that you know what dmod apk is and what it can do for you, you may be wondering how to get it on your device. Don't worry, we will show you how in this section. Just follow these simple steps:
-Before you start, make sure that your device meets these minimum requirements:
-
-It has Android 4.4 or higher version installed.
-It has at least 50 MB of free storage space available.
-It has enabled the option to install apps from unknown sources. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
-
-Once you have checked these requirements, you can proceed with the following steps:
-
-Download the dmod apk file from a trusted source. You can use this link to download it: dmod apk download .
-Locate the downloaded file on your device and tap on it to open it.
-Follow the instructions on the screen and click on Install to start the installation process.
-Wait for a few seconds until the installation is completed and click on Open to launch the app.
-Congratulations! You have successfully installed dmod apk on your device. You can now enjoy using it to play your favorite games.
-
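The steps above cover installing the APK directly on the phone. As an illustrative alternative for readers who prefer working from a computer, the short sketch below wraps the standard `adb install` command with Python's `subprocess` module; it assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled on the device, and the APK file name is only a placeholder for whatever file you actually downloaded.

```python
# Hedged sideload sketch (not an official dmod apk tool).
import subprocess

APK_PATH = "dmod.apk"  # placeholder: path to the APK you downloaded

def sideload(apk_path: str) -> None:
    # "adb install -r" installs the APK on the connected device,
    # replacing an existing installation if one is already present.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload(APK_PATH)
```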
-Here are some tips on how to use dmod apk safely and effectively:
-
-Always download the mods from the official dmod apk app or website. Do not download them from other sources as they may contain viruses or malware.
-Always backup your original game data before installing any mods. This way, you can restore it if anything goes wrong or if you want to switch back to the original version.
-Always check the compatibility and reviews of the mods before installing them. Some mods may not work well with your device or game version, or they may have bugs or glitches. Read the comments and ratings of other users to see if they have any issues or complaints.
-
- How to use dmod apk to play your favorite games
-Now that you have installed dmod apk on your device, you may be wondering how to use it to play your favorite games. Don't worry, we will show you how in this section. Just follow these simple steps:
-First, you need to find the mods for the games you want to play. You can do this by using the dmod apk app or website. Here are some of the popular games that you can play with dmod apk:
-
-| Game | Description |
-| --- | --- |
-| PUBG | A battle royale game where you fight against 99 other players in a shrinking map. You can use mods to get unlimited health, ammo, weapons, skins, and more. |
-| Free Fire | A similar game to PUBG but with more features and modes. You can use mods to get unlimited diamonds, coins, characters, pets, and more. |
-| Clash of Clans | A strategy game where you build your own village and army and fight against other players. You can use mods to get unlimited gems, gold, elixir, dark elixir, troops, and more. |
-| Subway Surfers | A running game where you dodge obstacles and collect coins and power-ups. You can use mods to get unlimited keys, coins, hoverboards, characters, and more. |
-| Candy Crush | A puzzle game where you match candies and clear levels. You can use mods to get unlimited lives, moves, boosters, and more. |
-
- To download and install the mods for these games, you can follow these steps:
-
-Open the dmod apk app or website and search for the game you want to play.
-Select the mod you want to download and click on Download. You will be redirected to a page where you need to complete a verification process. This is to ensure that you are not a robot and to protect the mod from being abused.
-Complete the verification process by following the instructions on the screen. This may involve completing a survey, watching a video, or installing an app.
-Once you have completed the verification process, you will be able to download the mod file. Locate the downloaded file on your device and tap on it to open it.
-Follow the instructions on the screen and click on Install to start the installation process.
-Wait for a few seconds until the installation is completed and click on Open to launch the game with the mod enabled.
-Congratulations! You have successfully installed the mod for your game. You can now enjoy playing it with enhanced features and items.
-
- To use the mods in your games, you can follow these steps:
-
-Open the game with the mod enabled and start playing as usual.
-To access the mod menu, you need to tap on a special button or icon on the screen. This may vary depending on the game and the mod. For example, in PUBG, you may need to tap on the floating icon on the top left corner of the screen.
-Once you have opened the mod menu, you will see a list of options and features that you can enable or disable. You can also adjust the settings and values of some features according to your preferences.
-Select the features that you want to use and close the mod menu. You will see the effects of the features in your game. For example, in Free Fire, you may see your health bar increase or your enemies become visible through walls.
-Enjoy playing your game with the mod enabled and have more fun and excitement.
-
- How to troubleshoot common issues with dmod apk
-While dmod apk is a great app that can enhance your gaming experience, it may also cause some issues or problems sometimes. Don't worry, we will show you how to troubleshoot some of the common issues with dmod apk in this section. Here are some of the common issues and how to fix them:
-
-Error: App not installed . This error may occur when you try to install dmod apk or a mod file on your device. This may be due to several reasons, such as insufficient storage space, corrupted file, incompatible device, or security settings. To fix this error, you can try these solutions:
- Clear some storage space on your device by deleting unwanted files or apps.
- Download the file again from a reliable source and make sure it is not corrupted or damaged.
- Check if your device meets the minimum requirements for installing dmod apk or the mod file.
- Enable the option to install apps from unknown sources by going to Settings > Security > Unknown Sources and toggling it on.
-
-
-Error: Game not working or crashing . This error may occur when you try to play a game with a mod enabled. This may be due to several reasons, such as outdated mod, incompatible game version, conflicting features, or bugs or glitches. To fix this error, you can try these solutions:
- Update the mod to the latest version by using the dmod apk app or website.
- Update the game to the latest version by using the Google Play Store or other sources.
- Disable or adjust some features that may cause conflicts or errors in your game.
- Report any bugs or glitches to the developers of dmod apk or the mod file.
-
-
-Error: Mod not working or showing . This error may occur when you try to use a mod in your game but it does not work or show up. This may be due to several reasons, such as incorrect installation, wrong file format, missing files, or disabled features. To fix this error, you can try these solutions:
- Reinstall the mod by following the steps in section 3 of this article.
- Make sure that the mod file is in APK format and not ZIP or RAR format.
- Make sure that all the files and folders of the mod are present and intact on your device.
- Enable the features that you want to use by opening the mod menu and selecting them.
-
-
-
- Conclusion
-In conclusion, dmod apk is an app that allows you to download and install modified versions of your favorite Android games. These mods can give you access to unlimited resources, premium features, custom skins, and more. With dmod apk, you can enjoy your games like never before and have more fun and excitement.
-If you want to try dmod apk for yourself, you can download it from this link: dmod apk download . You can also visit their website for more information and updates: dmod apk website .
-We hope that this article has helped you understand what dmod apk is and how to use it. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.
- FAQs
- Here are some of the frequently asked questions about dmod apk:
-
-Is dmod apk legal?
-Dmod apk is legal as long as you use it for personal and educational purposes only. However, using it for commercial or malicious purposes may violate the terms and conditions of the original game developers and publishers. Therefore, we advise you to use dmod apk at your own risk and discretion.
- Is dmod apk safe?
-Dmod apk is safe as long as you download it from a trusted source and install it on your device. It does not contain any viruses or malware that can harm your device or data. It also does not require root access or any special permissions to work. However, some mods may contain harmful or inappropriate content that may affect your game or device. Therefore, we advise you to check the compatibility and reviews of the mods before installing them.
- Can dmod apk ban my game account?
-Dmod apk does not ban your game account directly, but it may increase the risk of getting banned by the game servers or authorities. This is because some mods may give you an unfair advantage over other players or violate the game rules and policies. Therefore, we advise you to use dmod apk with caution and moderation, and avoid using it in online or multiplayer modes.
- How can I update dmod apk?
-You can update dmod apk by using the dmod apk app or website. You can check for updates by opening the app or visiting the website and looking for the latest version. You can also enable the auto-update option in the app settings to get notified and download the updates automatically.
- How can I contact dmod apk?
-You can contact dmod apk by using their email address or social media accounts. You can find their contact information on their website or app. You can also leave a comment or feedback on their website or app. They are always happy to hear from their users and provide support and assistance.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download SuperStar JYPNATION and Collect Over 700 Cards of Your Favorite Artists.md b/spaces/1phancelerku/anime-remove-background/Download SuperStar JYPNATION and Collect Over 700 Cards of Your Favorite Artists.md
deleted file mode 100644
index 81cb9bd9329dd70f80fc2e89c937f8a3545614ed..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download SuperStar JYPNATION and Collect Over 700 Cards of Your Favorite Artists.md
+++ /dev/null
@@ -1,210 +0,0 @@
-
-How to Download Superstar Jypnation
-Do you love K-pop music and rhythm games? If so, you should definitely try out superstar jypnation , a fun and exciting music game with your favorite artists from JYP Entertainment . In this article, we will show you how to download superstar jypnation on your device, whether it's Android, iOS, or PC. We will also tell you about the features of the game, such as the artists, songs, cards, rankings, and more. Plus, we will give you some tips and tricks to help you play better and score higher. So, let's get started!
- Features of Superstar Jypnation
-Superstar jypnation is a rhythm game that lets you play along with the songs from JYP Entertainment's artists. The game has many features that make it fun and addictive, such as:
-how to download superstar jypnation Download File ---> https://jinyurl.com/2uNP6Q
-
-24 group artists from JYP Entertainment : You can choose from a variety of artists, such as J.Y. Park, Wonder Girls, Sunmi, 2AM, 2PM, miss A, JJ Project, Baek A Yeon, 15&, GOT7, DAY6, TWICE, Stray Kids, YUBIN, ITZY, and NiziU.
-270+ songs from JYP Entertainment's artists : You can play songs from different genres and eras, from the debut songs to the latest hits. You can also unlock more songs as you progress in the game.
-660+ cards to collect : You can collect cards that feature the images of your favorite artists. Each card has different attributes and abilities that affect your score. You can also equip and upgrade your cards to make them more powerful.
-Weekly Ranking, Best Record for each song, and many more competitions inside : You can compete with other players around the world and see how you rank in different categories. You can also check your best record for each song and try to beat it.
-
- Participating Artists
-Superstar jypnation has 24 group artists from JYP Entertainment that you can choose from. Each artist has their own songs and cards that you can play with. Here is the list of the participating artists:
-
-
-| Artist | Debut Year | Genre |
-| --- | --- | --- |
-| J.Y. Park | 1994 | K-pop, R&B |
-| Wonder Girls | 2007 | K-pop, Retro-pop |
-| Sunmi | 2007 | K-pop, Dance-pop |
-| 2AM | 2008 | K-pop, Ballad |
-| 2PM | 2008 | K-pop, Dance-pop |
-| miss A | 2010 | K-pop, Dance-pop |
-| JJ Project | 2012 | K-pop, Hip-hop |
-| Baek A Yeon | 2012 | K-pop, Ballad |
-| 15& | 2012 | K-pop, R&B |
-| GOT7 | 2014 | K-pop, Hip-hop |
-| DAY6 | 2015 | K-rock, Pop-rock |
-| TWICE | 2015 | K-pop, Bubblegum pop |
-| Stray Kids | 2018 | K-pop, Hip-hop |
-| YUBIN | 2018 | K-pop, Retro-pop |
-| ITZY | 2019 | K-pop, Teen pop |
-| NiziU | 2020 | J-pop, K-pop |
-
-
- Songs and Levels
-Superstar jypnation has over 270 songs from JYP Entertainment's artists that you can play in the game. Each song has three levels of difficulty: Easy, Normal, and Hard. The higher the difficulty, the more notes you have to tap and the faster they move. You can choose the level that suits your skill and preference. You can also unlock more songs by completing missions and achievements in the game.
- Cards and Abilities
-Superstar jypnation has over 660 cards that you can collect in the game. Each card features an image of an artist from JYP Entertainment. The cards have different attributes, such as Vocal, Dance, Rhythm, and Center. The cards also have different abilities, such as Score Up, Perfect Lock, Combo Bonus, and more. The cards can help you improve your score and performance in the game.
- You can equip up to five cards for each artist in your deck. The cards you equip will affect the score you get for each note you tap. You can also enhance your cards by using other cards or materials as fodder. Enhancing your cards will increase their level and stats. You can also upgrade your cards by using duplicates or special items. Upgrading your cards will increase their rank and rarity.
- Rankings and Competitions
-Superstar jypnation has various rankings and competitions that you can participate in. You can compete with other players around the world and see how you rank in different categories, such as:
-
-Weekly Ranking : You can compete with other players in your league and try to get the highest score for each song. The higher your score, the higher your rank. You can also get rewards based on your rank at the end of each week.
-Best Record for each song : You can check your best record for each song and try to beat it. You can also see the best records of other players and compare them with yours.
-World Record for each song : You can check the world record for each song and see who holds it. You can also try to break the world record and make history.
-Superstar League : You can compete with other players in a special league that features a random song every day. You can get rewards based on your score and rank at the end of each week.
-Arena Mode : You can compete with other players in a real-time mode that features three songs in a row. You can get rewards based on your score and rank at the end of each match.
-
- How to Download Superstar Jypnation on Android
-If you have an Android device, you can download superstar jypnation from Google Play Store. Here are the steps to do so:
-
-Open Google Play Store on your device.
-Search for "superstar jypnation" in the search bar.
-Select the game from the search results and tap on "Install".
-Wait for the game to download and install on your device.
-Open the game and enjoy playing!
-
- How to Download Superstar Jypnation on iOS
-If you have an iOS device, you can download superstar jypnation from App Store. Here are the steps to do so:
-
-Open App Store on your device.
-Search for "superstar jypnation" in the search bar.
-Select the game from the search results and tap on "Get".
-Wait for the game to download and install on your device.
-Open the game and enjoy playing!
-
- How to Download Superstar Jypnation on PC
-If you want to play superstar jypnation on your PC, you will need to use an emulator that can run Android apps on your computer. One of the best emulators for this purpose is MuMu Player, which is fast, stable, and easy to use. Here are the steps to download superstar jypnation on PC using MuMu Player:
-
- Download MuMu Player from its official website: https://mumuplayer.com/en/.
-Install MuMu Player on your PC by following the instructions on the screen.
-Open MuMu Player and click on the Google Play icon on the home screen.
-Sign in with your Google account or create a new one.
-Search for "superstar jypnation" in the Google Play Store and install it.
-Open the game and enjoy playing!
-
- Tips and Tricks for Superstar Jypnation
-Now that you know how to download superstar jypnation on your device, you might want to learn some tips and tricks to improve your gameplay and score higher. Here are some of them:
-
-Practice makes perfect : The best way to get better at the game is to practice as much as you can. Try to play different songs and levels and learn the patterns and timings of the notes. You can also use the practice mode to replay any part of the song you want.
-Use headphones or earphones : Playing with headphones or earphones can help you hear the music better and focus on the rhythm. It can also block out any distractions or noises around you.
-Adjust the speed and sync settings : You can adjust the speed and sync settings of the game to suit your preference and device. The speed setting controls how fast the notes move on the screen. The sync setting controls how well the notes match with the music. You can find these settings in the options menu.
-Equip and upgrade your cards wisely : You can equip up to five cards for each artist in your deck. You should equip cards that have high attributes and abilities that match with the song you are playing. For example, if the song has more vocal notes, you should equip cards that have high vocal attributes and score up abilities. You should also upgrade your cards regularly to increase their level, stats, rank, and rarity.
-Use items and rewards : You can use items and rewards to help you play better and get more benefits. For example, you can use headphones to play more songs, diamonds to buy more cards or items, RP to enhance or upgrade your cards, emeralds to buy special items or rewards, etc. You can get these items and rewards by completing missions, achievements, events, or daily login bonuses.
-
- Conclusion
-Superstar jypnation is a great game for K-pop fans and rhythm game lovers. It has many features that make it fun and addictive, such as 24 group artists from JYP Entertainment, 270+ songs from different genres and eras, 660+ cards to collect and enhance, and various rankings and competitions to join. You can download superstar jypnation on your Android, iOS, or PC device easily by following our step-by-step guide. You can also use our tips and tricks to improve your gameplay and score higher. So, what are you waiting for? Download superstar jypnation today and enjoy playing with your favorite artists!
- FAQs
-Here are some frequently asked questions and answers about superstar jypnation:
-
-Q: How can I change my profile picture in the game?
-A: You can change your profile picture in the game by tapping on your profile icon on the top left corner of the screen. Then, tap on "Change Profile" and choose an image from your device or take a photo with your camera.
-Q: How can I change my nickname in the game?
-A: You can change your nickname in the game by tapping on your profile icon on the top left corner of the screen. Then, tap on "Change Nickname" and enter a new nickname. You can only change your nickname once for free, so choose wisely.
-Q: How can I add friends in the game?
-A: You can add friends in the game by tapping on the friends icon on the bottom right corner of the screen. Then, tap on "Add Friend" and enter their nickname or user code. You can also accept friend requests from other players or send friend requests to players you meet in the game.
-Q: How can I contact customer service in the game?
-A: You can contact customer service in the game by tapping on the settings icon on the top right corner of the screen. Then, tap on "Customer Service" and choose a topic that matches your issue or inquiry. You can also send an email to support.superstar.jyp@dalcomsoft.com for more assistance.
-Q: How can I update the game to the latest version?
-A: You can update the game to the latest version by visiting the Google Play Store or App Store and checking for updates. You can also turn on the auto-update option in your device settings to update the game automatically. You should always update the game to enjoy the latest features and bug fixes.
-
-
\ No newline at end of file
diff --git a/spaces/A666sxr/Genshin_TTS/commons.py b/spaces/A666sxr/Genshin_TTS/commons.py
deleted file mode 100644
index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000
--- a/spaces/A666sxr/Genshin_TTS/commons.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
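-# Interleave `item` between and around the elements of `lst`: [a, b] -> [item, a, item, b, item].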
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
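-# Gather a window of `segment_size` frames from each element of x [B, C, T], starting at ids_str[i] along the time axis.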
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
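For orientation, here is a small illustrative sketch (not part of the deleted file) showing how two of the helpers above behave. It assumes the module is saved as `commons.py` next to the script and that PyTorch is installed.

```python
# Minimal exercise of sequence_mask and rand_slice_segments from commons.py.
import torch

from commons import rand_slice_segments, sequence_mask  # assumes commons.py is importable

lengths = torch.tensor([3, 5])
mask = sequence_mask(lengths)      # [2, 5] boolean mask, True where t < length
print(mask)

x = torch.randn(2, 8, 16)          # batch of 2, 8 channels, 16 time frames
segments, ids_str = rand_slice_segments(x, segment_size=4)
print(segments.shape, ids_str)     # torch.Size([2, 8, 4]) plus the random start indices
```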
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/models/musicgen.py b/spaces/AIConsultant/MusicGen/audiocraft/models/musicgen.py
deleted file mode 100644
index 1d4b2292eaec5016e208bbdf61ec5c99b40b67da..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/models/musicgen.py
+++ /dev/null
@@ -1,409 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Main model for using MusicGen. This will combine all the required components
-and provide easy access to the generation API.
-"""
-
-import typing as tp
-import warnings
-
-import torch
-
-from .encodec import CompressionModel
-from .lm import LMModel
-from .builders import get_debug_compression_model, get_debug_lm_model
-from .loaders import load_compression_model, load_lm_model
-from ..data.audio_utils import convert_audio
-from ..modules.conditioners import ConditioningAttributes, WavCondition
-from ..utils.autocast import TorchAutocast
-
-
-MelodyList = tp.List[tp.Optional[torch.Tensor]]
-MelodyType = tp.Union[torch.Tensor, MelodyList]
-
-
-# backward compatible names mapping
-_HF_MODEL_CHECKPOINTS_MAP = {
- "small": "GrandaddyShmax/musicgen-small",
- "medium": "GrandaddyShmax/musicgen-medium",
- "large": "GrandaddyShmax/musicgen-large",
- "melody": "GrandaddyShmax/musicgen-melody",
-}
-
-
-class MusicGen:
- """MusicGen main model with convenient generation API.
-
- Args:
- name (str): name of the model.
- compression_model (CompressionModel): Compression model
- used to map audio to invertible discrete representations.
- lm (LMModel): Language model over discrete representations.
- max_duration (float, optional): maximum duration the model can produce,
- otherwise, inferred from the training params.
- """
- def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel,
- max_duration: tp.Optional[float] = None):
- self.name = name
- self.compression_model = compression_model
- self.lm = lm
- if max_duration is None:
- if hasattr(lm, 'cfg'):
- max_duration = lm.cfg.dataset.segment_duration # type: ignore
- else:
- raise ValueError("You must provide max_duration when building directly MusicGen")
- assert max_duration is not None
- self.max_duration: float = max_duration
- self.device = next(iter(lm.parameters())).device
- self.generation_params: dict = {}
- self.set_generation_params(duration=15) # 15 seconds by default
- self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None
- if self.device.type == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- else:
- self.autocast = TorchAutocast(
- enabled=True, device_type=self.device.type, dtype=torch.float16)
-
- @property
- def frame_rate(self) -> float:
- """Roughly the number of AR steps per seconds."""
- return self.compression_model.frame_rate
-
- @property
- def sample_rate(self) -> int:
- """Sample rate of the generated audio."""
- return self.compression_model.sample_rate
-
- @property
- def audio_channels(self) -> int:
- """Audio channels of the generated audio."""
- return self.compression_model.channels
-
- @staticmethod
- def get_pretrained(name: str = 'GrandaddyShmax/musicgen-melody', device=None):
- """Return pretrained model, we provide four models:
- - facebook/musicgen-small (300M), text to music,
- # see: https://huggingface.co/facebook/musicgen-small
- - facebook/musicgen-medium (1.5B), text to music,
- # see: https://huggingface.co/facebook/musicgen-medium
- - facebook/musicgen-melody (1.5B) text to music and text+melody to music,
- # see: https://huggingface.co/facebook/musicgen-melody
- - facebook/musicgen-large (3.3B), text to music,
- # see: https://huggingface.co/facebook/musicgen-large
- """
- if device is None:
- if torch.cuda.device_count():
- device = 'cuda'
- else:
- device = 'cpu'
-
- if name == 'debug':
- # used only for unit tests
- compression_model = get_debug_compression_model(device)
- lm = get_debug_lm_model(device)
- return MusicGen(name, compression_model, lm, max_duration=30)
-
- lm = load_lm_model(name, device=device)
- compression_model = load_compression_model(name, device=device)
- if 'self_wav' in lm.condition_provider.conditioners:
- lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True
-
- return MusicGen(name, compression_model, lm)
-
- def set_generation_params(self, use_sampling: bool = True, top_k: int = 250,
- top_p: float = 0.0, temperature: float = 1.0,
- duration: float = 30.0, cfg_coef: float = 3.0,
- two_step_cfg: bool = False, extend_stride: float = 18):
- """Set the generation parameters for MusicGen.
-
- Args:
- use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True.
- top_k (int, optional): top_k used for sampling. Defaults to 250.
- top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0.
- temperature (float, optional): Softmax temperature parameter. Defaults to 1.0.
- duration (float, optional): Duration of the generated waveform. Defaults to 30.0.
- cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0.
- two_step_cfg (bool, optional): If True, performs 2 forward passes for Classifier Free Guidance,
- instead of batching together the two. This has some impact on how things
- are padded but seems to have little impact in practice.
- extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much
- should we extend the audio each time. Larger values will mean less context is
- preserved, and shorter values will require extra computation.
- """
- assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration."
- self.extend_stride = extend_stride
- self.duration = duration
- self.generation_params = {
- 'use_sampling': use_sampling,
- 'temp': temperature,
- 'top_k': top_k,
- 'top_p': top_p,
- 'cfg_coef': cfg_coef,
- 'two_step_cfg': two_step_cfg,
- }
-
- def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None):
- """Override the default progress callback."""
- self._progress_callback = progress_callback
-
- def generate_unconditional(self, num_samples: int, progress: bool = False, return_tokens: bool = False) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples in an unconditional manner.
-
- Args:
- num_samples (int): Number of samples to be generated.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- descriptions: tp.List[tp.Optional[str]] = [None] * num_samples
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- def generate(self, descriptions: tp.List[str], progress: bool = False, return_tokens: bool = False) \
- -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples conditioned on text.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- assert prompt_tokens is None
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, melody_sample_rate: int, progress: bool = False, return_tokens: bool = False) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples conditioned on text and melody.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as
- melody conditioning. Should have shape [B, C, T] with B matching the description length,
- C=1 or 2. It can be [C, T] if there is a single description. It can also be
- a list of [C, T] tensors.
- melody_sample_rate: (int): Sample rate of the melody waveforms.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if isinstance(melody_wavs, torch.Tensor):
- if melody_wavs.dim() == 2:
- melody_wavs = melody_wavs[None]
- if melody_wavs.dim() != 3:
- raise ValueError("Melody wavs should have a shape [B, C, T].")
- melody_wavs = list(melody_wavs)
- else:
- for melody in melody_wavs:
- if melody is not None:
- assert melody.dim() == 2, "One melody in the list has the wrong number of dims."
-
- melody_wavs = [
- convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels)
- if wav is not None else None
- for wav in melody_wavs]
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None,
- melody_wavs=melody_wavs)
- assert prompt_tokens is None
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int,
- descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None,
- progress: bool = False, return_tokens: bool = False) \
- -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples conditioned on audio prompts.
-
- Args:
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- Prompt should be [B, C, T], or [C, T] if only one sample is generated.
- prompt_sample_rate (int): Sampling rate of the given audio waveforms.
- descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if prompt.dim() == 2:
- prompt = prompt[None]
- if prompt.dim() != 3:
- raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).")
- prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels)
- if descriptions is None:
- descriptions = [None] * len(prompt)
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt)
- assert prompt_tokens is not None
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self,
- descriptions: tp.Sequence[tp.Optional[str]],
- prompt: tp.Optional[torch.Tensor],
- melody_wavs: tp.Optional[MelodyList] = None,
- ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]:
- """Prepare model inputs.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- melody_wavs (torch.Tensor, optional): A batch of waveforms
- used as melody conditioning. Defaults to None.
- """
- attributes = [
- ConditioningAttributes(text={'description': description})
- for description in descriptions]
-
- if melody_wavs is None:
- for attr in attributes:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- sample_rate=[self.sample_rate],
- path=[None])
- else:
- if 'self_wav' not in self.lm.condition_provider.conditioners:
- raise RuntimeError("This model doesn't support melody conditioning. "
- "Use the `melody` model.")
- assert len(melody_wavs) == len(descriptions), \
- f"number of melody wavs must match number of descriptions! " \
- f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}"
- for attr, melody in zip(attributes, melody_wavs):
- if melody is None:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- sample_rate=[self.sample_rate],
- path=[None])
- else:
- attr.wav['self_wav'] = WavCondition(
- melody[None].to(device=self.device),
- torch.tensor([melody.shape[-1]], device=self.device),
- sample_rate=[self.sample_rate],
- path=[None],
- )
-
- if prompt is not None:
- if descriptions is not None:
- assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match"
- prompt = prompt.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt)
- assert scale is None
- else:
- prompt_tokens = None
- return attributes, prompt_tokens
-
- def _generate_tokens(self, attributes: tp.List[ConditioningAttributes],
- prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor:
- """Generate discrete audio tokens given audio prompt and/or conditions.
-
- Args:
- attributes (list of ConditioningAttributes): Conditions used for generation (text/melody).
- prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- Returns:
- torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params.
- """
- i = 0
- prompt_list = attributes[0].text['description']
- total_gen_len = int(self.duration * self.frame_rate)
- max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate)
- current_gen_offset: int = 0
-
- def _progress_callback(generated_tokens: int, tokens_to_generate: int):
- generated_tokens += current_gen_offset
- if current_gen_offset > 0:
- generated_tokens += (self.max_duration - self.extend_stride) * self.frame_rate
- if self._progress_callback is not None:
- # Note that total_gen_len might be quite wrong depending on the
- # codebook pattern used, but with delay it is almost accurate.
- self._progress_callback(generated_tokens, total_gen_len)
- else:
- print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r')
-
- if prompt_tokens is not None:
- assert max_prompt_len >= prompt_tokens.shape[-1], \
- "Prompt is longer than audio to generate"
-
- callback = None
- if progress:
- callback = _progress_callback
-
- if self.duration <= self.max_duration:
- # generate by sampling from LM, simple case.
- with self.autocast:
- attributes[0].text['description'] = prompt_list[0]
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=total_gen_len, **self.generation_params)
-
- else:
- # now this gets a bit messier, we need to handle prompts,
- # melody conditioning etc.
- ref_wavs = [attr.wav['self_wav'] for attr in attributes]
- all_tokens = []
- if prompt_tokens is None:
- prompt_length = 0
- else:
- all_tokens.append(prompt_tokens)
- prompt_length = prompt_tokens.shape[-1]
-
- stride_tokens = int(self.frame_rate * self.extend_stride)
-
- while current_gen_offset + prompt_length < total_gen_len:
- time_offset = current_gen_offset / self.frame_rate
- chunk_duration = min(self.duration - time_offset, self.max_duration)
- max_gen_len = int(chunk_duration * self.frame_rate)
- for attr, ref_wav in zip(attributes, ref_wavs):
- wav_length = ref_wav.length.item()
- if wav_length == 0:
- continue
- # We will extend the wav periodically if it is not long enough.
- # we have to do it here rather than in conditioners.py as otherwise
- # we wouldn't have the full wav.
- initial_position = int(time_offset * self.sample_rate)
- wav_target_length = int(self.max_duration * self.sample_rate)
- positions = torch.arange(initial_position,
- initial_position + wav_target_length, device=self.device)
- attr.wav['self_wav'] = WavCondition(
- ref_wav[0][..., positions % wav_length],
- torch.full_like(ref_wav[1], wav_target_length),
- [self.sample_rate] * ref_wav[0].size(0),
- [None], [0.])
- with self.autocast:
- if i >= len(prompt_list):
- i = len(prompt_list) - 1
- attributes[0].text['description'] = prompt_list[i]
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=max_gen_len, **self.generation_params)
- i = i + 1
- if prompt_tokens is None:
- all_tokens.append(gen_tokens)
- else:
- all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:])
- prompt_tokens = gen_tokens[:, :, stride_tokens:]
- prompt_length = prompt_tokens.shape[-1]
- current_gen_offset += stride_tokens
-
- gen_tokens = torch.cat(all_tokens, dim=-1)
- return gen_tokens
-
- def generate_audio(self, gen_tokens: torch.Tensor):
- """Generate Audio from tokens"""
- assert gen_tokens.dim() == 3
- with torch.no_grad():
- gen_audio = self.compression_model.decode(gen_tokens, None)
- return gen_audio
-
- def to(self, device: str):
- self.compression_model.to(device)
- self.lm.to(device)
- return self
\ No newline at end of file
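For orientation, here is an illustrative usage sketch of the wrapper removed above (it is not part of the repository). It assumes the surrounding audiocraft package is installed so that `audiocraft.models.musicgen` is importable; the checkpoint name and prompt are examples only. Note that this fork's `_generate_tokens` indexes the description as a list of per-segment prompts, so the prompt is wrapped in an extra list.

```python
# Hedged sketch of driving the MusicGen wrapper defined above.
from audiocraft.models.musicgen import MusicGen  # assumes this package layout is installed

model = MusicGen.get_pretrained('GrandaddyShmax/musicgen-small')  # name taken from the checkpoint map above
model.set_generation_params(duration=8.0, top_k=250, cfg_coef=3.0)  # short 8-second clips

# Each entry in `descriptions` is treated by _generate_tokens as a list of
# per-segment prompts (it reads prompt_list[i]), hence the nested list here.
wav = model.generate([['lo-fi beat with warm piano']], progress=True)

print(wav.shape, model.sample_rate)  # [B, C, T] audio tensor and the decoder sample rate
```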
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_tokenizer_inference.py b/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_tokenizer_inference.py
deleted file mode 100644
index 46b757ec327d1ed8108ec8cff9daeb267028750e..0000000000000000000000000000000000000000
--- a/spaces/AILab-CVC/SEED-LLaMA/scripts/seed_tokenizer_inference.py
+++ /dev/null
@@ -1,33 +0,0 @@
-import hydra
-from omegaconf import OmegaConf
-from PIL import Image
-import pyrootutils
-import os
-
-pyrootutils.setup_root(__file__, indicator='.project-root', pythonpath=True)
-
-tokenizer_cfg_path = 'configs/tokenizer/seed_llama_tokenizer.yaml'
-transform_cfg_path = 'configs/transform/clip_transform.yaml'
-
-image_path = 'images/cat.jpg'
-save_dir = './'
-save_path = os.path.join(save_dir, os.path.basename(image_path))
-
-os.makedirs(save_dir, exist_ok=True)
-
-device = 'cuda'
-
-tokenizer_cfg = OmegaConf.load(tokenizer_cfg_path)
-tokenizer = hydra.utils.instantiate(tokenizer_cfg, device=device, load_diffusion=True)
-
-transform_cfg = OmegaConf.load(transform_cfg_path)
-transform = hydra.utils.instantiate(transform_cfg)
-
-image = Image.open(image_path).convert('RGB')
-
-image_tensor = transform(image).to(device)
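-# encode_image maps the preprocessed image tensor to discrete SEED token ids;
-# decode_image turns those ids back into PIL images (load_diffusion=True above enables the decoder).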
-image_ids = tokenizer.encode_image(image_torch=image_tensor)
-
-images = tokenizer.decode_image(image_ids)
-
-images[0].save(save_path)
\ No newline at end of file
diff --git a/spaces/AIxPha/Real-CUGAN/README.md b/spaces/AIxPha/Real-CUGAN/README.md
deleted file mode 100644
index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000
--- a/spaces/AIxPha/Real-CUGAN/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Real CUGAN
-emoji: 🐢
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: gpl-3.0
-duplicated_from: DianXian/Real-CUGAN
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Aaaaaaaabdualh/topic2poem/README.md b/spaces/Aaaaaaaabdualh/topic2poem/README.md
deleted file mode 100644
index 21782276176ef29031dd0c6c6566c31f11730da0..0000000000000000000000000000000000000000
--- a/spaces/Aaaaaaaabdualh/topic2poem/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Topic2poem
-emoji: 💻
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
-license: afl-3.0
-duplicated_from: mareloraby/topic2poem
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/sde_team.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/sde_team.py
deleted file mode 100644
index 325c1eac08a8566079e7047411ca76d3e44b7de9..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/sde_team.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import asyncio
-import logging
-from typing import Any, Dict, List
-import json
-
-from agentverse.agents.simulation_agent.conversation import BaseAgent
-
-# from agentverse.environments.simulation_env.rules.base import Rule
-from agentverse.environments.simulation_env.rules.base import SimulationRule as Rule
-from agentverse.message import Message
-
-from .. import env_registry as EnvironmentRegistry
-from ..base import BaseEnvironment
-
-from agentverse.initialization import load_tools
-
-
-@EnvironmentRegistry.register("sde_team")
-class SdeTeamEnvironment(BaseEnvironment):
- """
- A basic environment implementing the logic of conversation to craft code.
-
- Args:
- agents: List of agents
- rule: Rule for the environment
- max_turns: Maximum number of turns
- cnt_turn: Current turn number
- last_messages: Messages from last turn
- rule_params: Variables set by the rule
- """
-
- agents: List[BaseAgent]
- rule: Rule
- max_turns: int = 10
- cnt_turn: int = 0
- last_messages: List[Message] = []
- rule_params: Dict = {}
- task_name: str = "test"
-
- def __init__(self, rule, **kwargs):
- rule_config = rule
- order_config = rule_config.get("order", {"type": "sde_team"})
- visibility_config = rule_config.get("visibility", {"type": "base"})
- selector_config = rule_config.get("selector", {"type": "sde_team"})
- updater_config = rule_config.get("updater", {"type": "sde_team"})
- describer_config = rule_config.get("describer", {"type": "base"})
- rule = Rule(
- order_config,
- visibility_config,
- selector_config,
- updater_config,
- describer_config,
- )
- super().__init__(rule=rule, **kwargs)
- self.rule_params["first_round"] = True
- self.rule_params["end_flag"] = False
-
- # # Test code
- self.rule_params["name_to_tools"] = {
- tool.name: tool
- for tool in load_tools(
- [
- {
- "tool_name": "code_interpreter",
- "tool_url": "http://127.0.0.1:8079/tools/code_interpreter/",
- }
- ]
- )
- }
- tool = self.rule_params["name_to_tools"]["execute_unit_tests"]
- # print(type(tool))
-
- # d = {
- # "func_impl": "def f(x):\n\treturn x + 1",
- # "tests": ["assert f(1) == 2"]
- # }
- # # input_str = json.dumps(d)
- # json.loads(input_str)
- # tool.run(input_str, verbose=True)
- # exit()
-
- async def step(self) -> List[Message]:
- """Run one step of the environment"""
-
- # Get the next agent index
- agent_ids = self.rule.get_next_agent_idx(self) # order
-
- # Generate current environment description
- # env_descriptions = self.rule.get_env_description(self) # describer
-
- # # Generate the next message
- # messages = await asyncio.gather(
- # *[self.agents[i].astep(env_descriptions[i]) for i in agent_ids]
- # ) # call chatgpt api
-
- messages = await asyncio.gather(*[self.agents[i].astep("") for i in agent_ids])
-
- # Track the messages to get the role of the sender
- self.last_messages = messages
-
- # Some rules will select certain messages from all the messages
- selected_messages = self.rule.select_message(self, messages) # selector
- self.last_messages = selected_messages
- self.print_messages(selected_messages)
-
- # Update the memory of the agents
- self.rule.update_memory(self) # updater: update memory
-
- # Update the set of visible agents for each agent
- self.rule.update_visible_agents(self) # change receiver
-
- self.cnt_turn += 1
-
- return selected_messages
-
- def print_messages(self, messages: List[Message]) -> None:
- for message in messages:
- if message is not None:
- logging.info(f"{message.sender}: {message.content}")
-
- def reset(self) -> None:
- """Reset the environment"""
- self.cnt_turn = 0
- self.rule.reset()
- for agent in self.agents:
- agent.reset()
-
- def is_done(self) -> bool:
- """Check if the environment is done"""
- if self.cnt_turn >= self.max_turns or self.rule_params["end_flag"]:
- # with open("record_human_eval.txt", "a") as f:
- # wd = dict()
- # wd['task_id'] = self.task_name
- # wd['code'] = self.rule_params['code']
- # f.write(json.dumps(wd))
- return True
- return False
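To show how this environment is usually driven, here is a rough, illustrative loop (not part of the deleted module). Constructing the environment needs real agents, a rule config, and the local code_interpreter tool endpoint that `__init__` expects, so those pieces are left as commented placeholders.

```python
# Hedged driver sketch for SdeTeamEnvironment.
import asyncio

async def run_episode(env) -> None:
    """Run conversation rounds until the rule sets end_flag or max_turns is reached."""
    env.reset()
    while not env.is_done():
        messages = await env.step()  # one round: ordered agents act, selector/updater/visibility rules run
        # `messages` holds the selected Message objects for this round

# Placeholder construction -- requires agents, a rule dict, and the tool server at 127.0.0.1:8079:
# env = SdeTeamEnvironment(rule={"order": {"type": "sde_team"}}, agents=my_agents, max_turns=10)
# asyncio.run(run_episode(env))
```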
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/dropdown-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/dropdown-plugin.js
deleted file mode 100644
index 58eda7d70194e94de5bb9f3cc827f30d68da4e59..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/dropdown-plugin.js
+++ /dev/null
@@ -1,18 +0,0 @@
-import DropDown from './behaviors/dropdown/DropDown.js';
-
-class DropDownPlugin extends Phaser.Plugins.BasePlugin {
- constructor(pluginManager) {
- super(pluginManager);
- }
-
- start() {
- var eventEmitter = this.game.events;
- eventEmitter.on('destroy', this.destroy, this);
- }
-
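- // Attach the DropDown behavior to an existing game object and return the behavior instance.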
- add(gameObject, config) {
- return new DropDown(gameObject, config);
- }
-}
-
-export default DropDownPlugin;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circlemaskimage/CircleMaskImage.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circlemaskimage/CircleMaskImage.d.ts
deleted file mode 100644
index bb69ef4fe2799c2489d4206c1e992b1d5712e4c2..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circlemaskimage/CircleMaskImage.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import CircleMaskImage from '../../../plugins/circlemaskimage';
-export default CircleMaskImage;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/Factory.js
deleted file mode 100644
index d81af7fa2efcb971a7a4473a2f82e1c1010832e9..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinputbase/Factory.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import ColorInputBase from './ColorInputBase.js';
-import ObjectFactory from '../../ObjectFactory.js';
-import SetValue from '../../../../plugins/utils/object/SetValue.js';
-
-ObjectFactory.register('colorInputLite', function (config) {
- var gameObject = new ColorInputBase(this.scene, config);
- this.scene.add.existing(gameObject);
- return gameObject;
-});
-
-SetValue(window, 'RexPlugins.UI.ColorInputBase', ColorInputBase);
-
-export default ColorInputBase;
\ No newline at end of file
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/alignment.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/alignment.py
deleted file mode 100644
index 46f58c79061ed8030562300f131f97f04e5ea42f..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/alignment.py
+++ /dev/null
@@ -1,233 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-
-import os
-import argparse
-import numpy as np
-import torch
-from torch.utils.data import DataLoader
-from torchvision.transforms import transforms
-from utils.ImagesDataset import ImagesDataset
-
-import cv2
-import time
-import copy
-import imutils
-
-# for openpose body keypoint detector : # (src:https://github.com/Hzzone/pytorch-openpose)
-from openpose.src import util
-from openpose.src.body import Body
-
-# for paddlepaddle human segmentation : #(src: https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.5/contrib/PP-HumanSeg/)
-from PP_HumanSeg.deploy.infer import Predictor as PP_HumenSeg_Predictor
-
-import math
-
-
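-# Angle at p1 (in degrees) formed by the segments p1-p0 and p1-p2, computed via the law of cosines;
-# returns -1 if a keypoint is missing (y == -1) or the points are degenerate.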
-def angle_between_points(p0, p1, p2):
- if p0[1] == -1 or p1[1] == -1 or p2[1] == -1:
- return -1
- a = (p1[0]-p0[0])**2 + (p1[1]-p0[1])**2
- b = (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2
- c = (p2[0]-p0[0])**2 + (p2[1]-p0[1])**2
- if a * b == 0:
- return -1
- return math.acos((a+b-c) / math.sqrt(4*a*b)) * 180 / math.pi
-
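-# Worked example for the law-of-cosines computation above (illustrative values): with
-# p0 = (1, 0), p1 = (0, 0), p2 = (0, 1) the squared sides are a = 1, b = 1, c = 2, so
-# cos(theta) = (1 + 1 - 2) / (2 * sqrt(1)) = 0 and the returned angle at the middle point
-# p1 is 90.0 degrees. Any keypoint whose y coordinate is -1 is treated as missing and
-# makes the function return -1.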
-
-def crop_img_with_padding(img, keypoints, rect):
- person_xmin, person_xmax, ymin, ymax = rect
- img_h, img_w, _ = img.shape # find body center using keypoints
- middle_shoulder_x = keypoints[1][0]
- middle_hip_x = (keypoints[8][0] + keypoints[11][0]) // 2
- mid_x = (middle_hip_x + middle_shoulder_x) // 2
- mid_y = (ymin + ymax) // 2
- # find which side (l or r) is further than center x, use the further side
- if abs(mid_x-person_xmin) > abs(person_xmax-mid_x): # left further
- xmin = person_xmin
- xmax = mid_x + (mid_x-person_xmin)
- else:
-        # xmin may be negative;
-        # in this case the script won't output any image; leave the case like this,
-        # since we don't want to pad the human body
- xmin = mid_x - (person_xmax-mid_x)
- xmax = person_xmax
-
- w = xmax - xmin
- h = ymax - ymin
- # pad rectangle to w:h = 1:2 ## calculate desired border length
- if h / w >= 2: # pad horizontally
- target_w = h // 2
- xmin_prime = int(mid_x - target_w / 2)
- xmax_prime = int(mid_x + target_w / 2)
- if xmin_prime < 0:
- pad_left = abs(xmin_prime) # - xmin
- xmin = 0
- else:
- pad_left = 0
- xmin = xmin_prime
- if xmax_prime > img_w:
- pad_right = xmax_prime - img_w
- xmax = img_w
- else:
- pad_right = 0
- xmax = xmax_prime
-
- cropped_img = img[int(ymin):int(ymax), int(xmin):int(xmax)]
- im_pad = cv2.copyMakeBorder(cropped_img, 0, 0, int(
- pad_left), int(pad_right), cv2.BORDER_REPLICATE)
- else: # pad vertically
- target_h = w * 2
- ymin_prime = mid_y - (target_h / 2)
- ymax_prime = mid_y + (target_h / 2)
- if ymin_prime < 0:
- pad_up = abs(ymin_prime) # - ymin
- ymin = 0
- else:
- pad_up = 0
- ymin = ymin_prime
- if ymax_prime > img_h:
- pad_down = ymax_prime - img_h
- ymax = img_h
- else:
- pad_down = 0
- ymax = ymax_prime
- print(ymin, ymax, xmin, xmax, img.shape)
-
- cropped_img = img[int(ymin):int(ymax), int(xmin):int(xmax)]
- im_pad = cv2.copyMakeBorder(cropped_img, int(pad_up), int(pad_down), 0,
- 0, cv2.BORDER_REPLICATE)
- result = cv2.resize(im_pad, (512, 1024), interpolation=cv2.INTER_AREA)
- return result
-
-
-def run(args):
- os.makedirs(args.output_folder, exist_ok=True)
- dataset = ImagesDataset(
- args.image_folder, transforms.Compose([transforms.ToTensor()]))
- dataloader = DataLoader(dataset, batch_size=1, shuffle=False)
-
- body_estimation = Body('openpose/model/body_pose_model.pth')
-
- total = len(dataloader)
- print('Num of dataloader : ', total)
- os.makedirs(f'{args.output_folder}', exist_ok=True)
- # os.makedirs(f'{args.output_folder}/middle_result', exist_ok=True)
-
-    # initialize HumanSeg
- human_seg_args = {}
- human_seg_args['cfg'] = 'PP_HumanSeg/export_model/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax/deploy.yaml'
- human_seg_args['input_shape'] = [1024, 512]
- human_seg_args['save_dir'] = args.output_folder
- human_seg_args['soft_predict'] = False
- human_seg_args['use_gpu'] = True
- human_seg_args['test_speed'] = False
- human_seg_args['use_optic_flow'] = False
- human_seg_args['add_argmax'] = True
- human_seg_args = argparse.Namespace(**human_seg_args)
- human_seg = PP_HumenSeg_Predictor(human_seg_args)
-
- from tqdm import tqdm
- for fname, image in tqdm(dataloader):
- # try:
- # tensor to numpy image
- fname = fname[0]
- print(f'Processing \'{fname}\'.')
-
- image = (image.permute(0, 2, 3, 1) * 255).clamp(0, 255)
- image = image.squeeze(0).numpy() # --> tensor to numpy, (H,W,C)
- # avoid super high res img
- if image.shape[0] >= 2000: # height ### for shein image
- ratio = image.shape[0]/1200 # height
- dim = (int(image.shape[1]/ratio), 1200) # (width, height)
- image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
- image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
-
- # create segmentation
- # mybg = cv2.imread('mybg.png')
- comb, segmentation, bg, ori_img = human_seg.run(image, None) # mybg)
- # cv2.imwrite('comb.png',comb) # [0,255]
- # cv2.imwrite('alpha.png',segmentation*255) # segmentation [0,1] --> [0.255]
- # cv2.imwrite('bg.png',bg) #[0,255]
- # cv2.imwrite('ori_img.png',ori_img) # [0,255]
-
- masks_np = (segmentation * 255) # .byte().cpu().numpy() #1024,512,1
- mask0_np = masks_np[:, :, 0].astype(np.uint8) # [0, :, :]
- contours = cv2.findContours(
- mask0_np, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
- cnts = imutils.grab_contours(contours)
- c = max(cnts, key=cv2.contourArea)
- extTop = tuple(c[c[:, :, 1].argmin()][0])
- extBot = tuple(c[c[:, :, 1].argmax()][0])
- extBot = list(extBot)
- extTop = list(extTop)
- pad_range = int((extBot[1]-extTop[1])*0.05)
- # seg mask already reaches to the edge
- if (int(extTop[1]) <= 5 and int(extTop[1]) > 0) and (comb.shape[0] > int(extBot[1]) and int(extBot[1]) >= comb.shape[0]-5):
-            # pad with pure white, pad_range+5 px on the top and bottom
- comb = cv2.copyMakeBorder(
- comb, pad_range+5, pad_range+5, 0, 0, cv2.BORDER_CONSTANT, value=[255, 255, 255])
- elif int(extTop[1]) <= 0 or int(extBot[1]) >= comb.shape[0]:
-            print('PAD: body out of boundary', fname)  # should not happen
- return {}
- else:
-            # pad_range+5 instead of pad_range: give some extra space
- comb = cv2.copyMakeBorder(
- comb, pad_range+5, pad_range+5, 0, 0, cv2.BORDER_REPLICATE)
- extBot[1] = extBot[1] + pad_range+5
- extTop[1] = extTop[1] + pad_range+5
-
- extLeft = tuple(c[c[:, :, 0].argmin()][0])
- extRight = tuple(c[c[:, :, 0].argmax()][0])
- extLeft = list(extLeft)
- extRight = list(extRight)
- person_ymin = int(extTop[1])-pad_range # 100
- person_ymax = int(extBot[1])+pad_range # 100 #height
- if person_ymin < 0 or person_ymax > comb.shape[0]: # out of range
- return {}
- person_xmin = int(extLeft[0])
- person_xmax = int(extRight[0])
- rect = [person_xmin, person_xmax, person_ymin, person_ymax]
- # recimg = copy.deepcopy(comb)
- # cv2.rectangle(recimg,(person_xmin,person_ymin),(person_xmax,person_ymax),(0,255,0),2)
- # cv2.imwrite(f'{args.output_folder}/middle_result/{fname}_rec.png',recimg)
-
- # detect keypoints
- keypoints, subset = body_estimation(comb)
- # print(keypoints, subset, len(subset))
- if len(subset) != 1 or (len(subset) == 1 and subset[0][-1] < 15):
- print(
-                f'Processing \'{fname}\'. Please provide an image that contains exactly one person. You can also check the segmentation mask.')
- continue
-
- # canvas = copy.deepcopy(comb)
- # canvas = util.draw_bodypose(canvas, keypoints, subset, show_number=True)
- # cv2.imwrite(f'{args.output_folder}/middle_result/{fname}_keypoints.png',canvas)
-
- comb = crop_img_with_padding(comb, keypoints, rect)
-
- cv2.imwrite(f'{args.output_folder}/{fname}.png', comb)
- print(f' -- Finished processing \'{fname}\'. --')
- # except:
- # print(f'Processing \'{fname}\'. Not satisfied the alignment strategy.')
-
-
-if __name__ == '__main__':
- torch.backends.cudnn.benchmark = True
- torch.backends.cudnn.deterministic = False
-
- t1 = time.time()
- arg_formatter = argparse.ArgumentDefaultsHelpFormatter
- description = 'StyleGAN-Human data process'
- parser = argparse.ArgumentParser(formatter_class=arg_formatter,
- description=description)
- parser.add_argument('--image-folder', type=str, dest='image_folder')
- parser.add_argument('--output-folder',
- dest='output_folder', default='results', type=str)
- # parser.add_argument('--cfg', dest='cfg for segmentation', default='PP_HumanSeg/export_model/ppseg_lite_portrait_398x224_with_softmax/deploy.yaml', type=str)
-
- print('parsing arguments')
- cmd_args = parser.parse_args()
- run(cmd_args)
-
- print('total time elapsed: ', str(time.time() - t1))
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py
deleted file mode 100644
index 55929854a284626862af6666d3d981e83ad486fa..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# empty
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/__init__.py
deleted file mode 100644
index 4ad4af9199bbe297dbc6679fd9ecb46baa976053..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/commands/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from abc import ABC, abstractmethod
-from argparse import ArgumentParser
-
-
-class BaseDiffusersCLICommand(ABC):
- @staticmethod
- @abstractmethod
- def register_subcommand(parser: ArgumentParser):
- raise NotImplementedError()
-
- @abstractmethod
- def run(self):
- raise NotImplementedError()
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_ema.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_ema.py
deleted file mode 100644
index 32f7ae8a9a8e02abb60c59d54873015a443aa53b..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_ema.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import tempfile
-import unittest
-
-import torch
-
-from diffusers import UNet2DConditionModel
-from diffusers.training_utils import EMAModel
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps, torch_device
-
-
-enable_full_determinism()
-
-
-class EMAModelTests(unittest.TestCase):
- model_id = "hf-internal-testing/tiny-stable-diffusion-pipe"
- batch_size = 1
- prompt_length = 77
- text_encoder_hidden_dim = 32
- num_in_channels = 4
- latent_height = latent_width = 64
- generator = torch.manual_seed(0)
-
- def get_models(self, decay=0.9999):
- unet = UNet2DConditionModel.from_pretrained(self.model_id, subfolder="unet")
- unet = unet.to(torch_device)
- ema_unet = EMAModel(unet.parameters(), decay=decay, model_cls=UNet2DConditionModel, model_config=unet.config)
- return unet, ema_unet
-
- def get_dummy_inputs(self):
- noisy_latents = torch.randn(
- self.batch_size, self.num_in_channels, self.latent_height, self.latent_width, generator=self.generator
- ).to(torch_device)
- timesteps = torch.randint(0, 1000, size=(self.batch_size,), generator=self.generator).to(torch_device)
- encoder_hidden_states = torch.randn(
- self.batch_size, self.prompt_length, self.text_encoder_hidden_dim, generator=self.generator
- ).to(torch_device)
- return noisy_latents, timesteps, encoder_hidden_states
-
- def simulate_backprop(self, unet):
- updated_state_dict = {}
- for k, param in unet.state_dict().items():
- updated_param = torch.randn_like(param) + (param * torch.randn_like(param))
- updated_state_dict.update({k: updated_param})
- unet.load_state_dict(updated_state_dict)
- return unet
-
- def test_optimization_steps_updated(self):
- unet, ema_unet = self.get_models()
- # Take the first (hypothetical) EMA step.
- ema_unet.step(unet.parameters())
- assert ema_unet.optimization_step == 1
-
- # Take two more.
- for _ in range(2):
- ema_unet.step(unet.parameters())
- assert ema_unet.optimization_step == 3
-
- def test_shadow_params_not_updated(self):
- unet, ema_unet = self.get_models()
- # Since the `unet` is not being updated (i.e., backprop'd)
- # there won't be any difference between the `params` of `unet`
- # and `ema_unet` even if we call `ema_unet.step(unet.parameters())`.
- ema_unet.step(unet.parameters())
- orig_params = list(unet.parameters())
- for s_param, param in zip(ema_unet.shadow_params, orig_params):
- assert torch.allclose(s_param, param)
-
- # The above holds true even if we call `ema.step()` multiple times since
- # `unet` params are still not being updated.
- for _ in range(4):
- ema_unet.step(unet.parameters())
- for s_param, param in zip(ema_unet.shadow_params, orig_params):
- assert torch.allclose(s_param, param)
-
- def test_shadow_params_updated(self):
- unet, ema_unet = self.get_models()
- # Here we simulate the parameter updates for `unet`. Since there might
- # be some parameters which are initialized to zero we take extra care to
- # initialize their values to something non-zero before the multiplication.
- unet_pseudo_updated_step_one = self.simulate_backprop(unet)
-
- # Take the EMA step.
- ema_unet.step(unet_pseudo_updated_step_one.parameters())
-
- # Now the EMA'd parameters won't be equal to the original model parameters.
- orig_params = list(unet_pseudo_updated_step_one.parameters())
- for s_param, param in zip(ema_unet.shadow_params, orig_params):
-            assert not torch.allclose(s_param, param)
-
- # Ensure this is the case when we take multiple EMA steps.
- for _ in range(4):
- ema_unet.step(unet.parameters())
- for s_param, param in zip(ema_unet.shadow_params, orig_params):
-            assert not torch.allclose(s_param, param)
-
- def test_consecutive_shadow_params_updated(self):
- # If we call EMA step after a backpropagation consecutively for two times,
- # the shadow params from those two steps should be different.
- unet, ema_unet = self.get_models()
-
- # First backprop + EMA
- unet_step_one = self.simulate_backprop(unet)
- ema_unet.step(unet_step_one.parameters())
- step_one_shadow_params = ema_unet.shadow_params
-
- # Second backprop + EMA
- unet_step_two = self.simulate_backprop(unet_step_one)
- ema_unet.step(unet_step_two.parameters())
- step_two_shadow_params = ema_unet.shadow_params
-
- for step_one, step_two in zip(step_one_shadow_params, step_two_shadow_params):
-            assert not torch.allclose(step_one, step_two)
-
- def test_zero_decay(self):
- # If there's no decay even if there are backprops, EMA steps
- # won't take any effect i.e., the shadow params would remain the
- # same.
- unet, ema_unet = self.get_models(decay=0.0)
- unet_step_one = self.simulate_backprop(unet)
- ema_unet.step(unet_step_one.parameters())
- step_one_shadow_params = ema_unet.shadow_params
-
- unet_step_two = self.simulate_backprop(unet_step_one)
- ema_unet.step(unet_step_two.parameters())
- step_two_shadow_params = ema_unet.shadow_params
-
- for step_one, step_two in zip(step_one_shadow_params, step_two_shadow_params):
- assert torch.allclose(step_one, step_two)
-
- @skip_mps
- def test_serialization(self):
- unet, ema_unet = self.get_models()
- noisy_latents, timesteps, encoder_hidden_states = self.get_dummy_inputs()
-
- with tempfile.TemporaryDirectory() as tmpdir:
- ema_unet.save_pretrained(tmpdir)
- loaded_unet = UNet2DConditionModel.from_pretrained(tmpdir, model_cls=UNet2DConditionModel)
- loaded_unet = loaded_unet.to(unet.device)
-
- # Since no EMA step has been performed the outputs should match.
- output = unet(noisy_latents, timesteps, encoder_hidden_states).sample
- output_loaded = loaded_unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- assert torch.allclose(output, output_loaded, atol=1e-4)
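-
-
-# The sketch below is not part of the test suite above; it only illustrates how EMAModel is
-# typically driven during training. `compute_loss` and `optimizer` are placeholder arguments
-# for whatever loss function and optimizer the training script uses.
-def _example_ema_training_loop(unet, ema_unet, dataloader, optimizer, compute_loss):
-    for batch in dataloader:
-        loss = compute_loss(unet, batch)  # placeholder loss computation
-        loss.backward()
-        optimizer.step()
-        optimizer.zero_grad()
-        # Update the shadow (EMA) weights after every optimizer step.
-        ema_unet.step(unet.parameters())
-    # For evaluation or export, copy the EMA weights into the live model.
-    ema_unet.copy_to(unet.parameters())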
diff --git a/spaces/AndyCer/TheBloke-stable-vicuna-13B-HF/app.py b/spaces/AndyCer/TheBloke-stable-vicuna-13B-HF/app.py
deleted file mode 100644
index 7239a6282fcebda631c725b867b40980d06b0a1d..0000000000000000000000000000000000000000
--- a/spaces/AndyCer/TheBloke-stable-vicuna-13B-HF/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/TheBloke/stable-vicuna-13B-HF").launch()
\ No newline at end of file
diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/readme/README_en.md b/spaces/Anthony7906/MengHuiMXD_GPT/readme/README_en.md
deleted file mode 100644
index a906ecb3ebc411f5cdeb33d661266a489a20c3b0..0000000000000000000000000000000000000000
--- a/spaces/Anthony7906/MengHuiMXD_GPT/readme/README_en.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-# 川虎 Chat 🐯 Chuanhu Chat
-
-Lightweight and User-friendly Web-UI for LLMs including ChatGPT/ChatGLM/LLaMA
-
-Streaming / Unlimited conversations / Save history / Preset prompts / Chat with files / Web search
-LaTeX rendering / Table rendering / Code highlighting
-Auto dark mode / Adaptive web interface / WeChat-like theme
-Multi-parameters tuning / Multi-API-Key support / Multi-user support
-Compatible with GPT-4 / Local deployment for LLMs
-
-Video Tutorial · 2.0 Introduction · 3.0 Introduction & Tutorial || Online trial · One-Click deployment
-
-## Usage Tips
-
-- To better control ChatGPT, use a System Prompt.
-- To use a Prompt Template, select the Prompt Template Collection file first, and then choose a prompt from the drop-down menu.
-- To try again if the response is unsatisfactory, use the `🔄 Regenerate` button.
-- To start a new line in the input box, press Shift + Enter.
-- To quickly switch between entries in the input history, press the ↑ and ↓ keys in the input box.
-- To deploy the program onto a server, set `server_name` and `server_port` in the last line of the program, e.g. `demo.launch(server_name="0.0.0.0", server_port=<your port>)` (see the sketch after this list).
-- To get a public shared link, change the last line of the program to `demo.launch(share=True)`. Please note that the program must be running in order to be accessible via a public link.
-- To use it in Hugging Face Spaces: It is recommended to **Duplicate Space** and run the program in your own Space for a faster and more secure experience.
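-
-A minimal sketch of the last line of the program combining both deployment tips above (the port number here is only an example):
-
-```python
-demo.launch(server_name="0.0.0.0", server_port=7860, share=True)
-```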
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Then make a copy of `config_example.json`, rename it to `config.json`, and fill in your API key and other settings in the file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open and you will be able to chat with ChatGPT.
-
-> **Note**
->
-> Please check our [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) for detailed instructions.
-
-## Troubleshooting
-
-When you encounter problems, you should try manually pulling the latest changes of this project first. The steps are as follows:
-
-1. Download the latest code archive by clicking on `Download ZIP` on the webpage, or
- ```shell
- git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
- ```
-2. Try installing the dependencies again (as this project may have introduced new dependencies)
- ```
- pip install -r requirements.txt
- ```
-3. Update Gradio
- ```
- pip install gradio --upgrade --force-reinstall
- ```
-
-Generally, you can solve most problems by following these steps.
-
-If the problem still exists, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-This page lists almost all the possible problems and solutions. Please read it carefully.
-
-## More Information
-
-More information could be found in our [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-
-
-## Sponsor
-
-🐯 If you find this project helpful, feel free to buy me a coke or a cup of coffee~
-
-
-
-
diff --git a/spaces/AriaMei/TTSdemo/losses.py b/spaces/AriaMei/TTSdemo/losses.py
deleted file mode 100644
index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000
--- a/spaces/AriaMei/TTSdemo/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
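-    # The expression below is a masked, single-sample (Monte Carlo) estimate of the KL
-    # divergence between the posterior Gaussian (represented by its sample z_p and its
-    # log-std logs_q) and the prior N(m_p, exp(logs_p)): each element contributes
-    # logs_p - logs_q - 0.5 + 0.5 * (z_p - m_p)^2 * exp(-2 * logs_p), and the sum over
-    # frames selected by z_mask is normalized by the number of valid elements.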
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/backbone/utils.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/backbone/utils.py
deleted file mode 100644
index e71db21f1223c87cceeb422a70888f7bac42bb18..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/backbone/utils.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# This code is from https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/utils.py
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-__all__ = [
- "window_partition",
- "window_unpartition",
- "add_decomposed_rel_pos",
- "get_abs_pos",
- "PatchEmbed",
-]
-
-def window_partition(x, window_size):
- """
- Partition into non-overlapping windows with padding if needed.
- Args:
- x (tensor): input tokens with [B, H, W, C].
- window_size (int): window size.
-
- Returns:
- windows: windows after partition with [B * num_windows, window_size, window_size, C].
- (Hp, Wp): padded height and width before partition
- """
- B, H, W, C = x.shape
-
- pad_h = (window_size - H % window_size) % window_size
- pad_w = (window_size - W % window_size) % window_size
- if pad_h > 0 or pad_w > 0:
- x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h))
- Hp, Wp = H + pad_h, W + pad_w
-
- x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows, (Hp, Wp)
-
-
-def window_unpartition(windows, window_size, pad_hw, hw):
- """
- Window unpartition into original sequences and removing padding.
- Args:
- x (tensor): input tokens with [B * num_windows, window_size, window_size, C].
- window_size (int): window size.
- pad_hw (Tuple): padded height and width (Hp, Wp).
- hw (Tuple): original height and width (H, W) before padding.
-
- Returns:
- x: unpartitioned sequences with [B, H, W, C].
- """
- Hp, Wp = pad_hw
- H, W = hw
- B = windows.shape[0] // (Hp * Wp // window_size // window_size)
- x = windows.view(B, Hp // window_size, Wp // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1)
-
- if Hp > H or Wp > W:
- x = x[:, :H, :W, :].contiguous()
- return x
-
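-# Round-trip sketch for the two helpers above (shapes follow their docstrings; the sizes
-# used here are arbitrary examples):
-#
-#   x = torch.randn(2, 50, 70, 96)                       # [B, H, W, C]
-#   windows, (Hp, Wp) = window_partition(x, window_size=14)
-#   y = window_unpartition(windows, 14, (Hp, Wp), (50, 70))
-#   assert torch.equal(x, y)                             # padding is added and then stripped again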
-
-def get_rel_pos(q_size, k_size, rel_pos):
- """
- Get relative positional embeddings according to the relative positions of
- query and key sizes.
- Args:
- q_size (int): size of query q.
- k_size (int): size of key k.
- rel_pos (Tensor): relative position embeddings (L, C).
-
- Returns:
- Extracted positional embeddings according to relative positions.
- """
- max_rel_dist = int(2 * max(q_size, k_size) - 1)
- # Interpolate rel pos if needed.
- if rel_pos.shape[0] != max_rel_dist:
- # Interpolate rel pos.
- rel_pos_resized = F.interpolate(
- rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1),
- size=max_rel_dist,
- mode="linear",
- )
- rel_pos_resized = rel_pos_resized.reshape(-1, max_rel_dist).permute(1, 0)
- else:
- rel_pos_resized = rel_pos
-
- # Scale the coords with short length if shapes for q and k are different.
- q_coords = torch.arange(q_size)[:, None] * max(k_size / q_size, 1.0)
- k_coords = torch.arange(k_size)[None, :] * max(q_size / k_size, 1.0)
- relative_coords = (q_coords - k_coords) + (k_size - 1) * max(q_size / k_size, 1.0)
-
- return rel_pos_resized[relative_coords.long()]
-
-
-def add_decomposed_rel_pos(attn, q, rel_pos_h, rel_pos_w, q_size, k_size):
- """
- Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`.
- https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py # noqa B950
- Args:
- attn (Tensor): attention map.
- q (Tensor): query q in the attention layer with shape (B, q_h * q_w, C).
- rel_pos_h (Tensor): relative position embeddings (Lh, C) for height axis.
- rel_pos_w (Tensor): relative position embeddings (Lw, C) for width axis.
- q_size (Tuple): spatial sequence size of query q with (q_h, q_w).
- k_size (Tuple): spatial sequence size of key k with (k_h, k_w).
-
- Returns:
- attn (Tensor): attention map with added relative positional embeddings.
- """
- q_h, q_w = q_size
- k_h, k_w = k_size
- Rh = get_rel_pos(q_h, k_h, rel_pos_h)
- Rw = get_rel_pos(q_w, k_w, rel_pos_w)
-
- B, _, dim = q.shape
- r_q = q.reshape(B, q_h, q_w, dim)
- rel_h = torch.einsum("bhwc,hkc->bhwk", r_q, Rh)
- rel_w = torch.einsum("bhwc,wkc->bhwk", r_q, Rw)
-
- attn = (
- attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
- ).view(B, q_h * q_w, k_h * k_w)
-
- return attn
-
-
-def get_abs_pos(abs_pos, has_cls_token, hw):
- """
- Calculate absolute positional embeddings. If needed, resize embeddings and remove cls_token
- dimension for the original embeddings.
- Args:
- abs_pos (Tensor): absolute positional embeddings with (1, num_position, C).
- has_cls_token (bool): If true, has 1 embedding in abs_pos for cls token.
- hw (Tuple): size of input image tokens.
-
- Returns:
- Absolute positional embeddings after processing with shape (1, H, W, C)
- """
- h, w = hw
- if has_cls_token:
- abs_pos = abs_pos[:, 1:]
- xy_num = abs_pos.shape[1]
- size = int(math.sqrt(xy_num))
- assert size * size == xy_num
-
- if size != h or size != w:
- new_abs_pos = F.interpolate(
- abs_pos.reshape(1, size, size, -1).permute(0, 3, 1, 2),
- size=(h, w),
- mode="bicubic",
- align_corners=False,
- )
-
- return new_abs_pos.permute(0, 2, 3, 1)
- else:
- return abs_pos.reshape(1, h, w, -1)
-
-
-class PatchEmbed(nn.Module):
- """
- Image to Patch Embedding.
- """
-
- def __init__(
- self, kernel_size=(16, 16), stride=(16, 16), padding=(0, 0), in_chans=3, embed_dim=768
- ):
- """
- Args:
- kernel_size (Tuple): kernel size of the projection layer.
- stride (Tuple): stride of the projection layer.
- padding (Tuple): padding size of the projection layer.
- in_chans (int): Number of input image channels.
-            embed_dim (int): Patch embedding dimension.
- """
- super().__init__()
-
- self.proj = nn.Conv2d(
- in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding
- )
-
- def forward(self, x):
- x = self.proj(x)
- # B C H W -> B H W C
- x = x.permute(0, 2, 3, 1)
- return x
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-Detection/fcos_R_50_FPN_1x.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-Detection/fcos_R_50_FPN_1x.py
deleted file mode 100644
index 86f83c68786f5995c462ade5f3067072d69f047e..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-Detection/fcos_R_50_FPN_1x.py
+++ /dev/null
@@ -1,11 +0,0 @@
-from ..common.optim import SGD as optimizer
-from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
-from ..common.data.coco import dataloader
-from ..common.models.fcos import model
-from ..common.train import train
-
-dataloader.train.mapper.use_instance_mask = False
-optimizer.lr = 0.01
-
-model.backbone.bottom_up.freeze_at = 2
-train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/build.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/build.py
deleted file mode 100644
index a31369d1693f86154a7a9249fc043d49f3e9f390..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/build.py
+++ /dev/null
@@ -1,542 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import logging
-import numpy as np
-import operator
-import pickle
-from typing import Any, Callable, Dict, List, Optional, Union
-import torch
-import torch.utils.data as torchdata
-from tabulate import tabulate
-from termcolor import colored
-
-from detectron2.config import configurable
-from detectron2.structures import BoxMode
-from detectron2.utils.comm import get_world_size
-from detectron2.utils.env import seed_all_rng
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import _log_api_usage, log_first_n
-
-from .catalog import DatasetCatalog, MetadataCatalog
-from .common import AspectRatioGroupedDataset, DatasetFromList, MapDataset, ToIterableDataset
-from .dataset_mapper import DatasetMapper
-from .detection_utils import check_metadata_consistency
-from .samplers import (
- InferenceSampler,
- RandomSubsetTrainingSampler,
- RepeatFactorTrainingSampler,
- TrainingSampler,
-)
-
-"""
-This file contains the default logic to build a dataloader for training or testing.
-"""
-
-__all__ = [
- "build_batch_data_loader",
- "build_detection_train_loader",
- "build_detection_test_loader",
- "get_detection_dataset_dicts",
- "load_proposals_into_dataset",
- "print_instances_class_histogram",
-]
-
-
-def filter_images_with_only_crowd_annotations(dataset_dicts):
- """
- Filter out images with none annotations or only crowd annotations
- (i.e., images without non-crowd annotations).
- A common training-time preprocessing on COCO dataset.
-
- Args:
- dataset_dicts (list[dict]): annotations in Detectron2 Dataset format.
-
- Returns:
- list[dict]: the same format, but filtered.
- """
- num_before = len(dataset_dicts)
-
- def valid(anns):
- for ann in anns:
- if ann.get("iscrowd", 0) == 0:
- return True
- return False
-
- dataset_dicts = [x for x in dataset_dicts if valid(x["annotations"])]
- num_after = len(dataset_dicts)
- logger = logging.getLogger(__name__)
- logger.info(
- "Removed {} images with no usable annotations. {} images left.".format(
- num_before - num_after, num_after
- )
- )
- return dataset_dicts
-
-
-def filter_images_with_few_keypoints(dataset_dicts, min_keypoints_per_image):
- """
- Filter out images with too few number of keypoints.
-
- Args:
- dataset_dicts (list[dict]): annotations in Detectron2 Dataset format.
-
- Returns:
- list[dict]: the same format as dataset_dicts, but filtered.
- """
- num_before = len(dataset_dicts)
-
- def visible_keypoints_in_image(dic):
- # Each keypoints field has the format [x1, y1, v1, ...], where v is visibility
- annotations = dic["annotations"]
- return sum(
- (np.array(ann["keypoints"][2::3]) > 0).sum()
- for ann in annotations
- if "keypoints" in ann
- )
-
- dataset_dicts = [
- x for x in dataset_dicts if visible_keypoints_in_image(x) >= min_keypoints_per_image
- ]
- num_after = len(dataset_dicts)
- logger = logging.getLogger(__name__)
- logger.info(
- "Removed {} images with fewer than {} keypoints.".format(
- num_before - num_after, min_keypoints_per_image
- )
- )
- return dataset_dicts
-
-
-def load_proposals_into_dataset(dataset_dicts, proposal_file):
- """
- Load precomputed object proposals into the dataset.
-
- The proposal file should be a pickled dict with the following keys:
-
- - "ids": list[int] or list[str], the image ids
- - "boxes": list[np.ndarray], each is an Nx4 array of boxes corresponding to the image id
- - "objectness_logits": list[np.ndarray], each is an N sized array of objectness scores
- corresponding to the boxes.
- - "bbox_mode": the BoxMode of the boxes array. Defaults to ``BoxMode.XYXY_ABS``.
-
- Args:
- dataset_dicts (list[dict]): annotations in Detectron2 Dataset format.
- proposal_file (str): file path of pre-computed proposals, in pkl format.
-
- Returns:
- list[dict]: the same format as dataset_dicts, but added proposal field.
- """
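-    # Illustrative example of the expected pickled dict (the values here are made up):
-    #   {
-    #       "ids": [1, 2],
-    #       "boxes": [np.zeros((1000, 4)), np.zeros((800, 4))],     # one Nx4 array per image
-    #       "objectness_logits": [np.zeros(1000), np.zeros(800)],   # one N-sized array per image
-    #       "bbox_mode": BoxMode.XYXY_ABS,                          # optional
-    #   }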
- logger = logging.getLogger(__name__)
- logger.info("Loading proposals from: {}".format(proposal_file))
-
- with PathManager.open(proposal_file, "rb") as f:
- proposals = pickle.load(f, encoding="latin1")
-
- # Rename the key names in D1 proposal files
- rename_keys = {"indexes": "ids", "scores": "objectness_logits"}
- for key in rename_keys:
- if key in proposals:
- proposals[rename_keys[key]] = proposals.pop(key)
-
- # Fetch the indexes of all proposals that are in the dataset
- # Convert image_id to str since they could be int.
- img_ids = set({str(record["image_id"]) for record in dataset_dicts})
- id_to_index = {str(id): i for i, id in enumerate(proposals["ids"]) if str(id) in img_ids}
-
- # Assuming default bbox_mode of precomputed proposals are 'XYXY_ABS'
- bbox_mode = BoxMode(proposals["bbox_mode"]) if "bbox_mode" in proposals else BoxMode.XYXY_ABS
-
- for record in dataset_dicts:
- # Get the index of the proposal
- i = id_to_index[str(record["image_id"])]
-
- boxes = proposals["boxes"][i]
- objectness_logits = proposals["objectness_logits"][i]
- # Sort the proposals in descending order of the scores
- inds = objectness_logits.argsort()[::-1]
- record["proposal_boxes"] = boxes[inds]
- record["proposal_objectness_logits"] = objectness_logits[inds]
- record["proposal_bbox_mode"] = bbox_mode
-
- return dataset_dicts
-
-
-def print_instances_class_histogram(dataset_dicts, class_names):
- """
- Args:
- dataset_dicts (list[dict]): list of dataset dicts.
- class_names (list[str]): list of class names (zero-indexed).
- """
- num_classes = len(class_names)
- hist_bins = np.arange(num_classes + 1)
-    histogram = np.zeros((num_classes,), dtype=int)
- for entry in dataset_dicts:
- annos = entry["annotations"]
- classes = np.asarray(
-            [x["category_id"] for x in annos if not x.get("iscrowd", 0)], dtype=int
- )
- if len(classes):
- assert classes.min() >= 0, f"Got an invalid category_id={classes.min()}"
- assert (
- classes.max() < num_classes
- ), f"Got an invalid category_id={classes.max()} for a dataset of {num_classes} classes"
- histogram += np.histogram(classes, bins=hist_bins)[0]
-
- N_COLS = min(6, len(class_names) * 2)
-
- def short_name(x):
- # make long class names shorter. useful for lvis
- if len(x) > 13:
- return x[:11] + ".."
- return x
-
- data = list(
- itertools.chain(*[[short_name(class_names[i]), int(v)] for i, v in enumerate(histogram)])
- )
- total_num_instances = sum(data[1::2])
- data.extend([None] * (N_COLS - (len(data) % N_COLS)))
- if num_classes > 1:
- data.extend(["total", total_num_instances])
- data = itertools.zip_longest(*[data[i::N_COLS] for i in range(N_COLS)])
- table = tabulate(
- data,
- headers=["category", "#instances"] * (N_COLS // 2),
- tablefmt="pipe",
- numalign="left",
- stralign="center",
- )
- log_first_n(
- logging.INFO,
- "Distribution of instances among all {} categories:\n".format(num_classes)
- + colored(table, "cyan"),
- key="message",
- )
-
-
-def get_detection_dataset_dicts(
- names,
- filter_empty=True,
- min_keypoints=0,
- proposal_files=None,
- check_consistency=True,
-):
- """
- Load and prepare dataset dicts for instance detection/segmentation and semantic segmentation.
-
- Args:
- names (str or list[str]): a dataset name or a list of dataset names
- filter_empty (bool): whether to filter out images without instance annotations
- min_keypoints (int): filter out images with fewer keypoints than
- `min_keypoints`. Set to 0 to do nothing.
- proposal_files (list[str]): if given, a list of object proposal files
- that match each dataset in `names`.
- check_consistency (bool): whether to check if datasets have consistent metadata.
-
- Returns:
- list[dict]: a list of dicts following the standard dataset dict format.
- """
- if isinstance(names, str):
- names = [names]
- assert len(names), names
- dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in names]
- for dataset_name, dicts in zip(names, dataset_dicts):
- assert len(dicts), "Dataset '{}' is empty!".format(dataset_name)
-
- if proposal_files is not None:
- assert len(names) == len(proposal_files)
- # load precomputed proposals from proposal files
- dataset_dicts = [
- load_proposals_into_dataset(dataset_i_dicts, proposal_file)
- for dataset_i_dicts, proposal_file in zip(dataset_dicts, proposal_files)
- ]
-
- if isinstance(dataset_dicts[0], torchdata.Dataset):
- return torchdata.ConcatDataset(dataset_dicts)
-
- dataset_dicts = list(itertools.chain.from_iterable(dataset_dicts))
-
- has_instances = "annotations" in dataset_dicts[0]
- if filter_empty and has_instances:
- dataset_dicts = filter_images_with_only_crowd_annotations(dataset_dicts)
- if min_keypoints > 0 and has_instances:
- dataset_dicts = filter_images_with_few_keypoints(dataset_dicts, min_keypoints)
-
- if check_consistency and has_instances:
- try:
- class_names = MetadataCatalog.get(names[0]).thing_classes
- check_metadata_consistency("thing_classes", names)
- print_instances_class_histogram(dataset_dicts, class_names)
- except AttributeError: # class names are not available for this dataset
- pass
-
- assert len(dataset_dicts), "No valid data found in {}.".format(",".join(names))
- return dataset_dicts
-
-
-def build_batch_data_loader(
- dataset,
- sampler,
- total_batch_size,
- *,
- aspect_ratio_grouping=False,
- num_workers=0,
- collate_fn=None,
-):
- """
- Build a batched dataloader. The main differences from `torch.utils.data.DataLoader` are:
- 1. support aspect ratio grouping options
- 2. use no "batch collation", because this is common for detection training
-
- Args:
- dataset (torch.utils.data.Dataset): a pytorch map-style or iterable dataset.
- sampler (torch.utils.data.sampler.Sampler or None): a sampler that produces indices.
- Must be provided iff. ``dataset`` is a map-style dataset.
- total_batch_size, aspect_ratio_grouping, num_workers, collate_fn: see
- :func:`build_detection_train_loader`.
-
- Returns:
- iterable[list]. Length of each list is the batch size of the current
- GPU. Each element in the list comes from the dataset.
- """
- world_size = get_world_size()
- assert (
- total_batch_size > 0 and total_batch_size % world_size == 0
- ), "Total batch size ({}) must be divisible by the number of gpus ({}).".format(
- total_batch_size, world_size
- )
- batch_size = total_batch_size // world_size
-
- if isinstance(dataset, torchdata.IterableDataset):
- assert sampler is None, "sampler must be None if dataset is IterableDataset"
- else:
- dataset = ToIterableDataset(dataset, sampler)
-
- if aspect_ratio_grouping:
- data_loader = torchdata.DataLoader(
- dataset,
- num_workers=num_workers,
- collate_fn=operator.itemgetter(0), # don't batch, but yield individual elements
- worker_init_fn=worker_init_reset_seed,
- ) # yield individual mapped dict
- data_loader = AspectRatioGroupedDataset(data_loader, batch_size)
- if collate_fn is None:
- return data_loader
- return MapDataset(data_loader, collate_fn)
- else:
- return torchdata.DataLoader(
- dataset,
- batch_size=batch_size,
- drop_last=True,
- num_workers=num_workers,
- collate_fn=trivial_batch_collator if collate_fn is None else collate_fn,
- worker_init_fn=worker_init_reset_seed,
- )
-
-
-def _train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler=None):
- if dataset is None:
- dataset = get_detection_dataset_dicts(
- cfg.DATASETS.TRAIN,
- filter_empty=cfg.DATALOADER.FILTER_EMPTY_ANNOTATIONS,
- min_keypoints=cfg.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE
- if cfg.MODEL.KEYPOINT_ON
- else 0,
- proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
- )
- _log_api_usage("dataset." + cfg.DATASETS.TRAIN[0])
-
- if mapper is None:
- mapper = DatasetMapper(cfg, True)
-
- if sampler is None:
- sampler_name = cfg.DATALOADER.SAMPLER_TRAIN
- logger = logging.getLogger(__name__)
- logger.info("Using training sampler {}".format(sampler_name))
- if sampler_name == "TrainingSampler":
- sampler = TrainingSampler(len(dataset))
- elif sampler_name == "RepeatFactorTrainingSampler":
- repeat_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency(
- dataset, cfg.DATALOADER.REPEAT_THRESHOLD
- )
- sampler = RepeatFactorTrainingSampler(repeat_factors)
- elif sampler_name == "RandomSubsetTrainingSampler":
- sampler = RandomSubsetTrainingSampler(len(dataset), cfg.DATALOADER.RANDOM_SUBSET_RATIO)
- else:
- raise ValueError("Unknown training sampler: {}".format(sampler_name))
-
- return {
- "dataset": dataset,
- "sampler": sampler,
- "mapper": mapper,
- "total_batch_size": cfg.SOLVER.IMS_PER_BATCH,
- "aspect_ratio_grouping": cfg.DATALOADER.ASPECT_RATIO_GROUPING,
- "num_workers": cfg.DATALOADER.NUM_WORKERS,
- }
-
-
-@configurable(from_config=_train_loader_from_config)
-def build_detection_train_loader(
- dataset,
- *,
- mapper,
- sampler=None,
- total_batch_size,
- aspect_ratio_grouping=True,
- num_workers=0,
- collate_fn=None,
-):
- """
- Build a dataloader for object detection with some default features.
-
- Args:
- dataset (list or torch.utils.data.Dataset): a list of dataset dicts,
- or a pytorch dataset (either map-style or iterable). It can be obtained
- by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`.
- mapper (callable): a callable which takes a sample (dict) from dataset and
- returns the format to be consumed by the model.
- When using cfg, the default choice is ``DatasetMapper(cfg, is_train=True)``.
- sampler (torch.utils.data.sampler.Sampler or None): a sampler that produces
- indices to be applied on ``dataset``.
- If ``dataset`` is map-style, the default sampler is a :class:`TrainingSampler`,
- which coordinates an infinite random shuffle sequence across all workers.
- Sampler must be None if ``dataset`` is iterable.
- total_batch_size (int): total batch size across all workers.
- aspect_ratio_grouping (bool): whether to group images with similar
- aspect ratio for efficiency. When enabled, it requires each
- element in dataset be a dict with keys "width" and "height".
- num_workers (int): number of parallel data loading workers
- collate_fn: a function that determines how to do batching, same as the argument of
- `torch.utils.data.DataLoader`. Defaults to do no collation and return a list of
- data. No collation is OK for small batch size and simple data structures.
- If your batch size is large and each sample contains too many small tensors,
- it's more efficient to collate them in data loader.
-
- Returns:
- torch.utils.data.DataLoader:
- a dataloader. Each output from it is a ``list[mapped_element]`` of length
- ``total_batch_size / num_workers``, where ``mapped_element`` is produced
- by the ``mapper``.
- """
- if isinstance(dataset, list):
- dataset = DatasetFromList(dataset, copy=False)
- if mapper is not None:
- dataset = MapDataset(dataset, mapper)
-
- if isinstance(dataset, torchdata.IterableDataset):
- assert sampler is None, "sampler must be None if dataset is IterableDataset"
- else:
- if sampler is None:
- sampler = TrainingSampler(len(dataset))
- assert isinstance(sampler, torchdata.Sampler), f"Expect a Sampler but got {type(sampler)}"
- return build_batch_data_loader(
- dataset,
- sampler,
- total_batch_size,
- aspect_ratio_grouping=aspect_ratio_grouping,
- num_workers=num_workers,
- collate_fn=collate_fn,
- )
-
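-# Usage sketch (the dataset name "my_train_set" and the batch size are placeholders; a
-# CfgNode-based call is also supported through the @configurable decorator, as described
-# in the docstring above):
-#
-#   dicts = get_detection_dataset_dicts("my_train_set")
-#   loader = build_detection_train_loader(
-#       dicts, mapper=DatasetMapper(cfg, is_train=True), total_batch_size=16
-#   )
-#   # or simply: loader = build_detection_train_loader(cfg)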
-
-def _test_loader_from_config(cfg, dataset_name, mapper=None):
- """
- Uses the given `dataset_name` argument (instead of the names in cfg), because the
- standard practice is to evaluate each test set individually (not combining them).
- """
- if isinstance(dataset_name, str):
- dataset_name = [dataset_name]
-
- dataset = get_detection_dataset_dicts(
- dataset_name,
- filter_empty=False,
- proposal_files=[
- cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(x)] for x in dataset_name
- ]
- if cfg.MODEL.LOAD_PROPOSALS
- else None,
- )
- if mapper is None:
- mapper = DatasetMapper(cfg, False)
- return {
- "dataset": dataset,
- "mapper": mapper,
- "num_workers": cfg.DATALOADER.NUM_WORKERS,
- "sampler": InferenceSampler(len(dataset)),
- }
-
-
-@configurable(from_config=_test_loader_from_config)
-def build_detection_test_loader(
- dataset: Union[List[Any], torchdata.Dataset],
- *,
- mapper: Callable[[Dict[str, Any]], Any],
- sampler: Optional[torchdata.Sampler] = None,
- batch_size: int = 1,
- num_workers: int = 0,
- collate_fn: Optional[Callable[[List[Any]], Any]] = None,
-) -> torchdata.DataLoader:
- """
- Similar to `build_detection_train_loader`, with default batch size = 1,
- and sampler = :class:`InferenceSampler`. This sampler coordinates all workers
- to produce the exact set of all samples.
-
- Args:
- dataset: a list of dataset dicts,
- or a pytorch dataset (either map-style or iterable). They can be obtained
- by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`.
- mapper: a callable which takes a sample (dict) from dataset
- and returns the format to be consumed by the model.
- When using cfg, the default choice is ``DatasetMapper(cfg, is_train=False)``.
- sampler: a sampler that produces
- indices to be applied on ``dataset``. Default to :class:`InferenceSampler`,
- which splits the dataset across all workers. Sampler must be None
- if `dataset` is iterable.
- batch_size: the batch size of the data loader to be created.
- Default to 1 image per worker since this is the standard when reporting
- inference time in papers.
- num_workers: number of parallel data loading workers
- collate_fn: same as the argument of `torch.utils.data.DataLoader`.
- Defaults to do no collation and return a list of data.
-
- Returns:
- DataLoader: a torch DataLoader, that loads the given detection
- dataset, with test-time transformation and batching.
-
- Examples:
- ::
- data_loader = build_detection_test_loader(
- DatasetRegistry.get("my_test"),
- mapper=DatasetMapper(...))
-
- # or, instantiate with a CfgNode:
- data_loader = build_detection_test_loader(cfg, "my_test")
- """
- if isinstance(dataset, list):
- dataset = DatasetFromList(dataset, copy=False)
- if mapper is not None:
- dataset = MapDataset(dataset, mapper)
- if isinstance(dataset, torchdata.IterableDataset):
- assert sampler is None, "sampler must be None if dataset is IterableDataset"
- else:
- if sampler is None:
- sampler = InferenceSampler(len(dataset))
- return torchdata.DataLoader(
- dataset,
- batch_size=batch_size,
- sampler=sampler,
- drop_last=False,
- num_workers=num_workers,
- collate_fn=trivial_batch_collator if collate_fn is None else collate_fn,
- )
-
-
-def trivial_batch_collator(batch):
- """
- A batch collator that does nothing.
- """
- return batch
-
-
-def worker_init_reset_seed(worker_id):
- initial_seed = torch.initial_seed() % 2 ** 31
- seed_all_rng(initial_seed + worker_id)
diff --git a/spaces/Benson/text-generation/Dockerfile b/spaces/Benson/text-generation/Dockerfile
deleted file mode 100644
index 14271315789e0c87a912ea3cc090345eb9d90595..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Dockerfile
+++ /dev/null
@@ -1,28 +0,0 @@
-# Use the official Python 3.9 image
-FROM python:3.9
-
-# Set the working directory to /code
-WORKDIR /code
-
-# Copy the current directory contents into the container at /code
-COPY ./requirements.txt /code/requirements.txt
-
-# Install the Python dependencies listed in requirements.txt
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -m -u 1000 user
-# Switch to the "user" user
-USER user
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-# Start the FastAPI app on port 7860, the default port expected by Spaces
-CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
diff --git a/spaces/Benson/text-generation/Examples/Baloncesto Estrellas Multijugador Mod Apk Dinero Ilimitado Y Oro.md b/spaces/Benson/text-generation/Examples/Baloncesto Estrellas Multijugador Mod Apk Dinero Ilimitado Y Oro.md
deleted file mode 100644
index 52d8eadfe0e604bdbb7455303dcc5b2989be8d82..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Baloncesto Estrellas Multijugador Mod Apk Dinero Ilimitado Y Oro.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-Basketball Stars Multiplayer Mod APK: Unlimited Money and Gold
-Do you enjoy playing basketball on your mobile device? Do you want to experience the thrill of 1v1 online matches against real players from around the world? Do you want unlimited resources to customize your character and unlock new basketballs? If you answered yes to any of these questions, you should try the Basketball Stars Multiplayer mod APK. This is a modified version of Miniclip's popular basketball game that gives you unlimited money and gold, as well as other great features that enhance your gaming experience. In this article, we will explain what Basketball Stars is, what a mod APK is, why you should use the Basketball Stars Multiplayer mod APK, what features it offers, and how to download and install it on your device.
-Introduction
-What is Basketball Stars?
-Basketball Stars is a free basketball game by Miniclip that lets you play 1v1 online matches against real players from around the world. You can choose between different game modes, such as Attacker-Defender, Shooting Race, or Dunk Contest. You can also customize your character with different outfits, accessories, hairstyles, tattoos, and more. You can also collect and upgrade different basketballs with unique effects and abilities. Basketball Stars is a fun and addictive game that will test your skills and reflexes on the court.
-Basketball Stars Multiplayer mod APK unlimited money and gold Download ☑ https://bltlly.com/2v6IWG
-What is a mod APK?
-
-Why use the Basketball Stars Multiplayer mod APK?
-The Basketball Stars Multiplayer mod APK is one of the best mod APKs for Basketball Stars that you can find online. It gives you unlimited money and gold that you can use to buy whatever you want in the game. You can also enjoy premium gameplay and controls that make you feel like a pro on the court. You can experience realistic 3D graphics that immerse you in the game world. You can choose between two different online multiplayer game modes that challenge you in different ways. You can easily pick up the game and play anytime, anywhere. You can also customize your character with hundreds of options and unlock new basketballs with special effects. The Basketball Stars Multiplayer mod APK is a must-have for any basketball fan who wants more fun and excitement in the game.
-Features of the Basketball Stars Multiplayer mod APK
-Unlimited money and gold
-One of the main features of the Basketball Stars Multiplayer mod APK is that it gives you unlimited money and gold that you can use to buy anything you want in the game. Money and gold are the main currencies in Basketball Stars that you need to unlock new items, upgrade your basketballs, and enter the game
-Premium gameplay and controls
-Another feature of the Basketball Stars Multiplayer mod APK is that it gives you premium gameplay and controls that make you feel like a pro on the court. You can enjoy smooth, responsive controls that let you dribble, shoot, block, steal, and dunk with ease. You can also use different moves and tricks to outplay your opponent and score more points. You can adjust the sensitivity and camera angle to suit your preferences. The Basketball Stars Multiplayer mod APK will give you the best possible gaming experience.
-Realistic 3D graphics
-
-Two different online multiplayer game modes
-The Basketball Stars Multiplayer mod APK also offers two different online multiplayer game modes that challenge you in different ways. You can choose between Attacker-Defender and Shooting Race. In Attacker-Defender, you have to score as many points as you can while defending your basket from your opponent. In Shooting Race, you have to score as many baskets as you can before time runs out. Both game modes are fast-paced and competitive, and they require skill and strategy to win. You can also play with your friends or with random players from around the world. The Basketball Stars Multiplayer mod APK will test your basketball skills and reflexes.
-Easy to pick up, hard to master
-The Basketball Stars Multiplayer mod APK is also easy to pick up but hard to master. You can learn the basics of the game in a few minutes, but you will need hours of practice and dedication to become a basketball star. You can also improve your skills by playing against different opponents with different styles and abilities. You can earn rewards and achievements by completing various challenges and missions. The Basketball Stars Multiplayer mod APK is a game that will keep you hooked for a long time.
-Extensive customization options
-The Basketball Stars Multiplayer mod APK also gives you extensive customization options that let you create your own unique character. You can choose from hundreds of outfits, accessories, hairstyles, tattoos, and more. You can mix and match different items to create your own style and personality. You can also change your character's appearance whenever you want. The Basketball Stars Multiplayer mod APK lets you express yourself on the court.
-Unlockable basketball collection
-
-How to download and install the Basketball Stars Multiplayer mod APK
-Step 1: Download the mod APK file from a trusted source
-The first step to downloading and installing the Basketball Stars Multiplayer mod APK is to find a trusted source that offers the latest version of the mod APK file. You can search online for various websites that provide mod APK files for different games, but beware of fake or malicious links that can damage your device or steal your data. You can also use this link to download the mod APK file for Basketball Stars directly.
-Step 2: Enable unknown sources in your device settings
-The second step is to enable unknown sources in your device settings. This allows you to install apps from sources other than the Google Play Store or App Store. To do this, go to your device settings, then Security or Privacy, and toggle on Unknown Sources or Allow From This Source. This may vary depending on your device model and operating system.
-Step 3: Install the mod APK file and launch the game
-The third step is to install the mod APK file and launch the game. To do this, locate the downloaded mod APK file in your device storage, then tap on it to start the installation process. Follow the on-screen instructions until the installation is complete. Then launch the game from the app drawer or home screen. Enjoy playing Basketball Stars with unlimited money and gold.
-Conclusion
-
-Frequently asked questions
-Here are some frequently asked questions about the Basketball Stars Multiplayer mod APK:
-
-
-Q: Is the Basketball Stars Multiplayer mod APK free?
-A: Yes, the Basketball Stars Multiplayer mod APK is free to download and use. You don't need to pay anything to enjoy the game with unlimited money and gold.
-Q: Is the Basketball Stars Multiplayer mod APK safe?
-A: Yes, the Basketball Stars Multiplayer mod APK is safe to use. It contains no viruses, malware, spyware, or other harmful components that could damage your device or compromise your privacy. However, you should always download it from a trusted source and scan it with antivirus software before installing it.
-Q: Is the Basketball Stars Multiplayer mod APK compatible with my device?
-A: The Basketball Stars Multiplayer mod APK is compatible with most Android devices running Android 4.1 or higher. However, some devices may not be compatible with the game or the mod APK due to hardware or software limitations.
-Q: Will I get banned for using the Basketball Stars Multiplayer mod APK?
-A: No, you will not get banned for using the Basketball Stars Multiplayer mod APK. The mod APK does not interfere with the game servers or the online matchmaking system. You can play the game normally without any risk of being banned.
-Q: How do I update the Basketball Stars Multiplayer mod APK?
-A: To update the Basketball Stars Multiplayer mod APK, you need to download the latest version of the mod APK file from the same source you downloaded it from before. Then uninstall the previous version of the mod APK and install the new one. You don't need to worry about losing your progress or data, as they are stored on your device and not in the mod APK file.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/method.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/method.py
deleted file mode 100644
index b7520088c2ad0ac9767245c77338d317668c44ba..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/method.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from botocore.docs.method import document_model_driven_method
-
-
-def document_model_driven_resource_method(
- section,
- method_name,
- operation_model,
- event_emitter,
- method_description=None,
- example_prefix=None,
- include_input=None,
- include_output=None,
- exclude_input=None,
- exclude_output=None,
- document_output=True,
- resource_action_model=None,
- include_signature=True,
-):
-
- document_model_driven_method(
- section=section,
- method_name=method_name,
- operation_model=operation_model,
- event_emitter=event_emitter,
- method_description=method_description,
- example_prefix=example_prefix,
- include_input=include_input,
- include_output=include_output,
- exclude_input=exclude_input,
- exclude_output=exclude_output,
- document_output=document_output,
- include_signature=include_signature,
- )
-
- # If this action returns a resource modify the return example to
- # appropriately reflect that.
- if resource_action_model.resource:
- if 'return' in section.available_sections:
- section.delete_section('return')
- resource_type = resource_action_model.resource.type
-
- new_return_section = section.add_new_section('return')
- return_resource_type = '{}.{}'.format(
- operation_model.service_model.service_name, resource_type
- )
-
- return_type = f':py:class:`{return_resource_type}`'
- return_description = f'{resource_type} resource'
-
- if _method_returns_resource_list(resource_action_model.resource):
- return_type = f'list({return_type})'
- return_description = f'A list of {resource_type} resources'
-
- new_return_section.style.new_line()
- new_return_section.write(f':rtype: {return_type}')
- new_return_section.style.new_line()
- new_return_section.write(f':returns: {return_description}')
- new_return_section.style.new_line()
-
-
-def _method_returns_resource_list(resource):
- for identifier in resource.identifiers:
- if identifier.path and '[]' in identifier.path:
- return True
-
- return False
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/fields.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/fields.py
deleted file mode 100644
index 9d630f491d9a39644ae65564dac88eb51f0bbe78..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/fields.py
+++ /dev/null
@@ -1,274 +0,0 @@
-from __future__ import absolute_import
-
-import email.utils
-import mimetypes
-import re
-
-from .packages import six
-
-
-def guess_content_type(filename, default="application/octet-stream"):
- """
- Guess the "Content-Type" of a file.
-
- :param filename:
- The filename to guess the "Content-Type" of using :mod:`mimetypes`.
- :param default:
- If no "Content-Type" can be guessed, default to `default`.
- """
- if filename:
- return mimetypes.guess_type(filename)[0] or default
- return default
-
-
-def format_header_param_rfc2231(name, value):
- """
- Helper function to format and quote a single header parameter using the
- strategy defined in RFC 2231.
-
- Particularly useful for header parameters which might contain
- non-ASCII values, like file names. This follows
- `RFC 2388 Section 4.4 `_.
-
- :param name:
- The name of the parameter, a string expected to be ASCII only.
- :param value:
- The value of the parameter, provided as ``bytes`` or ``str``.
- :ret:
- An RFC-2231-formatted unicode string.
- """
- if isinstance(value, six.binary_type):
- value = value.decode("utf-8")
-
- if not any(ch in value for ch in '"\\\r\n'):
- result = u'%s="%s"' % (name, value)
- try:
- result.encode("ascii")
- except (UnicodeEncodeError, UnicodeDecodeError):
- pass
- else:
- return result
-
- if six.PY2: # Python 2:
- value = value.encode("utf-8")
-
- # encode_rfc2231 accepts an encoded string and returns an ascii-encoded
- # string in Python 2 but accepts and returns unicode strings in Python 3
- value = email.utils.encode_rfc2231(value, "utf-8")
- value = "%s*=%s" % (name, value)
-
- if six.PY2: # Python 2:
- value = value.decode("utf-8")
-
- return value
-
-
-_HTML5_REPLACEMENTS = {
- u"\u0022": u"%22",
- # Replace "\" with "\\".
- u"\u005C": u"\u005C\u005C",
-}
-
-# All control characters from 0x00 to 0x1F *except* 0x1B.
-_HTML5_REPLACEMENTS.update(
- {
- six.unichr(cc): u"%{:02X}".format(cc)
- for cc in range(0x00, 0x1F + 1)
- if cc not in (0x1B,)
- }
-)
-
-
-def _replace_multiple(value, needles_and_replacements):
- def replacer(match):
- return needles_and_replacements[match.group(0)]
-
- pattern = re.compile(
- r"|".join([re.escape(needle) for needle in needles_and_replacements.keys()])
- )
-
- result = pattern.sub(replacer, value)
-
- return result
-
-
-def format_header_param_html5(name, value):
- """
- Helper function to format and quote a single header parameter using the
- HTML5 strategy.
-
- Particularly useful for header parameters which might contain
- non-ASCII values, like file names. This follows the `HTML5 Working Draft
- Section 4.10.22.7`_ and matches the behavior of curl and modern browsers.
-
- .. _HTML5 Working Draft Section 4.10.22.7:
- https://w3c.github.io/html/sec-forms.html#multipart-form-data
-
- :param name:
- The name of the parameter, a string expected to be ASCII only.
- :param value:
- The value of the parameter, provided as ``bytes`` or ``str``.
- :ret:
- A unicode string, stripped of troublesome characters.
- """
- if isinstance(value, six.binary_type):
- value = value.decode("utf-8")
-
- value = _replace_multiple(value, _HTML5_REPLACEMENTS)
-
- return u'%s="%s"' % (name, value)
-
-
-# For backwards-compatibility.
-format_header_param = format_header_param_html5
-
-
-class RequestField(object):
- """
- A data container for request body parameters.
-
- :param name:
- The name of this request field. Must be unicode.
- :param data:
- The data/value body.
- :param filename:
- An optional filename of the request field. Must be unicode.
- :param headers:
- An optional dict-like object of headers to initially use for the field.
- :param header_formatter:
- An optional callable that is used to encode and format the headers. By
- default, this is :func:`format_header_param_html5`.
- """
-
- def __init__(
- self,
- name,
- data,
- filename=None,
- headers=None,
- header_formatter=format_header_param_html5,
- ):
- self._name = name
- self._filename = filename
- self.data = data
- self.headers = {}
- if headers:
- self.headers = dict(headers)
- self.header_formatter = header_formatter
-
- @classmethod
- def from_tuples(cls, fieldname, value, header_formatter=format_header_param_html5):
- """
- A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters.
-
- Supports constructing :class:`~urllib3.fields.RequestField` from
- parameter of key/value strings AND key/filetuple. A filetuple is a
- (filename, data, MIME type) tuple where the MIME type is optional.
- For example::
-
- 'foo': 'bar',
- 'fakefile': ('foofile.txt', 'contents of foofile'),
- 'realfile': ('barfile.txt', open('realfile').read()),
- 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'),
- 'nonamefile': 'contents of nonamefile field',
-
- Field names and filenames must be unicode.
- """
- if isinstance(value, tuple):
- if len(value) == 3:
- filename, data, content_type = value
- else:
- filename, data = value
- content_type = guess_content_type(filename)
- else:
- filename = None
- content_type = None
- data = value
-
- request_param = cls(
- fieldname, data, filename=filename, header_formatter=header_formatter
- )
- request_param.make_multipart(content_type=content_type)
-
- return request_param
-
- def _render_part(self, name, value):
- """
- Overridable helper function to format a single header parameter. By
- default, this calls ``self.header_formatter``.
-
- :param name:
- The name of the parameter, a string expected to be ASCII only.
- :param value:
- The value of the parameter, provided as a unicode string.
- """
-
- return self.header_formatter(name, value)
-
- def _render_parts(self, header_parts):
- """
- Helper function to format and quote a single header.
-
- Useful for single headers that are composed of multiple items. E.g.,
- 'Content-Disposition' fields.
-
- :param header_parts:
- A sequence of (k, v) tuples or a :class:`dict` of (k, v) to format
- as `k1="v1"; k2="v2"; ...`.
- """
- parts = []
- iterable = header_parts
- if isinstance(header_parts, dict):
- iterable = header_parts.items()
-
- for name, value in iterable:
- if value is not None:
- parts.append(self._render_part(name, value))
-
- return u"; ".join(parts)
-
- def render_headers(self):
- """
- Renders the headers for this request field.
- """
- lines = []
-
- sort_keys = ["Content-Disposition", "Content-Type", "Content-Location"]
- for sort_key in sort_keys:
- if self.headers.get(sort_key, False):
- lines.append(u"%s: %s" % (sort_key, self.headers[sort_key]))
-
- for header_name, header_value in self.headers.items():
- if header_name not in sort_keys:
- if header_value:
- lines.append(u"%s: %s" % (header_name, header_value))
-
- lines.append(u"\r\n")
- return u"\r\n".join(lines)
-
- def make_multipart(
- self, content_disposition=None, content_type=None, content_location=None
- ):
- """
- Makes this request field into a multipart request field.
-
- This method sets the "Content-Disposition", "Content-Type" and
- "Content-Location" headers on the request field.
-
- :param content_type:
- The 'Content-Type' of the request body.
- :param content_location:
- The 'Content-Location' of the request body.
-
- """
- self.headers["Content-Disposition"] = content_disposition or u"form-data"
- self.headers["Content-Disposition"] += u"; ".join(
- [
- u"",
- self._render_parts(
- ((u"name", self._name), (u"filename", self._filename))
- ),
- ]
- )
- self.headers["Content-Type"] = content_type
- self.headers["Content-Location"] = content_location
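For orientation, here is a minimal sketch of how the RequestField API above is typically exercised. It assumes this urllib3 module is importable; the field name and file contents are made-up example values.

from urllib3.fields import RequestField, guess_content_type

# Build a field from an old-style (filename, data, content_type) tuple;
# from_tuples() also calls make_multipart() for us.
field = RequestField.from_tuples(
    "attachment", ("report.txt", "hello world", "text/plain")
)

# Content-Disposition and Content-Type are now populated and can be rendered.
print(field.render_headers())

# guess_content_type() falls back to application/octet-stream for unknown extensions.
print(guess_content_type("archive.unknownext"))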
diff --git a/spaces/CVPR/LIVE/thrust/thrust/limits.h b/spaces/CVPR/LIVE/thrust/thrust/limits.h
deleted file mode 100644
index f83dde9c370a73dff878a21a0ca919c8d0859f18..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/limits.h
+++ /dev/null
@@ -1,19 +0,0 @@
-// Copyright (c) 2018 NVIDIA Corporation
-// Author: Bryce Adelstein Lelbach
-//
-// Distributed under the Boost Software License v1.0 (boost.org/LICENSE_1_0.txt)
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#include <limits>
-
-namespace thrust
-{
-
-template <typename T>
-struct numeric_limits : std::numeric_limits<T> {};
-
-} // end namespace thrust
-
diff --git a/spaces/CVPR/WALT/mmdet/models/losses/utils.py b/spaces/CVPR/WALT/mmdet/models/losses/utils.py
deleted file mode 100644
index 4756d7fcefd7cda1294c2662b4ca3e90c0a8e124..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/losses/utils.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import functools
-
-import mmcv
-import torch.nn.functional as F
-
-
-def reduce_loss(loss, reduction):
- """Reduce loss as specified.
-
- Args:
- loss (Tensor): Elementwise loss tensor.
- reduction (str): Options are "none", "mean" and "sum".
-
- Returns:
- Tensor: Reduced loss tensor.
- """
- reduction_enum = F._Reduction.get_enum(reduction)
- # none: 0, elementwise_mean:1, sum: 2
- if reduction_enum == 0:
- return loss
- elif reduction_enum == 1:
- return loss.mean()
- elif reduction_enum == 2:
- return loss.sum()
-
-
-@mmcv.jit(derivate=True, coderize=True)
-def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
- """Apply element-wise weight and reduce loss.
-
- Args:
- loss (Tensor): Element-wise loss.
- weight (Tensor): Element-wise weights.
- reduction (str): Same as built-in losses of PyTorch.
- avg_factor (float): Average factor when computing the mean of losses.
-
- Returns:
- Tensor: Processed loss values.
- """
- # if weight is specified, apply element-wise weight
- if weight is not None:
- loss = loss * weight
-
- # if avg_factor is not specified, just reduce the loss
- if avg_factor is None:
- loss = reduce_loss(loss, reduction)
- else:
- # if reduction is mean, then average the loss by avg_factor
- if reduction == 'mean':
- loss = loss.sum() / avg_factor
- # if reduction is 'none', then do nothing, otherwise raise an error
- elif reduction != 'none':
- raise ValueError('avg_factor can not be used with reduction="sum"')
- return loss
-
-
-def weighted_loss(loss_func):
- """Create a weighted version of a given loss function.
-
- To use this decorator, the loss function must have the signature like
- `loss_func(pred, target, **kwargs)`. The function only needs to compute
- element-wise loss without any reduction. This decorator will add weight
- and reduction arguments to the function. The decorated function will have
- the signature like `loss_func(pred, target, weight=None, reduction='mean',
- avg_factor=None, **kwargs)`.
-
- :Example:
-
- >>> import torch
- >>> @weighted_loss
- >>> def l1_loss(pred, target):
- >>> return (pred - target).abs()
-
- >>> pred = torch.Tensor([0, 2, 3])
- >>> target = torch.Tensor([1, 1, 1])
- >>> weight = torch.Tensor([1, 0, 1])
-
- >>> l1_loss(pred, target)
- tensor(1.3333)
- >>> l1_loss(pred, target, weight)
- tensor(1.)
- >>> l1_loss(pred, target, reduction='none')
- tensor([1., 1., 2.])
- >>> l1_loss(pred, target, weight, avg_factor=2)
- tensor(1.5000)
- """
-
- @functools.wraps(loss_func)
- def wrapper(pred,
- target,
- weight=None,
- reduction='mean',
- avg_factor=None,
- **kwargs):
- # get element-wise loss
- loss = loss_func(pred, target, **kwargs)
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
- return loss
-
- return wrapper
diff --git a/spaces/CVPR/lama-example/fetch_data/places_standard_evaluation_prepare_data.sh b/spaces/CVPR/lama-example/fetch_data/places_standard_evaluation_prepare_data.sh
deleted file mode 100644
index 15d753cc2ccb17754ee3d23a45c73a8bbb3dbd06..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/fetch_data/places_standard_evaluation_prepare_data.sh
+++ /dev/null
@@ -1,52 +0,0 @@
-# 0. folder preparation
-mkdir -p places_standard_dataset/evaluation/hires/
-mkdir -p places_standard_dataset/evaluation/random_thick_512/
-mkdir -p places_standard_dataset/evaluation/random_thin_512/
-mkdir -p places_standard_dataset/evaluation/random_medium_512/
-mkdir -p places_standard_dataset/evaluation/random_thick_256/
-mkdir -p places_standard_dataset/evaluation/random_thin_256/
-mkdir -p places_standard_dataset/evaluation/random_medium_256/
-
-# 1. sample 2000 new images
-OUT=$(python3 fetch_data/eval_sampler.py)
-echo ${OUT}
-
-FILELIST=$(cat places_standard_dataset/original/eval_random_files.txt)
-for i in $FILELIST
-do
- $(cp ${i} places_standard_dataset/evaluation/hires/)
-done
-
-
-# 2. generate all kinds of masks
-
-# all 512
-python3 bin/gen_mask_dataset.py \
-$(pwd)/configs/data_gen/random_thick_512.yaml \
-places_standard_dataset/evaluation/hires \
-places_standard_dataset/evaluation/random_thick_512/
-
-python3 bin/gen_mask_dataset.py \
-$(pwd)/configs/data_gen/random_thin_512.yaml \
-places_standard_dataset/evaluation/hires \
-places_standard_dataset/evaluation/random_thin_512/
-
-python3 bin/gen_mask_dataset.py \
-$(pwd)/configs/data_gen/random_medium_512.yaml \
-places_standard_dataset/evaluation/hires \
-places_standard_dataset/evaluation/random_medium_512/
-
-python3 bin/gen_mask_dataset.py \
-$(pwd)/configs/data_gen/random_thick_256.yaml \
-places_standard_dataset/evaluation/hires \
-places_standard_dataset/evaluation/random_thick_256/
-
-python3 bin/gen_mask_dataset.py \
-$(pwd)/configs/data_gen/random_thin_256.yaml \
-places_standard_dataset/evaluation/hires \
-places_standard_dataset/evaluation/random_thin_256/
-
-python3 bin/gen_mask_dataset.py \
-$(pwd)/configs/data_gen/random_medium_256.yaml \
-places_standard_dataset/evaluation/hires \
-places_standard_dataset/evaluation/random_medium_256/
diff --git a/spaces/CVPR/lama-example/saicinpainting/training/losses/perceptual.py b/spaces/CVPR/lama-example/saicinpainting/training/losses/perceptual.py
deleted file mode 100644
index 8c055c2b327ce7943682af5c5f9394b9fcbec506..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/training/losses/perceptual.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torchvision
-
-from models.ade20k import ModelBuilder
-from saicinpainting.utils import check_and_warn_input_range
-
-
-IMAGENET_MEAN = torch.FloatTensor([0.485, 0.456, 0.406])[None, :, None, None]
-IMAGENET_STD = torch.FloatTensor([0.229, 0.224, 0.225])[None, :, None, None]
-
-
-class PerceptualLoss(nn.Module):
- def __init__(self, normalize_inputs=True):
- super(PerceptualLoss, self).__init__()
-
- self.normalize_inputs = normalize_inputs
- self.mean_ = IMAGENET_MEAN
- self.std_ = IMAGENET_STD
-
- vgg = torchvision.models.vgg19(pretrained=True).features
- vgg_avg_pooling = []
-
- for weights in vgg.parameters():
- weights.requires_grad = False
-
- for module in vgg.modules():
- if module.__class__.__name__ == 'Sequential':
- continue
- elif module.__class__.__name__ == 'MaxPool2d':
- vgg_avg_pooling.append(nn.AvgPool2d(kernel_size=2, stride=2, padding=0))
- else:
- vgg_avg_pooling.append(module)
-
- self.vgg = nn.Sequential(*vgg_avg_pooling)
-
- def do_normalize_inputs(self, x):
- return (x - self.mean_.to(x.device)) / self.std_.to(x.device)
-
- def partial_losses(self, input, target, mask=None):
- check_and_warn_input_range(target, 0, 1, 'PerceptualLoss target in partial_losses')
-
- # we expect input and target to be in [0, 1] range
- losses = []
-
- if self.normalize_inputs:
- features_input = self.do_normalize_inputs(input)
- features_target = self.do_normalize_inputs(target)
- else:
- features_input = input
- features_target = target
-
- for layer in self.vgg[:30]:
-
- features_input = layer(features_input)
- features_target = layer(features_target)
-
- if layer.__class__.__name__ == 'ReLU':
- loss = F.mse_loss(features_input, features_target, reduction='none')
-
- if mask is not None:
- cur_mask = F.interpolate(mask, size=features_input.shape[-2:],
- mode='bilinear', align_corners=False)
- loss = loss * (1 - cur_mask)
-
- loss = loss.mean(dim=tuple(range(1, len(loss.shape))))
- losses.append(loss)
-
- return losses
-
- def forward(self, input, target, mask=None):
- losses = self.partial_losses(input, target, mask=mask)
- return torch.stack(losses).sum(dim=0)
-
- def get_global_features(self, input):
- check_and_warn_input_range(input, 0, 1, 'PerceptualLoss input in get_global_features')
-
- if self.normalize_inputs:
- features_input = self.do_normalize_inputs(input)
- else:
- features_input = input
-
- features_input = self.vgg(features_input)
- return features_input
-
-
-class ResNetPL(nn.Module):
- def __init__(self, weight=1,
- weights_path=None, arch_encoder='resnet50dilated', segmentation=True):
- super().__init__()
- self.impl = ModelBuilder.get_encoder(weights_path=weights_path,
- arch_encoder=arch_encoder,
- arch_decoder='ppm_deepsup',
- fc_dim=2048,
- segmentation=segmentation)
- self.impl.eval()
- for w in self.impl.parameters():
- w.requires_grad_(False)
-
- self.weight = weight
-
- def forward(self, pred, target):
- pred = (pred - IMAGENET_MEAN.to(pred)) / IMAGENET_STD.to(pred)
- target = (target - IMAGENET_MEAN.to(target)) / IMAGENET_STD.to(target)
-
- pred_feats = self.impl(pred, return_feature_maps=True)
- target_feats = self.impl(target, return_feature_maps=True)
-
- result = torch.stack([F.mse_loss(cur_pred, cur_target)
- for cur_pred, cur_target
- in zip(pred_feats, target_feats)]).sum() * self.weight
- return result
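A rough usage sketch of the PerceptualLoss module above. This assumes the saicinpainting package (and its repo-local models.ade20k dependency) is importable from inside the lama-example space and that torch/torchvision are installed; the constructor downloads pretrained VGG19 weights, and the tensor shapes here are arbitrary examples.

import torch
# Hypothetical import path within the lama-example space.
from saicinpainting.training.losses.perceptual import PerceptualLoss

loss_fn = PerceptualLoss()  # frozen VGG19 with max-pooling swapped for avg-pooling

pred = torch.rand(1, 3, 256, 256)    # inputs are expected to lie in [0, 1]
target = torch.rand(1, 3, 256, 256)
mask = torch.zeros(1, 1, 256, 256)   # 1 marks regions excluded from the loss

loss = loss_fn(pred, target, mask=mask)
print(loss.shape)  # one perceptual loss value per batch element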
diff --git a/spaces/CVPR/regionclip-demo/detectron2/solver/__init__.py b/spaces/CVPR/regionclip-demo/detectron2/solver/__init__.py
deleted file mode 100644
index 9a2dbd35bb24f0d4a979bc8f304142376d87e7ec..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/solver/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .build import build_lr_scheduler, build_optimizer, get_default_optimizer_params
-from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR, LRMultiplier, WarmupParamScheduler
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/run.py b/spaces/ChrisPreston/diff-svc_minato_aqua/run.py
deleted file mode 100644
index 0935dee5ffc8670b6a68a9466179bd926c6de8c6..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/run.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import importlib
-
-from utils.hparams import set_hparams, hparams
-
-set_hparams(print_hparams=False)
-
-
-def run_task():
- assert hparams['task_cls'] != ''
- pkg = ".".join(hparams["task_cls"].split(".")[:-1])
- cls_name = hparams["task_cls"].split(".")[-1]
- task_cls = getattr(importlib.import_module(pkg), cls_name)
- task_cls.start()
-
-
-if __name__ == '__main__':
- run_task()
diff --git a/spaces/CofAI/chat/client/css/theme-toggler.css b/spaces/CofAI/chat/client/css/theme-toggler.css
deleted file mode 100644
index b673b5920a24693e7ea15b873e46731b388ec527..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/css/theme-toggler.css
+++ /dev/null
@@ -1,33 +0,0 @@
-.theme-toggler-container {
- margin: 24px 0px 8px 0px;
- justify-content: center;
-}
-
-.theme-toggler-container.checkbox input + label,
-.theme-toggler-container.checkbox input:checked + label:after {
- background: var(--colour-1);
-}
-
-.theme-toggler-container.checkbox input + label:after,
-.theme-toggler-container.checkbox input:checked + label {
- background: var(--colour-3);
-}
-
-.theme-toggler-container.checkbox span {
- font-size: 0.75rem;
-}
-
-.theme-toggler-container.checkbox label {
- width: 24px;
- height: 16px;
-}
-
-.theme-toggler-container.checkbox label:after {
- left: 2px;
- width: 10px;
- height: 10px;
-}
-
-.theme-toggler-container.checkbox input:checked + label:after {
- left: calc(100% - 2px - 10px);
-}
\ No newline at end of file
diff --git a/spaces/CristianGonzalez281098/Cheto/README.md b/spaces/CristianGonzalez281098/Cheto/README.md
deleted file mode 100644
index 69959527bdce515bfe2bb3204a23b6a49ea193b2..0000000000000000000000000000000000000000
--- a/spaces/CristianGonzalez281098/Cheto/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Cheto
-emoji: 📚
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 2.9.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/DHEIVER/analise_imagem_mama/README.md b/spaces/DHEIVER/analise_imagem_mama/README.md
deleted file mode 100644
index 1c28a3c55ae115b61478bc0b5368dae6575354e4..0000000000000000000000000000000000000000
--- a/spaces/DHEIVER/analise_imagem_mama/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Analise Imagem Mama
-emoji: 🚀
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.38.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/pipelines.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/pipelines.py
deleted file mode 100644
index 144f1f7ecd59c2c9c71fbd836061de9ed4b1f71b..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/pipelines.py
+++ /dev/null
@@ -1,225 +0,0 @@
-"""This module should not be used directly as its API is subject to change. Instead,
-please use the `gr.Interface.from_pipeline()` function."""
-
-from __future__ import annotations
-
-from typing import TYPE_CHECKING
-
-from gradio import components
-
-if TYPE_CHECKING: # Only import for type checking (is False at runtime).
- from transformers import pipelines
-
-
-def load_from_pipeline(pipeline: pipelines.base.Pipeline) -> dict:
- """
- Gets the appropriate Interface kwargs for a given Hugging Face transformers.Pipeline.
- pipeline (transformers.Pipeline): the transformers.Pipeline from which to create an interface
- Returns:
- (dict): a dictionary of kwargs that can be used to construct an Interface object
- """
- try:
- import transformers
- from transformers import pipelines
- except ImportError as ie:
- raise ImportError(
- "transformers not installed. Please try `pip install transformers`"
- ) from ie
- if not isinstance(pipeline, pipelines.base.Pipeline):
- raise ValueError("pipeline must be a transformers.Pipeline")
-
- # Handle the different pipelines. The hasattr() checks make sure the pipeline exists in the
- # version of the transformers library that the user has installed.
- if hasattr(transformers, "AudioClassificationPipeline") and isinstance(
- pipeline, pipelines.audio_classification.AudioClassificationPipeline
- ):
- pipeline_info = {
- "inputs": components.Audio(
- source="microphone", type="filepath", label="Input"
- ),
- "outputs": components.Label(label="Class"),
- "preprocess": lambda i: {"inputs": i},
- "postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
- }
- elif hasattr(transformers, "AutomaticSpeechRecognitionPipeline") and isinstance(
- pipeline,
- pipelines.automatic_speech_recognition.AutomaticSpeechRecognitionPipeline,
- ):
- pipeline_info = {
- "inputs": components.Audio(
- source="microphone", type="filepath", label="Input"
- ),
- "outputs": components.Textbox(label="Output"),
- "preprocess": lambda i: {"inputs": i},
- "postprocess": lambda r: r["text"],
- }
- elif hasattr(transformers, "FeatureExtractionPipeline") and isinstance(
- pipeline, pipelines.feature_extraction.FeatureExtractionPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Dataframe(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r[0],
- }
- elif hasattr(transformers, "FillMaskPipeline") and isinstance(
- pipeline, pipelines.fill_mask.FillMaskPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: {i["token_str"]: i["score"] for i in r},
- }
- elif hasattr(transformers, "ImageClassificationPipeline") and isinstance(
- pipeline, pipelines.image_classification.ImageClassificationPipeline
- ):
- pipeline_info = {
- "inputs": components.Image(type="filepath", label="Input Image"),
- "outputs": components.Label(type="confidences", label="Classification"),
- "preprocess": lambda i: {"images": i},
- "postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
- }
- elif hasattr(transformers, "QuestionAnsweringPipeline") and isinstance(
- pipeline, pipelines.question_answering.QuestionAnsweringPipeline
- ):
- pipeline_info = {
- "inputs": [
- components.Textbox(lines=7, label="Context"),
- components.Textbox(label="Question"),
- ],
- "outputs": [
- components.Textbox(label="Answer"),
- components.Label(label="Score"),
- ],
- "preprocess": lambda c, q: {"context": c, "question": q},
- "postprocess": lambda r: (r["answer"], r["score"]),
- }
- elif hasattr(transformers, "SummarizationPipeline") and isinstance(
- pipeline, pipelines.text2text_generation.SummarizationPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(lines=7, label="Input"),
- "outputs": components.Textbox(label="Summary"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r[0]["summary_text"],
- }
- elif hasattr(transformers, "TextClassificationPipeline") and isinstance(
- pipeline, pipelines.text_classification.TextClassificationPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda x: [x],
- "postprocess": lambda r: {i["label"].split(", ")[0]: i["score"] for i in r},
- }
- elif hasattr(transformers, "TextGenerationPipeline") and isinstance(
- pipeline, pipelines.text_generation.TextGenerationPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Output"),
- "preprocess": lambda x: {"text_inputs": x},
- "postprocess": lambda r: r[0]["generated_text"],
- }
- elif hasattr(transformers, "TranslationPipeline") and isinstance(
- pipeline, pipelines.text2text_generation.TranslationPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Translation"),
- "preprocess": lambda x: [x],
- "postprocess": lambda r: r[0]["translation_text"],
- }
- elif hasattr(transformers, "Text2TextGenerationPipeline") and isinstance(
- pipeline, pipelines.text2text_generation.Text2TextGenerationPipeline
- ):
- pipeline_info = {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Generated Text"),
- "preprocess": lambda x: [x],
- "postprocess": lambda r: r[0]["generated_text"],
- }
- elif hasattr(transformers, "ZeroShotClassificationPipeline") and isinstance(
- pipeline, pipelines.zero_shot_classification.ZeroShotClassificationPipeline
- ):
- pipeline_info = {
- "inputs": [
- components.Textbox(label="Input"),
- components.Textbox(label="Possible class names (" "comma-separated)"),
- components.Checkbox(label="Allow multiple true classes"),
- ],
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda i, c, m: {
- "sequences": i,
- "candidate_labels": c,
- "multi_label": m,
- },
- "postprocess": lambda r: {
- r["labels"][i]: r["scores"][i] for i in range(len(r["labels"]))
- },
- }
- elif hasattr(transformers, "DocumentQuestionAnsweringPipeline") and isinstance(
- pipeline,
- pipelines.document_question_answering.DocumentQuestionAnsweringPipeline, # type: ignore
- ):
- pipeline_info = {
- "inputs": [
- components.Image(type="filepath", label="Input Document"),
- components.Textbox(label="Question"),
- ],
- "outputs": components.Label(label="Label"),
- "preprocess": lambda img, q: {"image": img, "question": q},
- "postprocess": lambda r: {i["answer"]: i["score"] for i in r},
- }
- elif hasattr(transformers, "VisualQuestionAnsweringPipeline") and isinstance(
- pipeline, pipelines.visual_question_answering.VisualQuestionAnsweringPipeline
- ):
- pipeline_info = {
- "inputs": [
- components.Image(type="filepath", label="Input Image"),
- components.Textbox(label="Question"),
- ],
- "outputs": components.Label(label="Score"),
- "preprocess": lambda img, q: {"image": img, "question": q},
- "postprocess": lambda r: {i["answer"]: i["score"] for i in r},
- }
- elif hasattr(transformers, "ImageToTextPipeline") and isinstance(
- pipeline, pipelines.image_to_text.ImageToTextPipeline # type: ignore
- ):
- pipeline_info = {
- "inputs": components.Image(type="filepath", label="Input Image"),
- "outputs": components.Textbox(label="Text"),
- "preprocess": lambda i: {"images": i},
- "postprocess": lambda r: r[0]["generated_text"],
- }
- else:
- raise ValueError(f"Unsupported pipeline type: {type(pipeline)}")
-
- # define the function that will be called by the Interface
- def fn(*params):
- data = pipeline_info["preprocess"](*params)
- # special cases that needs to be handled differently
- if isinstance(
- pipeline,
- (
- pipelines.text_classification.TextClassificationPipeline,
- pipelines.text2text_generation.Text2TextGenerationPipeline,
- pipelines.text2text_generation.TranslationPipeline,
- ),
- ):
- data = pipeline(*data)
- else:
- data = pipeline(**data)
- output = pipeline_info["postprocess"](data)
- return output
-
- interface_info = pipeline_info.copy()
- interface_info["fn"] = fn
- del interface_info["preprocess"]
- del interface_info["postprocess"]
-
- # define the title/description of the Interface
- interface_info["title"] = pipeline.model.__class__.__name__
-
- return interface_info
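As the module docstring says, this code is normally reached through gr.Interface.from_pipeline(). A minimal sketch, assuming gradio and transformers are installed (the default sentiment-analysis model is downloaded on first use):

import gradio as gr
from transformers import pipeline

# from_pipeline() maps the pipeline's inputs/outputs to Gradio components
# using the per-pipeline cases defined in load_from_pipeline() above.
sentiment = pipeline("sentiment-analysis")
demo = gr.Interface.from_pipeline(sentiment)
demo.launch()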
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_state.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_state.py
deleted file mode 100644
index 3593430a74f21f6e0c2faf495e1627551eebfc30..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_state.py
+++ /dev/null
@@ -1,367 +0,0 @@
-################################################################
-# The core state machine
-################################################################
-#
-# Rule 1: everything that affects the state machine and state transitions must
-# live here in this file. As much as possible goes into the table-based
-# representation, but for the bits that don't quite fit, the actual code and
-# state must nonetheless live here.
-#
-# Rule 2: this file does not know about what role we're playing; it only knows
-# about HTTP request/response cycles in the abstract. This ensures that we
-# don't cheat and apply different rules to local and remote parties.
-#
-#
-# Theory of operation
-# ===================
-#
-# Possibly the simplest way to think about this is that we actually have 5
-# different state machines here. Yes, 5. These are:
-#
-# 1) The client state, with its complicated automaton (see the docs)
-# 2) The server state, with its complicated automaton (see the docs)
-# 3) The keep-alive state, with possible states {True, False}
-# 4) The SWITCH_CONNECT state, with possible states {False, True}
-# 5) The SWITCH_UPGRADE state, with possible states {False, True}
-#
-# For (3)-(5), the first state listed is the initial state.
-#
-# (1)-(3) are stored explicitly in member variables. The last
-# two are stored implicitly in the pending_switch_proposals set as:
-# (state of 4) == (_SWITCH_CONNECT in pending_switch_proposals)
-# (state of 5) == (_SWITCH_UPGRADE in pending_switch_proposals)
-#
-# And each of these machines has two different kinds of transitions:
-#
-# a) Event-triggered
-# b) State-triggered
-#
-# Event triggered is the obvious thing that you'd think it is: some event
-# happens, and if it's the right event at the right time then a transition
-# happens. But there are somewhat complicated rules for which machines can
-# "see" which events. (As a rule of thumb, if a machine "sees" an event, this
-# means two things: the event can affect the machine, and if the machine is
-# not in a state where it expects that event then it's an error.) These rules
-# are:
-#
-# 1) The client machine sees all h11.events objects emitted by the client.
-#
-# 2) The server machine sees all h11.events objects emitted by the server.
-#
-# It also sees the client's Request event.
-#
-# And sometimes, server events are annotated with a _SWITCH_* event. For
-# example, we can have a (Response, _SWITCH_CONNECT) event, which is
-# different from a regular Response event.
-#
-# 3) The keep-alive machine sees the process_keep_alive_disabled() event
-# (which is derived from Request/Response events), and this event
-# transitions it from True -> False, or from False -> False. There's no way
-# to transition back.
-#
-# 4&5) The _SWITCH_* machines transition from False->True when we get a
-# Request that proposes the relevant type of switch (via
-# process_client_switch_proposals), and they go from True->False when we
-# get a Response that has no _SWITCH_* annotation.
-#
-# So that's event-triggered transitions.
-#
-# State-triggered transitions are less standard. What they do here is couple
-# the machines together. The way this works is, when certain *joint*
-# configurations of states are achieved, then we automatically transition to a
-# new *joint* state. So, for example, if we're ever in a joint state with
-#
-# client: DONE
-# keep-alive: False
-#
-# then the client state immediately transitions to:
-#
-# client: MUST_CLOSE
-#
-# This is fundamentally different from an event-based transition, because it
-# doesn't matter how we arrived at the {client: DONE, keep-alive: False} state
-# -- maybe the client transitioned SEND_BODY -> DONE, or keep-alive
-# transitioned True -> False. Either way, once this precondition is satisfied,
-# this transition is immediately triggered.
-#
-# What if two conflicting state-based transitions get enabled at the same
-# time? In practice there's only one case where this arises (client DONE ->
-# MIGHT_SWITCH_PROTOCOL versus DONE -> MUST_CLOSE), and we resolve it by
-# explicitly prioritizing the DONE -> MIGHT_SWITCH_PROTOCOL transition.
-#
-# Implementation
-# --------------
-#
-# The event-triggered transitions for the server and client machines are all
-# stored explicitly in a table. Ditto for the state-triggered transitions that
-# involve just the server and client state.
-#
-# The transitions for the other machines, and the state-triggered transitions
-# that involve the other machines, are written out as explicit Python code.
-#
-# It'd be nice if there were some cleaner way to do all this. This isn't
-# *too* terrible, but I feel like it could probably be better.
-#
-# WARNING
-# -------
-#
-# The script that generates the state machine diagrams for the docs knows how
-# to read out the EVENT_TRIGGERED_TRANSITIONS and STATE_TRIGGERED_TRANSITIONS
-# tables. But it can't automatically read the transitions that are written
-# directly in Python code. So if you touch those, you need to also update the
-# script to keep it in sync!
-from typing import cast, Dict, Optional, Set, Tuple, Type, Union
-
-from ._events import *
-from ._util import LocalProtocolError, Sentinel
-
-# Everything in __all__ gets re-exported as part of the h11 public API.
-__all__ = [
- "CLIENT",
- "SERVER",
- "IDLE",
- "SEND_RESPONSE",
- "SEND_BODY",
- "DONE",
- "MUST_CLOSE",
- "CLOSED",
- "MIGHT_SWITCH_PROTOCOL",
- "SWITCHED_PROTOCOL",
- "ERROR",
-]
-
-
-class CLIENT(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SERVER(Sentinel, metaclass=Sentinel):
- pass
-
-
-# States
-class IDLE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SEND_RESPONSE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SEND_BODY(Sentinel, metaclass=Sentinel):
- pass
-
-
-class DONE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class MUST_CLOSE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class CLOSED(Sentinel, metaclass=Sentinel):
- pass
-
-
-class ERROR(Sentinel, metaclass=Sentinel):
- pass
-
-
-# Switch types
-class MIGHT_SWITCH_PROTOCOL(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SWITCHED_PROTOCOL(Sentinel, metaclass=Sentinel):
- pass
-
-
-class _SWITCH_UPGRADE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class _SWITCH_CONNECT(Sentinel, metaclass=Sentinel):
- pass
-
-
-EventTransitionType = Dict[
- Type[Sentinel],
- Dict[
- Type[Sentinel],
- Dict[Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]], Type[Sentinel]],
- ],
-]
-
-EVENT_TRIGGERED_TRANSITIONS: EventTransitionType = {
- CLIENT: {
- IDLE: {Request: SEND_BODY, ConnectionClosed: CLOSED},
- SEND_BODY: {Data: SEND_BODY, EndOfMessage: DONE},
- DONE: {ConnectionClosed: CLOSED},
- MUST_CLOSE: {ConnectionClosed: CLOSED},
- CLOSED: {ConnectionClosed: CLOSED},
- MIGHT_SWITCH_PROTOCOL: {},
- SWITCHED_PROTOCOL: {},
- ERROR: {},
- },
- SERVER: {
- IDLE: {
- ConnectionClosed: CLOSED,
- Response: SEND_BODY,
- # Special case: server sees client Request events, in this form
- (Request, CLIENT): SEND_RESPONSE,
- },
- SEND_RESPONSE: {
- InformationalResponse: SEND_RESPONSE,
- Response: SEND_BODY,
- (InformationalResponse, _SWITCH_UPGRADE): SWITCHED_PROTOCOL,
- (Response, _SWITCH_CONNECT): SWITCHED_PROTOCOL,
- },
- SEND_BODY: {Data: SEND_BODY, EndOfMessage: DONE},
- DONE: {ConnectionClosed: CLOSED},
- MUST_CLOSE: {ConnectionClosed: CLOSED},
- CLOSED: {ConnectionClosed: CLOSED},
- SWITCHED_PROTOCOL: {},
- ERROR: {},
- },
-}
-
-StateTransitionType = Dict[
- Tuple[Type[Sentinel], Type[Sentinel]], Dict[Type[Sentinel], Type[Sentinel]]
-]
-
-# NB: there are also some special-case state-triggered transitions hard-coded
-# into _fire_state_triggered_transitions below.
-STATE_TRIGGERED_TRANSITIONS: StateTransitionType = {
- # (Client state, Server state) -> new states
- # Protocol negotiation
- (MIGHT_SWITCH_PROTOCOL, SWITCHED_PROTOCOL): {CLIENT: SWITCHED_PROTOCOL},
- # Socket shutdown
- (CLOSED, DONE): {SERVER: MUST_CLOSE},
- (CLOSED, IDLE): {SERVER: MUST_CLOSE},
- (ERROR, DONE): {SERVER: MUST_CLOSE},
- (DONE, CLOSED): {CLIENT: MUST_CLOSE},
- (IDLE, CLOSED): {CLIENT: MUST_CLOSE},
- (DONE, ERROR): {CLIENT: MUST_CLOSE},
-}
-
-
-class ConnectionState:
- def __init__(self) -> None:
- # Extra bits of state that don't quite fit into the state model.
-
- # If this is False then it enables the automatic DONE -> MUST_CLOSE
- # transition. Don't set this directly; call .keep_alive_disabled()
- self.keep_alive = True
-
- # This is a subset of {UPGRADE, CONNECT}, containing the proposals
- # made by the client for switching protocols.
- self.pending_switch_proposals: Set[Type[Sentinel]] = set()
-
- self.states: Dict[Type[Sentinel], Type[Sentinel]] = {CLIENT: IDLE, SERVER: IDLE}
-
- def process_error(self, role: Type[Sentinel]) -> None:
- self.states[role] = ERROR
- self._fire_state_triggered_transitions()
-
- def process_keep_alive_disabled(self) -> None:
- self.keep_alive = False
- self._fire_state_triggered_transitions()
-
- def process_client_switch_proposal(self, switch_event: Type[Sentinel]) -> None:
- self.pending_switch_proposals.add(switch_event)
- self._fire_state_triggered_transitions()
-
- def process_event(
- self,
- role: Type[Sentinel],
- event_type: Type[Event],
- server_switch_event: Optional[Type[Sentinel]] = None,
- ) -> None:
- _event_type: Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]] = event_type
- if server_switch_event is not None:
- assert role is SERVER
- if server_switch_event not in self.pending_switch_proposals:
- raise LocalProtocolError(
- "Received server {} event without a pending proposal".format(
- server_switch_event
- )
- )
- _event_type = (event_type, server_switch_event)
- if server_switch_event is None and _event_type is Response:
- self.pending_switch_proposals = set()
- self._fire_event_triggered_transitions(role, _event_type)
- # Special case: the server state does get to see Request
- # events.
- if _event_type is Request:
- assert role is CLIENT
- self._fire_event_triggered_transitions(SERVER, (Request, CLIENT))
- self._fire_state_triggered_transitions()
-
- def _fire_event_triggered_transitions(
- self,
- role: Type[Sentinel],
- event_type: Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]],
- ) -> None:
- state = self.states[role]
- try:
- new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
- except KeyError:
- event_type = cast(Type[Event], event_type)
- raise LocalProtocolError(
- "can't handle event type {} when role={} and state={}".format(
- event_type.__name__, role, self.states[role]
- )
- ) from None
- self.states[role] = new_state
-
- def _fire_state_triggered_transitions(self) -> None:
- # We apply these rules repeatedly until converging on a fixed point
- while True:
- start_states = dict(self.states)
-
- # It could happen that both these special-case transitions are
- # enabled at the same time:
- #
- # DONE -> MIGHT_SWITCH_PROTOCOL
- # DONE -> MUST_CLOSE
- #
-# For example, this will always be true of an HTTP/1.0 client
- # requesting CONNECT. If this happens, the protocol switch takes
- # priority. From there the client will either go to
- # SWITCHED_PROTOCOL, in which case it's none of our business when
- # they close the connection, or else the server will deny the
- # request, in which case the client will go back to DONE and then
- # from there to MUST_CLOSE.
- if self.pending_switch_proposals:
- if self.states[CLIENT] is DONE:
- self.states[CLIENT] = MIGHT_SWITCH_PROTOCOL
-
- if not self.pending_switch_proposals:
- if self.states[CLIENT] is MIGHT_SWITCH_PROTOCOL:
- self.states[CLIENT] = DONE
-
- if not self.keep_alive:
- for role in (CLIENT, SERVER):
- if self.states[role] is DONE:
- self.states[role] = MUST_CLOSE
-
- # Tabular state-triggered transitions
- joint_state = (self.states[CLIENT], self.states[SERVER])
- changes = STATE_TRIGGERED_TRANSITIONS.get(joint_state, {})
- self.states.update(changes)
-
- if self.states == start_states:
- # Fixed point reached
- return
-
- def start_next_cycle(self) -> None:
- if self.states != {CLIENT: DONE, SERVER: DONE}:
- raise LocalProtocolError(
- "not in a reusable state. self.states={}".format(self.states)
- )
- # Can't reach DONE/DONE with any of these active, but still, let's be
- # sure.
- assert self.keep_alive
- assert not self.pending_switch_proposals
- self.states = {CLIENT: IDLE, SERVER: IDLE}
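A small sketch of one request/response cycle driven through the ConnectionState machine above, assuming the h11 package is installed. Note that _state is an internal module; application code would normally go through h11.Connection instead.

from h11 import EndOfMessage, Request, Response
from h11._state import CLIENT, SERVER, DONE, IDLE, ConnectionState

cs = ConnectionState()

# The client sends a request; the server machine also "sees" it as (Request, CLIENT).
cs.process_event(CLIENT, Request)
cs.process_event(CLIENT, EndOfMessage)

# The server responds and finishes its body.
cs.process_event(SERVER, Response)
cs.process_event(SERVER, EndOfMessage)
assert cs.states == {CLIENT: DONE, SERVER: DONE}

# keep-alive was never disabled, so the connection can be reused.
cs.start_next_cycle()
assert cs.states == {CLIENT: IDLE, SERVER: IDLE}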
diff --git a/spaces/DaleChen/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md b/spaces/DaleChen/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index a4f28a3d27d66d79cb95f2b8b847832172bb5f11..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-### Background
-
-
-### Changes
-
-
-### Documentation
-
-
-### Test Plan
-
-
-### PR Quality Checklist
-- [ ] My pull request is atomic and focuses on a single change.
-- [ ] I have thoroughly tested my changes with multiple different prompts.
-- [ ] I have considered potential risks and mitigations for my changes.
-- [ ] I have documented my changes clearly and comprehensively.
-- [ ] I have not snuck in any "extra" small tweaks or changes
-
-
-
-
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/data/audio_utils.py b/spaces/Datasculptor/MusicGen/audiocraft/data/audio_utils.py
deleted file mode 100644
index 76d4bc2a33ce722d879db2af33cd1336bd6b1fb3..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/data/audio_utils.py
+++ /dev/null
@@ -1,174 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-import typing as tp
-
-import julius
-import torch
-import torchaudio
-
-
-def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
- """Convert audio to the given number of channels.
-
- Args:
- wav (torch.Tensor): Audio wave of shape [B, C, T].
- channels (int): Expected number of channels as output.
- Returns:
- torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
- """
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked 1-channel audio, and the stream has multiple
- # channels, downmix all channels.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel, replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
- raise ValueError('The audio file has less channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav: torch.Tensor, from_rate: float,
- to_rate: float, to_channels: int) -> torch.Tensor:
- """Convert audio to new sample rate and number of audio channels.
- """
- wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
- wav = convert_audio_channels(wav, to_channels)
- return wav
-
-
-def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, energy_floor: float = 2e-3):
- """Normalize an input signal to a user loudness in dB LKFS.
- Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
-
- Args:
- wav (torch.Tensor): Input multichannel audio data.
- sample_rate (int): Sample rate.
- loudness_headroom_db (float): Target loudness of the output in dB LUFS.
- loudness_compressor (bool): Uses tanh for soft clipping.
- energy_floor (float): anything below that RMS level will not be rescaled.
- Returns:
- output (torch.Tensor): Loudness normalized output data.
- """
- energy = wav.pow(2).mean().sqrt().item()
- if energy < energy_floor:
- return wav
- transform = torchaudio.transforms.Loudness(sample_rate)
- input_loudness_db = transform(wav).item()
- # calculate the gain needed to scale to the desired loudness level
- delta_loudness = -loudness_headroom_db - input_loudness_db
- gain = 10.0 ** (delta_loudness / 20.0)
- output = gain * wav
- if loudness_compressor:
- output = torch.tanh(output)
- assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
- return output
-
-
-def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
- """Utility function to clip the audio with logging if specified."""
- max_scale = wav.abs().max()
- if log_clipping and max_scale > 1:
- clamp_prob = (wav.abs() > 1).float().mean().item()
- print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
- clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
- wav.clamp_(-1, 1)
-
-
-def normalize_audio(wav: torch.Tensor, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False, log_clipping: bool = False,
- sample_rate: tp.Optional[int] = None,
- stem_name: tp.Optional[str] = None) -> torch.Tensor:
- """Normalize the audio according to the prescribed strategy (see after).
-
- Args:
- wav (torch.Tensor): Audio data.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): If True, uses tanh based soft clipping.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- sample_rate (int): Sample rate for the audio data (required for loudness).
- stem_name (Optional[str]): Stem name for clipping logging.
- Returns:
- torch.Tensor: Normalized audio.
- """
- scale_peak = 10 ** (-peak_clip_headroom_db / 20)
- scale_rms = 10 ** (-rms_headroom_db / 20)
- if strategy == 'peak':
- rescaling = (scale_peak / wav.abs().max())
- if normalize or rescaling < 1:
- wav = wav * rescaling
- elif strategy == 'clip':
- wav = wav.clamp(-scale_peak, scale_peak)
- elif strategy == 'rms':
- mono = wav.mean(dim=0)
- rescaling = scale_rms / mono.pow(2).mean().sqrt()
- if normalize or rescaling < 1:
- wav = wav * rescaling
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- elif strategy == 'loudness':
- assert sample_rate is not None, "Loudness normalization requires sample rate."
- wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
- _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
- else:
- assert wav.abs().max() < 1
- assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
- return wav
-
-
-def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to float 32 bits PCM format.
- """
- if wav.dtype.is_floating_point:
- return wav
- else:
- assert wav.dtype == torch.int16
- return wav.float() / 2**15
-
-
-def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
- """Convert audio to int 16 bits PCM format.
-
- ..Warning:: There exist many formulas for doing this conversion. None are perfect
- due to the asymmetry of the int16 range. One either has possible clipping, DC offset,
- or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
- it is possible that `i16_pcm(f32_pcm(wav)) != wav`.
- """
- if wav.dtype.is_floating_point:
- assert wav.abs().max() <= 1
- candidate = (wav * 2 ** 15).round()
- if candidate.max() >= 2 ** 15: # clipping would occur
- candidate = (wav * (2 ** 15 - 1)).round()
- return candidate.short()
- else:
- assert wav.dtype == torch.int16
- return wav
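A short sketch of how the helpers above compose, assuming the audiocraft package and its julius/torchaudio dependencies are installed; the 440 Hz test tone is just an example signal.

import torch
from audiocraft.data.audio_utils import f32_pcm, i16_pcm, normalize_audio

sample_rate = 16_000
t = torch.arange(sample_rate) / sample_rate
wav = 1.5 * torch.sin(2 * torch.pi * 440 * t).unsqueeze(0)  # one channel, deliberately too loud

# 'peak' rescales so the maximum sits peak_clip_headroom_db below full scale.
peak = normalize_audio(wav, strategy='peak', peak_clip_headroom_db=1)
print(peak.abs().max())  # 10 ** (-1 / 20), about 0.89

# Round-tripping through int16 PCM stays within quantization error of the float signal.
restored = f32_pcm(i16_pcm(peak))
print((restored - peak).abs().max())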
diff --git a/spaces/Datasculptor/sd-prism/share_btn.py b/spaces/Datasculptor/sd-prism/share_btn.py
deleted file mode 100644
index 797a92ed18e2616a5dbbbc60bf8db22d9e07d902..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/sd-prism/share_btn.py
+++ /dev/null
@@ -1,100 +0,0 @@
-community_icon_html = """
-
-
- """
-
-loading_icon_html = """ """
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
-
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
- const fileName = `sd-perception-${{imgId}}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
- const fileName = `sd-perception-${{imgId}}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
-
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputImgEl = gradioEl.querySelector('#input-img img');
- const imgEls = gradioEl.querySelectorAll('#generated-gallery img');
- const promptTxt = gradioEl.querySelector('#translated textarea').value;
- let titleTxt = promptTxt;
- if(titleTxt.length > 100){
- titleTxt = titleTxt.slice(0, 100) + ' ...';
- }
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
-
- if(!imgEls.length){
- return;
- };
-
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
-
- const files = await Promise.all(
- [...imgEls].map(async (imgEl) => {
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- })
- );
- const inputFile = await getInputImgFile(inputImgEl);
- files.push(inputFile);
-
- const urls = await Promise.all(files.map((f) => uploadFile(f)));
- const urlInputImg = urls.pop();
- const htmlImgs = urls.map(url => ` `);
- const htmlImgsMd = htmlImgs.join(`\n`);
-
- const descriptionMd = `#### Input img:
-
-
-#### Caption:
-${promptTxt}
-
-#### Generations:
-
-${htmlImgsMd}
-
`;
-
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
-
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/pharma/sd-prism/discussions/new?${paramsStr}`, '_blank');
-
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/Detomo/aisatsu-app-api/utils.py b/spaces/Detomo/aisatsu-app-api/utils.py
deleted file mode 100644
index b129cff699059570b4459625a44f1ab8a366eb51..0000000000000000000000000000000000000000
--- a/spaces/Detomo/aisatsu-app-api/utils.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from io import BytesIO
-import base64
-from PIL import Image
-import cv2
-import numpy as np
-from gtts import gTTS
-
-def tts(text: str, language: str = "ja") -> str:
- """Converts text to speech with gTTS and returns the audio as a base64-encoded MP3 string.
- Args:
- text (str): text to synthesize (e.g. the bot's generated answer)
- language (str): language code of the text
- Returns:
- str: base64-encoded MP3 audio
- """
- tts_object = gTTS(text=text, lang=language, slow=False)
- bytes_object = BytesIO()
- tts_object.write_to_fp(bytes_object)
- bytes_object.seek(0)
- b64 = base64.b64encode(bytes_object.getvalue()).decode()
- return b64
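
The base64 string returned by tts is typically wrapped into an autoplay HTML tag by the caller; the snippet below is one common pattern and is an assumption, not part of this module (gTTS writes MP3 data, hence the audio/mpeg MIME type).

```python
# Hypothetical caller-side usage; `tts` refers to the function defined above.
b64 = tts("こんにちは")
html = f'<audio autoplay controls src="data:audio/mpeg;base64,{b64}"></audio>'
```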
-
-
-def read_image_file(file) -> Image.Image:
- image = Image.open(BytesIO(file))
- return image
-
-
-def pil_to_base64(img, format="jpeg"):
- buffer = BytesIO()
- img.save(buffer, format)
- img_str = base64.b64encode(buffer.getvalue()).decode("ascii")
- return img_str
-
-
-def base64_to_pil(img_str):
- if "base64," in img_str:
- img_str = img_str.split(",")[1]
- img_raw = base64.b64decode(img_str)
- img = Image.open(BytesIO(img_raw))
- return img
-
-
-def get_hist(image):
- hist = cv2.calcHist([np.array(image)], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
- hist = cv2.normalize(hist, hist).flatten()
- return hist
\ No newline at end of file
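
One way get_hist is commonly used is to compare two frames; the helper below is a sketch under that assumption (it relies on the get_hist defined above) and is not part of the original module.

```python
import cv2
from PIL import Image

def images_look_similar(img_a: Image.Image, img_b: Image.Image, threshold: float = 0.9) -> bool:
    """Compare two images by correlating their normalized color histograms."""
    similarity = cv2.compareHist(get_hist(img_a), get_hist(img_b), cv2.HISTCMP_CORREL)
    return similarity >= threshold
```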
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.cpp b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.cpp
deleted file mode 100644
index aef47317a3ae018de6ea620060337bcf44b2d649..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/bias_act.cpp
+++ /dev/null
@@ -1,101 +0,0 @@
-// Copyright (c) SenseTime Research. All rights reserved.
-
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "bias_act.h"
-
-//------------------------------------------------------------------------
-
-static bool has_same_layout(torch::Tensor x, torch::Tensor y)
-{
- if (x.dim() != y.dim())
- return false;
- for (int64_t i = 0; i < x.dim(); i++)
- {
- if (x.size(i) != y.size(i))
- return false;
- if (x.size(i) >= 2 && x.stride(i) != y.stride(i))
- return false;
- }
- return true;
-}
-
-//------------------------------------------------------------------------
-
-static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp)
-{
- // Validate arguments.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x");
- TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x");
- TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x");
- TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x");
- TORCH_CHECK(x.numel() <= INT_MAX, "x is too large");
- TORCH_CHECK(b.dim() == 1, "b must have rank 1");
- TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds");
- TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements");
- TORCH_CHECK(grad >= 0, "grad must be non-negative");
-
- // Validate layout.
- TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense");
- TORCH_CHECK(b.is_contiguous(), "b must be contiguous");
- TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x");
- TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x");
- TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x");
-
- // Create output tensor.
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
- torch::Tensor y = torch::empty_like(x);
- TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x");
-
- // Initialize CUDA kernel parameters.
- bias_act_kernel_params p;
- p.x = x.data_ptr();
- p.b = (b.numel()) ? b.data_ptr() : NULL;
- p.xref = (xref.numel()) ? xref.data_ptr() : NULL;
- p.yref = (yref.numel()) ? yref.data_ptr() : NULL;
- p.dy = (dy.numel()) ? dy.data_ptr() : NULL;
- p.y = y.data_ptr();
- p.grad = grad;
- p.act = act;
- p.alpha = alpha;
- p.gain = gain;
- p.clamp = clamp;
- p.sizeX = (int)x.numel();
- p.sizeB = (int)b.numel();
- p.stepB = (b.numel()) ? (int)x.stride(dim) : 1;
-
- // Choose CUDA kernel.
- void* kernel;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&]
- {
- kernel = choose_bias_act_kernel(p);
- });
- TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func");
-
- // Launch CUDA kernel.
- p.loopX = 4;
- int blockSize = 4 * 32;
- int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1;
- void* args[] = {&p};
- AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream()));
- return y;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("bias_act", &bias_act);
-}
-
-//------------------------------------------------------------------------
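
In the original repository this extension is compiled lazily by torch_utils/custom_ops.py; the sketch below shows a stand-alone JIT build with torch.utils.cpp_extension.load under the assumption that bias_act.cpp, bias_act.cu, and bias_act.h sit in the working directory and a CUDA device is available. The activation index 1 ('linear') and the argument order follow the bias_act signature above.

```python
import torch
from torch.utils.cpp_extension import load  # JIT-compiles the C++/CUDA sources

# Assumed file layout; the real project resolves these paths via custom_ops.py.
plugin = load(name="bias_act_plugin", sources=["bias_act.cpp", "bias_act.cu"], verbose=True)

x = torch.randn(4, 8, device="cuda")
b = torch.zeros(8, device="cuda")
empty = torch.empty(0, device="cuda")
# Arguments: x, b, xref, yref, dy, grad, dim, act, alpha, gain, clamp (clamp < 0 disables it).
y = plugin.bias_act(x, b, empty, empty, empty, 0, 1, 1, 0.0, 1.0, -1.0)
print(y.shape)  # torch.Size([4, 8])
```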
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training/networks_stylegan2.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training/networks_stylegan2.py
deleted file mode 100644
index 923f150ef7352ed85b4be2ff8d9f8a6193aac1e9..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training/networks_stylegan2.py
+++ /dev/null
@@ -1,974 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Network architectures from the paper
-"Analyzing and Improving the Image Quality of StyleGAN".
-Matches the original implementation of configs E-F by Karras et al. at
-https://github.com/NVlabs/stylegan2/blob/master/training/networks_stylegan2.py"""
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_resample
-from torch_utils.ops import upfirdn2d
-from torch_utils.ops import bias_act
-from torch_utils.ops import fma
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def normalize_2nd_moment(x, dim=1, eps=1e-8):
- return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt()
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def modulated_conv2d(
- # Input tensor of shape [batch_size, in_channels, in_height, in_width].
- x,
- # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width].
- weight,
- # Modulation coefficients of shape [batch_size, in_channels].
- styles,
- noise=None, # Optional noise tensor to add to the output activations.
- up=1, # Integer upsampling factor.
- down=1, # Integer downsampling factor.
- padding=0, # Padding with respect to the upsampled image.
- # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter().
- resample_filter=None,
- demodulate=True, # Apply weight demodulation?
- # False = convolution, True = correlation (matches torch.nn.functional.conv2d).
- flip_weight=True,
- # Perform modulation, convolution, and demodulation as a single fused operation?
- fused_modconv=True,
-):
- batch_size = x.shape[0]
- out_channels, in_channels, kh, kw = weight.shape
- misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk]
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
- misc.assert_shape(styles, [batch_size, in_channels]) # [NI]
-
- # Pre-normalize inputs to avoid FP16 overflow.
- if x.dtype == torch.float16 and demodulate:
- weight = weight * (1 / np.sqrt(in_channels * kh * kw) /
- weight.norm(float('inf'), dim=[1, 2, 3], keepdim=True)) # max_Ikk
- styles = styles / \
- styles.norm(float('inf'), dim=1, keepdim=True) # max_I
-
- # Calculate per-sample weights and demodulation coefficients.
- w = None
- dcoefs = None
- if demodulate or fused_modconv:
- w = weight.unsqueeze(0) # [NOIkk]
- w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk]
- if demodulate:
- dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO]
- if demodulate and fused_modconv:
- w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk]
-
- # Execute by scaling the activations before and after the convolution.
- if not fused_modconv:
- x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1)
- x = conv2d_resample.conv2d_resample(x=x, w=weight.to(
- x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight)
- if demodulate and noise is not None:
- x = fma.fma(x, dcoefs.to(x.dtype).reshape(
- batch_size, -1, 1, 1), noise.to(x.dtype))
- elif demodulate:
- x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1)
- elif noise is not None:
- x = x.add_(noise.to(x.dtype))
- return x
-
- # Execute as one fused op using grouped convolution.
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- batch_size = int(batch_size)
- misc.assert_shape(x, [batch_size, in_channels, None, None])
- x = x.reshape(1, -1, *x.shape[2:])
- w = w.reshape(-1, in_channels, kh, kw)
- x = conv2d_resample.conv2d_resample(x=x, w=w.to(
- x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight)
- x = x.reshape(batch_size, -1, *x.shape[2:])
- if noise is not None:
- x = x.add_(noise)
- return x
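
The heart of modulated_conv2d is the modulate/demodulate step; the short check below reproduces just that math to show that, after demodulation, every per-sample output filter has approximately unit L2 norm (self-contained, torch only).

```python
import torch

batch, out_c, in_c, k = 2, 8, 4, 3
weight = torch.randn(out_c, in_c, k, k)
styles = torch.randn(batch, in_c)

w = weight.unsqueeze(0) * styles.reshape(batch, 1, -1, 1, 1)   # [NOIkk] modulation
dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt()        # [NO] demodulation coefficients
w = w * dcoefs.reshape(batch, -1, 1, 1, 1)                     # demodulated per-sample weights

print(w.square().sum(dim=[2, 3, 4]).sqrt())                    # all entries ~1.0
```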
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class FullyConnectedLayer(torch.nn.Module):
- def __init__(self,
- in_features, # Number of input features.
- out_features, # Number of output features.
- bias=True, # Apply additive bias before the activation function?
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- lr_multiplier=1, # Learning rate multiplier.
- bias_init=0, # Initial value for the additive bias.
- ):
- super().__init__()
- self.in_features = in_features
- self.out_features = out_features
- self.activation = activation
- self.weight = torch.nn.Parameter(torch.randn(
- [out_features, in_features]) / lr_multiplier)
- self.bias = torch.nn.Parameter(torch.full(
- [out_features], np.float32(bias_init))) if bias else None
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
- self.bias_gain = lr_multiplier
-
- def forward(self, x):
- w = self.weight.to(x.dtype) * self.weight_gain
- b = self.bias
- if b is not None:
- b = b.to(x.dtype)
- if self.bias_gain != 1:
- b = b * self.bias_gain
-
- if self.activation == 'linear' and b is not None:
- x = torch.addmm(b.unsqueeze(0), x, w.t())
- else:
- x = x.matmul(w.t())
- x = bias_act.bias_act(x, b, act=self.activation)
- return x
-
- def extra_repr(self):
- return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Conv2dLayer(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Width and height of the convolution kernel.
- kernel_size,
- bias=True, # Apply additive bias before the activation function?
- # Activation function: 'relu', 'lrelu', etc.
- activation='linear',
- up=1, # Integer upsampling factor.
- down=1, # Integer downsampling factor.
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output to +-X, None = disable clamping.
- conv_clamp=None,
- channels_last=False, # Expect the input to have memory_format=channels_last?
- trainable=True, # Update the weights of this layer during training?
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.activation = activation
- self.up = up
- self.down = down
- self.conv_clamp = conv_clamp
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.padding = kernel_size // 2
- self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2))
- self.act_gain = bias_act.activation_funcs[activation].def_gain
-
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to(
- memory_format=memory_format)
- bias = torch.zeros([out_channels]) if bias else None
- if trainable:
- self.weight = torch.nn.Parameter(weight)
- self.bias = torch.nn.Parameter(bias) if bias is not None else None
- else:
- self.register_buffer('weight', weight)
- if bias is not None:
- self.register_buffer('bias', bias)
- else:
- self.bias = None
-
- def forward(self, x, gain=1):
- w = self.weight * self.weight_gain
- b = self.bias.to(x.dtype) if self.bias is not None else None
- flip_weight = (self.up == 1) # slightly faster
- x = conv2d_resample.conv2d_resample(x=x, w=w.to(
- x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight)
-
- act_gain = self.act_gain * gain
- act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
- x = bias_act.bias_act(x, b, act=self.activation,
- gain=act_gain, clamp=act_clamp)
- return x
-
- def extra_repr(self):
- return ' '.join([
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, activation={self.activation:s},',
- f'up={self.up}, down={self.down}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MappingNetwork(torch.nn.Module):
- def __init__(self,
- # Input latent (Z) dimensionality, 0 = no latent.
- z_dim,
- # Conditioning label (C) dimensionality, 0 = no label.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Number of intermediate latents to output, None = do not broadcast.
- num_ws,
- num_layers=8, # Number of mapping layers.
- # Label embedding dimensionality, None = same as w_dim.
- embed_features=None,
- # Number of intermediate features in the mapping layers, None = same as w_dim.
- layer_features=None,
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Learning rate multiplier for the mapping layers.
- lr_multiplier=0.01,
- # Decay for tracking the moving average of W during training, None = do not track.
- w_avg_beta=0.998,
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.num_ws = num_ws
- self.num_layers = num_layers
- self.w_avg_beta = w_avg_beta
-
- if embed_features is None:
- embed_features = w_dim
- if c_dim == 0:
- embed_features = 0
- if layer_features is None:
- layer_features = w_dim
- features_list = [z_dim + embed_features] + \
- [layer_features] * (num_layers - 1) + [w_dim]
-
- if c_dim > 0:
- self.embed = FullyConnectedLayer(c_dim, embed_features)
- for idx in range(num_layers):
- in_features = features_list[idx]
- out_features = features_list[idx + 1]
- layer = FullyConnectedLayer(
- in_features, out_features, activation=activation, lr_multiplier=lr_multiplier)
- setattr(self, f'fc{idx}', layer)
-
- if num_ws is not None and w_avg_beta is not None:
- self.register_buffer('w_avg', torch.zeros([w_dim]))
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
- # Embed, normalize, and concat inputs.
- x = None
- with torch.autograd.profiler.record_function('input'):
- if self.z_dim > 0:
- misc.assert_shape(z, [None, self.z_dim])
- x = normalize_2nd_moment(z.to(torch.float32))
- if self.c_dim > 0:
- misc.assert_shape(c, [None, self.c_dim])
- y = normalize_2nd_moment(self.embed(c.to(torch.float32)))
- x = torch.cat([x, y], dim=1) if x is not None else y
-
- # Main layers.
- for idx in range(self.num_layers):
- layer = getattr(self, f'fc{idx}')
- x = layer(x)
-
- # Update moving average of W.
- if update_emas and self.w_avg_beta is not None:
- with torch.autograd.profiler.record_function('update_w_avg'):
- self.w_avg.copy_(x.detach().mean(
- dim=0).lerp(self.w_avg, self.w_avg_beta))
-
- # Broadcast.
- if self.num_ws is not None:
- with torch.autograd.profiler.record_function('broadcast'):
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
-
- # Apply truncation.
- if truncation_psi != 1:
- with torch.autograd.profiler.record_function('truncate'):
- assert self.w_avg_beta is not None
- if self.num_ws is None or truncation_cutoff is None:
- x = self.w_avg.lerp(x, truncation_psi)
- else:
- x[:, :truncation_cutoff] = self.w_avg.lerp(
- x[:, :truncation_cutoff], truncation_psi)
- return x
-
- def extra_repr(self):
- return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- # Intermediate latent (W) dimensionality.
- w_dim,
- resolution, # Resolution of this layer.
- kernel_size=3, # Convolution kernel size.
- up=1, # Integer upsampling factor.
- use_noise=True, # Enable noise input?
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- channels_last=False, # Use channels_last format for the weights?
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.w_dim = w_dim
- self.resolution = resolution
- self.up = up
- self.use_noise = use_noise
- self.activation = activation
- self.conv_clamp = conv_clamp
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.padding = kernel_size // 2
- self.act_gain = bias_act.activation_funcs[activation].def_gain
-
- self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- self.weight = torch.nn.Parameter(torch.randn(
- [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format))
- if use_noise:
- self.register_buffer(
- 'noise_const', torch.randn([resolution, resolution]))
- self.noise_strength = torch.nn.Parameter(torch.zeros([]))
- self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
-
- def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1):
- assert noise_mode in ['random', 'const', 'none']
- in_resolution = self.resolution // self.up
- misc.assert_shape(x, [None, self.in_channels,
- in_resolution, in_resolution])
- styles = self.affine(w)
-
- noise = None
- if self.use_noise and noise_mode == 'random':
- noise = torch.randn([x.shape[0], 1, self.resolution,
- self.resolution], device=x.device) * self.noise_strength
- if self.use_noise and noise_mode == 'const':
- noise = self.noise_const * self.noise_strength
-
- flip_weight = (self.up == 1) # slightly faster
- x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up,
- padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv)
-
- act_gain = self.act_gain * gain
- act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
- x = bias_act.bias_act(x, self.bias.to(
- x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp)
- return x
-
- def extra_repr(self):
- return ' '.join([
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d},',
- f'resolution={self.resolution:d}, up={self.up}, activation={self.activation:s}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class ToRGBLayer(torch.nn.Module):
- def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.w_dim = w_dim
- self.conv_clamp = conv_clamp
- self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
- memory_format = torch.channels_last if channels_last else torch.contiguous_format
- self.weight = torch.nn.Parameter(torch.randn(
- [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format))
- self.bias = torch.nn.Parameter(torch.zeros([out_channels]))
- self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2))
-
- def forward(self, x, w, fused_modconv=True):
- styles = self.affine(w) * self.weight_gain
- x = modulated_conv2d(x=x, weight=self.weight, styles=styles,
- demodulate=False, fused_modconv=fused_modconv)
- x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp)
- return x
-
- def extra_repr(self):
- return f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}, w_dim={self.w_dim:d}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisBlock(torch.nn.Module):
- def __init__(self,
- # Number of input channels, 0 = first block.
- in_channels,
- # Number of output channels.
- out_channels,
- # Intermediate latent (W) dimensionality.
- w_dim,
- # Resolution of this block.
- resolution,
- # Number of output color channels.
- img_channels,
- is_last, # Is this the last block?
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='skip',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=256,
- use_fp16=False, # Use FP16 for this block?
- fp16_channels_last=False, # Use channels-last memory format with FP16?
- # Default value of fused_modconv. 'inference_only' = True for inference, False for training.
- fused_modconv_default=True,
- # Arguments for SynthesisLayer.
- **layer_kwargs,
- ):
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.w_dim = w_dim
- self.resolution = resolution
- self.img_channels = img_channels
- self.is_last = is_last
- self.architecture = architecture
- self.use_fp16 = use_fp16
- self.channels_last = (use_fp16 and fp16_channels_last)
- self.fused_modconv_default = fused_modconv_default
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
- self.num_conv = 0
- self.num_torgb = 0
-
- if in_channels == 0:
- self.const = torch.nn.Parameter(torch.randn(
- [out_channels, resolution, resolution]))
-
- if in_channels != 0:
- self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2,
- resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs)
- self.num_conv += 1
-
- self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution,
- conv_clamp=conv_clamp, channels_last=self.channels_last, **layer_kwargs)
- self.num_conv += 1
-
- if is_last or architecture == 'skip':
- self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim,
- conv_clamp=conv_clamp, channels_last=self.channels_last)
- self.num_torgb += 1
-
- if in_channels != 0 and architecture == 'resnet':
- self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2,
- resample_filter=resample_filter, channels_last=self.channels_last)
-
- def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, update_emas=False, **layer_kwargs):
- _ = update_emas # unused
- misc.assert_shape(
- ws, [None, self.num_conv + self.num_torgb, self.w_dim])
- w_iter = iter(ws.unbind(dim=1))
- if ws.device.type != 'cuda':
- force_fp32 = True
- dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32
- memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format
- if fused_modconv is None:
- fused_modconv = self.fused_modconv_default
- if fused_modconv == 'inference_only':
- fused_modconv = (not self.training)
-
- # Input.
- if self.in_channels == 0:
- x = self.const.to(dtype=dtype, memory_format=memory_format)
- x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1])
- else:
- misc.assert_shape(x, [None, self.in_channels,
- self.resolution // 2, self.resolution // 2])
- x = x.to(dtype=dtype, memory_format=memory_format)
-
- # Main layers.
- if self.in_channels == 0:
- x = self.conv1(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- elif self.architecture == 'resnet':
- y = self.skip(x, gain=np.sqrt(0.5))
- x = self.conv0(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv,
- gain=np.sqrt(0.5), **layer_kwargs)
- x = y.add_(x)
- else:
- x = self.conv0(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
- x = self.conv1(x, next(w_iter),
- fused_modconv=fused_modconv, **layer_kwargs)
-
- # ToRGB.
- if img is not None:
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution // 2, self.resolution // 2])
- img = upfirdn2d.upsample2d(img, self.resample_filter)
- if self.is_last or self.architecture == 'skip':
- y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv)
- y = y.to(dtype=torch.float32,
- memory_format=torch.contiguous_format)
- img = img.add_(y) if img is not None else y
-
- assert x.dtype == dtype
- assert img is None or img.dtype == torch.float32
- return x, img
-
- def extra_repr(self):
- return f'resolution={self.resolution:d}, architecture={self.architecture:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
- def __init__(self,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output image resolution.
- img_channels, # Number of color channels.
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Use FP16 for the N highest resolutions.
- num_fp16_res=4,
- **block_kwargs, # Arguments for SynthesisBlock.
- ):
- assert img_resolution >= 4 and img_resolution & (
- img_resolution - 1) == 0
- super().__init__()
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_resolution_log2 = int(np.log2(img_resolution))
- self.img_channels = img_channels
- self.num_fp16_res = num_fp16_res
- self.block_resolutions = [
- 2 ** i for i in range(2, self.img_resolution_log2 + 1)]
- channels_dict = {res: min(channel_base // res, channel_max)
- for res in self.block_resolutions}
- fp16_resolution = max(
- 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-
- self.num_ws = 0
- for res in self.block_resolutions:
- in_channels = channels_dict[res // 2] if res > 4 else 0
- out_channels = channels_dict[res]
- use_fp16 = (res >= fp16_resolution)
- is_last = (res == self.img_resolution)
- block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res,
- img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, **block_kwargs)
- self.num_ws += block.num_conv
- if is_last:
- self.num_ws += block.num_torgb
- setattr(self, f'b{res}', block)
-
- def forward(self, ws, return_feature=False, **block_kwargs):
- block_ws = []
- features = []
- with torch.autograd.profiler.record_function('split_ws'):
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
- ws = ws.to(torch.float32)
- w_idx = 0
- for res in self.block_resolutions:
- block = getattr(self, f'b{res}')
- block_ws.append(
- ws.narrow(1, w_idx, block.num_conv + block.num_torgb))
- w_idx += block.num_conv
-
- x = img = None
- for res, cur_ws in zip(self.block_resolutions, block_ws):
- block = getattr(self, f'b{res}')
- x, img = block(x, img, cur_ws, **block_kwargs)
- features.append(x)
- if return_feature:
- return img, features
- else:
- return img
-
- def extra_repr(self):
- return ' '.join([
- f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},',
- f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},',
- f'num_fp16_res={self.num_fp16_res:d}'])
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- # Conditioning label (C) dimensionality.
- c_dim,
- # Intermediate latent (W) dimensionality.
- w_dim,
- img_resolution, # Output resolution.
- img_channels, # Number of output color channels.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- synthesis_kwargs={}, # Arguments for SynthesisNetwork.
- resize=None,
- # **synthesis_kwargs, # Arguments for SynthesisNetwork.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.synthesis = SynthesisNetwork(
- w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
- self.num_ws = self.synthesis.num_ws
- self.mapping = MappingNetwork(
- z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
- self.resize = resize
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, return_feature=False, **synthesis_kwargs):
- if input_is_w:
- ws = z
- if ws.dim() == 2:
- ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1])
- else:
- ws = self.mapping(z, c, truncation_psi=truncation_psi,
- truncation_cutoff=truncation_cutoff, update_emas=update_emas)
- img = self.synthesis(ws, update_emas=update_emas,
- return_feature=return_feature, **synthesis_kwargs)
- if self.resize is not None:
- img = imresize(img, [self.resize, self.resize])
- return img
-
-
-def imresize(image, size):
- dim = image.dim()
- if dim == 3:
- image = image.unsqueeze(1)
- b, _, h, w = image.shape
- if size[0] > h:
- image = F.interpolate(image, size, mode='bilinear')
- elif size[0] < h:
- image = F.interpolate(image, size, mode='area')
- if dim == 3:
- image = image.squeeze(1)
- return image
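
A minimal usage sketch for the Generator defined above. It assumes the full stylegan_human package (including torch_utils and its custom ops) is importable and that this file is reachable as training.networks_stylegan2; both are assumptions about the surrounding project layout.

```python
import torch
from training.networks_stylegan2 import Generator  # assumed import path

G = Generator(z_dim=512, c_dim=0, w_dim=512, img_resolution=256, img_channels=3)
z = torch.randn(4, 512)                       # latent codes
img = G(z, c=None, truncation_psi=0.7)        # [4, 3, 256, 256], values roughly in [-1, 1]

# W can also be fed directly, bypassing the mapping network on the second call.
w = G.mapping(z, None)                        # [4, G.num_ws, 512]
img_w = G(w, c=None, input_is_w=True)
```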
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class DiscriminatorBlock(torch.nn.Module):
- def __init__(self,
- # Number of input channels, 0 = first block.
- in_channels,
- # Number of intermediate channels.
- tmp_channels,
- # Number of output channels.
- out_channels,
- # Resolution of this block.
- resolution,
- # Number of input color channels.
- img_channels,
- # Index of the first layer.
- first_layer_idx,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Low-pass filter to apply when resampling activations.
- resample_filter=[1, 3, 3, 1],
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- use_fp16=False, # Use FP16 for this block?
- fp16_channels_last=False, # Use channels-last memory format with FP16?
- # Freeze-D: Number of layers to freeze.
- freeze_layers=0,
- ):
- assert in_channels in [0, tmp_channels]
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.resolution = resolution
- self.img_channels = img_channels
- self.first_layer_idx = first_layer_idx
- self.architecture = architecture
- self.use_fp16 = use_fp16
- self.channels_last = (use_fp16 and fp16_channels_last)
- self.register_buffer(
- 'resample_filter', upfirdn2d.setup_filter(resample_filter))
-
- self.num_layers = 0
-
- def trainable_gen():
- while True:
- layer_idx = self.first_layer_idx + self.num_layers
- trainable = (layer_idx >= freeze_layers)
- self.num_layers += 1
- yield trainable
- trainable_iter = trainable_gen()
-
- if in_channels == 0 or architecture == 'skip':
- self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation,
- trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation,
- trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2,
- trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last)
-
- if architecture == 'resnet':
- self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2,
- trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last)
-
- def forward(self, x, img, force_fp32=False):
- if (x if x is not None else img).device.type != 'cuda':
- force_fp32 = True
- dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32
- memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format
-
- # Input.
- if x is not None:
- misc.assert_shape(x, [None, self.in_channels,
- self.resolution, self.resolution])
- x = x.to(dtype=dtype, memory_format=memory_format)
-
- # FromRGB.
- if self.in_channels == 0 or self.architecture == 'skip':
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution])
- img = img.to(dtype=dtype, memory_format=memory_format)
- y = self.fromrgb(img)
- x = x + y if x is not None else y
- img = upfirdn2d.downsample2d(
- img, self.resample_filter) if self.architecture == 'skip' else None
-
- # Main layers.
- if self.architecture == 'resnet':
- y = self.skip(x, gain=np.sqrt(0.5))
- x = self.conv0(x)
- x = self.conv1(x, gain=np.sqrt(0.5))
- x = y.add_(x)
- else:
- x = self.conv0(x)
- x = self.conv1(x)
-
- assert x.dtype == dtype
- return x, img
-
- def extra_repr(self):
- return f'resolution={self.resolution:d}, architecture={self.architecture:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class MinibatchStdLayer(torch.nn.Module):
- def __init__(self, group_size, num_channels=1):
- super().__init__()
- self.group_size = group_size
- self.num_channels = num_channels
-
- def forward(self, x):
- N, C, H, W = x.shape
- with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants
- G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor(
- N)) if self.group_size is not None else N
- F = self.num_channels
- c = C // F
-
- # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c.
- y = x.reshape(G, -1, F, c, H, W)
- # [GnFcHW] Subtract mean over group.
- y = y - y.mean(dim=0)
- # [nFcHW] Calc variance over group.
- y = y.square().mean(dim=0)
- y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group.
- # [nF] Take average over channels and pixels.
- y = y.mean(dim=[2, 3, 4])
- y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions.
- # [NFHW] Replicate over group and pixels.
- y = y.repeat(G, 1, H, W)
- # [NCHW] Append to input as new channels.
- x = torch.cat([x, y], dim=1)
- return x
-
- def extra_repr(self):
- return f'group_size={self.group_size}, num_channels={self.num_channels:d}'
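
A quick shape check for the layer above (assumes this module's MinibatchStdLayer and its torch_utils dependency are importable): the layer appends num_channels extra feature maps carrying the per-group stddev statistic.

```python
import torch

layer = MinibatchStdLayer(group_size=4, num_channels=1)
x = torch.randn(8, 16, 32, 32)
y = layer(x)
print(y.shape)   # torch.Size([8, 17, 32, 32]) -- one extra channel appended
```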
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class DiscriminatorEpilogue(torch.nn.Module):
- def __init__(self,
- in_channels, # Number of input channels.
- # Dimensionality of mapped conditioning label, 0 = no label.
- cmap_dim,
- resolution, # Resolution of this block.
- # Number of input color channels.
- img_channels,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Group size for the minibatch standard deviation layer, None = entire minibatch.
- mbstd_group_size=4,
- # Number of features for the minibatch standard deviation layer, 0 = disable.
- mbstd_num_channels=1,
- # Activation function: 'relu', 'lrelu', etc.
- activation='lrelu',
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=None,
- ):
- assert architecture in ['orig', 'skip', 'resnet']
- super().__init__()
- self.in_channels = in_channels
- self.cmap_dim = cmap_dim
- self.resolution = resolution
- self.img_channels = img_channels
- self.architecture = architecture
-
- if architecture == 'skip':
- self.fromrgb = Conv2dLayer(
- img_channels, in_channels, kernel_size=1, activation=activation)
- self.mbstd = MinibatchStdLayer(
- group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None
- self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels,
- kernel_size=3, activation=activation, conv_clamp=conv_clamp)
- self.fc = FullyConnectedLayer(
- in_channels * (resolution ** 2), in_channels, activation=activation)
- self.out = FullyConnectedLayer(
- in_channels, 1 if cmap_dim == 0 else cmap_dim)
-
- def forward(self, x, img, cmap, force_fp32=False):
- misc.assert_shape(x, [None, self.in_channels,
- self.resolution, self.resolution]) # [NCHW]
- _ = force_fp32 # unused
- dtype = torch.float32
- memory_format = torch.contiguous_format
-
- # FromRGB.
- x = x.to(dtype=dtype, memory_format=memory_format)
- if self.architecture == 'skip':
- misc.assert_shape(
- img, [None, self.img_channels, self.resolution, self.resolution])
- img = img.to(dtype=dtype, memory_format=memory_format)
- x = x + self.fromrgb(img)
-
- # Main layers.
- if self.mbstd is not None:
- x = self.mbstd(x)
- x = self.conv(x)
- x = self.fc(x.flatten(1))
- x = self.out(x)
-
- # Conditioning.
- if self.cmap_dim > 0:
- misc.assert_shape(cmap, [None, self.cmap_dim])
- x = (x * cmap).sum(dim=1, keepdim=True) * \
- (1 / np.sqrt(self.cmap_dim))
-
- assert x.dtype == dtype
- return x
-
- def extra_repr(self):
- return f'resolution={self.resolution:d}, architecture={self.architecture:s}'
-
-# ----------------------------------------------------------------------------
-
-
-@persistence.persistent_class
-class Discriminator(torch.nn.Module):
- def __init__(self,
- # Conditioning label (C) dimensionality.
- c_dim,
- img_resolution, # Input resolution.
- # Number of input color channels.
- img_channels,
- # Architecture: 'orig', 'skip', 'resnet'.
- architecture='resnet',
- # Overall multiplier for the number of channels.
- channel_base=32768,
- # Maximum number of channels in any layer.
- channel_max=512,
- # Use FP16 for the N highest resolutions.
- num_fp16_res=4,
- # Clamp the output of convolution layers to +-X, None = disable clamping.
- conv_clamp=256,
- # Dimensionality of mapped conditioning label, None = default.
- cmap_dim=None,
- block_kwargs={}, # Arguments for DiscriminatorBlock.
- mapping_kwargs={}, # Arguments for MappingNetwork.
- # Arguments for DiscriminatorEpilogue.
- epilogue_kwargs={},
- ):
- super().__init__()
- self.c_dim = c_dim
- self.img_resolution = img_resolution
- self.img_resolution_log2 = int(np.log2(img_resolution))
- self.img_channels = img_channels
- self.block_resolutions = [
- 2 ** i for i in range(self.img_resolution_log2, 2, -1)]
- channels_dict = {res: min(channel_base // res, channel_max)
- for res in self.block_resolutions + [4]}
- fp16_resolution = max(
- 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8)
-
- if cmap_dim is None:
- cmap_dim = channels_dict[4]
- if c_dim == 0:
- cmap_dim = 0
-
- common_kwargs = dict(img_channels=img_channels,
- architecture=architecture, conv_clamp=conv_clamp)
- cur_layer_idx = 0
- for res in self.block_resolutions:
- in_channels = channels_dict[res] if res < img_resolution else 0
- tmp_channels = channels_dict[res]
- out_channels = channels_dict[res // 2]
- use_fp16 = (res >= fp16_resolution)
- block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res,
- first_layer_idx=cur_layer_idx, use_fp16=use_fp16, **block_kwargs, **common_kwargs)
- setattr(self, f'b{res}', block)
- cur_layer_idx += block.num_layers
- if c_dim > 0:
- self.mapping = MappingNetwork(
- z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs)
- self.b4 = DiscriminatorEpilogue(
- channels_dict[4], cmap_dim=cmap_dim, resolution=4, **epilogue_kwargs, **common_kwargs)
-
- def forward(self, img, c, update_emas=False, **block_kwargs):
- _ = update_emas # unused
- x = None
- for res in self.block_resolutions:
- block = getattr(self, f'b{res}')
- x, img = block(x, img, **block_kwargs)
-
- cmap = None
- if self.c_dim > 0:
- cmap = self.mapping(None, c)
- x = self.b4(x, img, cmap)
- return x
-
- def extra_repr(self):
- return f'c_dim={self.c_dim:d}, img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d}'
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/archs/srvgg_arch.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/archs/srvgg_arch.py
deleted file mode 100644
index 39460965c9c5ee9cd6eb41c50d33574cb8ba6e50..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/archs/srvgg_arch.py
+++ /dev/null
@@ -1,69 +0,0 @@
-from basicsr.utils.registry import ARCH_REGISTRY
-from torch import nn as nn
-from torch.nn import functional as F
-
-
-@ARCH_REGISTRY.register()
-class SRVGGNetCompact(nn.Module):
- """A compact VGG-style network structure for super-resolution.
-
- It is a compact network that performs upsampling only in the last layer, so no convolution is
- conducted on the high-resolution (HR) feature space.
-
- Args:
- num_in_ch (int): Channel number of inputs. Default: 3.
- num_out_ch (int): Channel number of outputs. Default: 3.
- num_feat (int): Channel number of intermediate features. Default: 64.
- num_conv (int): Number of convolution layers in the body network. Default: 16.
- upscale (int): Upsampling factor. Default: 4.
- act_type (str): Activation type, options: 'relu', 'prelu', 'leakyrelu'. Default: prelu.
- """
-
- def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
- super(SRVGGNetCompact, self).__init__()
- self.num_in_ch = num_in_ch
- self.num_out_ch = num_out_ch
- self.num_feat = num_feat
- self.num_conv = num_conv
- self.upscale = upscale
- self.act_type = act_type
-
- self.body = nn.ModuleList()
- # the first conv
- self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
- # the first activation
- if act_type == 'relu':
- activation = nn.ReLU(inplace=True)
- elif act_type == 'prelu':
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == 'leakyrelu':
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the body structure
- for _ in range(num_conv):
- self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
- # activation
- if act_type == 'relu':
- activation = nn.ReLU(inplace=True)
- elif act_type == 'prelu':
- activation = nn.PReLU(num_parameters=num_feat)
- elif act_type == 'leakyrelu':
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the last conv
- self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
- # upsample
- self.upsampler = nn.PixelShuffle(upscale)
-
- def forward(self, x):
- out = x
- for i in range(0, len(self.body)):
- out = self.body[i](out)
-
- out = self.upsampler(out)
- # add the nearest upsampled image, so that the network learns the residual
- base = F.interpolate(x, scale_factor=self.upscale, mode='nearest')
- out += base
- return out
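
A minimal usage sketch for SRVGGNetCompact (assumes basicsr is installed, since the class imports ARCH_REGISTRY from it, and that this file is importable as realesrgan.archs.srvgg_arch).

```python
import torch
from realesrgan.archs.srvgg_arch import SRVGGNetCompact  # assumed import path

model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4)
lr = torch.randn(1, 3, 64, 64)       # low-resolution input
with torch.no_grad():
    sr = model(lr)
print(sr.shape)                      # torch.Size([1, 3, 256, 256]) -- 4x upscaled
```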
diff --git a/spaces/EronSamez/RVC_HFmeu/tools/infer/trans_weights.py b/spaces/EronSamez/RVC_HFmeu/tools/infer/trans_weights.py
deleted file mode 100644
index 1c54eefd6e7c678238d31e251a2e15479bf35d5b..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/tools/infer/trans_weights.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import pdb
-
-import torch
-
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-suc\G_1000.pth")["model"]#sim_nsf#
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-freeze-vocoder-flow-enc_q\G_1000.pth")["model"]#sim_nsf#
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-freeze-vocoder\G_1000.pth")["model"]#sim_nsf#
-# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-test\G_1000.pth")["model"]#sim_nsf#
-a = torch.load(
- r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-no_opt-no_dropout\G_1000.pth"
-)[
- "model"
-] # sim_nsf#
-for key in a.keys():
- a[key] = a[key].half()
-# torch.save(a,"ft-mi-freeze-vocoder_true_1k.pt")#
-# torch.save(a,"ft-mi-sim1k.pt")#
-torch.save(a, "ft-mi-no_opt-no_dropout.pt") #
diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/ProjectBreakdownUI.html b/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/ProjectBreakdownUI.html
deleted file mode 100644
index 65b54ecbfb58ce55f2be84e92ce61dd73ebfa387..0000000000000000000000000000000000000000
--- a/spaces/FYP-23-S1-21/Refineverse_Plugin/templates/ProjectBreakdownUI.html
+++ /dev/null
@@ -1,171 +0,0 @@
-
-
-
- Project Task Breakdown
-
-
-
-
-
-
-
-
-
-
-
-
-
- Project Task Breakdown
-
-
-
-
-
-
-
-
Breakdown Summary
-
-
-
-
-
- User Story
- Project Task
-
-
-
-
-
- {% for row in rows %}
-
-
- {{ row[0] }}
- {{ row[1] }}
-
- {% endfor %}
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/create_pyramid_with_color_coded_ells.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/create_pyramid_with_color_coded_ells.py
deleted file mode 100644
index 392494eb59d1a1c08ebd5698822e45f63c8dd021..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/create_pyramid_with_color_coded_ells.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class CreatePyramidWithColorCodedElls(Task):
- """Pick up ell-shaped objects of different colors and stack them onto a pallet in the shape of a pyramid."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "stack the {color} ell on the pyramid"
- self.task_completed_desc = "done stacking ell pyramid."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Add pallet.
- pallet_size = (0.15, 0.15, 0.01)
- pallet_urdf = 'pallet/pallet.urdf'
- pallet_pose = self.get_random_pose(env, pallet_size)
- env.add_object(pallet_urdf, pallet_pose, category='fixed')
-
- # Ell colors.
- colors = [
- utils.COLORS['red'], utils.COLORS['blue'],
- utils.COLORS['yellow'], utils.COLORS['green']
- ]
- color_names = ['red', 'blue', 'yellow', 'green']
-
- # Add Ells.
- ell_size = (0.04, 0.04, 0.04)
- ell_urdf = 'insertion/ell.urdf'
- objs = []
- for i in range(4):
- ell_pose = self.get_random_pose(env, ell_size)
- ell_id = env.add_object(ell_urdf, ell_pose, color=colors[i])
- objs.append(ell_id)
-
- # Associate placement locations for goals.
- place_pos = [(0, -0.05, 0.03), (0, 0, 0.03),
- (0, 0.05, 0.03), (0, 0, 0.08)]
- targs = [(utils.apply(pallet_pose, i), pallet_pose[1]) for i in place_pos]
-
- # Goal: ells are stacked in a pyramid (bottom row: red, blue, yellow; top: green).
- for i in range(4):
- self.add_goal(objs=[objs[i]], matches=np.ones((1, 1)), targ_poses=[targs[i]], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1 / 4, symmetries=[np.pi/2],
- language_goal=self.lang_template.format(color=color_names[i]))
\ No newline at end of file
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/insert_ell_along_square_path.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/insert_ell_along_square_path.py
deleted file mode 100644
index b1dbce4c230b14bbfdd888392a1727673addd7c7..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/insert_ell_along_square_path.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import numpy as np
-import os
-import pybullet as p
-import random
-from cliport.tasks import primitives
-from cliport.tasks.grippers import Spatula
-from cliport.tasks.task import Task
-from cliport.utils import utils
-
-class InsertEllAlongSquarePath(Task):
- """Pick up each ell block and insert it into the fixture of the same color. However, the robot must move each ell block along the marked square path to reach the fixture."""
-
- def __init__(self):
- super().__init__()
- self.max_steps = 20
- self.lang_template = "move the {color} ell block into the {color} fixture"
- self.task_completed_desc = "done inserting ell blocks."
- self.additional_reset()
-
- def reset(self, env):
- super().reset(env)
-
- # Ell block colors.
- colors = ['red', 'blue', 'green', 'yellow']
-
- # Add ell blocks and fixtures.
- ell_size = (0.04, 0.04, 0.04)
- ell_urdf = 'insertion/ell.urdf'
- fixture_urdf = 'insertion/fixture.urdf'
- ell_blocks = []
- fixtures = []
- for color in colors:
- # Add ell block
- ell_pose = self.get_random_pose(env, ell_size)
- ell_id = env.add_object(ell_urdf, ell_pose, color=utils.COLORS[color])
- ell_blocks.append(ell_id)
-
- # Add fixture
- fixture_pose = self.get_random_pose(env, ell_size)
- fixture_id = env.add_object(fixture_urdf, fixture_pose, color=utils.COLORS[color])
- fixtures.append(fixture_id)
-
- # Goal: each ell block is inserted into the fixture of the same color.
- for i in range(len(colors)):
- self.add_goal(objs=[ell_blocks[i]], matches=np.ones((1, 1)), targ_poses=[p.getBasePositionAndOrientation(fixtures[i])], replace=False,
- rotations=True, metric='pose', params=None, step_max_reward=1/len(colors),
- language_goal=self.lang_template.format(color=colors[i]))
-
- # Add square path marked by small blocks.
- path_block_size = (0.02, 0.02, 0.02)
- path_block_urdf = 'block/small.urdf'
- path_block_color = utils.COLORS['gray']
- for _ in range(16):
- path_block_pose = self.get_random_pose(env, path_block_size)
- env.add_object(path_block_urdf, path_block_pose, color=path_block_color)
\ No newline at end of file
diff --git a/spaces/GilbertClaus/VideoCutter/bunkrr.py b/spaces/GilbertClaus/VideoCutter/bunkrr.py
deleted file mode 100644
index 9b87e04e4b2478c09fd94dffa979df299c832c5e..0000000000000000000000000000000000000000
--- a/spaces/GilbertClaus/VideoCutter/bunkrr.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from bs4 import BeautifulSoup
-import re
-from others import *
-
-def get_info_bunkrr(soup: BeautifulSoup):
- # Find the video title in the <title> element
- title = soup.find("title")
- if title:
- video_title = title.text
-
- # Replace '&amp;' with '&'
- video_title = video_title.replace('&amp;', '&')
-
- # If there is a '-' character, find the last '-' and drop everything from it to the end
- if '-' in video_title:
- last_dash_index = video_title.rfind('-')
- video_title = video_title[:last_dash_index]
-
- # Replace the remaining '-' characters with ' '
- video_title = video_title.replace('-', ' ')
-
- # Find the download link that starts with https and ends with .mp4
- link_download = soup.find("source", src=re.compile(r'^https.*\.mp4$'))
- if link_download:
- link_download = link_download['src']
-
- # Replace '&amp;' with '&'
- link_download = link_download.replace('&amp;', '&')
- print(link_download)
- else:
- link_download = ''
-
- return video_title, link_download
- else:
- print("No <title> element found")
- return '', ''
\ No newline at end of file
diff --git a/spaces/Giozh/openai-reverse-proxy/Dockerfile b/spaces/Giozh/openai-reverse-proxy/Dockerfile
deleted file mode 100644
index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000
--- a/spaces/Giozh/openai-reverse-proxy/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/preprocess.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/preprocess.py
deleted file mode 100644
index 551a8b29c4d84c0e1430f285a1c8b5e10c98ee5f..0000000000000000000000000000000000000000
--- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/preprocess.py
+++ /dev/null
@@ -1,175 +0,0 @@
-from multiprocess.pool import ThreadPool
-from encoder.params_data import *
-from encoder.config import librispeech_datasets, anglophone_nationalites
-from datetime import datetime
-from encoder import audio
-from pathlib import Path
-from tqdm import tqdm
-import numpy as np
-
-
-class DatasetLog:
- """
- Registers metadata about the dataset in a text file.
- """
- def __init__(self, root, name):
- self.text_file = open(Path(root, "Log_%s.txt" % name.replace("/", "_")), "w")
- self.sample_data = dict()
-
- start_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M"))
- self.write_line("Creating dataset %s on %s" % (name, start_time))
- self.write_line("-----")
- self._log_params()
-
- def _log_params(self):
- from encoder import params_data
- self.write_line("Parameter values:")
- for param_name in (p for p in dir(params_data) if not p.startswith("__")):
- value = getattr(params_data, param_name)
- self.write_line("\t%s: %s" % (param_name, value))
- self.write_line("-----")
-
- def write_line(self, line):
- self.text_file.write("%s\n" % line)
-
- def add_sample(self, **kwargs):
- for param_name, value in kwargs.items():
- if param_name not in self.sample_data:
- self.sample_data[param_name] = []
- self.sample_data[param_name].append(value)
-
- def finalize(self):
- self.write_line("Statistics:")
- for param_name, values in self.sample_data.items():
- self.write_line("\t%s:" % param_name)
- self.write_line("\t\tmin %.3f, max %.3f" % (np.min(values), np.max(values)))
- self.write_line("\t\tmean %.3f, median %.3f" % (np.mean(values), np.median(values)))
- self.write_line("-----")
- end_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M"))
- self.write_line("Finished on %s" % end_time)
- self.text_file.close()
-
-
-def _init_preprocess_dataset(dataset_name, datasets_root, out_dir) -> (Path, DatasetLog):
- dataset_root = datasets_root.joinpath(dataset_name)
- if not dataset_root.exists():
- print("Couldn\'t find %s, skipping this dataset." % dataset_root)
- return None, None
- return dataset_root, DatasetLog(out_dir, dataset_name)
-
-
-def _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, extension,
- skip_existing, logger):
- print("%s: Preprocessing data for %d speakers." % (dataset_name, len(speaker_dirs)))
-
- # Function to preprocess utterances for one speaker
- def preprocess_speaker(speaker_dir: Path):
- # Give a name to the speaker that includes its dataset
- speaker_name = "_".join(speaker_dir.relative_to(datasets_root).parts)
-
- # Create an output directory with that name, as well as a txt file containing a
- # reference to each source file.
- speaker_out_dir = out_dir.joinpath(speaker_name)
- speaker_out_dir.mkdir(exist_ok=True)
- sources_fpath = speaker_out_dir.joinpath("_sources.txt")
-
- # There's a possibility that the preprocessing was interrupted earlier, check if
- # there already is a sources file.
- if sources_fpath.exists():
- try:
- with sources_fpath.open("r") as sources_file:
- existing_fnames = {line.split(",")[0] for line in sources_file}
- except Exception:
- existing_fnames = set()
- else:
- existing_fnames = set()
-
- # Gather all audio files for that speaker recursively
- sources_file = sources_fpath.open("a" if skip_existing else "w")
- for in_fpath in speaker_dir.glob("**/*.%s" % extension):
- # Check if the target output file already exists
- out_fname = "_".join(in_fpath.relative_to(speaker_dir).parts)
- out_fname = out_fname.replace(".%s" % extension, ".npy")
- if skip_existing and out_fname in existing_fnames:
- continue
-
- # Load and preprocess the waveform
- wav = audio.preprocess_wav(in_fpath)
- if len(wav) == 0:
- continue
-
- # Create the mel spectrogram, discard those that are too short
- frames = audio.wav_to_mel_spectrogram(wav)
- if len(frames) < partials_n_frames:
- continue
-
- out_fpath = speaker_out_dir.joinpath(out_fname)
- np.save(out_fpath, frames)
- logger.add_sample(duration=len(wav) / sampling_rate)
- sources_file.write("%s,%s\n" % (out_fname, in_fpath))
-
- sources_file.close()
-
- # Process the utterances for each speaker
- with ThreadPool(8) as pool:
- list(tqdm(pool.imap(preprocess_speaker, speaker_dirs), dataset_name, len(speaker_dirs),
- unit="speakers"))
- logger.finalize()
- print("Done preprocessing %s.\n" % dataset_name)
-
-
-def preprocess_librispeech(datasets_root: Path, out_dir: Path, skip_existing=False):
- for dataset_name in librispeech_datasets["train"]["other"]:
- # Initialize the preprocessing
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.glob("*"))
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "flac",
- skip_existing, logger)
-
-
-def preprocess_voxceleb1(datasets_root: Path, out_dir: Path, skip_existing=False):
- # Initialize the preprocessing
- dataset_name = "VoxCeleb1"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Get the contents of the meta file
- with dataset_root.joinpath("vox1_meta.csv").open("r") as metafile:
- metadata = [line.split("\t") for line in metafile][1:]
-
- # Select the ID and the nationality, filter out non-anglophone speakers
- nationalities = {line[0]: line[3] for line in metadata}
- keep_speaker_ids = [speaker_id for speaker_id, nationality in nationalities.items() if
- nationality.lower() in anglophone_nationalites]
- print("VoxCeleb1: using samples from %d (presumed anglophone) speakers out of %d." %
- (len(keep_speaker_ids), len(nationalities)))
-
- # Get the speaker directories for anglophone speakers only
- speaker_dirs = dataset_root.joinpath("wav").glob("*")
- speaker_dirs = [speaker_dir for speaker_dir in speaker_dirs if
- speaker_dir.name in keep_speaker_ids]
- print("VoxCeleb1: found %d anglophone speakers on the disk, %d missing (this is normal)." %
- (len(speaker_dirs), len(keep_speaker_ids) - len(speaker_dirs)))
-
- # Preprocess all speakers
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "wav",
- skip_existing, logger)
-
-
-def preprocess_voxceleb2(datasets_root: Path, out_dir: Path, skip_existing=False):
- # Initialize the preprocessing
- dataset_name = "VoxCeleb2"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Get the speaker directories
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.joinpath("dev", "aac").glob("*"))
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "m4a",
- skip_existing, logger)
diff --git a/spaces/Goblin-of-Games/README/README.md b/spaces/Goblin-of-Games/README/README.md
deleted file mode 100644
index ced63da3d1aeb683b47bc1c65b0776c33080052e..0000000000000000000000000000000000000000
--- a/spaces/Goblin-of-Games/README/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
-Name: Goblin of Games
-
-Address: Ago Palace way, Lagos,Nigeria
-
-Phone: +2349032425535
-
-Email: oluwanifemiadelakun001@gmail.com
-
-Website: https://oluwanifemiadelaku.wixsite.com/goblin-of-games
-
-About Us:
-
-Goblin of Games is a leading developer and publisher of fantasy-themed video games. With a team of experienced game developers and designers, we create immersive and engaging gaming experiences for players of all ages. From role-playing games to strategy games and beyond, we strive to push the boundaries of what is possible in the world of fantasy gaming. Come join us on our journey and discover the magic of Goblin of Games.
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py
deleted file mode 100644
index 78f4d0d9de3d6b8dd2b097531317956d8e3b19f1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/uniformer.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/uniformer.py
deleted file mode 100644
index f599e981d0a31d5cd5dae01595dc1ff30513ccc5..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/uniformer.py
+++ /dev/null
@@ -1,422 +0,0 @@
-# --------------------------------------------------------
-# UniFormer
-# Copyright (c) 2022 SenseTime X-Lab
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Kunchang Li
-# --------------------------------------------------------
-
-from collections import OrderedDict
-import math
-
-from functools import partial
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from mmcv_custom import load_checkpoint
-from mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class CMlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Conv2d(in_features, hidden_features, 1)
- self.act = act_layer()
- self.fc2 = nn.Conv2d(hidden_features, out_features, 1)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class CBlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = nn.BatchNorm2d(dim)
- self.conv1 = nn.Conv2d(dim, dim, 1)
- self.conv2 = nn.Conv2d(dim, dim, 1)
- self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = nn.BatchNorm2d(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x)))))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SABlock(nn.Module):
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- B, N, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = x + self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- x = x.transpose(1, 2).reshape(B, N, H, W)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class SABlock_Windows(nn.Module):
- def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.window_size=window_size
- self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim,
- num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.pos_embed(x)
- x = x.permute(0, 2, 3, 1)
- B, H, W, C = x.shape
- shortcut = x
- x = self.norm1(x)
-
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- x = x.permute(0, 3, 1, 2).reshape(B, C, H, W)
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
- self.img_size = img_size
- self.patch_size = patch_size
- self.num_patches = num_patches
- self.norm = nn.LayerNorm(embed_dim)
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x):
- B, _, H, W = x.shape
- x = self.proj(x)
- B, _, H, W = x.shape
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous()
- return x
-
-
-@BACKBONES.register_module()
-class UniFormer(nn.Module):
- """ Vision Transformer
- A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
- https://arxiv.org/abs/2010.11929
- """
- def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512],
- head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6),
- pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0],
- windows=False, hybrid=False, window_size=14):
- """
- Args:
- layers (list): number of blocks in each layer
- img_size (int, tuple): input image size
- in_chans (int): number of input channels
- num_classes (int): number of classes for classification head
- embed_dim (int): embedding dimension
- head_dim (int): dimension of attention heads
- mlp_ratio (int): ratio of mlp hidden dim to embedding dim
- qkv_bias (bool): enable bias for qkv if True
- qk_scale (float): override default qk scale of head_dim ** -0.5 if set
- representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
- drop_rate (float): dropout rate
- attn_drop_rate (float): attention dropout rate
- drop_path_rate (float): stochastic depth rate
- norm_layer (nn.Module): normalization layer
- pretrained_path (str): path of pretrained model
- use_checkpoint (bool): whether use checkpoint
- checkpoint_num (list): index for using checkpoint in every stage
- windows (bool): whether use window MHRA
- hybrid (bool): whether use hybrid MHRA
- window_size (int): size of window (>14)
- """
- super().__init__()
- self.num_classes = num_classes
- self.use_checkpoint = use_checkpoint
- self.checkpoint_num = checkpoint_num
- self.windows = windows
- print(f'Use Checkpoint: {self.use_checkpoint}')
- print(f'Checkpoint Number: {self.checkpoint_num}')
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
-
- self.patch_embed1 = PatchEmbed(
- img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0])
- self.patch_embed2 = PatchEmbed(
- img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1])
- self.patch_embed3 = PatchEmbed(
- img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2])
- self.patch_embed4 = PatchEmbed(
- img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3])
-
- self.pos_drop = nn.Dropout(p=drop_rate)
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule
- num_heads = [dim // head_dim for dim in embed_dim]
- self.blocks1 = nn.ModuleList([
- CBlock(
- dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
- for i in range(layers[0])])
- self.norm1=norm_layer(embed_dim[0])
- self.blocks2 = nn.ModuleList([
- CBlock(
- dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer)
- for i in range(layers[1])])
- self.norm2 = norm_layer(embed_dim[1])
- if self.windows:
- print('Use local window for all blocks in stage3')
- self.blocks3 = nn.ModuleList([
- SABlock_Windows(
- dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)
- for i in range(layers[2])])
- elif hybrid:
- print('Use hybrid window for blocks in stage3')
- block3 = []
- for i in range(layers[2]):
- if (i + 1) % 4 == 0:
- block3.append(SABlock(
- dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer))
- else:
- block3.append(SABlock_Windows(
- dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer))
- self.blocks3 = nn.ModuleList(block3)
- else:
- print('Use global window for all blocks in stage3')
- self.blocks3 = nn.ModuleList([
- SABlock(
- dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)
- for i in range(layers[2])])
- self.norm3 = norm_layer(embed_dim[2])
- self.blocks4 = nn.ModuleList([
- SABlock(
- dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer)
- for i in range(layers[3])])
- self.norm4 = norm_layer(embed_dim[3])
-
- # Representation layer
- if representation_size:
- self.num_features = representation_size
- self.pre_logits = nn.Sequential(OrderedDict([
- ('fc', nn.Linear(embed_dim, representation_size)),
- ('act', nn.Tanh())
- ]))
- else:
- self.pre_logits = nn.Identity()
-
- self.apply(self._init_weights)
- self.init_weights(pretrained=pretrained_path)
-
- def init_weights(self, pretrained):
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger)
- print(f'Load pretrained model from {pretrained}')
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed', 'cls_token'}
-
- def get_classifier(self):
- return self.head
-
- def reset_classifier(self, num_classes, global_pool=''):
- self.num_classes = num_classes
- self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- def forward_features(self, x):
- out = []
- x = self.patch_embed1(x)
- x = self.pos_drop(x)
- for i, blk in enumerate(self.blocks1):
- if self.use_checkpoint and i < self.checkpoint_num[0]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm1(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed2(x)
- for i, blk in enumerate(self.blocks2):
- if self.use_checkpoint and i < self.checkpoint_num[1]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm2(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed3(x)
- for i, blk in enumerate(self.blocks3):
- if self.use_checkpoint and i < self.checkpoint_num[2]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm3(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- x = self.patch_embed4(x)
- for i, blk in enumerate(self.blocks4):
- if self.use_checkpoint and i < self.checkpoint_num[3]:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- x_out = self.norm4(x.permute(0, 2, 3, 1))
- out.append(x_out.permute(0, 3, 1, 2).contiguous())
- return tuple(out)
-
- def forward(self, x):
- x = self.forward_features(x)
- return x
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/modules/test_seanet.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/modules/test_seanet.py
deleted file mode 100644
index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/modules/test_seanet.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-
-import pytest
-import torch
-
-from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock
-from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d
-
-
-class TestSEANetModel:
-
- def test_base(self):
- encoder = SEANetEncoder()
- decoder = SEANetDecoder()
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_causal(self):
- encoder = SEANetEncoder(causal=True)
- decoder = SEANetDecoder(causal=True)
- x = torch.randn(1, 1, 24000)
-
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_conv_skip_connection(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False)
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def test_seanet_encoder_decoder_final_act(self):
- encoder = SEANetEncoder(true_skip=False)
- decoder = SEANetDecoder(true_skip=False, final_activation='Tanh')
-
- x = torch.randn(1, 1, 24000)
- z = encoder(x)
- assert list(z.shape) == [1, 128, 75], z.shape
- y = decoder(z)
- assert y.shape == x.shape, (x.shape, y.shape)
-
- def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in encoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == ('none' if n_blocks <= n_disable_blocks else norm)
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- # here we add + 1 to n_blocks as we increment n_blocks just after the block
- assert resnet_layer.conv.norm_type == ('none' if (n_blocks + 1) <= n_disable_blocks else norm)
-
- def test_encoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_encoder_blocks_norm(encoder, disable_blocks, norm)
-
- def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str):
- n_blocks = 0
- for layer in decoder.model:
- if isinstance(layer, StreamableConv1d):
- n_blocks += 1
- assert layer.conv.norm_type == ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
- elif isinstance(layer, StreamableConvTranspose1d):
- n_blocks += 1
- assert layer.convtr.norm_type == ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
- elif isinstance(layer, SEANetResnetBlock):
- for resnet_layer in layer.block:
- if isinstance(resnet_layer, StreamableConv1d):
- assert resnet_layer.conv.norm_type == \
- ('none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm)
-
- def test_decoder_disable_norm(self):
- n_residuals = [0, 1, 3]
- disable_blocks = [0, 1, 2, 3, 4, 5, 6]
- norms = ['weight_norm', 'none']
- for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms):
- decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm,
- disable_norm_outer_blocks=disable_blocks)
- self._check_decoder_blocks_norm(decoder, disable_blocks, norm)
-
- def test_disable_norm_raises_exception(self):
- # Invalid disable_norm_outer_blocks values raise exceptions
- with pytest.raises(AssertionError):
- SEANetEncoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(disable_norm_outer_blocks=-1)
-
- with pytest.raises(AssertionError):
- SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7)
diff --git a/spaces/Grezz/generate_human_motion/pyrender/tests/conftest.py b/spaces/Grezz/generate_human_motion/pyrender/tests/conftest.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GroveStreet/GTA_SOVITS/vdecoder/__init__.py b/spaces/GroveStreet/GTA_SOVITS/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HachiRe/Fusani/index.html b/spaces/HachiRe/Fusani/index.html
deleted file mode 100644
index ca90f96309a5c6457bfeeb74ea271b11ad0bd83a..0000000000000000000000000000000000000000
--- a/spaces/HachiRe/Fusani/index.html
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
-
-
-
-
-
- Fusani
-
-
-
-
-
- Fusani
- go conlang!
-
- Enter a word:
-
-
- Add Vowels
- Remove Vowels
- Add Consonants
- Remove Consonants
-
-
-
-
-
-
-
-
-
- How does this work?
-
- The code provided is a set of instructions that a computer can follow to make changes to a word that someone types into a box on a website.
- There are four buttons on the website that you can click. Each button does a different thing to the word that you type in.
- The first button is called "Add Vowels." When you click this button, the computer looks at the word you typed and adds some vowels to it to make it longer. It does this by picking a random vowel and putting it in between two consonants. For example, if you typed in the word "dog," the computer might add an "e" to make it "doeg."
- The second button is called "Remove Vowels." When you click this button, the computer looks at the word you typed and takes out all the vowels. So if you typed in the word "hello," the computer would change it to "hll."
- The third button is called "Add Consonants." When you click this button, the computer looks at the word you typed and adds some consonants to it to make it longer. It does this by picking a random consonant and putting it in between two vowels. For example, if you typed in the word "cat," the computer might add an "h" to make it "chat."
- The fourth button is called "Remove Consonants." When you click this button, the computer looks at the word you typed and takes out all the consonants. So if you typed in the word "world," the computer would change it to "o."
-
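- The page's actual JavaScript is not included here, but the four button behaviours described above are simple string transformations. The Python sketch below only illustrates the same idea; the function names and the exact insertion rule are assumptions, not the site's code.

```python
import random

# Assumes lowercase ASCII input, as in the examples above
VOWELS = "aeiou"
CONSONANTS = "bcdfghjklmnpqrstvwxyz"

def add_vowels(word: str) -> str:
    # Insert a random vowel somewhere inside the word, e.g. "dog" might become "doeg"
    if len(word) < 2:
        return word + random.choice(VOWELS)
    i = random.randint(1, len(word) - 1)
    return word[:i] + random.choice(VOWELS) + word[i:]

def remove_vowels(word: str) -> str:
    # Drop every vowel, e.g. "hello" -> "hll"
    return "".join(c for c in word if c not in VOWELS)

def add_consonants(word: str) -> str:
    # Insert a random consonant somewhere inside the word, e.g. "cat" might become "chat"
    if len(word) < 2:
        return word + random.choice(CONSONANTS)
    i = random.randint(1, len(word) - 1)
    return word[:i] + random.choice(CONSONANTS) + word[i:]

def remove_consonants(word: str) -> str:
    # Drop every consonant, e.g. "world" -> "o"
    return "".join(c for c in word if c not in CONSONANTS)

print(remove_vowels("hello"))      # hll
print(remove_consonants("world"))  # o
```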
-
- Offline Fusani
- Here are the requirements:
-
-
-
-
- Copy the code from pastebin.
- Install Visual Studio Code.
- Open Visual Studio Code and click on "New File".
- Create an HTML file with the extension ".html". For example, "index.html".
- Paste the copied code into the HTML file.
- Save the HTML file.
- Open the HTML file in a web browser to see the preview.
-
-
-
-
-
-
- Join our fandom! ❤️
- Donate Here! 💲 (btw you pay in cycles)
-
-
-
-
diff --git a/spaces/Hallucinate/demo/ldm/modules/losses/vqperceptual.py b/spaces/Hallucinate/demo/ldm/modules/losses/vqperceptual.py
deleted file mode 100644
index f69981769e4bd5462600458c4fcf26620f7e4306..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/ldm/modules/losses/vqperceptual.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-from einops import repeat
-
-from taming.modules.discriminator.model import NLayerDiscriminator, weights_init
-from taming.modules.losses.lpips import LPIPS
-from taming.modules.losses.vqperceptual import hinge_d_loss, vanilla_d_loss
-
-
-def hinge_d_loss_with_exemplar_weights(logits_real, logits_fake, weights):
- assert weights.shape[0] == logits_real.shape[0] == logits_fake.shape[0]
- loss_real = torch.mean(F.relu(1. - logits_real), dim=[1,2,3])
- loss_fake = torch.mean(F.relu(1. + logits_fake), dim=[1,2,3])
- loss_real = (weights * loss_real).sum() / weights.sum()
- loss_fake = (weights * loss_fake).sum() / weights.sum()
- d_loss = 0.5 * (loss_real + loss_fake)
- return d_loss
-
-def adopt_weight(weight, global_step, threshold=0, value=0.):
- if global_step < threshold:
- weight = value
- return weight
-
-
-def measure_perplexity(predicted_indices, n_embed):
- # src: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py
- # eval cluster perplexity. when perplexity == num_embeddings then all clusters are used exactly equally
- encodings = F.one_hot(predicted_indices, n_embed).float().reshape(-1, n_embed)
- avg_probs = encodings.mean(0)
- perplexity = (-(avg_probs * torch.log(avg_probs + 1e-10)).sum()).exp()
- cluster_use = torch.sum(avg_probs > 0)
- return perplexity, cluster_use
-
-def l1(x, y):
- return torch.abs(x-y)
-
-
-def l2(x, y):
- return torch.pow((x-y), 2)
-
-
-class VQLPIPSWithDiscriminator(nn.Module):
- def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0,
- disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
- perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
- disc_ndf=64, disc_loss="hinge", n_classes=None, perceptual_loss="lpips",
- pixel_loss="l1"):
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- assert perceptual_loss in ["lpips", "clips", "dists"]
- assert pixel_loss in ["l1", "l2"]
- self.codebook_weight = codebook_weight
- self.pixel_weight = pixelloss_weight
- if perceptual_loss == "lpips":
- print(f"{self.__class__.__name__}: Running with LPIPS.")
- self.perceptual_loss = LPIPS().eval()
- else:
- raise ValueError(f"Unknown perceptual loss: >> {perceptual_loss} <<")
- self.perceptual_weight = perceptual_weight
-
- if pixel_loss == "l1":
- self.pixel_loss = l1
- else:
- self.pixel_loss = l2
-
- self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm,
- ndf=disc_ndf
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- if disc_loss == "hinge":
- self.disc_loss = hinge_d_loss
- elif disc_loss == "vanilla":
- self.disc_loss = vanilla_d_loss
- else:
- raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
- print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.")
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
- self.n_classes = n_classes
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx,
- global_step, last_layer=None, cond=None, split="train", predicted_indices=None):
- if codebook_loss is None:
- codebook_loss = torch.tensor([0.]).to(inputs.device)
- #rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- rec_loss = self.pixel_loss(inputs.contiguous(), reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
- rec_loss = rec_loss + self.perceptual_weight * p_loss
- else:
- p_loss = torch.tensor([0.0])
-
- nll_loss = rec_loss
- #nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- nll_loss = torch.mean(nll_loss)
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
- g_loss = -torch.mean(logits_fake)
-
- try:
- d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- loss = nll_loss + d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean()
-
- log = {"{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/quant_loss".format(split): codebook_loss.detach().mean(),
- "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/p_loss".format(split): p_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- if predicted_indices is not None:
- assert self.n_classes is not None
- with torch.no_grad():
- perplexity, cluster_usage = measure_perplexity(predicted_indices, self.n_classes)
- log[f"{split}/perplexity"] = perplexity
- log[f"{split}/cluster_usage"] = cluster_usage
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean()
- }
- return d_loss, log
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/fconv_self_att.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/fconv_self_att.py
deleted file mode 100644
index 8357ef7847ed25a62345e219c41906156828c233..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/fconv_self_att.py
+++ /dev/null
@@ -1,674 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import math
-import os
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils
-from fairseq.incremental_decoding_utils import with_incremental_state
-from fairseq.models import (
- CompositeEncoder,
- FairseqDecoder,
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- register_model,
- register_model_architecture,
-)
-from fairseq.modules import (
- DownsampledMultiHeadAttention,
- FairseqDropout,
- GradMultiply,
- LayerNorm,
- LearnedPositionalEmbedding,
- LinearizedConvolution,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("fconv_self_att")
-class FConvModelSelfAtt(FairseqEncoderDecoderModel):
- @classmethod
- def hub_models(cls):
- return {
- "conv.stories.pretrained": {
- "path": "https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.gz",
- "checkpoint_file": "pretrained_checkpoint.pt",
- "tokenizer": "nltk",
- },
- "conv.stories": {
- "path": "https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.gz",
- "checkpoint_file": "fusion_checkpoint.pt",
- "tokenizer": "nltk",
- "pretrained": "True",
- "pretrained_checkpoint": "./pretrained_checkpoint.pt",
- },
- # Test set containing dictionaries
- "data.stories": "https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2",
- }
-
- def __init__(self, encoder, decoder, pretrained_encoder=None):
- super().__init__(encoder, decoder)
- self.encoder.num_attention_layers = sum(
- layer is not None for layer in decoder.attention
- )
- self.pretrained_encoder = pretrained_encoder
- if self.pretrained_encoder is None:
- encoders = {"encoder": encoder}
- else:
- encoders = {"encoder": encoder, "pretrained": self.pretrained_encoder}
- # for fusion model, CompositeEncoder contains both pretrained and training encoders
- # these are forwarded and then combined in the decoder
- self.encoder = CompositeEncoder(encoders)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--dropout', type=float, metavar='D',
- help='dropout probability')
- parser.add_argument('--encoder-embed-dim', type=int, metavar='N',
- help='encoder embedding dimension')
- parser.add_argument('--encoder-layers', type=str, metavar='EXPR',
- help='encoder layers [(dim, kernel_size), ...]')
- parser.add_argument('--decoder-embed-dim', type=int, metavar='N',
- help='decoder embedding dimension')
- parser.add_argument('--decoder-layers', type=str, metavar='EXPR',
- help='decoder layers [(dim, kernel_size), ...]')
- parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N',
- help='decoder output embedding dimension')
- parser.add_argument('--decoder-attention', type=str, metavar='EXPR',
- help='decoder attention [True, ...]')
- parser.add_argument('--self-attention', type=str, metavar='EXPR',
- help='decoder self-attention layers, ex: [True] + [False]*5')
- parser.add_argument('--multihead-attention-nheads', type=int,
- help='Number of heads to use in attention')
- parser.add_argument('--multihead-self-attention-nheads', type=int,
- help='Number of heads to use in self-attention')
- parser.add_argument('--encoder-attention', type=str, metavar='EXPR',
- help='encoder attention [True, ...]')
- parser.add_argument('--encoder-attention-nheads', type=int,
- help='Number of heads to use in encoder attention')
- parser.add_argument('--project-input', type=str, metavar='EXPR',
- help='Use projections in self-attention [True, ...]')
- parser.add_argument('--gated-attention', type=str, metavar='EXPR',
- help='Use GLU layers in self-attention projections [True, ...]')
- parser.add_argument('--downsample', type=str, metavar='EXPR',
- help='Use downsampling in self-attention [True, ...]')
- parser.add_argument('--pretrained-checkpoint', metavar='DIR',
- help='path to load checkpoint from pretrained model')
- parser.add_argument('--pretrained', type=str, metavar='EXPR',
- help='use pretrained model when training [True, ...]')
- # fmt: on
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
- trained_encoder, trained_decoder = None, None
- pretrained = eval(args.pretrained)
- if pretrained:
- logger.info("loading pretrained model")
- if not os.path.exists(args.pretrained_checkpoint):
- new_pretrained_checkpoint = os.path.join(
- args.data, args.pretrained_checkpoint
- )
- if os.path.exists(new_pretrained_checkpoint):
- args.pretrained_checkpoint = new_pretrained_checkpoint
- trained_model = checkpoint_utils.load_model_ensemble(
- filenames=[args.pretrained_checkpoint],
- task=task,
- )[0][0]
- trained_decoder = list(trained_model.children())[1]
- trained_encoder = list(trained_model.children())[0]
-
- # freeze pretrained model
- for param in trained_decoder.parameters():
- param.requires_grad = False
- for param in trained_encoder.parameters():
- param.requires_grad = False
-
- encoder = FConvEncoder(
- task.source_dictionary,
- embed_dim=args.encoder_embed_dim,
- convolutions=eval(args.encoder_layers),
- dropout=args.dropout,
- max_positions=args.max_source_positions,
- attention=eval(args.encoder_attention),
- attention_nheads=args.encoder_attention_nheads,
- )
-
- decoder = FConvDecoder(
- task.target_dictionary,
- embed_dim=args.decoder_embed_dim,
- convolutions=eval(args.decoder_layers),
- out_embed_dim=args.decoder_out_embed_dim,
- attention=eval(args.decoder_attention),
- dropout=args.dropout,
- max_positions=args.max_target_positions,
- selfattention=eval(args.self_attention),
- attention_nheads=args.multihead_attention_nheads,
- selfattention_nheads=args.multihead_self_attention_nheads,
- project_input=eval(args.project_input),
- gated_attention=eval(args.gated_attention),
- downsample=eval(args.downsample),
- pretrained=pretrained,
- trained_decoder=trained_decoder,
- )
- model = FConvModelSelfAtt(encoder, decoder, trained_encoder)
-
- return model
-
- @property
- def pretrained(self):
- return self.pretrained_encoder is not None
-
-
-class FConvEncoder(FairseqEncoder):
- """Convolutional encoder"""
-
- def __init__(
- self,
- dictionary,
- embed_dim=512,
- max_positions=1024,
- convolutions=((512, 3),) * 20,
- dropout=0.1,
- attention=False,
- attention_nheads=1,
- ):
- super().__init__(dictionary)
- self.dropout_module = FairseqDropout(
- dropout, module_name=self.__class__.__name__
- )
- self.num_attention_layers = None
-
- num_embeddings = len(dictionary)
- self.padding_idx = dictionary.pad()
- self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx)
- self.embed_positions = PositionalEmbedding(
- max_positions,
- embed_dim,
- self.padding_idx,
- )
-
- def expand_bool_array(val):
- if isinstance(val, bool):
- # expand True into [True, True, ...] and do the same with False
- return [val] * len(convolutions)
- return val
-
- attention = expand_bool_array(attention)
-
- in_channels = convolutions[0][0]
- self.fc1 = Linear(embed_dim, in_channels, dropout=dropout)
- self.projections = nn.ModuleList()
- self.convolutions = nn.ModuleList()
- self.attention = nn.ModuleList()
- self.attproj = nn.ModuleList()
- for i, (out_channels, kernel_size) in enumerate(convolutions):
- self.projections.append(
- Linear(in_channels, out_channels)
- if in_channels != out_channels
- else None
- )
- self.convolutions.append(
- ConvTBC(in_channels, out_channels * 2, kernel_size, dropout=dropout)
- )
-
- self.attention.append(
- SelfAttention(out_channels, embed_dim, attention_nheads)
- if attention[i]
- else None
- )
- in_channels = out_channels
-
- self.fc2 = Linear(in_channels, embed_dim)
-
- def forward(self, src_tokens, src_lengths):
- # embed tokens and positions
- x = self.embed_tokens(src_tokens) + self.embed_positions(src_tokens)
- x = self.dropout_module(x)
- input_embedding = x.transpose(0, 1)
-
- # project to size of convolution
- x = self.fc1(x)
-
- encoder_padding_mask = src_tokens.eq(self.padding_idx).t() # -> T x B
- if not encoder_padding_mask.any():
- encoder_padding_mask = None
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # temporal convolutions
- for proj, conv, attention in zip(
- self.projections, self.convolutions, self.attention
- ):
- residual = x if proj is None else proj(x)
-
- if encoder_padding_mask is not None:
- x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0)
-
- x = self.dropout_module(x)
- padding_l = (conv.kernel_size[0] - 1) // 2
- padding_r = conv.kernel_size[0] // 2
- x = F.pad(x, (0, 0, 0, 0, padding_l, padding_r))
- x = conv(x)
- x = F.glu(x, dim=2)
- if attention is not None:
- x = attention(x)
- x = (x + residual) * math.sqrt(0.5)
-
- # T x B x C -> B x T x C
- x = x.transpose(1, 0)
-
- # project back to size of embedding
- x = self.fc2(x)
-
- if encoder_padding_mask is not None:
- encoder_padding_mask = encoder_padding_mask.t() # -> B x T
- x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0)
-
- # scale gradients (this only affects backward, not forward)
- x = GradMultiply.apply(x, 1.0 / (2.0 * self.num_attention_layers))
-
- # add output to input embedding for attention
- y = (x + input_embedding.transpose(0, 1)) * math.sqrt(0.5)
-
- return {
- "encoder_out": (x, y),
- "encoder_padding_mask": encoder_padding_mask, # B x T
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- encoder_out["encoder_out"] = tuple(
- eo.index_select(0, new_order) for eo in encoder_out["encoder_out"]
- )
-
- if encoder_out["encoder_padding_mask"] is not None:
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(0, new_order)
-
- if "pretrained" in encoder_out:
- encoder_out["pretrained"]["encoder_out"] = tuple(
- eo.index_select(0, new_order)
- for eo in encoder_out["pretrained"]["encoder_out"]
- )
-
- return encoder_out
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- return self.embed_positions.max_positions
-
-
-@with_incremental_state
-class FConvDecoder(FairseqDecoder):
- """Convolutional decoder"""
-
- def __init__(
- self,
- dictionary,
- embed_dim=512,
- out_embed_dim=256,
- max_positions=1024,
- convolutions=((512, 3),) * 8,
- attention=True,
- dropout=0.1,
- selfattention=False,
- attention_nheads=1,
- selfattention_nheads=1,
- project_input=False,
- gated_attention=False,
- downsample=False,
- pretrained=False,
- trained_decoder=None,
- ):
- super().__init__(dictionary)
- self.register_buffer("version", torch.Tensor([2]))
- self.pretrained = pretrained
- self.pretrained_decoder = trained_decoder
- self.dropout_module = FairseqDropout(
- dropout, module_name=self.__class__.__name__
- )
- self.need_attn = True
- in_channels = convolutions[0][0]
-
- def expand_bool_array(val):
- if isinstance(val, bool):
- # expand True into [True, True, ...] and do the same with False
- return [val] * len(convolutions)
- return val
-
- attention = expand_bool_array(attention)
- selfattention = expand_bool_array(selfattention)
-
- if not isinstance(attention, list) or len(attention) != len(convolutions):
- raise ValueError(
- "Attention is expected to be a list of booleans of "
- "length equal to the number of layers."
- )
-
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx)
-
- self.embed_positions = PositionalEmbedding(
- max_positions,
- embed_dim,
- padding_idx,
- )
-
- self.fc1 = Linear(embed_dim, in_channels, dropout=dropout)
- self.projections = nn.ModuleList()
- self.convolutions = nn.ModuleList()
- self.attention = nn.ModuleList()
- self.selfattention = nn.ModuleList()
- self.attproj = nn.ModuleList()
- for i, (out_channels, kernel_size) in enumerate(convolutions):
- self.projections.append(
- Linear(in_channels, out_channels)
- if in_channels != out_channels
- else None
- )
- self.convolutions.append(
- LinearizedConv1d(
- in_channels,
- out_channels * 2,
- kernel_size,
- padding=(kernel_size - 1),
- dropout=dropout,
- )
- )
-
- self.attention.append(
- DownsampledMultiHeadAttention(
- out_channels,
- embed_dim,
- attention_nheads,
- project_input=project_input,
- gated=False,
- downsample=False,
- )
- if attention[i]
- else None
- )
-
- self.attproj.append(
- Linear(out_channels, embed_dim, dropout=dropout)
- if attention[i]
- else None
- )
- self.selfattention.append(
- SelfAttention(
- out_channels,
- embed_dim,
- selfattention_nheads,
- project_input=project_input,
- gated=gated_attention,
- downsample=downsample,
- )
- if selfattention[i]
- else None
- )
- in_channels = out_channels
-
- self.fc2 = Linear(in_channels, out_embed_dim)
- self.fc3 = Linear(out_embed_dim, num_embeddings, dropout=dropout)
-
- # model fusion
- if self.pretrained:
- # independent gates are learned from the concatenated input
- self.gate1 = nn.Sequential(
- Linear(out_embed_dim * 2, out_embed_dim), nn.Sigmoid()
- )
- self.gate2 = nn.Sequential(
- Linear(out_embed_dim * 2, out_embed_dim), nn.Sigmoid()
- )
- # pretrained and trained models are joined
- self.joining = nn.Sequential(
- Linear(out_embed_dim * 2, out_embed_dim * 2),
- LayerNorm(out_embed_dim * 2),
- nn.GLU(),
- Linear(out_embed_dim, out_embed_dim * 2),
- LayerNorm(out_embed_dim * 2),
- nn.GLU(),
- Linear(out_embed_dim, out_embed_dim),
- LayerNorm(out_embed_dim),
- )
- # pretrained model contains an output layer that is nhid -> vocab size
- # but the models are combined in their hidden state
- # the hook stores the output of the pretrained model forward
- self.pretrained_outputs = {}
-
- def save_output():
- def hook(a, b, output):
- self.pretrained_outputs["out"] = output
-
- return hook
-
- self.pretrained_decoder.fc2.register_forward_hook(save_output())
-
- def forward(self, prev_output_tokens, encoder_out):
- trained_encoder_out = encoder_out["pretrained"] if self.pretrained else None
- encoder_out = encoder_out["encoder"]["encoder_out"]
-
- encoder_a, encoder_b = self._split_encoder_out(encoder_out)
-
- # embed positions
- positions = self.embed_positions(prev_output_tokens)
-
- # embed tokens and positions
- x = self.embed_tokens(prev_output_tokens) + positions
- x = self.dropout_module(x)
- target_embedding = x.transpose(0, 1)
-
- # project to size of convolution
- x = self.fc1(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # temporal convolutions
- avg_attn_scores = None
- for proj, conv, attention, selfattention, attproj in zip(
- self.projections,
- self.convolutions,
- self.attention,
- self.selfattention,
- self.attproj,
- ):
- residual = x if proj is None else proj(x)
-
- x = self.dropout_module(x)
- x = conv(x)
- x = F.glu(x, dim=2)
-
- # attention
- if attention is not None:
- r = x
- x, attn_scores = attention(
- attproj(x) + target_embedding, encoder_a, encoder_b
- )
- x = x + r
- if not self.training and self.need_attn:
- if avg_attn_scores is None:
- avg_attn_scores = attn_scores
- else:
- avg_attn_scores.add_(attn_scores)
-
- if selfattention is not None:
- x = selfattention(x)
-
- x = (x + residual) * math.sqrt(0.5)
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- # project back to size of vocabulary
- x = self.fc2(x)
- x = self.dropout_module(x)
- if not self.pretrained:
- x = self.fc3(x)
-
- # fusion gating
- if self.pretrained:
- trained_x, _ = self.pretrained_decoder.forward(
- prev_output_tokens, trained_encoder_out
- )
- y = torch.cat([x, self.pretrained_outputs["out"]], dim=-1)
- gate1 = self.gate1(y)
- gate2 = self.gate2(y)
- gated_x1 = gate1 * x
- gated_x2 = gate2 * self.pretrained_outputs["out"]
- fusion = torch.cat([gated_x1, gated_x2], dim=-1)
- fusion = self.joining(fusion)
- fusion_output = self.fc3(fusion)
- return fusion_output, avg_attn_scores
- else:
- return x, avg_attn_scores
-
- def max_positions(self):
- """Maximum output length supported by the decoder."""
- return self.embed_positions.max_positions
-
- def make_generation_fast_(self, need_attn=False, **kwargs):
- self.need_attn = need_attn
-
- def _split_encoder_out(self, encoder_out):
- """Split and transpose encoder outputs."""
- # transpose only once to speed up attention layers
- encoder_a, encoder_b = encoder_out
- encoder_a = encoder_a.transpose(0, 1).contiguous()
- encoder_b = encoder_b.transpose(0, 1).contiguous()
- result = (encoder_a, encoder_b)
- return result
-
-
-class SelfAttention(nn.Module):
- def __init__(
- self,
- out_channels,
- embed_dim,
- num_heads,
- project_input=False,
- gated=False,
- downsample=False,
- ):
- super().__init__()
- self.attention = DownsampledMultiHeadAttention(
- out_channels,
- embed_dim,
- num_heads,
- dropout=0,
- bias=True,
- project_input=project_input,
- gated=gated,
- downsample=downsample,
- )
- self.in_proj_q = Linear(out_channels, embed_dim)
- self.in_proj_k = Linear(out_channels, embed_dim)
- self.in_proj_v = Linear(out_channels, embed_dim)
- self.ln = LayerNorm(out_channels)
-
- def forward(self, x):
- residual = x
- query = self.in_proj_q(x)
- key = self.in_proj_k(x)
- value = self.in_proj_v(x)
- x, _ = self.attention(
- query, key, value, mask_future_timesteps=True, use_scalar_bias=True
- )
- return self.ln(x + residual)
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- m.weight.data.normal_(0, 0.1)
- return m
-
-
-def PositionalEmbedding(num_embeddings, embedding_dim, padding_idx):
- m = LearnedPositionalEmbedding(num_embeddings, embedding_dim, padding_idx)
- m.weight.data.normal_(0, 0.1)
- return m
-
-
-def Linear(in_features, out_features, dropout=0.0):
- """Weight-normalized Linear layer (input: N x T x C)"""
- m = nn.Linear(in_features, out_features)
- m.weight.data.normal_(mean=0, std=math.sqrt((1 - dropout) / in_features))
- m.bias.data.zero_()
- return m
-
-
-def LinearizedConv1d(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs):
- """Weight-normalized Conv1d layer optimized for decoding"""
- m = LinearizedConvolution(in_channels, out_channels, kernel_size, **kwargs)
- std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels))
- m.weight.data.normal_(mean=0, std=std)
- m.bias.data.zero_()
- return m
-
-
-def ConvTBC(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs):
- """Weight-normalized Conv1d layer"""
- from fairseq.modules import ConvTBC
-
- m = ConvTBC(in_channels, out_channels, kernel_size, **kwargs)
- std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels))
- m.weight.data.normal_(mean=0, std=std)
- m.bias.data.zero_()
- return m
-
-
-@register_model_architecture("fconv_self_att", "fconv_self_att")
-def base_architecture(args):
- args.dropout = getattr(args, "dropout", 0.1)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_layers = getattr(args, "encoder_layers", "[(512, 3)] * 3")
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_layers = getattr(args, "decoder_layers", "[(512, 3)] * 8")
- args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256)
- args.decoder_attention = getattr(args, "decoder_attention", "True")
- args.self_attention = getattr(args, "self_attention", "False")
- args.encoder_attention = getattr(args, "encoder_attention", "False")
- args.multihead_attention_nheads = getattr(args, "multihead_attention_nheads", 1)
- args.multihead_self_attention_nheads = getattr(
- args, "multihead_self_attention_nheads", 1
- )
- args.encoder_attention_nheads = getattr(args, "encoder_attention_nheads", 1)
- args.project_input = getattr(args, "project_input", "False")
- args.gated_attention = getattr(args, "gated_attention", "False")
- args.downsample = getattr(args, "downsample", "False")
- args.pretrained_checkpoint = getattr(args, "pretrained_checkpoint", "")
- args.pretrained = getattr(args, "pretrained", "False")
-
-
-@register_model_architecture("fconv_self_att", "fconv_self_att_wp")
-def fconv_self_att_wp(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
- args.encoder_layers = getattr(
- args, "encoder_layers", "[(128, 3)] * 2 + [(512,3)] * 1"
- )
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
- args.decoder_layers = getattr(
- args, "decoder_layers", "[(512, 4)] * 4 + [(768, 4)] * 2 + [(1024, 4)] * 1"
- )
- args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256)
- args.self_attention = getattr(args, "self_attention", "True")
- args.multihead_self_attention_nheads = getattr(
- args, "multihead_self_attention_nheads", 4
- )
- args.project_input = getattr(args, "project_input", "True")
- args.gated_attention = getattr(args, "gated_attention", "True")
- args.downsample = getattr(args, "downsample", "True")
- base_architecture(args)
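
The fusion gating at the end of the decoder above (two gates over the concatenated decoder and pretrained-decoder outputs, then `joining` and `fc3`) can be read in isolation. The sketch below is a minimal re-implementation of that pattern, assuming sigmoid gates and illustrative shapes; the `FusionGate` name and the 256/10000 dimensions are invented for the example and are not part of the original fairseq model.

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Gate two decoder streams, concatenate the gated results, then project to the vocabulary."""

    def __init__(self, dim, vocab_size):
        super().__init__()
        # each gate sees the concatenation of both streams
        self.gate1 = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate2 = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.joining = nn.Linear(2 * dim, dim)
        self.fc3 = nn.Linear(dim, vocab_size)

    def forward(self, x, pretrained_out):
        y = torch.cat([x, pretrained_out], dim=-1)
        fused = torch.cat([self.gate1(y) * x, self.gate2(y) * pretrained_out], dim=-1)
        return self.fc3(self.joining(fused))

# toy usage: batch 2, length 7, hidden size 256, vocabulary 10000
logits = FusionGate(256, 10000)(torch.randn(2, 7, 256), torch.randn(2, 7, 256))
print(logits.shape)  # torch.Size([2, 7, 10000])
```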
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer_from_pretrained_xlm.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer_from_pretrained_xlm.py
deleted file mode 100644
index 236d9942e1fb0238cc92e2b4f160520b5cdd6504..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer_from_pretrained_xlm.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-from typing import Any, Dict
-
-from fairseq import checkpoint_utils
-from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.transformer import (
- TransformerDecoder,
- TransformerEncoder,
- TransformerModel,
- base_architecture as transformer_base_architecture,
-)
-
-
-@register_model("transformer_from_pretrained_xlm")
-class TransformerFromPretrainedXLMModel(TransformerModel):
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- TransformerModel.add_args(parser)
- parser.add_argument(
- "--pretrained-xlm-checkpoint",
- type=str,
- metavar="STR",
- help="XLM model to use for initializing transformer encoder and/or decoder",
- )
- parser.add_argument(
- "--init-encoder-only",
- action="store_true",
- help="if set, don't load the XLM weights and embeddings into decoder",
- )
- parser.add_argument(
- "--init-decoder-only",
- action="store_true",
- help="if set, don't load the XLM weights and embeddings into encoder",
- )
-
- @classmethod
- def build_model(self, args, task, cls_dictionary=MaskedLMDictionary):
- assert hasattr(args, "pretrained_xlm_checkpoint"), (
- "You must specify a path for --pretrained-xlm-checkpoint to use "
- "--arch transformer_from_pretrained_xlm"
- )
- assert isinstance(task.source_dictionary, cls_dictionary) and isinstance(
- task.target_dictionary, cls_dictionary
- ), (
- "You should use a MaskedLMDictionary when using --arch "
- "transformer_from_pretrained_xlm because the pretrained XLM model "
- "was trained using data binarized with MaskedLMDictionary. "
- "For translation, you may want to use --task "
- "translation_from_pretrained_xlm"
- )
- assert not (
- getattr(args, "init_encoder_only", False)
- and getattr(args, "init_decoder_only", False)
- ), "Only one of --init-encoder-only and --init-decoder-only can be set."
- return super().build_model(args, task)
-
- @classmethod
- def build_encoder(cls, args, src_dict, embed_tokens):
- return TransformerEncoderFromPretrainedXLM(args, src_dict, embed_tokens)
-
- @classmethod
- def build_decoder(cls, args, tgt_dict, embed_tokens):
- return TransformerDecoderFromPretrainedXLM(args, tgt_dict, embed_tokens)
-
-
-def upgrade_state_dict_with_xlm_weights(
- state_dict: Dict[str, Any], pretrained_xlm_checkpoint: str
-) -> Dict[str, Any]:
- """
- Load XLM weights into a Transformer encoder or decoder model.
-
- Args:
- state_dict: state dict for either TransformerEncoder or
- TransformerDecoder
- pretrained_xlm_checkpoint: checkpoint to load XLM weights from
-
- Raises:
- AssertionError: If architecture (num layers, attention heads, etc.)
- does not match between the current Transformer encoder or
- decoder and the pretrained_xlm_checkpoint
- """
- if not os.path.exists(pretrained_xlm_checkpoint):
- raise IOError("Model file not found: {}".format(pretrained_xlm_checkpoint))
-
- state = checkpoint_utils.load_checkpoint_to_cpu(pretrained_xlm_checkpoint)
- xlm_state_dict = state["model"]
- for key in xlm_state_dict.keys():
-
- for search_key in ["embed_tokens", "embed_positions", "layers"]:
- if search_key in key:
- subkey = key[key.find(search_key) :]
- assert subkey in state_dict, (
- "{} Transformer encoder / decoder "
- "state_dict does not contain {}. Cannot "
- "load {} from pretrained XLM checkpoint "
- "{} into Transformer.".format(
- str(state_dict.keys()), subkey, key, pretrained_xlm_checkpoint
- )
- )
-
- state_dict[subkey] = xlm_state_dict[key]
- return state_dict
-
-
-class TransformerEncoderFromPretrainedXLM(TransformerEncoder):
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(args, dictionary, embed_tokens)
- if getattr(args, "init_decoder_only", False):
- # Don't load XLM weights for encoder if --init-decoder-only
- return
-
- assert hasattr(args, "pretrained_xlm_checkpoint"), (
- "--pretrained-xlm-checkpoint must be specified to load Transformer "
- "encoder from pretrained XLM"
- )
- xlm_loaded_state_dict = upgrade_state_dict_with_xlm_weights(
- state_dict=self.state_dict(),
- pretrained_xlm_checkpoint=args.pretrained_xlm_checkpoint,
- )
- self.load_state_dict(xlm_loaded_state_dict, strict=True)
-
-
-class TransformerDecoderFromPretrainedXLM(TransformerDecoder):
- def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False):
- super().__init__(args, dictionary, embed_tokens, no_encoder_attn)
- if getattr(args, "init_encoder_only", False):
- # Don't load XLM weights for decoder if --init-encoder-only
- return
- assert hasattr(args, "pretrained_xlm_checkpoint"), (
- "--pretrained-xlm-checkpoint must be specified to load Transformer "
- "decoder from pretrained XLM"
- )
-
- xlm_loaded_state_dict = upgrade_state_dict_with_xlm_weights(
- state_dict=self.state_dict(),
- pretrained_xlm_checkpoint=args.pretrained_xlm_checkpoint,
- )
- self.load_state_dict(xlm_loaded_state_dict, strict=True)
-
-
-@register_model_architecture(
- "transformer_from_pretrained_xlm", "transformer_from_pretrained_xlm"
-)
-def base_architecture(args):
- transformer_base_architecture(args)
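
A framework-free sketch of the substring-matching copy that `upgrade_state_dict_with_xlm_weights` performs above; the helper name and the `toy_*` dictionaries are invented purely for illustration.

```python
import torch

def copy_matching_weights(target_sd, source_sd,
                          search_keys=("embed_tokens", "embed_positions", "layers")):
    """Copy source tensors into target entries whose key suffix matches one of search_keys."""
    for key, value in source_sd.items():
        for search_key in search_keys:
            if search_key in key:
                subkey = key[key.find(search_key):]
                assert subkey in target_sd, f"target state_dict has no key {subkey}"
                target_sd[subkey] = value
    return target_sd

toy_source = {"decoder.sentence_encoder.embed_tokens.weight": torch.ones(4, 8)}
toy_target = {"embed_tokens.weight": torch.zeros(4, 8)}
print(copy_matching_weights(toy_target, toy_source)["embed_tokens.weight"].sum())  # tensor(32.)
```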
diff --git a/spaces/Harveenchadha/en_to_indic_translation/README.md b/spaces/Harveenchadha/en_to_indic_translation/README.md
deleted file mode 100644
index 9258bbaf551795c0e63ef2a830b7268971f7b12d..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: English to Indic Translation
-emoji: 🐢
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/models/realesrnet_model.py b/spaces/Iceclear/StableSR/StableSR/basicsr/models/realesrnet_model.py
deleted file mode 100644
index f5790918b969682a0db0e2ed9236b7046d627b90..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/models/realesrnet_model.py
+++ /dev/null
@@ -1,189 +0,0 @@
-import numpy as np
-import random
-import torch
-from torch.nn import functional as F
-
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.sr_model import SRModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-
-
-@MODEL_REGISTRY.register(suffix='basicsr')
-class RealESRNetModel(SRModel):
- """RealESRNet Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It is trained without GAN losses.
- It mainly performs:
- 1. randomly synthesize LQ images in GPU tensors
-    2. optimize the networks with pixel-wise losses (no GAN training).
- """
-
- def __init__(self, opt):
- super(RealESRNetModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- # USM sharpen the GT images
- if self.opt['gt_usm'] is True:
- self.gt = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- self.gt, self.lq = paired_random_crop(self.gt, self.lq, gt_size, self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRNetModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
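
Stripped of the model bookkeeping, the `_dequeue_and_enqueue` pool above reduces to the sketch below. The queue size, tensor shapes and CPU tensors are illustrative assumptions; the original keeps the buffers on the GPU and stores them on the model.

```python
import torch

class PairPool:
    """Pool that shuffles and swaps (lq, gt) training pairs once it is full."""

    def __init__(self, queue_size=8):
        self.queue_size = queue_size
        self.queue_lr = self.queue_gt = None
        self.ptr = 0

    @torch.no_grad()
    def __call__(self, lq, gt):
        b = lq.size(0)
        if self.queue_lr is None:
            assert self.queue_size % b == 0, "queue size must be divisible by batch size"
            self.queue_lr = torch.zeros(self.queue_size, *lq.shape[1:])
            self.queue_gt = torch.zeros(self.queue_size, *gt.shape[1:])
        if self.ptr == self.queue_size:
            # pool is full: shuffle, hand out the first b stored pairs, keep the new ones
            idx = torch.randperm(self.queue_size)
            self.queue_lr, self.queue_gt = self.queue_lr[idx], self.queue_gt[idx]
            out_lq, out_gt = self.queue_lr[:b].clone(), self.queue_gt[:b].clone()
            self.queue_lr[:b], self.queue_gt[:b] = lq.clone(), gt.clone()
            return out_lq, out_gt
        # still filling: enqueue and pass the batch through unchanged
        self.queue_lr[self.ptr:self.ptr + b] = lq.clone()
        self.queue_gt[self.ptr:self.ptr + b] = gt.clone()
        self.ptr += b
        return lq, gt

pool = PairPool(queue_size=8)
for _ in range(5):  # after four batches of 2 the pool is full and starts swapping
    lq, gt = pool(torch.rand(2, 3, 8, 8), torch.rand(2, 3, 32, 32))
```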
diff --git a/spaces/Illumotion/Koboldcpp/otherarch/tools/gpt2_quantize.cpp b/spaces/Illumotion/Koboldcpp/otherarch/tools/gpt2_quantize.cpp
deleted file mode 100644
index 2c0b908f975ea82bed8d8fa59575b165e8b30436..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/otherarch/tools/gpt2_quantize.cpp
+++ /dev/null
@@ -1,182 +0,0 @@
-#include "utils.h"
-#include "common-ggml.h"
-
-#include <cassert>
-#include <cmath>
-#include <cstdio>
-#include <cstring>
-#include <fstream>
-#include <map>
-#include <string>
-#include <vector>
-#include <regex>
-
-// default hparams (GPT-2 117M)
-struct gpt2_hparams {
- int32_t n_vocab = 50257;
- int32_t n_ctx = 1024;
- int32_t n_embd = 768;
- int32_t n_head = 12;
- int32_t n_layer = 12;
- int32_t ftype = 1;
-};
-
-// quantize a model
-bool gpt2_model_quantize(const std::string & fname_inp, const std::string & fname_out, ggml_ftype ftype) {
- gpt_vocab vocab;
-
- printf("%s: loading model from '%s'\n", __func__, fname_inp.c_str());
-
- auto finp = std::ifstream(fname_inp, std::ios::binary);
- if (!finp) {
- fprintf(stderr, "%s: failed to open '%s' for reading\n", __func__, fname_inp.c_str());
- return false;
- }
-
- auto fout = std::ofstream(fname_out, std::ios::binary);
- if (!fout) {
- fprintf(stderr, "%s: failed to open '%s' for writing\n", __func__, fname_out.c_str());
- return false;
- }
-
- // verify magic
- {
- uint32_t magic;
- finp.read((char *) &magic, sizeof(magic));
- if (magic != 0x67676d6c) {
- fprintf(stderr, "%s: invalid model file '%s' (bad magic)\n", __func__, fname_inp.c_str());
- return false;
- }
-
- fout.write((char *) &magic, sizeof(magic));
- }
-
- gpt2_hparams hparams;
-
- // load hparams
- {
- finp.read((char *) &hparams.n_vocab, sizeof(hparams.n_vocab));
- finp.read((char *) &hparams.n_ctx, sizeof(hparams.n_ctx));
- finp.read((char *) &hparams.n_embd, sizeof(hparams.n_embd));
- finp.read((char *) &hparams.n_head, sizeof(hparams.n_head));
- finp.read((char *) &hparams.n_layer, sizeof(hparams.n_layer));
- finp.read((char *) &hparams.ftype, sizeof(hparams.ftype));
-
- const int32_t qntvr_src = hparams.ftype / GGML_QNT_VERSION_FACTOR;
- const int32_t ftype_dst = GGML_QNT_VERSION * GGML_QNT_VERSION_FACTOR + ftype;
-
- printf("%s: n_vocab = %d\n", __func__, hparams.n_vocab);
- printf("%s: n_ctx = %d\n", __func__, hparams.n_ctx);
- printf("%s: n_embd = %d\n", __func__, hparams.n_embd);
- printf("%s: n_head = %d\n", __func__, hparams.n_head);
- printf("%s: n_layer = %d\n", __func__, hparams.n_layer);
- printf("%s: ftype (src) = %d\n", __func__, hparams.ftype);
- printf("%s: qntvr (src) = %d\n", __func__, qntvr_src);
- printf("%s: ftype (dst) = %d\n", __func__, ftype_dst);
- printf("%s: qntvr (dst) = %d\n", __func__, GGML_QNT_VERSION);
-
- fout.write((char *) &hparams.n_vocab, sizeof(hparams.n_vocab));
- fout.write((char *) &hparams.n_ctx, sizeof(hparams.n_ctx));
- fout.write((char *) &hparams.n_embd, sizeof(hparams.n_embd));
- fout.write((char *) &hparams.n_head, sizeof(hparams.n_head));
- fout.write((char *) &hparams.n_layer, sizeof(hparams.n_layer));
- fout.write((char *) &ftype_dst, sizeof(ftype_dst));
- }
-
- // load vocab
- {
- int32_t n_vocab = 0;
- finp.read ((char *) &n_vocab, sizeof(n_vocab));
- fout.write((char *) &n_vocab, sizeof(n_vocab));
-
- if (n_vocab != hparams.n_vocab) {
- fprintf(stderr, "%s: invalid model file '%s' (bad vocab size %d != %d)\n",
- __func__, fname_inp.c_str(), n_vocab, hparams.n_vocab);
- return false;
- }
-
- std::string word;
- for (int i = 0; i < n_vocab; i++) {
- uint32_t len;
- finp.read ((char *) &len, sizeof(len));
- fout.write((char *) &len, sizeof(len));
-
- word.resize(len);
- finp.read ((char *) word.data(), len);
- fout.write((char *) word.data(), len);
-
- vocab.token_to_id[word] = i;
- vocab.id_to_token[i] = word;
- }
- }
-
- // regexes of tensor names to be quantized
-    const std::vector<std::string> to_quant = {
- "model/wte",
- "model/lm_head",
- "model/h.*/attn/c_attn/w",
- "model/h.*/attn/c_proj/w",
- "model/h.*/mlp/c_fc/w",
- "model/h.*/mlp/c_proj/w",
- };
-
- if (!ggml_common_quantize_0(finp, fout, ftype, to_quant, {})) {
- fprintf(stderr, "%s: failed to quantize model '%s'\n", __func__, fname_inp.c_str());
- return false;
- }
-
- finp.close();
- fout.close();
-
- return true;
-}
-
-// usage:
-// ./gpt-2-quantize models/gpt-2-117M/ggml-model.bin models/gpt-2-117M/ggml-model-quant.bin type
-//
-int main(int argc, char ** argv) {
- if (argc != 4) {
- fprintf(stderr, "usage: %s model-f32.bin model-quant.bin type\n", argv[0]);
- ggml_print_ftypes(stderr);
- return 1;
- }
-
- // needed to initialize f16 tables
- {
- struct ggml_init_params params = { 0, NULL, false };
- struct ggml_context * ctx = ggml_init(params);
- ggml_free(ctx);
- }
-
- const std::string fname_inp = argv[1];
- const std::string fname_out = argv[2];
-
- const ggml_ftype ftype = ggml_parse_ftype(argv[3]);
-
- const int64_t t_main_start_us = ggml_time_us();
-
- int64_t t_quantize_us = 0;
-
- // load the model
- {
- const int64_t t_start_us = ggml_time_us();
-
- if (!gpt2_model_quantize(fname_inp, fname_out, ggml_ftype(ftype))) {
- fprintf(stderr, "%s: failed to quantize model from '%s'\n", __func__, fname_inp.c_str());
- return 1;
- }
-
- t_quantize_us = ggml_time_us() - t_start_us;
- }
-
- // report timing
- {
- const int64_t t_main_end_us = ggml_time_us();
-
- printf("\n");
- printf("%s: quantize time = %8.2f ms\n", __func__, t_quantize_us/1000.0f);
- printf("%s: total time = %8.2f ms\n", __func__, (t_main_end_us - t_main_start_us)/1000.0f);
- }
-
- return 0;
-}
\ No newline at end of file
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/commands/twitter.py b/spaces/Jamkonams/AutoGPT/autogpt/commands/twitter.py
deleted file mode 100644
index 3eaed36e20e1c520690ac59f25a4da6501f3440f..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/autogpt/commands/twitter.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import os
-
-import tweepy
-from dotenv import load_dotenv
-
-load_dotenv()
-
-
-def send_tweet(tweet_text):
- consumer_key = os.environ.get("TW_CONSUMER_KEY")
- consumer_secret = os.environ.get("TW_CONSUMER_SECRET")
- access_token = os.environ.get("TW_ACCESS_TOKEN")
- access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET")
- # Authenticate to Twitter
- auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
- auth.set_access_token(access_token, access_token_secret)
-
- # Create API object
- api = tweepy.API(auth)
-
- # Send tweet
- try:
- api.update_status(tweet_text)
- print("Tweet sent successfully!")
- except tweepy.TweepyException as e:
- print("Error sending tweet: {}".format(e.reason))
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/transforms.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/lib/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
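
As a sanity check on the transform above: with `tails="linear"` the caller passes `num_bins - 1` derivative parameters (they are padded at the knot boundaries internally), and the forward and inverse passes should cancel. The batch size, bin count and tolerance below are assumptions made for this check, not values from the original repository.

```python
import torch

def spline_roundtrip(batch=16, num_bins=10, tail_bound=5.0):
    x = torch.randn(batch)
    widths = torch.randn(batch, num_bins)
    heights = torch.randn(batch, num_bins)
    derivs = torch.randn(batch, num_bins - 1)
    y, logdet = piecewise_rational_quadratic_transform(
        x, widths, heights, derivs, inverse=False, tails="linear", tail_bound=tail_bound)
    x_rec, inv_logdet = piecewise_rational_quadratic_transform(
        y, widths, heights, derivs, inverse=True, tails="linear", tail_bound=tail_bound)
    assert torch.allclose(x, x_rec, atol=1e-4)                                       # invertible
    assert torch.allclose(logdet + inv_logdet, torch.zeros_like(logdet), atol=1e-4)  # log-dets cancel
    return y, logdet
```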
diff --git a/spaces/KevinQHLin/UniVTG/utils/windows_utils.py b/spaces/KevinQHLin/UniVTG/utils/windows_utils.py
deleted file mode 100644
index f3527cdfd7107db5d7eb57afe47f3e8b3bbbc15d..0000000000000000000000000000000000000000
--- a/spaces/KevinQHLin/UniVTG/utils/windows_utils.py
+++ /dev/null
@@ -1,59 +0,0 @@
-"""
-Find windows from a video with clip_ids.
-
-A window is defined by a [start_clip_idx, end_clip_idx] pair:
-For example, assuming clip_len = 2 seconds
-[0, 0] meaning a single clip window [0, 2] (seconds)
-[10, 19] meaning a 10-clip window [20, 40] (seconds)
-
-"""
-
-
-def convert_clip_ids_to_windows(clip_ids):
- """ Inverse function of convert_windows_to_clip_ids
- Args:
- clip_ids: list(int), each is a index of a clip, starting from 0
-
- Returns:
- list(list(int)), each sublist contains two integers which are clip indices.
-        [10, 19] meaning a 10-clip window [20, 40] (seconds), if each clip is 2 seconds.
-
- >>> test_clip_ids = [56, 57, 58, 59, 60, 61, 62] + [64, ] + [67, 68, 69, 70, 71]
- >>> convert_clip_ids_to_windows(test_clip_ids)
- [[56, 62], [64, 64], [67, 71]]
- """
- windows = []
- _window = [clip_ids[0], None]
- last_clip_id = clip_ids[0]
- for clip_id in clip_ids:
- if clip_id - last_clip_id > 1: # find gap
- _window[1] = last_clip_id
- windows.append(_window)
- _window = [clip_id, None]
- last_clip_id = clip_id
- _window[1] = last_clip_id
- windows.append(_window)
- return windows
-
-
-def convert_windows_to_clip_ids(windows):
- """ Inverse function of convert_clip_ids_to_windows
- Args:
- windows: list(list(int)), each sublist contains two integers which are clip indices.
-        [10, 19] meaning a 10-clip window [20, 40] (seconds), if each clip is 2 seconds.
-
- Returns:
- clip_ids: list(int)
-
- >>> test_windows =[[56, 62], [64, 64], [67, 71]]
- >>> convert_windows_to_clip_ids(test_windows)
- [56, 57, 58, 59, 60, 61, 62] + [64, ] + [67, 68, 69, 70, 71]
- """
- clip_ids = []
- for w in windows:
- clip_ids += list(range(w[0], w[1]+1))
- return clip_ids
-
-
-def convert_clip_window_to_seconds(window, clip_len=2):
- return [window[0] * clip_len, (window[1] + 1) * clip_len]
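
A quick round-trip check of the three helpers above, reusing the doctest values; the `__main__` guard is added only for illustration.

```python
if __name__ == "__main__":
    clip_ids = [56, 57, 58, 59, 60, 61, 62] + [64] + [67, 68, 69, 70, 71]
    windows = convert_clip_ids_to_windows(clip_ids)
    assert windows == [[56, 62], [64, 64], [67, 71]]
    assert convert_windows_to_clip_ids(windows) == clip_ids
    assert convert_clip_window_to_seconds([10, 19], clip_len=2) == [20, 40]
```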
diff --git a/spaces/Kreaols/ChuanhuChatGPT/modules/models/base_model.py b/spaces/Kreaols/ChuanhuChatGPT/modules/models/base_model.py
deleted file mode 100644
index 0c703b6750cbea953bbe8e97a806473831035c0a..0000000000000000000000000000000000000000
--- a/spaces/Kreaols/ChuanhuChatGPT/modules/models/base_model.py
+++ /dev/null
@@ -1,685 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import traceback
-import pathlib
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import DDGS
-from itertools import islice
-import asyncio
-import aiohttp
-from enum import Enum
-
-from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
-from langchain.callbacks.manager import BaseCallbackManager
-
-from typing import Any, Dict, List, Optional, Union
-
-from langchain.callbacks.base import BaseCallbackHandler
-from langchain.input import print_text
-from langchain.schema import AgentAction, AgentFinish, LLMResult
-from threading import Thread, Condition
-from collections import deque
-
-from ..presets import *
-from ..index_func import *
-from ..utils import *
-from .. import shared
-from ..config import retrieve_proxy
-
-class CallbackToIterator:
- def __init__(self):
- self.queue = deque()
- self.cond = Condition()
- self.finished = False
-
- def callback(self, result):
- with self.cond:
- self.queue.append(result)
- self.cond.notify() # Wake up the generator.
-
- def __iter__(self):
- return self
-
- def __next__(self):
- with self.cond:
- while not self.queue and not self.finished: # Wait for a value to be added to the queue.
- self.cond.wait()
- if not self.queue:
- raise StopIteration()
- return self.queue.popleft()
-
- def finish(self):
- with self.cond:
- self.finished = True
- self.cond.notify() # Wake up the generator if it's waiting.
-
-def get_action_description(text):
- match = re.search('```(.*?)```', text, re.S)
- json_text = match.group(1)
-    # convert the JSON text into a Python dict
- json_dict = json.loads(json_text)
-    # extract the 'action' and 'action_input' values
- action_name = json_dict['action']
- action_input = json_dict['action_input']
- if action_name != "Final Answer":
-        return f'{action_name}: {action_input}'
- else:
- return ""
-
-class ChuanhuCallbackHandler(BaseCallbackHandler):
-
- def __init__(self, callback) -> None:
- """Initialize callback handler."""
- self.callback = callback
-
- def on_agent_action(
- self, action: AgentAction, color: Optional[str] = None, **kwargs: Any
- ) -> Any:
- self.callback(get_action_description(action.log))
-
- def on_tool_end(
- self,
- output: str,
- color: Optional[str] = None,
- observation_prefix: Optional[str] = None,
- llm_prefix: Optional[str] = None,
- **kwargs: Any,
- ) -> None:
- """If not the final action, print out observation."""
- # if observation_prefix is not None:
- # self.callback(f"\n\n{observation_prefix}")
- # self.callback(output)
- # if llm_prefix is not None:
- # self.callback(f"\n\n{llm_prefix}")
- if observation_prefix is not None:
- logging.info(observation_prefix)
- self.callback(output)
- if llm_prefix is not None:
- logging.info(llm_prefix)
-
- def on_agent_finish(
- self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
- ) -> None:
- # self.callback(f"{finish.log}\n\n")
- logging.info(finish.log)
-
- def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
- """Run on new LLM token. Only available when streaming is enabled."""
- self.callback(token)
-
-
-class ModelType(Enum):
- Unknown = -1
- OpenAI = 0
- ChatGLM = 1
- LLaMA = 2
- XMChat = 3
- StableLM = 4
- MOSS = 5
- YuanAI = 6
- Minimax = 7
- ChuanhuAgent = 8
-
- @classmethod
- def get_type(cls, model_name: str):
- model_type = None
- model_name_lower = model_name.lower()
- if "gpt" in model_name_lower:
- model_type = ModelType.OpenAI
- elif "chatglm" in model_name_lower:
- model_type = ModelType.ChatGLM
- elif "llama" in model_name_lower or "alpaca" in model_name_lower:
- model_type = ModelType.LLaMA
- elif "xmchat" in model_name_lower:
- model_type = ModelType.XMChat
- elif "stablelm" in model_name_lower:
- model_type = ModelType.StableLM
- elif "moss" in model_name_lower:
- model_type = ModelType.MOSS
- elif "yuanai" in model_name_lower:
- model_type = ModelType.YuanAI
- elif "minimax" in model_name_lower:
- model_type = ModelType.Minimax
- elif "川虎助理" in model_name_lower:
- model_type = ModelType.ChuanhuAgent
- else:
- model_type = ModelType.Unknown
- return model_type
-
-
-class BaseLLMModel:
- def __init__(
- self,
- model_name,
- system_prompt="",
- temperature=1.0,
- top_p=1.0,
- n_choices=1,
- stop=None,
- max_generation_token=None,
- presence_penalty=0,
- frequency_penalty=0,
- logit_bias=None,
- user="",
- ) -> None:
- self.history = []
- self.all_token_counts = []
- self.model_name = model_name
- self.model_type = ModelType.get_type(model_name)
- try:
- self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name]
- except KeyError:
- self.token_upper_limit = DEFAULT_TOKEN_LIMIT
- self.interrupted = False
- self.system_prompt = system_prompt
- self.api_key = None
- self.need_api_key = False
- self.single_turn = False
-
- self.temperature = temperature
- self.top_p = top_p
- self.n_choices = n_choices
- self.stop_sequence = stop
- self.max_generation_token = None
- self.presence_penalty = presence_penalty
- self.frequency_penalty = frequency_penalty
- self.logit_bias = logit_bias
- self.user_identifier = user
-
- def get_answer_stream_iter(self):
- """stream predict, need to be implemented
- conversations are stored in self.history, with the most recent question, in OpenAI format
- should return a generator, each time give the next word (str) in the answer
- """
- logging.warning("stream predict not implemented, using at once predict instead")
- response, _ = self.get_answer_at_once()
- yield response
-
- def get_answer_at_once(self):
- """predict at once, need to be implemented
- conversations are stored in self.history, with the most recent question, in OpenAI format
- Should return:
- the answer (str)
- total token count (int)
- """
- logging.warning("at once predict not implemented, using stream predict instead")
- response_iter = self.get_answer_stream_iter()
- count = 0
- for response in response_iter:
- count += 1
- return response, sum(self.all_token_counts) + count
-
- def billing_info(self):
- """get billing infomation, inplement if needed"""
- logging.warning("billing info not implemented, using default")
- return BILLING_NOT_APPLICABLE_MSG
-
- def count_token(self, user_input):
- """get token count from input, implement if needed"""
- # logging.warning("token count not implemented, using default")
- return len(user_input)
-
- def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""):
- def get_return_value():
- return chatbot, status_text
-
- status_text = i18n("开始实时传输回答……")
- if fake_input:
- chatbot.append((fake_input, ""))
- else:
- chatbot.append((inputs, ""))
-
- user_token_count = self.count_token(inputs)
- self.all_token_counts.append(user_token_count)
- logging.debug(f"输入token计数: {user_token_count}")
-
- stream_iter = self.get_answer_stream_iter()
-
- if display_append:
- display_append = " " +display_append
- for partial_text in stream_iter:
- chatbot[-1] = (chatbot[-1][0], partial_text + display_append)
- self.all_token_counts[-1] += 1
- status_text = self.token_message()
- yield get_return_value()
- if self.interrupted:
- self.recover()
- break
- self.history.append(construct_assistant(partial_text))
-
- def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""):
- if fake_input:
- chatbot.append((fake_input, ""))
- else:
- chatbot.append((inputs, ""))
- if fake_input is not None:
- user_token_count = self.count_token(fake_input)
- else:
- user_token_count = self.count_token(inputs)
- self.all_token_counts.append(user_token_count)
- ai_reply, total_token_count = self.get_answer_at_once()
- self.history.append(construct_assistant(ai_reply))
- if fake_input is not None:
- self.history[-2] = construct_user(fake_input)
- chatbot[-1] = (chatbot[-1][0], ai_reply + display_append)
- if fake_input is not None:
- self.all_token_counts[-1] += count_token(construct_assistant(ai_reply))
- else:
- self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts)
- status_text = self.token_message()
- return chatbot, status_text
-
- def handle_file_upload(self, files, chatbot, language):
- """if the model accepts multi modal input, implement this function"""
- status = gr.Markdown.update()
- if files:
- index = construct_index(self.api_key, file_src=files)
- status = i18n("索引构建完成")
- return gr.Files.update(), chatbot, status
-
- def summarize_index(self, files, chatbot, language):
- status = gr.Markdown.update()
- if files:
- index = construct_index(self.api_key, file_src=files)
- status = i18n("总结完成")
- logging.info(i18n("生成内容总结中……"))
- os.environ["OPENAI_API_KEY"] = self.api_key
- from langchain.chains.summarize import load_summarize_chain
- from langchain.prompts import PromptTemplate
- from langchain.chat_models import ChatOpenAI
- from langchain.callbacks import StdOutCallbackHandler
- prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":"
- PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
- llm = ChatOpenAI()
- chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
- summary = chain({"input_documents": list(index.docstore.__dict__["_dict"].values())}, return_only_outputs=True)["output_text"]
- print(i18n("总结") + f": {summary}")
- chatbot.append([i18n("上传了")+str(len(files))+"个文件", summary])
- return chatbot, status
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = None
- display_append = []
- limited_context = False
- fake_inputs = real_inputs
- if files:
- from langchain.embeddings.huggingface import HuggingFaceEmbeddings
- from langchain.vectorstores.base import VectorStoreRetriever
- limited_context = True
- msg = "加载索引中……"
- logging.info(msg)
- index = construct_index(self.api_key, file_src=files)
- assert index is not None, "获取索引失败"
- msg = "索引获取成功,生成回答中……"
- logging.info(msg)
- with retrieve_proxy():
- retriever = VectorStoreRetriever(vectorstore=index, search_type="similarity_score_threshold",search_kwargs={"k":6, "score_threshold": 0.5})
- relevant_documents = retriever.get_relevant_documents(real_inputs)
- reference_results = [[d.page_content.strip("�"), os.path.basename(d.metadata["source"])] for d in relevant_documents]
- reference_results = add_source_numbers(reference_results)
- display_append = add_details(reference_results)
- display_append = "\n\n" + "".join(display_append)
- real_inputs = (
- replace_today(PROMPT_TEMPLATE)
- .replace("{query_str}", real_inputs)
- .replace("{context_str}", "\n\n".join(reference_results))
- .replace("{reply_language}", reply_language)
- )
- elif use_websearch:
- search_results = []
- with DDGS() as ddgs:
- ddgs_gen = ddgs.text(real_inputs, backend="lite")
- for r in islice(ddgs_gen, 10):
- search_results.append(r)
- reference_results = []
- for idx, result in enumerate(search_results):
- logging.debug(f"搜索结果{idx + 1}:{result}")
- domain_name = urllib3.util.parse_url(result['href']).host
- reference_results.append([result['body'], result['href']])
- display_append.append(
- # f"{idx+1}. [{domain_name}]({result['href']})\n"
- f"{result['title']} \n"
- )
- reference_results = add_source_numbers(reference_results)
- display_append = "\n\n" + "".join(display_append) + " "
- real_inputs = (
- replace_today(WEBSEARCH_PTOMPT_TEMPLATE)
- .replace("{query}", real_inputs)
- .replace("{web_results}", "\n\n".join(reference_results))
- .replace("{reply_language}", reply_language)
- )
- else:
- display_append = ""
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def predict(
- self,
- inputs,
- chatbot,
- stream=False,
- use_websearch=False,
- files=None,
- reply_language="中文",
- should_check_token_count=True,
- ): # repetition_penalty, top_k
-
- status_text = "开始生成回答……"
- logging.info(
- "用户" + f"{self.user_identifier}" + "的输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL
- )
- if should_check_token_count:
- yield chatbot + [(inputs, "")], status_text
- if reply_language == "跟随问题语言(不稳定)":
- reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch."
-
- limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot)
- yield chatbot + [(fake_inputs, "")], status_text
-
- if (
- self.need_api_key and
- self.api_key is None
- and not shared.state.multi_api_key
- ):
- status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG
- logging.info(status_text)
- chatbot.append((inputs, ""))
- if len(self.history) == 0:
- self.history.append(construct_user(inputs))
- self.history.append("")
- self.all_token_counts.append(0)
- else:
- self.history[-2] = construct_user(inputs)
- yield chatbot + [(inputs, "")], status_text
- return
- elif len(inputs.strip()) == 0:
- status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG
- logging.info(status_text)
- yield chatbot + [(inputs, "")], status_text
- return
-
- if self.single_turn:
- self.history = []
- self.all_token_counts = []
- self.history.append(construct_user(inputs))
-
- try:
- if stream:
- logging.debug("使用流式传输")
- iter = self.stream_next_chatbot(
- inputs,
- chatbot,
- fake_input=fake_inputs,
- display_append=display_append,
- )
- for chatbot, status_text in iter:
- yield chatbot, status_text
- else:
- logging.debug("不使用流式传输")
- chatbot, status_text = self.next_chatbot_at_once(
- inputs,
- chatbot,
- fake_input=fake_inputs,
- display_append=display_append,
- )
- yield chatbot, status_text
- except Exception as e:
- traceback.print_exc()
- status_text = STANDARD_ERROR_MSG + str(e)
- yield chatbot, status_text
-
- if len(self.history) > 1 and self.history[-1]["content"] != inputs:
- logging.info(
- "回答为:"
- + colorama.Fore.BLUE
- + f"{self.history[-1]['content']}"
- + colorama.Style.RESET_ALL
- )
-
- if limited_context:
- # self.history = self.history[-4:]
- # self.all_token_counts = self.all_token_counts[-2:]
- self.history = []
- self.all_token_counts = []
-
- max_token = self.token_upper_limit - TOKEN_OFFSET
-
- if sum(self.all_token_counts) > max_token and should_check_token_count:
- count = 0
- while (
- sum(self.all_token_counts)
- > self.token_upper_limit * REDUCE_TOKEN_FACTOR
- and sum(self.all_token_counts) > 0
- ):
- count += 1
- del self.all_token_counts[0]
- del self.history[:2]
- logging.info(status_text)
- status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话"
- yield chatbot, status_text
-
- self.auto_save(chatbot)
-
- def retry(
- self,
- chatbot,
- stream=False,
- use_websearch=False,
- files=None,
- reply_language="中文",
- ):
- logging.debug("重试中……")
- if len(self.history) > 0:
- inputs = self.history[-2]["content"]
- del self.history[-2:]
- if len(self.all_token_counts) > 0:
- self.all_token_counts.pop()
- elif len(chatbot) > 0:
- inputs = chatbot[-1][0]
- else:
- yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的"
- return
-
- iter = self.predict(
- inputs,
- chatbot,
- stream=stream,
- use_websearch=use_websearch,
- files=files,
- reply_language=reply_language,
- )
- for x in iter:
- yield x
- logging.debug("重试完毕")
-
- # def reduce_token_size(self, chatbot):
- # logging.info("开始减少token数量……")
- # chatbot, status_text = self.next_chatbot_at_once(
- # summarize_prompt,
- # chatbot
- # )
- # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR
- # num_chat = find_n(self.all_token_counts, max_token_count)
- # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats")
- # chatbot = chatbot[:-1]
- # self.history = self.history[-2*num_chat:] if num_chat > 0 else []
- # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else []
- # msg = f"保留了最近{num_chat}轮对话"
- # logging.info(msg)
- # logging.info("减少token数量完毕")
- # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0])
-
- def interrupt(self):
- self.interrupted = True
-
- def recover(self):
- self.interrupted = False
-
- def set_token_upper_limit(self, new_upper_limit):
- self.token_upper_limit = new_upper_limit
- print(f"token上限设置为{new_upper_limit}")
-
- def set_temperature(self, new_temperature):
- self.temperature = new_temperature
-
- def set_top_p(self, new_top_p):
- self.top_p = new_top_p
-
- def set_n_choices(self, new_n_choices):
- self.n_choices = new_n_choices
-
- def set_stop_sequence(self, new_stop_sequence: str):
- new_stop_sequence = new_stop_sequence.split(",")
- self.stop_sequence = new_stop_sequence
-
- def set_max_tokens(self, new_max_tokens):
- self.max_generation_token = new_max_tokens
-
- def set_presence_penalty(self, new_presence_penalty):
- self.presence_penalty = new_presence_penalty
-
- def set_frequency_penalty(self, new_frequency_penalty):
- self.frequency_penalty = new_frequency_penalty
-
- def set_logit_bias(self, logit_bias):
- logit_bias = logit_bias.split()
- bias_map = {}
- encoding = tiktoken.get_encoding("cl100k_base")
- for line in logit_bias:
- word, bias_amount = line.split(":")
- if word:
- for token in encoding.encode(word):
- bias_map[token] = float(bias_amount)
- self.logit_bias = bias_map
-
- def set_user_identifier(self, new_user_identifier):
- self.user_identifier = new_user_identifier
-
- def set_system_prompt(self, new_system_prompt):
- self.system_prompt = new_system_prompt
-
- def set_key(self, new_access_key):
- self.api_key = new_access_key.strip()
- msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key)
- logging.info(msg)
- return self.api_key, msg
-
- def set_single_turn(self, new_single_turn):
- self.single_turn = new_single_turn
-
- def reset(self):
- self.history = []
- self.all_token_counts = []
- self.interrupted = False
- pathlib.Path(os.path.join(HISTORY_DIR, self.user_identifier, new_auto_history_filename(os.path.join(HISTORY_DIR, self.user_identifier)))).touch()
- return [], self.token_message([0])
-
- def delete_first_conversation(self):
- if self.history:
- del self.history[:2]
- del self.all_token_counts[0]
- return self.token_message()
-
- def delete_last_conversation(self, chatbot):
- if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]:
- msg = "由于包含报错信息,只删除chatbot记录"
- chatbot.pop()
- return chatbot, self.history
- if len(self.history) > 0:
- self.history.pop()
- self.history.pop()
- if len(chatbot) > 0:
- msg = "删除了一组chatbot对话"
- chatbot.pop()
- if len(self.all_token_counts) > 0:
- msg = "删除了一组对话的token计数记录"
- self.all_token_counts.pop()
- msg = "删除了一组对话"
- return chatbot, msg
-
- def token_message(self, token_lst=None):
- if token_lst is None:
- token_lst = self.all_token_counts
- token_sum = 0
- for i in range(len(token_lst)):
- token_sum += sum(token_lst[: i + 1])
- return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens"
-
- def save_chat_history(self, filename, chatbot, user_name):
- if filename == "":
- return
- if not filename.endswith(".json"):
- filename += ".json"
- return save_file(filename, self.system_prompt, self.history, chatbot, user_name)
-
- def auto_save(self, chatbot):
- history_file_path = get_history_filepath(self.user_identifier)
- save_file(history_file_path, self.system_prompt, self.history, chatbot, self.user_identifier)
-
- def export_markdown(self, filename, chatbot, user_name):
- if filename == "":
- return
- if not filename.endswith(".md"):
- filename += ".md"
- return save_file(filename, self.system_prompt, self.history, chatbot, user_name)
-
- def load_chat_history(self, filename, user_name):
- logging.debug(f"{user_name} 加载对话历史中……")
- logging.info(f"filename: {filename}")
- if type(filename) != str and filename is not None:
- filename = filename.name
- try:
- if "/" not in filename:
- history_file_path = os.path.join(HISTORY_DIR, user_name, filename)
- else:
- history_file_path = filename
- with open(history_file_path, "r", encoding="utf-8") as f:
- json_s = json.load(f)
- try:
- if type(json_s["history"][0]) == str:
- logging.info("历史记录格式为旧版,正在转换……")
- new_history = []
- for index, item in enumerate(json_s["history"]):
- if index % 2 == 0:
- new_history.append(construct_user(item))
- else:
- new_history.append(construct_assistant(item))
- json_s["history"] = new_history
- logging.info(new_history)
- except:
- pass
- logging.debug(f"{user_name} 加载对话历史完毕")
- self.history = json_s["history"]
- return os.path.basename(filename), json_s["system"], json_s["chatbot"]
- except:
-            # no chat history found, or the chat history failed to parse
- logging.info(f"没有找到对话历史记录 {filename}")
- return gr.update(), self.system_prompt, gr.update()
-
- def auto_load(self):
- if self.user_identifier == "":
- self.reset()
- return self.system_prompt, gr.update()
- history_file_path = get_history_filepath(self.user_identifier)
- filename, system_prompt, chatbot = self.load_chat_history(history_file_path, self.user_identifier)
- return system_prompt, chatbot
-
-
- def like(self):
- """like the last response, implement if needed
- """
- return gr.update()
-
- def dislike(self):
- """dislike the last response, implement if needed
- """
- return gr.update()
diff --git "a/spaces/Kunal7/squats-analysis/pages/1_\360\237\223\267\357\270\217_Live_Stream.py" "b/spaces/Kunal7/squats-analysis/pages/1_\360\237\223\267\357\270\217_Live_Stream.py"
deleted file mode 100644
index 0ca9b0a20a2fe5f5d06eb3ed58b426599ac1b9d4..0000000000000000000000000000000000000000
--- "a/spaces/Kunal7/squats-analysis/pages/1_\360\237\223\267\357\270\217_Live_Stream.py"
+++ /dev/null
@@ -1,82 +0,0 @@
-import av
-import os
-import sys
-import streamlit as st
-from streamlit_webrtc import VideoHTMLAttributes, webrtc_streamer
-from aiortc.contrib.media import MediaRecorder
-
-
-BASE_DIR = os.path.abspath(os.path.join(__file__, '../../'))
-sys.path.append(BASE_DIR)
-
-
-from utils import get_mediapipe_pose
-from process_frame import ProcessFrame
-from thresholds import get_thresholds_beginner, get_thresholds_pro
-
-
-st.title('AI Fitness Trainer: Squats Analysis')
-
-mode = st.radio('Select Mode', ['Beginner', 'Pro'], horizontal=True)
-
-thresholds = None
-
-if mode == 'Beginner':
- thresholds = get_thresholds_beginner()
-
-elif mode == 'Pro':
- thresholds = get_thresholds_pro()
-
-
-live_process_frame = ProcessFrame(thresholds=thresholds, flip_frame=True)
-# Initialize pose estimation solution
-pose = get_mediapipe_pose()
-
-
-if 'download' not in st.session_state:
- st.session_state['download'] = False
-
-output_video_file = 'output_live.flv'
-
-
-
-def video_frame_callback(frame: av.VideoFrame):
- frame = frame.to_ndarray(format="rgb24") # Decode and get RGB frame
- frame, _ = live_process_frame.process(frame, pose) # Process frame
-    return av.VideoFrame.from_ndarray(frame, format="rgb24") # Encode and return RGB frame
-
-
-def out_recorder_factory() -> MediaRecorder:
- return MediaRecorder(output_video_file)
-
-
-ctx = webrtc_streamer(
- key="Squats-pose-analysis",
- video_frame_callback=video_frame_callback,
- rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]}, # Add this config
- media_stream_constraints={"video": {"width": {'min':480, 'ideal':480}}, "audio": False},
- video_html_attrs=VideoHTMLAttributes(autoPlay=True, controls=False, muted=False),
- out_recorder_factory=out_recorder_factory
- )
-
-
-download_button = st.empty()
-
-if os.path.exists(output_video_file):
- with open(output_video_file, 'rb') as op_vid:
- download = download_button.download_button('Download Video', data = op_vid, file_name='output_live.flv')
-
- if download:
- st.session_state['download'] = True
-
-
-
-if os.path.exists(output_video_file) and st.session_state['download']:
- os.remove(output_video_file)
- st.session_state['download'] = False
- download_button.empty()
-
-
-
-
-
diff --git a/spaces/KyanChen/FunSR/tools/data_tools/__init__.py b/spaces/KyanChen/FunSR/tools/data_tools/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/speech/say.py b/spaces/Lamai/LAMAIGPT/autogpt/speech/say.py
deleted file mode 100644
index 727983d12bf334205550a54bcd69a7a36824eda4..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/speech/say.py
+++ /dev/null
@@ -1,41 +0,0 @@
-""" Text to speech module """
-import threading
-from threading import Semaphore
-
-from autogpt.config import Config
-from autogpt.speech.brian import BrianSpeech
-from autogpt.speech.eleven_labs import ElevenLabsSpeech
-from autogpt.speech.gtts import GTTSVoice
-from autogpt.speech.macos_tts import MacOSTTS
-
-CFG = Config()
-DEFAULT_VOICE_ENGINE = GTTSVoice()
-VOICE_ENGINE = None
-if CFG.elevenlabs_api_key:
- VOICE_ENGINE = ElevenLabsSpeech()
-elif CFG.use_mac_os_tts == "True":
- VOICE_ENGINE = MacOSTTS()
-elif CFG.use_brian_tts == "True":
- VOICE_ENGINE = BrianSpeech()
-else:
- VOICE_ENGINE = GTTSVoice()
-
-
-QUEUE_SEMAPHORE = Semaphore(
-    1
-)  # The number of utterances that may be queued before say_text() blocks
-
-
-def say_text(text: str, voice_index: int = 0) -> None:
- """Speak the given text using the given voice index"""
-
- def speak() -> None:
- success = VOICE_ENGINE.say(text, voice_index)
- if not success:
- DEFAULT_VOICE_ENGINE.say(text)
-
- QUEUE_SEMAPHORE.release()
-
- QUEUE_SEMAPHORE.acquire(True)
- thread = threading.Thread(target=speak)
- thread.start()
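A hypothetical usage sketch: each call to `say_text` dispatches speech on a background thread, and `QUEUE_SEMAPHORE` ensures only one utterance is queued at a time, so the second call below blocks until the first one releases the semaphore.

```python
from autogpt.speech.say import say_text

say_text("Task complete.")
say_text("Shutting down.")  # blocks until the previous utterance releases the semaphore
```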
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/heikinashi.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/heikinashi.py
deleted file mode 100644
index 242ea8e6621a1932c716b1bd4d5e0d446f81c1a0..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/heikinashi.py
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see .
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-
-import backtrader as bt
-from backtrader.utils.py3 import range
-
-
-__all__ = ['HeikinAshi']
-
-
-class HeikinAshi(bt.Indicator):
- '''
-    Heikin Ashi candlesticks in the form of lines
-
- Formula:
- ha_open = (ha_open(-1) + ha_close(-1)) / 2
- ha_high = max(hi, ha_open, ha_close)
- ha_low = min(lo, ha_open, ha_close)
- ha_close = (open + high + low + close) / 4
-
- See also:
- https://en.wikipedia.org/wiki/Candlestick_chart#Heikin_Ashi_candlesticks
- http://stockcharts.com/school/doku.php?id=chart_school:chart_analysis:heikin_ashi
- '''
- lines = ('ha_open', 'ha_high', 'ha_low', 'ha_close',)
-
- linealias = (
- ('ha_open', 'open',),
- ('ha_high', 'high',),
- ('ha_low', 'low',),
- ('ha_close', 'close',),
- )
-
- plotinfo = dict(subplot=False)
-
- _nextforce = True
-
- def __init__(self):
- o = self.data.open
- h = self.data.high
- l = self.data.low
- c = self.data.close
-
- self.l.ha_close = ha_close = (o + h + l + c) / 4.0
- self.l.ha_open = ha_open = (self.l.ha_open(-1) + ha_close(-1)) / 2.0
- self.l.ha_high = bt.Max(h, ha_open, ha_close)
- self.l.ha_low = bt.Min(l, ha_open, ha_close)
-
- super(HeikinAshi, self).__init__()
-
- def prenext(self):
- # seed recursive value
- self.lines.ha_open[0] = (self.data.open[0] + self.data.close[0]) / 2.0
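The recursion in the docstring can also be written as a small standalone function; the following is an illustrative plain-Python sketch of the same formula, independent of backtrader's line objects.

```python
def heikin_ashi(opens, highs, lows, closes):
    """Compute Heikin Ashi OHLC series from ordinary OHLC sequences."""
    ha_open, ha_high, ha_low, ha_close = [], [], [], []
    for i in range(len(closes)):
        hc = (opens[i] + highs[i] + lows[i] + closes[i]) / 4.0
        if i == 0:
            ho = (opens[0] + closes[0]) / 2.0  # seed value, as in prenext() above
        else:
            ho = (ha_open[i - 1] + ha_close[i - 1]) / 2.0
        ha_open.append(ho)
        ha_close.append(hc)
        ha_high.append(max(highs[i], ho, hc))
        ha_low.append(min(lows[i], ho, hc))
    return ha_open, ha_high, ha_low, ha_close


ha_o, ha_h, ha_l, ha_c = heikin_ashi([10, 11], [12, 13], [9, 10], [11, 12])
```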
diff --git a/spaces/Manjushri/MusicGen/tests/common_utils/temp_utils.py b/spaces/Manjushri/MusicGen/tests/common_utils/temp_utils.py
deleted file mode 100644
index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/MusicGen/tests/common_utils/temp_utils.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import tempfile
-
-
-class TempDirMixin:
- """Mixin to provide easy access to temp dir.
- """
-
- temp_dir_ = None
-
- @classmethod
- def get_base_temp_dir(cls):
- # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory.
- # this is handy for debugging.
- key = "AUDIOCRAFT_TEST_DIR"
- if key in os.environ:
- return os.environ[key]
- if cls.temp_dir_ is None:
- cls.temp_dir_ = tempfile.TemporaryDirectory()
- return cls.temp_dir_.name
-
- @classmethod
- def tearDownClass(cls):
- if cls.temp_dir_ is not None:
- try:
- cls.temp_dir_.cleanup()
- cls.temp_dir_ = None
- except PermissionError:
-                # On Windows there is a known issue with `shutil.rmtree`,
-                # which fails intermittently.
- # https://github.com/python/cpython/issues/74168
- # Following the above thread, we ignore it.
- pass
- super().tearDownClass()
-
- @property
- def id(self):
- return self.__class__.__name__
-
- def get_temp_path(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(os.path.dirname(path), exist_ok=True)
- return path
-
- def get_temp_dir(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(path, exist_ok=True)
- return path
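A minimal usage sketch, assuming a hypothetical test case that mixes `TempDirMixin` into a `unittest.TestCase`:

```python
import unittest


class TestExample(TempDirMixin, unittest.TestCase):
    def test_write(self):
        # get_temp_path() creates the parent directory and returns a per-class path
        path = self.get_temp_path("out", "data.txt")
        with open(path, "w") as f:
            f.write("hello")
        self.assertTrue(path.startswith(self.get_base_temp_dir()))
```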
diff --git a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/__MACOSX/smpl/.___init__.py b/spaces/Marshalls/testmtd/analysis/aistplusplus_api/__MACOSX/smpl/.___init__.py
deleted file mode 100644
index 198315090137148619e28344fa871854f05f2afd..0000000000000000000000000000000000000000
Binary files a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/__MACOSX/smpl/.___init__.py and /dev/null differ
diff --git a/spaces/Matthijs/mms-tts-demo/uroman/README.md b/spaces/Matthijs/mms-tts-demo/uroman/README.md
deleted file mode 100644
index 6a0a40f6d4ebda9041d23efe0345340b7da9d4b8..0000000000000000000000000000000000000000
--- a/spaces/Matthijs/mms-tts-demo/uroman/README.md
+++ /dev/null
@@ -1,165 +0,0 @@
-# uroman
-
-*uroman* is a *universal romanizer*. It converts text in any script to the Latin alphabet.
-
-Version: 1.2.8
-Release date: April 23, 2021
-Author: Ulf Hermjakob, USC Information Sciences Institute
-
-
-### Usage
-```bash
-$ uroman.pl [-l <lang-code>] [--chart] [--no-cache] < STDIN
-   where the optional <lang-code> is a 3-letter language code, e.g. ara, bel, bul, deu, ell, eng, fas,
- grc, ell, eng, heb, kaz, kir, lav, lit, mkd, mkd2, oss, pnt, pus, rus, srp, srp2, tur, uig, ukr, yid.
- --chart specifies chart output (in JSON format) to represent alternative romanizations.
- --no-cache disables caching.
-```
-### Examples
-```bash
-$ bin/uroman.pl < text/zho.txt
-$ bin/uroman.pl -l tur < text/tur.txt
-$ bin/uroman.pl -l heb --chart < text/heb.txt
-$ bin/uroman.pl < test/multi-script.txt > test/multi-script.uroman.txt
-```
-
-Identifying the input as Arabic, Belarusian, Bulgarian, English, Farsi, German,
-Ancient Greek, Modern Greek, Pontic Greek, Hebrew, Kazakh, Kyrgyz, Latvian,
-Lithuanian, North Macedonian, Russian, Serbian, Turkish, Ukrainian, Uyghur or
-Yiddish will improve romanization for those languages as some letters in those
-languages have different sound values from other languages using the same script
-(French, Russian, Hebrew respectively).
-No effect for other languages in this version.
-
-### Bibliography
-Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. Out-of-the-box universal romanization tool uroman. In Proceedings of the 56th Annual Meeting of Association for Computational Linguistics, Demo Track. ACL-2018 Best Demo Paper Award. [Paper in ACL Anthology](https://www.aclweb.org/anthology/P18-4003) | [Poster](https://www.isi.edu/~ulf/papers/poster-uroman-acl2018.pdf) | [BibTex](https://www.aclweb.org/anthology/P18-4003.bib)
-
-### Change History
-Changes in version 1.2.8
- * Updated to Unicode 13.0 (2021), which supports several new scripts (10% larger UnicodeData.txt).
- * Improved support for Georgian.
- * Preserve various symbols (as opposed to mapping to the symbols' names).
- * Various small improvements.
-
-Changes in version 1.2.7
- * Improved support for Pashto.
-
-Changes in version 1.2.6
- * Improved support for Ukrainian, Russian and Ogham (ancient Irish script).
- * Added support for English Braille.
- * Added alternative Romanization for North Macedonian and Serbian (mkd2/srp2)
- reflecting a casual style that many native speakers of those languages use
- when writing text in Latin script, e.g. non-accented single letters (e.g. "s")
- rather than phonetically motivated combinations of letters (e.g. "sh").
- * When a line starts with "::lcode xyz ", the new uroman version will switch to
- that language for that line. This is used for the new reference test file.
- * Various small improvements.
-
-Changes in version 1.2.5
- * Improved support for Armenian and eight languages using Cyrillic scripts.
- -- For Serbian and Macedonian, which are often written in both Cyrillic
- and Latin scripts, uroman will map both official versions to the same
- romanized text, e.g. both "Ниш" and "Niš" will be mapped to "Nish" (which
- properly reflects the pronunciation of the city's name).
- For both Serbian and Macedonian, casual writers often use a simplified
- Latin form without diacritics, e.g. "s" to represent not only Cyrillic "с"
- and Latin "s", but also "ш" or "š", even if this conflates "s" and "sh" and
- other such pairs. The casual romanization can be simulated by using
- alternative uroman language codes "srp2" and "mkd2", which romanize
- both "Ниш" and "Niš" to "Nis" to reflect the casual Latin spelling.
- * Various small improvements.
-
-Changes in version 1.2.4
- * Fixed a bug that generated two empty lines for each empty line in cache mode.
-
-Changes in version 1.2
- * Run-time improvement based on (1) token-based caching and (2) shortcut
- romanization (identity) of ASCII strings for default 1-best (non-chart)
- output. Speed-up by a factor of 10 for Bengali and Uyghur on medium and
- large size texts.
- * Incremental improvements for Farsi, Amharic, Russian, Hebrew and related
- languages.
- * Richer lattice structure (more alternatives) for "Romanization" of English
- to support better matching to romanizations of other languages.
- Changes output only when --chart option is specified. No change in output for
- default 1-best output, which for ASCII characters is always the input string.
-
-Changes in version 1.1 (major upgrade)
- * Offers chart output (in JSON format) to represent alternative romanizations.
- -- Location of first character is defined to be "line: 1, start:0, end:0".
- * Incremental improvements of Hebrew and Greek romanization; Chinese numbers.
- * Improved web-interface at http://www.isi.edu/~ulf/uroman.html
- -- Shows corresponding original and romanization text in red
- when hovering over a text segment.
- -- Shows alternative romanizations when hovering over romanized text
- marked by dotted underline.
- -- Added right-to-left script detection and improved display for right-to-left
- script text (as determined line by line).
- -- On-page support for some scripts that are often not pre-installed on users'
- computers (Burmese, Egyptian, Klingon).
-
-Changes in version 1.0 (major upgrade)
- * Upgraded principal internal data structure from string to lattice.
- * Improvements mostly in vowelization of South and Southeast Asian languages.
- * Vocalic 'r' more consistently treated as vowel (no additional vowel added).
- * Repetition signs (Japanese/Chinese/Thai/Khmer/Lao) are mapped to superscript 2.
- * Japanese Katakana middle dots now mapped to ASCII space.
- * Tibetan intersyllabic mark now mapped to middle dot (U+00B7).
- * Some corrections regarding analysis of Chinese numbers.
- * Many more foreign diacritics and punctuation marks dropped or mapped to ASCII.
- * Zero-width characters dropped, except line/sentence-initial byte order marks.
- * Spaces normalized to ASCII space.
- * Fixed bug that in some cases mapped signs (such as dagger or bullet) to their verbal descriptions.
- * Tested against previous version of uroman with a new uroman visual diff tool.
- * Almost an order of magnitude faster.
-
-Changes in version 0.7 (minor upgrade)
- * Added script uroman-quick.pl for Arabic script languages, incl. Uyghur.
- Much faster, pre-caching mapping of Arabic to Latin characters, simple greedy processing.
- Will not convert material from non-Arabic blocks such as any (somewhat unusual) Cyrillic
- or Chinese characters in Uyghur texts.
-
-Changes in version 0.6 (minor upgrade)
- * Added support for two letter characters used in Uzbek:
- (1) character "ʻ" ("modifier letter turned comma", which modifies preceding "g" and "u" letters)
- (2) character "ʼ" ("modifier letter apostrophe", which Uzbek uses to mark a glottal stop).
- Both are now mapped to "'" (plain ASCII apostrophe).
- * Added support for Uyghur vowel characters such as "ې" (Arabic e) and "ۆ" (Arabic oe)
- even when they are not preceded by "ئ" (yeh with hamza above).
- * Added support for Arabic semicolon "؛", Arabic ligature forms for phrases such as "ﷺ"
- ("sallallahou alayhe wasallam" = "prayer of God be upon him and his family and peace")
- * Added robustness for Arabic letter presentation forms (initial/medial/final/isolated).
- However, it is strongly recommended to normalize any presentation form Arabic letters
- to their non-presentation form before calling uroman.
- * Added force flush directive ($|=1;).
-
-Changes in version 0.5 (minor upgrade)
- * Improvements for Uyghur (make sure to use language option: -l uig)
-
-Changes in version 0.4 (minor upgrade)
- * Improvements for Thai (special cases for vowel/consonant reordering, e.g. for "sara o"; dropped some aspiration 'h's)
- * Minor change for Arabic (added "alef+fathatan" = "an")
-
-New features in version 0.3
- * Covers Mandarin (Chinese)
- * Improved romanization for numerous languages
- * Preserves capitalization (e.g. from Latin, Cyrillic, Greek scripts)
- * Maps from native digits to Western numbers
- * Faster for South Asian languages
-
-### Other features
- * Web interface: http://www.isi.edu/~ulf/uroman.html
- * Vowelization is provided when locally computable, e.g. for many South Asian languages and Tibetan.
-
-### Limitations
- * The current version of uroman has a few limitations, some of which we plan to address in future versions.
- For Japanese, *uroman* currently romanizes hiragana and katakana as expected, but kanji are interpreted as Chinese characters and romanized as such.
- For Egyptian hieroglyphs, only single-sound phonetic characters and numbers are currently romanized.
- For Linear B, only phonetic syllabic characters are romanized.
- For some other extinct scripts such as cuneiform, no romanization is provided.
- * A romanizer is not a full transliterator. For example, this version of
- uroman does not vowelize text that lacks explicit vowelization such as
- normal text in Arabic and Hebrew (without diacritics/points).
-
-### Acknowledgments
-This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116, and by research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, Air Force Laboratory, DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
diff --git a/spaces/Mecca/whisper-webui/src/conversion/hf_converter.py b/spaces/Mecca/whisper-webui/src/conversion/hf_converter.py
deleted file mode 100644
index 6da4f0fd672d63b099f21d0498ba4001d23356f7..0000000000000000000000000000000000000000
--- a/spaces/Mecca/whisper-webui/src/conversion/hf_converter.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets
-
-from copy import deepcopy
-import torch
-
-WHISPER_MAPPING = {
- "layers": "blocks",
- "fc1": "mlp.0",
- "fc2": "mlp.2",
- "final_layer_norm": "mlp_ln",
- ".self_attn.q_proj": ".attn.query",
- ".self_attn.k_proj": ".attn.key",
- ".self_attn.v_proj": ".attn.value",
- ".self_attn_layer_norm": ".attn_ln",
- ".self_attn.out_proj": ".attn.out",
- ".encoder_attn.q_proj": ".cross_attn.query",
- ".encoder_attn.k_proj": ".cross_attn.key",
- ".encoder_attn.v_proj": ".cross_attn.value",
- ".encoder_attn_layer_norm": ".cross_attn_ln",
- ".encoder_attn.out_proj": ".cross_attn.out",
- "decoder.layer_norm.": "decoder.ln.",
- "encoder.layer_norm.": "encoder.ln_post.",
- "embed_tokens": "token_embedding",
- "encoder.embed_positions.weight": "encoder.positional_embedding",
- "decoder.embed_positions.weight": "decoder.positional_embedding",
- "layer_norm": "ln_post",
-}
-
-
-def rename_keys(s_dict):
- keys = list(s_dict.keys())
- for key in keys:
- new_key = key
- for k, v in WHISPER_MAPPING.items():
- if k in key:
- new_key = new_key.replace(k, v)
-
- print(f"{key} -> {new_key}")
-
- s_dict[new_key] = s_dict.pop(key)
- return s_dict
-
-
-def convert_hf_whisper(hf_model_name_or_path: str, whisper_state_path: str):
- from transformers import WhisperForConditionalGeneration
- transformer_model = WhisperForConditionalGeneration.from_pretrained(hf_model_name_or_path)
- config = transformer_model.config
-
- # first build dims
- dims = {
- 'n_mels': config.num_mel_bins,
- 'n_vocab': config.vocab_size,
- 'n_audio_ctx': config.max_source_positions,
- 'n_audio_state': config.d_model,
- 'n_audio_head': config.encoder_attention_heads,
- 'n_audio_layer': config.encoder_layers,
- 'n_text_ctx': config.max_target_positions,
- 'n_text_state': config.d_model,
- 'n_text_head': config.decoder_attention_heads,
- 'n_text_layer': config.decoder_layers
- }
-
- state_dict = deepcopy(transformer_model.model.state_dict())
- state_dict = rename_keys(state_dict)
-
- torch.save({"dims": dims, "model_state_dict": state_dict}, whisper_state_path)
\ No newline at end of file
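A brief usage sketch; the checkpoint name and output path below are illustrative, not part of this module:

```python
from src.conversion.hf_converter import convert_hf_whisper

# Downloads the Hugging Face checkpoint, renames its keys to the original
# openai/whisper layout, and saves a .pt state file at the given path.
convert_hf_whisper("openai/whisper-tiny", "whisper-tiny-converted.pt")
```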
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/registry.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/registry.py
deleted file mode 100644
index 39eabc58db4b5954478a2ac1ab91cea5e45ab055..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/cnn/bricks/registry.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from annotator.uniformer.mmcv.utils import Registry
-
-CONV_LAYERS = Registry('conv layer')
-NORM_LAYERS = Registry('norm layer')
-ACTIVATION_LAYERS = Registry('activation layer')
-PADDING_LAYERS = Registry('padding layer')
-UPSAMPLE_LAYERS = Registry('upsample layer')
-PLUGIN_LAYERS = Registry('plugin layer')
-
-DROPOUT_LAYERS = Registry('drop out layers')
-POSITIONAL_ENCODING = Registry('position encoding')
-ATTENTION = Registry('attention')
-FEEDFORWARD_NETWORK = Registry('feed-forward Network')
-TRANSFORMER_LAYER = Registry('transformerLayer')
-TRANSFORMER_LAYER_SEQUENCE = Registry('transformer-layers sequence')
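As a sketch of how these registries are typically used (standard mmcv `Registry` decorator API; the custom layer below is purely hypothetical):

```python
import torch.nn as nn

from annotator.uniformer.mmcv.cnn.bricks.registry import CONV_LAYERS


@CONV_LAYERS.register_module(name='MyConv')  # hypothetical custom conv layer
class MyConv(nn.Conv2d):
    pass

# A config such as dict(type='MyConv', ...) can now be resolved through the
# registry, e.g. by mmcv.cnn.build_conv_layer.
```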
diff --git a/spaces/MirageML/sjc/sd1/ldm/models/autoencoder.py b/spaces/MirageML/sjc/sd1/ldm/models/autoencoder.py
deleted file mode 100644
index 6a9c4f45498561953b8085981609b2a3298a5473..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/sd1/ldm/models/autoencoder.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import numpy as np
-import torch
-import pytorch_lightning as pl
-import torch.nn.functional as F
-from contextlib import contextmanager
-from packaging import version
-from torch.optim.lr_scheduler import LambdaLR
-
-from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
-
-from ldm.modules.diffusionmodules.model import Encoder, Decoder
-from ldm.modules.distributions.distributions import DiagonalGaussianDistribution
-
-from ldm.util import instantiate_from_config
-from ldm.modules.ema import LitEma  # used by VQModel when use_ema=True; assumes ldm.modules.ema exists in this repo
-
-
-class VQModel(pl.LightningModule):
- def __init__(self,
- ddconfig,
- lossconfig,
- n_embed,
- embed_dim,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None,
- monitor=None,
- batch_resize_range=None,
- scheduler_config=None,
- lr_g_factor=1.0,
- remap=None,
- sane_index_shape=False, # tell vector quantizer to return indices as bhw
- use_ema=False
- ):
- super().__init__()
- self.embed_dim = embed_dim
- self.n_embed = n_embed
- self.image_key = image_key
- self.encoder = Encoder(**ddconfig)
- self.decoder = Decoder(**ddconfig)
- self.loss = instantiate_from_config(lossconfig)
- self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25,
- remap=remap,
- sane_index_shape=sane_index_shape)
- self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1)
- self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
- if colorize_nlabels is not None:
- assert type(colorize_nlabels)==int
- self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
- if monitor is not None:
- self.monitor = monitor
- self.batch_resize_range = batch_resize_range
- if self.batch_resize_range is not None:
- print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.")
-
- self.use_ema = use_ema
- if self.use_ema:
- self.model_ema = LitEma(self)
- print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")
-
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
- self.scheduler_config = scheduler_config
- self.lr_g_factor = lr_g_factor
-
- @contextmanager
- def ema_scope(self, context=None):
- if self.use_ema:
- self.model_ema.store(self.parameters())
- self.model_ema.copy_to(self)
- if context is not None:
- print(f"{context}: Switched to EMA weights")
- try:
- yield None
- finally:
- if self.use_ema:
- self.model_ema.restore(self.parameters())
- if context is not None:
- print(f"{context}: Restored training weights")
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- missing, unexpected = self.load_state_dict(sd, strict=False)
- print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
- if len(missing) > 0:
- print(f"Missing Keys: {missing}")
- print(f"Unexpected Keys: {unexpected}")
-
- def on_train_batch_end(self, *args, **kwargs):
- if self.use_ema:
- self.model_ema(self)
-
- def encode(self, x):
- h = self.encoder(x)
- h = self.quant_conv(h)
- quant, emb_loss, info = self.quantize(h)
- return quant, emb_loss, info
-
- def encode_to_prequant(self, x):
- h = self.encoder(x)
- h = self.quant_conv(h)
- return h
-
- def decode(self, quant):
- quant = self.post_quant_conv(quant)
- dec = self.decoder(quant)
- return dec
-
- def decode_code(self, code_b):
- quant_b = self.quantize.embed_code(code_b)
- dec = self.decode(quant_b)
- return dec
-
- def forward(self, input, return_pred_indices=False):
- quant, diff, (_,_,ind) = self.encode(input)
- dec = self.decode(quant)
- if return_pred_indices:
- return dec, diff, ind
- return dec, diff
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
- if self.batch_resize_range is not None:
- lower_size = self.batch_resize_range[0]
- upper_size = self.batch_resize_range[1]
- if self.global_step <= 4:
- # do the first few batches with max size to avoid later oom
- new_resize = upper_size
- else:
- new_resize = np.random.choice(np.arange(lower_size, upper_size+16, 16))
- if new_resize != x.shape[2]:
- x = F.interpolate(x, size=new_resize, mode="bicubic")
- x = x.detach()
- return x
-
- def training_step(self, batch, batch_idx, optimizer_idx):
- # https://github.com/pytorch/pytorch/issues/37142
- # try not to fool the heuristics
- x = self.get_input(batch, self.image_key)
- xrec, qloss, ind = self(x, return_pred_indices=True)
-
- if optimizer_idx == 0:
- # autoencode
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train",
- predicted_indices=ind)
-
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return aeloss
-
- if optimizer_idx == 1:
- # discriminator
- discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True)
- return discloss
-
- def validation_step(self, batch, batch_idx):
- log_dict = self._validation_step(batch, batch_idx)
- with self.ema_scope():
- log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema")
- return log_dict
-
- def _validation_step(self, batch, batch_idx, suffix=""):
- x = self.get_input(batch, self.image_key)
- xrec, qloss, ind = self(x, return_pred_indices=True)
- aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0,
- self.global_step,
- last_layer=self.get_last_layer(),
- split="val"+suffix,
- predicted_indices=ind
- )
-
- discloss, log_dict_disc = self.loss(qloss, x, xrec, 1,
- self.global_step,
- last_layer=self.get_last_layer(),
- split="val"+suffix,
- predicted_indices=ind
- )
- rec_loss = log_dict_ae[f"val{suffix}/rec_loss"]
- self.log(f"val{suffix}/rec_loss", rec_loss,
- prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
- self.log(f"val{suffix}/aeloss", aeloss,
- prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
- if version.parse(pl.__version__) >= version.parse('1.4.0'):
- del log_dict_ae[f"val{suffix}/rec_loss"]
- self.log_dict(log_dict_ae)
- self.log_dict(log_dict_disc)
- return self.log_dict
-
- def configure_optimizers(self):
- lr_d = self.learning_rate
- lr_g = self.lr_g_factor*self.learning_rate
- print("lr_d", lr_d)
- print("lr_g", lr_g)
- opt_ae = torch.optim.Adam(list(self.encoder.parameters())+
- list(self.decoder.parameters())+
- list(self.quantize.parameters())+
- list(self.quant_conv.parameters())+
- list(self.post_quant_conv.parameters()),
- lr=lr_g, betas=(0.5, 0.9))
- opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
- lr=lr_d, betas=(0.5, 0.9))
-
- if self.scheduler_config is not None:
- scheduler = instantiate_from_config(self.scheduler_config)
-
- print("Setting up LambdaLR scheduler...")
- scheduler = [
- {
- 'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- },
- {
- 'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule),
- 'interval': 'step',
- 'frequency': 1
- },
- ]
- return [opt_ae, opt_disc], scheduler
- return [opt_ae, opt_disc], []
-
- def get_last_layer(self):
- return self.decoder.conv_out.weight
-
- def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- if only_inputs:
- log["inputs"] = x
- return log
- xrec, _ = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec.shape[1] > 3
- x = self.to_rgb(x)
- xrec = self.to_rgb(xrec)
- log["inputs"] = x
- log["reconstructions"] = xrec
- if plot_ema:
- with self.ema_scope():
- xrec_ema, _ = self(x)
- if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema)
- log["reconstructions_ema"] = xrec_ema
- return log
-
- def to_rgb(self, x):
- assert self.image_key == "segmentation"
- if not hasattr(self, "colorize"):
- self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
- x = F.conv2d(x, weight=self.colorize)
- x = 2.*(x-x.min())/(x.max()-x.min()) - 1.
- return x
-
-
-class VQModelInterface(VQModel):
- def __init__(self, embed_dim, *args, **kwargs):
- super().__init__(embed_dim=embed_dim, *args, **kwargs)
- self.embed_dim = embed_dim
-
- def encode(self, x):
- h = self.encoder(x)
- h = self.quant_conv(h)
- return h
-
- def decode(self, h, force_not_quantize=False):
- # also go through quantization layer
- if not force_not_quantize:
- quant, emb_loss, info = self.quantize(h)
- else:
- quant = h
- quant = self.post_quant_conv(quant)
- dec = self.decoder(quant)
- return dec
-
-
-class AutoencoderKL(pl.LightningModule):
- def __init__(self,
- ddconfig,
- lossconfig,
- embed_dim,
- ckpt_path=None,
- ignore_keys=[],
- image_key="image",
- colorize_nlabels=None,
- monitor=None,
- ):
- super().__init__()
- self.image_key = image_key
- self.encoder = Encoder(**ddconfig)
- self.decoder = Decoder(**ddconfig)
- self.loss = instantiate_from_config(lossconfig)
- assert ddconfig["double_z"]
- self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1)
- self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
- self.embed_dim = embed_dim
- if colorize_nlabels is not None:
- assert type(colorize_nlabels)==int
- self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
- if monitor is not None:
- self.monitor = monitor
- if ckpt_path is not None:
- self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
-
- def init_from_ckpt(self, path, ignore_keys=list()):
- sd = torch.load(path, map_location="cpu")["state_dict"]
- keys = list(sd.keys())
- for k in keys:
- for ik in ignore_keys:
- if k.startswith(ik):
- print("Deleting key {} from state_dict.".format(k))
- del sd[k]
- self.load_state_dict(sd, strict=False)
- print(f"Restored from {path}")
-
- def encode(self, x):
- h = self.encoder(x)
- moments = self.quant_conv(h)
- posterior = DiagonalGaussianDistribution(moments)
- return posterior
-
- def decode(self, z):
- z = self.post_quant_conv(z)
- dec = self.decoder(z)
- return dec
-
- def forward(self, input, sample_posterior=True):
- posterior = self.encode(input)
- if sample_posterior:
- z = posterior.sample()
- else:
- z = posterior.mode()
- dec = self.decode(z)
- return dec, posterior
-
- def get_input(self, batch, k):
- x = batch[k]
- if len(x.shape) == 3:
- x = x[..., None]
- x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
- return x
-
- def training_step(self, batch, batch_idx, optimizer_idx):
- inputs = self.get_input(batch, self.image_key)
- reconstructions, posterior = self(inputs)
-
- if optimizer_idx == 0:
- # train encoder+decoder+logvar
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
- self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False)
- return aeloss
-
- if optimizer_idx == 1:
- # train the discriminator
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step,
- last_layer=self.get_last_layer(), split="train")
-
- self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True)
- self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False)
- return discloss
-
- def validation_step(self, batch, batch_idx):
- inputs = self.get_input(batch, self.image_key)
- reconstructions, posterior = self(inputs)
- aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step,
- last_layer=self.get_last_layer(), split="val")
-
- discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step,
- last_layer=self.get_last_layer(), split="val")
-
- self.log("val/rec_loss", log_dict_ae["val/rec_loss"])
- self.log_dict(log_dict_ae)
- self.log_dict(log_dict_disc)
- return self.log_dict
-
- def configure_optimizers(self):
- lr = self.learning_rate
- opt_ae = torch.optim.Adam(list(self.encoder.parameters())+
- list(self.decoder.parameters())+
- list(self.quant_conv.parameters())+
- list(self.post_quant_conv.parameters()),
- lr=lr, betas=(0.5, 0.9))
- opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
- lr=lr, betas=(0.5, 0.9))
- return [opt_ae, opt_disc], []
-
- def get_last_layer(self):
- return self.decoder.conv_out.weight
-
- @torch.no_grad()
- def log_images(self, batch, only_inputs=False, **kwargs):
- log = dict()
- x = self.get_input(batch, self.image_key)
- x = x.to(self.device)
- if not only_inputs:
- xrec, posterior = self(x)
- if x.shape[1] > 3:
- # colorize with random projection
- assert xrec.shape[1] > 3
- x = self.to_rgb(x)
- xrec = self.to_rgb(xrec)
- log["samples"] = self.decode(torch.randn_like(posterior.sample()))
- log["reconstructions"] = xrec
- log["inputs"] = x
- return log
-
- def to_rgb(self, x):
- assert self.image_key == "segmentation"
- if not hasattr(self, "colorize"):
- self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
- x = F.conv2d(x, weight=self.colorize)
- x = 2.*(x-x.min())/(x.max()-x.min()) - 1.
- return x
-
-
-class IdentityFirstStage(torch.nn.Module):
- def __init__(self, *args, vq_interface=False, **kwargs):
- self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff
- super().__init__()
-
- def encode(self, x, *args, **kwargs):
- return x
-
- def decode(self, x, *args, **kwargs):
- return x
-
- def quantize(self, x, *args, **kwargs):
- if self.vq_interface:
- return x, None, [None, None, None]
- return x
-
- def forward(self, x, *args, **kwargs):
- return x
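A minimal inference sketch for the `AutoencoderKL` defined above, assuming `model` is an already-instantiated, checkpoint-loaded instance and `x` is an image batch in NCHW layout scaled to [-1, 1]:

```python
import torch

with torch.no_grad():
    posterior = model.encode(x)   # DiagonalGaussianDistribution over latents
    z = posterior.sample()        # or posterior.mode() for a deterministic code
    x_rec = model.decode(z)       # reconstruction with the same shape as x
```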
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/framework.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/framework.py
deleted file mode 100644
index a4375b659a91267d3db9278f72bd1f0b030a4655..0000000000000000000000000000000000000000
--- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/framework.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Mario Rosasco, 2016
-# adapted from framework.cpp, Copyright (C) 2010-2012 by Jason L. McKesson
-# This file is licensed under the MIT License.
-#
-# NB: Unlike in the framework.cpp organization, the main loop is contained
-# in the tutorial files, not in this framework file. Additionally, a copy of
-# this module file must exist in the same directory as the tutorial files
-# to be imported properly.
-
-import os
-from OpenGL.GL import *
-
-# Function that creates and compiles shaders according to the given type (a GL enum value) and
-# shader program (a file containing a GLSL program).
-def loadShader(shaderType, shaderFile):
- # check if file exists, get full path name
- strFilename = findFileOrThrow(shaderFile)
- shaderData = None
- with open(strFilename, 'r') as f:
- shaderData = f.read()
-
- shader = glCreateShader(shaderType)
- glShaderSource(shader, shaderData) # note that this is a simpler function call than in C
-
- # This shader compilation is more explicit than the one used in
- # framework.cpp, which relies on a glutil wrapper function.
- # This is made explicit here mainly to decrease dependence on pyOpenGL
- # utilities and wrappers, which docs caution may change in future versions.
- glCompileShader(shader)
-
- status = glGetShaderiv(shader, GL_COMPILE_STATUS)
- if status == GL_FALSE:
- # Note that getting the error log is much simpler in Python than in C/C++
- # and does not require explicit handling of the string buffer
- strInfoLog = glGetShaderInfoLog(shader)
- strShaderType = ""
- if shaderType is GL_VERTEX_SHADER:
- strShaderType = "vertex"
- elif shaderType is GL_GEOMETRY_SHADER:
- strShaderType = "geometry"
- elif shaderType is GL_FRAGMENT_SHADER:
- strShaderType = "fragment"
-
- print("Compilation failure for " + strShaderType + " shader:\n" + str(strInfoLog))
-
- return shader
-
-
-# Function that accepts a list of shaders, compiles them, and returns a handle to the compiled program
-def createProgram(shaderList):
- program = glCreateProgram()
-
- for shader in shaderList:
- glAttachShader(program, shader)
-
- glLinkProgram(program)
-
- status = glGetProgramiv(program, GL_LINK_STATUS)
- if status == GL_FALSE:
- # Note that getting the error log is much simpler in Python than in C/C++
- # and does not require explicit handling of the string buffer
- strInfoLog = glGetProgramInfoLog(program)
- print("Linker failure: \n" + str(strInfoLog))
-
- for shader in shaderList:
- glDetachShader(program, shader)
-
- return program
-
-
-# Helper function to locate and open the target file (passed in as a string).
-# Returns the full path to the file as a string.
-def findFileOrThrow(strBasename):
- # Keep constant names in C-style convention, for readability
- # when comparing to C(/C++) code.
- if os.path.isfile(strBasename):
- return strBasename
-
- LOCAL_FILE_DIR = "data" + os.sep
- GLOBAL_FILE_DIR = os.path.dirname(os.path.abspath(__file__)) + os.sep + "data" + os.sep
-
- strFilename = LOCAL_FILE_DIR + strBasename
- if os.path.isfile(strFilename):
- return strFilename
-
- strFilename = GLOBAL_FILE_DIR + strBasename
- if os.path.isfile(strFilename):
- return strFilename
-
- raise IOError('Could not find target file ' + strBasename)
\ No newline at end of file
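A usage sketch, assuming a current OpenGL context and two hypothetical shader files (`basic.vert`, `basic.frag`) discoverable by `findFileOrThrow`:

```python
from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, glDeleteShader

from lib.renderer.gl.framework import loadShader, createProgram

shaders = [
    loadShader(GL_VERTEX_SHADER, "basic.vert"),
    loadShader(GL_FRAGMENT_SHADER, "basic.frag"),
]
program = createProgram(shaders)
for shader in shaders:
    glDeleteShader(shader)  # safe to delete once linked into the program
```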
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/textsnake/textsnake_resnet50_fpn-unet_1200e_ctw1500.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/textsnake/textsnake_resnet50_fpn-unet_1200e_ctw1500.py
deleted file mode 100644
index 5e3158630bc047172ce50dc27ba23faf2f1606f1..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/textsnake/textsnake_resnet50_fpn-unet_1200e_ctw1500.py
+++ /dev/null
@@ -1,30 +0,0 @@
-_base_ = [
- '_base_textsnake_resnet50_fpn-unet.py',
- '../_base_/datasets/ctw1500.py',
- '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_sgd_1200e.py',
-]
-
-# dataset settings
-ctw1500_textdet_train = _base_.ctw1500_textdet_train
-ctw1500_textdet_train.pipeline = _base_.train_pipeline
-ctw1500_textdet_test = _base_.ctw1500_textdet_test
-ctw1500_textdet_test.pipeline = _base_.test_pipeline
-
-train_dataloader = dict(
- batch_size=4,
- num_workers=4,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=True),
- dataset=ctw1500_textdet_train)
-
-val_dataloader = dict(
- batch_size=1,
- num_workers=1,
- persistent_workers=True,
- sampler=dict(type='DefaultSampler', shuffle=False),
- dataset=ctw1500_textdet_test)
-
-test_dataloader = val_dataloader
-
-auto_scale_lr = dict(base_batch_size=4)
diff --git a/spaces/MuGeminorum/insecta/khandy/file_io_utils.py b/spaces/MuGeminorum/insecta/khandy/file_io_utils.py
deleted file mode 100644
index dcc5ab63df673b6feff121a5c845a397322a31c5..0000000000000000000000000000000000000000
--- a/spaces/MuGeminorum/insecta/khandy/file_io_utils.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import base64
-import json
-import numbers
-import pickle
-import warnings
-from collections import OrderedDict
-
-
-def load_list(filename, encoding='utf-8', start=0, stop=None):
- assert isinstance(start, numbers.Integral) and start >= 0
- assert (stop is None) or (isinstance(stop, numbers.Integral) and stop > start)
-
- lines = []
- with open(filename, 'r', encoding=encoding) as f:
- for _ in range(start):
- f.readline()
- for k, line in enumerate(f):
- if (stop is not None) and (k + start > stop):
- break
- lines.append(line.rstrip('\n'))
- return lines
-
-
-def save_list(filename, list_obj, encoding='utf-8', append_break=True):
- with open(filename, 'w', encoding=encoding) as f:
- if append_break:
- for item in list_obj:
- f.write(str(item) + '\n')
- else:
- for item in list_obj:
- f.write(str(item))
-
-
-def load_json(filename, encoding='utf-8'):
- with open(filename, 'r', encoding=encoding) as f:
- data = json.load(f, object_pairs_hook=OrderedDict)
- return data
-
-
-def save_json(filename, data, encoding='utf-8', indent=4, cls=None, sort_keys=False):
- if not filename.endswith('.json'):
- filename = filename + '.json'
- with open(filename, 'w', encoding=encoding) as f:
- json.dump(data, f, indent=indent, separators=(',',': '),
- ensure_ascii=False, cls=cls, sort_keys=sort_keys)
-
-
-def load_bytes(filename, use_base64: bool = False) -> bytes:
- """Open the file in bytes mode, read it, and close the file.
-
- References:
- pathlib.Path.read_bytes
- """
- with open(filename, 'rb') as f:
- data = f.read()
- if use_base64:
- data = base64.b64encode(data)
- return data
-
-
-def save_bytes(filename, data: bytes, use_base64: bool = False) -> int:
- """Open the file in bytes mode, write to it, and close the file.
-
- References:
- pathlib.Path.write_bytes
- """
- if use_base64:
- data = base64.b64decode(data)
- with open(filename, 'wb') as f:
- ret = f.write(data)
- return ret
-
-
-def load_as_base64(filename) -> bytes:
- warnings.warn('khandy.load_as_base64 will be deprecated, use khandy.load_bytes instead!')
- return load_bytes(filename, True)
-
-
-def load_object(filename):
- with open(filename, 'rb') as f:
- return pickle.load(f)
-
-
-def save_object(filename, obj):
- with open(filename, 'wb') as f:
- pickle.dump(obj, f)
-
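A round-trip sketch using the helpers above; the paths are illustrative, and it assumes `khandy` re-exports these functions (as the deprecation warning above suggests):

```python
import khandy

data = {"name": "insecta", "classes": ["bee", "wasp"]}
khandy.save_json("/tmp/meta", data)         # ".json" suffix is appended automatically
loaded = khandy.load_json("/tmp/meta.json")
assert loaded == data

khandy.save_bytes("/tmp/blob.bin", b"aGVsbG8=", use_base64=True)  # decodes base64 before writing
print(khandy.load_bytes("/tmp/blob.bin"))                         # b'hello'
```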
diff --git a/spaces/NCSOFT/harim_plus/app.py b/spaces/NCSOFT/harim_plus/app.py
deleted file mode 100644
index 4422f0b4b93eb849352aad30e837af0a49a8ed0b..0000000000000000000000000000000000000000
--- a/spaces/NCSOFT/harim_plus/app.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import evaluate
-from evaluate.utils import launch_gradio_widget
-
-
-module = evaluate.load("NCSOFT/harim_plus")
-launch_gradio_widget(module)
diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/flags_test.py b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/flags_test.py
deleted file mode 100644
index e11a1642242bf134f3a9f1df0908f29b00cecf74..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/flags_test.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-import unittest
-
-from absl import flags
-import tensorflow as tf
-
-from official.utils.flags import core as flags_core # pylint: disable=g-bad-import-order
-
-
-def define_flags():
- flags_core.define_base(clean=True, num_gpu=False, stop_threshold=True,
- hooks=True, train_epochs=True,
- epochs_between_evals=True)
- flags_core.define_performance(
- num_parallel_calls=True, inter_op=True, intra_op=True,
- dynamic_loss_scale=True, loss_scale=True, synthetic_data=True,
- dtype=True)
- flags_core.define_image()
- flags_core.define_benchmark()
-
-
-class BaseTester(unittest.TestCase):
-
- @classmethod
- def setUpClass(cls):
- super(BaseTester, cls).setUpClass()
- define_flags()
-
- def test_default_setting(self):
- """Test to ensure fields exist and defaults can be set.
- """
-
- defaults = dict(
- data_dir="dfgasf",
- model_dir="dfsdkjgbs",
- train_epochs=534,
- epochs_between_evals=15,
- batch_size=256,
- hooks=["LoggingTensorHook"],
- num_parallel_calls=18,
- inter_op_parallelism_threads=5,
- intra_op_parallelism_threads=10,
- data_format="channels_first"
- )
-
- flags_core.set_defaults(**defaults)
- flags_core.parse_flags()
-
- for key, value in defaults.items():
- assert flags.FLAGS.get_flag_value(name=key, default=None) == value
-
- def test_benchmark_setting(self):
- defaults = dict(
- hooks=["LoggingMetricHook"],
- benchmark_log_dir="/tmp/12345",
- gcp_project="project_abc",
- )
-
- flags_core.set_defaults(**defaults)
- flags_core.parse_flags()
-
- for key, value in defaults.items():
- assert flags.FLAGS.get_flag_value(name=key, default=None) == value
-
- def test_booleans(self):
- """Test to ensure boolean flags trigger as expected.
- """
-
- flags_core.parse_flags([__file__, "--use_synthetic_data"])
-
- assert flags.FLAGS.use_synthetic_data
-
- def test_parse_dtype_info(self):
- flags_core.parse_flags([__file__, "--dtype", "fp16"])
- self.assertEqual(flags_core.get_tf_dtype(flags.FLAGS), tf.float16)
- self.assertEqual(flags_core.get_loss_scale(flags.FLAGS,
- default_for_fp16=2), 2)
-
- flags_core.parse_flags(
- [__file__, "--dtype", "fp16", "--loss_scale", "5"])
- self.assertEqual(flags_core.get_loss_scale(flags.FLAGS,
- default_for_fp16=2), 5)
-
- flags_core.parse_flags(
- [__file__, "--dtype", "fp16", "--loss_scale", "dynamic"])
- self.assertEqual(flags_core.get_loss_scale(flags.FLAGS,
- default_for_fp16=2), "dynamic")
-
- flags_core.parse_flags([__file__, "--dtype", "fp32"])
- self.assertEqual(flags_core.get_tf_dtype(flags.FLAGS), tf.float32)
- self.assertEqual(flags_core.get_loss_scale(flags.FLAGS,
- default_for_fp16=2), 1)
-
- flags_core.parse_flags([__file__, "--dtype", "fp32", "--loss_scale", "5"])
- self.assertEqual(flags_core.get_loss_scale(flags.FLAGS,
- default_for_fp16=2), 5)
-
-
- with self.assertRaises(SystemExit):
- flags_core.parse_flags([__file__, "--dtype", "int8"])
-
- with self.assertRaises(SystemExit):
- flags_core.parse_flags([__file__, "--dtype", "fp16",
- "--loss_scale", "abc"])
-
- def test_get_nondefault_flags_as_str(self):
- defaults = dict(
- clean=True,
- data_dir="abc",
- hooks=["LoggingTensorHook"],
- stop_threshold=1.5,
- use_synthetic_data=False
- )
- flags_core.set_defaults(**defaults)
- flags_core.parse_flags()
-
- expected_flags = ""
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
- flags.FLAGS.clean = False
- expected_flags += "--noclean"
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
- flags.FLAGS.data_dir = "xyz"
- expected_flags += " --data_dir=xyz"
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
- flags.FLAGS.hooks = ["aaa", "bbb", "ccc"]
- expected_flags += " --hooks=aaa,bbb,ccc"
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
- flags.FLAGS.stop_threshold = 3.
- expected_flags += " --stop_threshold=3.0"
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
- flags.FLAGS.use_synthetic_data = True
- expected_flags += " --use_synthetic_data"
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
- # Assert that explicit setting a flag to its default value does not cause it
- # to appear in the string
- flags.FLAGS.use_synthetic_data = False
- expected_flags = expected_flags[:-len(" --use_synthetic_data")]
- self.assertEqual(flags_core.get_nondefault_flags_as_str(), expected_flags)
-
-
-if __name__ == "__main__":
- unittest.main()
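Outside of the test harness, the same flag machinery can be driven directly; this sketch only uses calls exercised above, with illustrative defaults:

```python
from absl import flags

from official.utils.flags import core as flags_core

flags_core.define_base(clean=True, num_gpu=False, stop_threshold=True,
                       hooks=True, train_epochs=True, epochs_between_evals=True)
flags_core.set_defaults(data_dir="/tmp/data", batch_size=128)
flags_core.parse_flags([__file__, "--train_epochs", "10"])
print(flags.FLAGS.train_epochs)  # 10
```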
diff --git a/spaces/NanoMachin/Free-Palestine/Dockerfile b/spaces/NanoMachin/Free-Palestine/Dockerfile
deleted file mode 100644
index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000
--- a/spaces/NanoMachin/Free-Palestine/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/NimaBoscarino/climategan/figures/labels.py b/spaces/NimaBoscarino/climategan/figures/labels.py
deleted file mode 100644
index d60cb11def6277c8913e36bff3f91f744865b679..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/figures/labels.py
+++ /dev/null
@@ -1,200 +0,0 @@
-"""
-This scripts plots images from the Masker test set overlaid with their labels.
-"""
-print("Imports...", end="")
-from argparse import ArgumentParser
-import os
-import yaml
-import numpy as np
-import pandas as pd
-import seaborn as sns
-from pathlib import Path
-import matplotlib.pyplot as plt
-import matplotlib.patches as mpatches
-
-import sys
-
-sys.path.append("../")
-
-from eval_masker import crop_and_resize
-
-
-# -----------------------
-# ----- Constants -----
-# -----------------------
-
-# Colors
-colorblind_palette = sns.color_palette("colorblind")
-color_cannot = colorblind_palette[1]
-color_must = colorblind_palette[2]
-color_may = colorblind_palette[7]
-color_pred = colorblind_palette[4]
-
-icefire = sns.color_palette("icefire", as_cmap=False, n_colors=5)
-color_tp = icefire[0]
-color_tn = icefire[1]
-color_fp = icefire[4]
-color_fn = icefire[3]
-
-
-def parsed_args():
- """
- Parse and returns command-line args
-
- Returns:
- argparse.Namespace: the parsed arguments
- """
- parser = ArgumentParser()
- parser.add_argument(
- "--input_csv",
- default="ablations_metrics_20210311.csv",
- type=str,
- help="CSV containing the results of the ablation study",
- )
- parser.add_argument(
- "--output_dir",
- default=None,
- type=str,
- help="Output directory",
- )
- parser.add_argument(
- "--masker_test_set_dir",
- default=None,
- type=str,
- help="Directory containing the test images",
- )
- parser.add_argument(
- "--images",
- nargs="+",
- help="List of image file names to plot",
- default=[],
- type=str,
- )
- parser.add_argument(
- "--dpi",
- default=200,
- type=int,
- help="DPI for the output images",
- )
- parser.add_argument(
- "--alpha",
- default=0.5,
- type=float,
- help="Transparency of labels shade",
- )
-
- return parser.parse_args()
-
-
-def map_color(arr, input_color, output_color, rtol=1e-09):
- """
- Maps one color to another
- """
- input_color_arr = np.tile(input_color, (arr.shape[:2] + (1,)))
- output = arr.copy()
- output[np.all(np.isclose(arr, input_color_arr, rtol=rtol), axis=2)] = output_color
- return output
-
-
-if __name__ == "__main__":
- # -----------------------------
- # ----- Parse arguments -----
- # -----------------------------
- args = parsed_args()
- print("Args:\n" + "\n".join([f" {k:20}: {v}" for k, v in vars(args).items()]))
-
- # Determine output dir
- if args.output_dir is None:
- output_dir = Path(os.environ["SLURM_TMPDIR"])
- else:
- output_dir = Path(args.output_dir)
- if not output_dir.exists():
- output_dir.mkdir(parents=True, exist_ok=False)
-
- # Store args
- output_yml = output_dir / "labels.yml"
- with open(output_yml, "w") as f:
- yaml.dump(vars(args), f)
-
- # Data dirs
- imgs_orig_path = Path(args.masker_test_set_dir) / "imgs"
- labels_path = Path(args.masker_test_set_dir) / "labels"
-
- # Read CSV
- df = pd.read_csv(args.input_csv, index_col="model_img_idx")
-
- # Set up plot
- sns.reset_orig()
- sns.set(style="whitegrid")
- plt.rcParams.update({"font.family": "serif"})
- plt.rcParams.update(
- {
- "font.serif": [
- "Computer Modern Roman",
- "Times New Roman",
- "Utopia",
- "New Century Schoolbook",
- "Century Schoolbook L",
- "ITC Bookman",
- "Bookman",
- "Times",
- "Palatino",
- "Charter",
-                "serif",
-                "Bitstream Vera Serif",
- "DejaVu Serif",
- ]
- }
- )
-
- fig, axes = plt.subplots(
- nrows=1, ncols=len(args.images), dpi=args.dpi, figsize=(len(args.images) * 5, 5)
- )
-
- for idx, img_filename in enumerate(args.images):
-
- # Read images
- img_path = imgs_orig_path / img_filename
- label_path = labels_path / "{}_labeled.png".format(Path(img_filename).stem)
- img, label = crop_and_resize(img_path, label_path)
-
- # Map label colors
- label_colmap = label.astype(float)
- label_colmap = map_color(label_colmap, (255, 0, 0), color_cannot)
- label_colmap = map_color(label_colmap, (0, 0, 255), color_must)
- label_colmap = map_color(label_colmap, (0, 0, 0), color_may)
-
- ax = axes[idx]
- ax.imshow(img)
- ax.imshow(label_colmap, alpha=args.alpha)
- ax.axis("off")
-
- # Legend
- handles = []
- lw = 1.0
- handles.append(
- mpatches.Patch(
- facecolor=color_must, label="must", linewidth=lw, alpha=args.alpha
- )
- )
- handles.append(
- mpatches.Patch(facecolor=color_may, label="may", linewidth=lw, alpha=args.alpha)
- )
- handles.append(
- mpatches.Patch(
- facecolor=color_cannot, label="cannot", linewidth=lw, alpha=args.alpha
- )
- )
- labels = ["Must-be-flooded", "May-be-flooded", "Cannot-be-flooded"]
- fig.legend(
- handles=handles,
- labels=labels,
- loc="upper center",
- bbox_to_anchor=(0.0, 0.85, 1.0, 0.075),
- ncol=len(args.images),
- fontsize="medium",
- frameon=False,
- )
-
- # Save figure
- output_fig = output_dir / "labels.png"
- fig.savefig(output_fig, dpi=fig.dpi, bbox_inches="tight")
diff --git a/spaces/Ntabukiraniro/Recipe/modules/utils.py b/spaces/Ntabukiraniro/Recipe/modules/utils.py
deleted file mode 100644
index c27cd348fb55e0390bfe5ceb29b8a053f3be1757..0000000000000000000000000000000000000000
--- a/spaces/Ntabukiraniro/Recipe/modules/utils.py
+++ /dev/null
@@ -1,377 +0,0 @@
-from collections import defaultdict, OrderedDict
-import logging
-import os
-import re
-import torch
-import traceback
-
-from torch.serialization import default_restore_location
-
-
-def torch_persistent_save(*args, **kwargs):
- for i in range(3):
- try:
- return torch.save(*args, **kwargs)
- except Exception:
- if i == 2:
- logging.error(traceback.format_exc())
-
-
-def convert_state_dict_type(state_dict, ttype=torch.FloatTensor):
- if isinstance(state_dict, dict):
- cpu_dict = OrderedDict()
- for k, v in state_dict.items():
-            cpu_dict[k] = convert_state_dict_type(v, ttype)
- return cpu_dict
- elif isinstance(state_dict, list):
-        return [convert_state_dict_type(v, ttype) for v in state_dict]
- elif torch.is_tensor(state_dict):
- return state_dict.type(ttype)
- else:
- return state_dict
-
-
-def save_state(filename, args, model, criterion, optimizer, lr_scheduler,
- num_updates, optim_history=None, extra_state=None):
- if optim_history is None:
- optim_history = []
- if extra_state is None:
- extra_state = {}
- state_dict = {
- 'args': args,
- 'model': convert_state_dict_type(model.state_dict()),
- 'optimizer_history': optim_history + [
- {
- 'criterion_name': criterion.__class__.__name__,
- 'optimizer_name': optimizer.__class__.__name__,
- 'lr_scheduler_state': lr_scheduler.state_dict(),
- 'num_updates': num_updates,
- }
- ],
- 'last_optimizer_state': convert_state_dict_type(optimizer.state_dict()),
- 'extra_state': extra_state,
- }
- torch_persistent_save(state_dict, filename)
-
-
-def load_model_state(filename, model):
- if not os.path.exists(filename):
- return None, [], None
- state = torch.load(filename, map_location=lambda s, l: default_restore_location(s, 'cpu'))
- state = _upgrade_state_dict(state)
- model.upgrade_state_dict(state['model'])
-
- # load model parameters
- try:
- model.load_state_dict(state['model'], strict=True)
- except Exception:
- raise Exception('Cannot load model parameters from checkpoint, '
- 'please ensure that the architectures match')
-
- return state['extra_state'], state['optimizer_history'], state['last_optimizer_state']
-
-
-def _upgrade_state_dict(state):
- """Helper for upgrading old model checkpoints."""
- # add optimizer_history
- if 'optimizer_history' not in state:
- state['optimizer_history'] = [
- {
- 'criterion_name': 'CrossEntropyCriterion',
- 'best_loss': state['best_loss'],
- },
- ]
- state['last_optimizer_state'] = state['optimizer']
- del state['optimizer']
- del state['best_loss']
- # move extra_state into sub-dictionary
- if 'epoch' in state and 'extra_state' not in state:
- state['extra_state'] = {
- 'epoch': state['epoch'],
- 'batch_offset': state['batch_offset'],
- 'val_loss': state['val_loss'],
- }
- del state['epoch']
- del state['batch_offset']
- del state['val_loss']
- # reduce optimizer history's memory usage (only keep the last state)
- if 'optimizer' in state['optimizer_history'][-1]:
- state['last_optimizer_state'] = state['optimizer_history'][-1]['optimizer']
- for optim_hist in state['optimizer_history']:
- del optim_hist['optimizer']
- # record the optimizer class name
- if 'optimizer_name' not in state['optimizer_history'][-1]:
- state['optimizer_history'][-1]['optimizer_name'] = 'FairseqNAG'
- # move best_loss into lr_scheduler_state
- if 'lr_scheduler_state' not in state['optimizer_history'][-1]:
- state['optimizer_history'][-1]['lr_scheduler_state'] = {
- 'best': state['optimizer_history'][-1]['best_loss'],
- }
- del state['optimizer_history'][-1]['best_loss']
- # keep track of number of updates
- if 'num_updates' not in state['optimizer_history'][-1]:
- state['optimizer_history'][-1]['num_updates'] = 0
- # old model checkpoints may not have separate source/target positions
- if hasattr(state['args'], 'max_positions') and not hasattr(state['args'], 'max_source_positions'):
- state['args'].max_source_positions = state['args'].max_positions
- state['args'].max_target_positions = state['args'].max_positions
- # use stateful training data iterator
- if 'train_iterator' not in state['extra_state']:
- state['extra_state']['train_iterator'] = {
- 'epoch': state['extra_state']['epoch'],
- 'iterations_in_epoch': 0,
- }
- return state
-
-
-def load_ensemble_for_inference(filenames, task, model_arg_overrides=None):
- """Load an ensemble of models for inference.
- model_arg_overrides allows you to pass a dictionary -- {'arg_name': arg} --
- to override model args that were used during model training.
- """
- # load model architectures and weights
- states = []
- for filename in filenames:
- if not os.path.exists(filename):
- raise IOError('Model file not found: {}'.format(filename))
- state = torch.load(filename, map_location=lambda s, l: default_restore_location(s, 'cpu'))
- state = _upgrade_state_dict(state)
- states.append(state)
- args = states[0]['args']
- if model_arg_overrides is not None:
- args = _override_model_args(args, model_arg_overrides)
-
- # build ensemble
- ensemble = []
- for state in states:
- model = task.build_model(args)
- model.upgrade_state_dict(state['model'])
- model.load_state_dict(state['model'], strict=True)
- ensemble.append(model)
- return ensemble, args
-
-
-def _override_model_args(args, model_arg_overrides):
- # Uses model_arg_overrides {'arg_name': arg} to override model args
- for arg_name, arg_val in model_arg_overrides.items():
- setattr(args, arg_name, arg_val)
- return args
-
-
-def move_to_cuda(sample):
- if len(sample) == 0:
- return {}
-
- def _move_to_cuda(maybe_tensor):
- if torch.is_tensor(maybe_tensor):
- return maybe_tensor.cuda()
- elif isinstance(maybe_tensor, dict):
- return {
- key: _move_to_cuda(value)
- for key, value in maybe_tensor.items()
- }
- elif isinstance(maybe_tensor, list):
- return [_move_to_cuda(x) for x in maybe_tensor]
- else:
- return maybe_tensor
-
- return _move_to_cuda(sample)
-
-
-INCREMENTAL_STATE_INSTANCE_ID = defaultdict(lambda: 0)
-
-
-def _get_full_incremental_state_key(module_instance, key):
- module_name = module_instance.__class__.__name__
-
- # assign a unique ID to each module instance, so that incremental state is
- # not shared across module instances
- if not hasattr(module_instance, '_fairseq_instance_id'):
- INCREMENTAL_STATE_INSTANCE_ID[module_name] += 1
- module_instance._fairseq_instance_id = INCREMENTAL_STATE_INSTANCE_ID[module_name]
-
- return '{}.{}.{}'.format(module_name, module_instance._fairseq_instance_id, key)
-
-
-def get_incremental_state(module, incremental_state, key):
- """Helper for getting incremental state for an nn.Module."""
- full_key = _get_full_incremental_state_key(module, key)
- if incremental_state is None or full_key not in incremental_state:
- return None
- return incremental_state[full_key]
-
-
-def set_incremental_state(module, incremental_state, key, value):
- """Helper for setting incremental state for an nn.Module."""
- if incremental_state is not None:
- full_key = _get_full_incremental_state_key(module, key)
- incremental_state[full_key] = value
-
-
-def load_align_dict(replace_unk):
- if replace_unk is None:
- align_dict = None
- elif isinstance(replace_unk, str):
- # Load alignment dictionary for unknown word replacement if it was passed as an argument.
- align_dict = {}
- with open(replace_unk, 'r') as f:
- for line in f:
- cols = line.split()
- align_dict[cols[0]] = cols[1]
- else:
- # No alignment dictionary provided but we still want to perform unknown word replacement by copying the
- # original source word.
- align_dict = {}
- return align_dict
-
-
-def print_embed_overlap(embed_dict, vocab_dict):
- embed_keys = set(embed_dict.keys())
- vocab_keys = set(vocab_dict.symbols)
- overlap = len(embed_keys & vocab_keys)
- print("| Found {}/{} types in embedding file.".format(overlap, len(vocab_dict)))
-
-
-def parse_embedding(embed_path):
- """Parse embedding text file into a dictionary of word and embedding tensors.
- The first line can have vocabulary size and dimension. The following lines
- should contain word and embedding separated by spaces.
- Example:
- 2 5
- the -0.0230 -0.0264 0.0287 0.0171 0.1403
- at -0.0395 -0.1286 0.0275 0.0254 -0.0932
- """
- embed_dict = {}
- with open(embed_path) as f_embed:
- next(f_embed) # skip header
- for line in f_embed:
- pieces = line.rstrip().split(" ")
- embed_dict[pieces[0]] = torch.Tensor([float(weight) for weight in pieces[1:]])
- return embed_dict
-
-
-def load_embedding(embed_dict, vocab, embedding):
- for idx in range(len(vocab)):
- token = vocab[idx]
- if token in embed_dict:
- embedding.weight.data[idx] = embed_dict[token]
- return embedding
-
-
-def replace_unk(hypo_str, src_str, alignment, align_dict, unk):
- from fairseq import tokenizer
- # Tokens are strings here
- hypo_tokens = tokenizer.tokenize_line(hypo_str)
- # TODO: Very rare cases where the replacement is '' should be handled gracefully
- src_tokens = tokenizer.tokenize_line(src_str) + ['']
- for i, ht in enumerate(hypo_tokens):
- if ht == unk:
- src_token = src_tokens[alignment[i]]
- # Either take the corresponding value in the aligned dictionary or just copy the original value.
- hypo_tokens[i] = align_dict.get(src_token, src_token)
- return ' '.join(hypo_tokens)
-
-
-def post_process_prediction(hypo_tokens, src_str, alignment, align_dict, tgt_dict, remove_bpe):
- from fairseq import tokenizer
- hypo_str = tgt_dict.string(hypo_tokens, remove_bpe)
- if align_dict is not None:
- hypo_str = replace_unk(hypo_str, src_str, alignment, align_dict, tgt_dict.unk_string())
- if align_dict is not None or remove_bpe is not None:
- # Convert back to tokens for evaluating with unk replacement or without BPE
- # Note that the dictionary can be modified inside the method.
- hypo_tokens = tokenizer.Tokenizer.tokenize(hypo_str, tgt_dict, add_if_not_exist=True)
- return hypo_tokens, hypo_str, alignment
-
-
-def make_positions(tensor, padding_idx, left_pad):
- """Replace non-padding symbols with their position numbers.
- Position numbers begin at padding_idx+1.
- Padding symbols are ignored, but it is necessary to specify whether padding
- is added on the left side (left_pad=True) or right side (left_pad=False).
- """
- max_pos = padding_idx + 1 + tensor.size(1)
- if not hasattr(make_positions, 'range_buf'):
- make_positions.range_buf = tensor.new()
- make_positions.range_buf = make_positions.range_buf.type_as(tensor)
- if make_positions.range_buf.numel() < max_pos:
- torch.arange(padding_idx + 1, max_pos, out=make_positions.range_buf)
- mask = tensor.ne(padding_idx)
- positions = make_positions.range_buf[:tensor.size(1)].expand_as(tensor)
- if left_pad:
- positions = positions - mask.size(1) + mask.long().sum(dim=1).unsqueeze(1)
- return tensor.clone().masked_scatter_(mask, positions[mask])
-
-
-def strip_pad(tensor, pad):
- return tensor[tensor.ne(pad)]
-
-
-def buffered_arange(max):
- if not hasattr(buffered_arange, 'buf'):
- buffered_arange.buf = torch.LongTensor()
- if max > buffered_arange.buf.numel():
- torch.arange(max, out=buffered_arange.buf)
- return buffered_arange.buf[:max]
-
-
-def convert_padding_direction(src_tokens, padding_idx, right_to_left=False, left_to_right=False):
- assert right_to_left ^ left_to_right
- pad_mask = src_tokens.eq(padding_idx)
- if not pad_mask.any():
- # no padding, return early
- return src_tokens
- if left_to_right and not pad_mask[:, 0].any():
- # already right padded
- return src_tokens
- if right_to_left and not pad_mask[:, -1].any():
- # already left padded
- return src_tokens
- max_len = src_tokens.size(1)
- range = buffered_arange(max_len).type_as(src_tokens).expand_as(src_tokens)
- num_pads = pad_mask.long().sum(dim=1, keepdim=True)
- if right_to_left:
- index = torch.remainder(range - num_pads, max_len)
- else:
- index = torch.remainder(range + num_pads, max_len)
- return src_tokens.gather(1, index)
-
-
-def item(tensor):
- if hasattr(tensor, 'item'):
- return tensor.item()
- if hasattr(tensor, '__getitem__'):
- return tensor[0]
- return tensor
-
-
-def clip_grad_norm_(tensor, max_norm):
- grad_norm = item(torch.norm(tensor))
- if grad_norm > max_norm > 0:
- clip_coef = max_norm / (grad_norm + 1e-6)
- tensor.mul_(clip_coef)
- return grad_norm
-
-
-def fill_with_neg_inf(t):
- """FP16-compatible function that fills a tensor with -inf."""
- return t.float().fill_(float('-inf')).type_as(t)
-
-
-def checkpoint_paths(path, pattern=r'checkpoint(\d+)\.pt'):
- """Retrieves all checkpoints found in `path` directory.
- Checkpoints are identified by matching filename to the specified pattern. If
- the pattern contains groups, the result will be sorted by the first group in
- descending order.
- """
- pt_regexp = re.compile(pattern)
- files = os.listdir(path)
-
- entries = []
- for i, f in enumerate(files):
- m = pt_regexp.fullmatch(f)
- if m is not None:
- idx = int(m.group(1)) if len(m.groups()) > 0 else i
- entries.append((idx, m.group(0)))
- return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)]
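The position-numbering contract documented in `make_positions` above is easy to misread, so here is a minimal, self-contained sketch of the right-padded case only (plain PyTorch, without the range buffering or the `left_pad` branch): non-padding symbols receive positions starting at `padding_idx + 1`, while padding positions keep the padding value.

```python
import torch

def make_positions_right_pad(tokens: torch.Tensor, padding_idx: int) -> torch.Tensor:
    # Non-pad tokens at column i receive position padding_idx + 1 + i;
    # pad tokens keep their original value (padding_idx).
    mask = tokens.ne(padding_idx)
    positions = torch.arange(
        padding_idx + 1, padding_idx + 1 + tokens.size(1), device=tokens.device
    ).expand_as(tokens)
    return tokens.clone().masked_scatter_(mask, positions[mask])

toks = torch.tensor([[5, 6, 7, 1, 1]])                 # padding_idx = 1, right padded
print(make_positions_right_pad(toks, padding_idx=1))   # tensor([[2, 3, 4, 1, 1]])
```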
diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/scripts/generate_multiscale_DF2K.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/scripts/generate_multiscale_DF2K.py
deleted file mode 100644
index 7ae6484e5c7a325bc55fdfb490ce4acd394f721a..0000000000000000000000000000000000000000
--- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/scripts/generate_multiscale_DF2K.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import argparse
-import glob
-import os
-from PIL import Image
-
-
-def main(args):
- # For DF2K, we consider the following three scales,
- # and the smallest image whose shortest edge is 400
- scale_list = [0.75, 0.5, 1 / 3]
- shortest_edge = 400
-
- path_list = sorted(glob.glob(os.path.join(args.input, "*")))
- for path in path_list:
- print(path)
- basename = os.path.splitext(os.path.basename(path))[0]
-
- img = Image.open(path)
- width, height = img.size
- for idx, scale in enumerate(scale_list):
- print(f"\t{scale:.2f}")
- rlt = img.resize(
- (int(width * scale), int(height * scale)), resample=Image.LANCZOS
- )
- rlt.save(os.path.join(args.output, f"{basename}T{idx}.png"))
-
- # save the smallest image, whose shortest edge is 400
- if width < height:
- ratio = height / width
- width = shortest_edge
- height = int(width * ratio)
- else:
- ratio = width / height
- height = shortest_edge
- width = int(height * ratio)
- rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS)
- rlt.save(os.path.join(args.output, f"{basename}T{idx+1}.png"))
-
-
-if __name__ == "__main__":
- """Generate multi-scale versions for GT images with LANCZOS resampling.
- It is now used for DF2K dataset (DIV2K + Flickr 2K)
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--input", type=str, default="datasets/DF2K/DF2K_HR", help="Input folder"
- )
- parser.add_argument(
- "--output",
- type=str,
- default="datasets/DF2K/DF2K_multiscale",
- help="Output folder",
- )
- args = parser.parse_args()
-
- os.makedirs(args.output, exist_ok=True)
- main(args)
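As a quick sanity check of the resizing arithmetic above, the sketch below (standard library only; the 2040x1356 resolution is just an assumed, DF2K-like example size) prints the three scaled sizes plus the extra version whose shortest edge is forced to 400.

```python
width, height = 2040, 1356                      # example HR image size (assumed)

for scale in (0.75, 0.5, 1 / 3):
    print(int(width * scale), int(height * scale))   # 1530 1017, 1020 678, 680 452

shortest_edge = 400
if width < height:
    ratio = height / width
    out_w, out_h = shortest_edge, int(shortest_edge * ratio)
else:
    ratio = width / height
    out_w, out_h = int(shortest_edge * ratio), shortest_edge
print(out_w, out_h)                              # 601 400 -> shortest edge is now 400
```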
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py
deleted file mode 100644
index 1122c88c1964d8beead63bc8dfe21d41602b83bc..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py
+++ /dev/null
@@ -1,135 +0,0 @@
-"""
-Implement unsupervised metric for decoding hyperparameter selection:
- $$ alpha * LM_PPL + ViterbiUER(%) * 100 $$
-"""
-import argparse
-import logging
-import math
-import sys
-
-import kenlm
-import editdistance
-from g2p_en import G2p
-
-logging.root.setLevel(logging.INFO)
-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-def get_parser():
- parser = argparse.ArgumentParser()
- parser.add_argument("ref_tra", help="reference pseudo labels")
- parser.add_argument("hyp_tra", help="decoded pseudo labels to be assess")
- parser.add_argument("--kenlm_path", default="/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o5.bin", help="")
- parser.add_argument("--uppercase", action="store_true", help="")
- parser.add_argument("--skipwords", default="", help="")
- parser.add_argument("--gt_tra", default="", help="ground truth pseudo labels for computing oracle WER")
- parser.add_argument("--min_vt_uer", default=0.0, type=float)
- parser.add_argument("--phonemize", action="store_true", help="phonemize word hypotheses, used when reference is phone transcript")
- parser.add_argument("--phonemize_lexicon", default="", type=str, help="use a lexicon for phonemizing")
- return parser
-
-def load_tra(tra_path):
- with open(tra_path, "r") as f:
- uid_to_tra = {}
- for line in f:
- toks = line.rstrip().split()
- uid, tra = toks[0], " ".join(toks[1:])
- uid_to_tra[uid] = tra
- logger.debug(f"loaded {len(uid_to_tra)} utterances from {tra_path}")
- return uid_to_tra
-
-def load_lex(lex_path):
- with open(lex_path, "r") as f:
- w2p = {}
- for line in f:
- w, p = line.rstrip().split(None, 1)
- w2p[w] = p.split()
- return w2p
-
-def compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p, g2p_dict):
- d_cnt = 0
- w_cnt = 0
- w_cnt_h = 0
- for uid in hyp_uid_to_tra:
- ref = ref_uid_to_tra[uid].split()
- if g2p_dict is not None:
- hyp = []
- for word in hyp_uid_to_tra[uid].split():
- if word in g2p_dict:
- hyp = hyp + g2p_dict[word]
- else:
- logger.warning(f"{word} not in g2p_dict")
- elif g2p is not None:
- hyp = g2p(hyp_uid_to_tra[uid])
- hyp = [p for p in hyp if p != "'" and p != " "]
- hyp = [p[:-1] if p[-1].isnumeric() else p for p in hyp]
- else:
- hyp = hyp_uid_to_tra[uid].split()
- logger.debug((
- f"======================\n"
- f"HYP: {' '.join(hyp)}\n"
- f"REF: {' '.join(ref)}"
- ))
- d_cnt += editdistance.eval(ref, hyp)
- w_cnt += len(ref)
- w_cnt_h += len(hyp)
- wer = float(d_cnt) / w_cnt
- logger.debug((
- f"wer = {wer*100:.2f}%; num. of ref words = {w_cnt}; "
- f"num. of hyp words = {w_cnt_h}; num. of sentences = {len(ref_uid_to_tra)}"
- ))
- return wer
-
-def compute_lm_ppl(hyp_uid_to_tra, score_fn):
- lm_score = 0.
- w_cnt = 0
- for hyp in hyp_uid_to_tra.values():
- cur_score = score_fn(hyp)
- cur_cnt = len(hyp.split()) + 1 # plus one for </s>
- lm_score += cur_score
- w_cnt += cur_cnt
- logger.debug((
- f"======================\n"
- f"score sum/avg = {cur_score:.2f}/{cur_score/cur_cnt:.2f}\n"
- f"hyp = {hyp}"
- ))
- lm_ppl = math.pow(10, -lm_score / w_cnt)
- logger.debug(f"lm ppl = {lm_ppl:.2f}; num. of words = {w_cnt}")
- return lm_ppl
-
-def main():
- args = get_parser().parse_args()
- logger.debug(f"Args: {args}")
-
- ref_uid_to_tra = load_tra(args.ref_tra)
- hyp_uid_to_tra = load_tra(args.hyp_tra)
- assert not bool(set(hyp_uid_to_tra.keys()) - set(ref_uid_to_tra.keys()))
-
- lm = kenlm.Model(args.kenlm_path)
- skipwords = set(args.skipwords.split(","))
- def compute_lm_score(s):
- s = " ".join(w for w in s.split() if w not in skipwords)
- s = s.upper() if args.uppercase else s
- return lm.score(s)
-
- g2p, g2p_dict = None, None
- if args.phonemize:
- if args.phonemize_lexicon:
- g2p_dict = load_lex(args.phonemize_lexicon)
- else:
- g2p = G2p()
-
- wer = compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p, g2p_dict)
- lm_ppl = compute_lm_ppl(hyp_uid_to_tra, compute_lm_score)
-
- gt_wer = -math.inf
- if args.gt_tra:
- gt_uid_to_tra = load_tra(args.gt_tra)
- gt_wer = compute_wer(gt_uid_to_tra, hyp_uid_to_tra, None, None)
-
- score = math.log(lm_ppl) * max(wer, args.min_vt_uer)
- logging.info(f"{args.hyp_tra}: score={score:.4f}; wer={wer*100:.2f}%; lm_ppl={lm_ppl:.4f}; gt_wer={gt_wer*100:.2f}%")
-
-if __name__ == "__main__":
- main()
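The score logged by `main` above combines the two quantities computed earlier in the file. Below is a dependency-free sketch of just that combination; the perplexity and UER values are made up purely for illustration.

```python
import math

def selection_score(lm_ppl: float, vt_uer: float, min_vt_uer: float = 0.0) -> float:
    # log LM perplexity, scaled by the Viterbi UER (floored at min_vt_uer)
    return math.log(lm_ppl) * max(vt_uer, min_vt_uer)

print(selection_score(lm_ppl=28.4, vt_uer=0.31))                  # ~1.04
print(selection_score(lm_ppl=28.4, vt_uer=0.0, min_vt_uer=0.05))  # the floor applies
```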
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/nat/nonautoregressive_transformer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/nat/nonautoregressive_transformer.py
deleted file mode 100644
index d114202d25fbd1dca66c7abebb0b0a8bffbe094d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/nat/nonautoregressive_transformer.py
+++ /dev/null
@@ -1,456 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.iterative_refinement_generator import DecoderOut
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.nat import FairseqNATDecoder, FairseqNATModel, ensemble_decoder
-from fairseq.models.transformer import Embedding
-from fairseq.modules.transformer_sentence_encoder import init_bert_params
-
-
-def _mean_pooling(enc_feats, src_masks):
- # enc_feats: T x B x C
- # src_masks: B x T or None
- if src_masks is None:
- enc_feats = enc_feats.mean(0)
- else:
- src_masks = (~src_masks).transpose(0, 1).type_as(enc_feats)
- enc_feats = (
- (enc_feats / src_masks.sum(0)[None, :, None]) * src_masks[:, :, None]
- ).sum(0)
- return enc_feats
-
-
-def _argmax(x, dim):
- return (x == x.max(dim, keepdim=True)[0]).type_as(x)
-
-
-def _uniform_assignment(src_lens, trg_lens):
- max_trg_len = trg_lens.max()
- steps = (src_lens.float() - 1) / (trg_lens.float() - 1) # step-size
- # max_trg_len
- index_t = utils.new_arange(trg_lens, max_trg_len).float()
- index_t = steps[:, None] * index_t[None, :] # batch_size X max_trg_len
- index_t = torch.round(index_t).long().detach()
- return index_t
-
-
-@register_model("nonautoregressive_transformer")
-class NATransformerModel(FairseqNATModel):
- @property
- def allow_length_beam(self):
- return True
-
- @staticmethod
- def add_args(parser):
- FairseqNATModel.add_args(parser)
-
- # length prediction
- parser.add_argument(
- "--src-embedding-copy",
- action="store_true",
- help="copy encoder word embeddings as the initial input of the decoder",
- )
- parser.add_argument(
- "--pred-length-offset",
- action="store_true",
- help="predicting the length difference between the target and source sentences",
- )
- parser.add_argument(
- "--sg-length-pred",
- action="store_true",
- help="stop the gradients back-propagated from the length predictor",
- )
- parser.add_argument(
- "--length-loss-factor",
- type=float,
- help="weights on the length prediction loss",
- )
-
- @classmethod
- def build_decoder(cls, args, tgt_dict, embed_tokens):
- decoder = NATransformerDecoder(args, tgt_dict, embed_tokens)
- if getattr(args, "apply_bert_init", False):
- decoder.apply(init_bert_params)
- return decoder
-
- def forward(
- self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs
- ):
- # encoding
- encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs)
-
- # length prediction
- length_out = self.decoder.forward_length(
- normalize=False, encoder_out=encoder_out
- )
- length_tgt = self.decoder.forward_length_prediction(
- length_out, encoder_out, tgt_tokens
- )
-
- # decoding
- word_ins_out = self.decoder(
- normalize=False,
- prev_output_tokens=prev_output_tokens,
- encoder_out=encoder_out,
- )
-
- return {
- "word_ins": {
- "out": word_ins_out,
- "tgt": tgt_tokens,
- "mask": tgt_tokens.ne(self.pad),
- "ls": self.args.label_smoothing,
- "nll_loss": True,
- },
- "length": {
- "out": length_out,
- "tgt": length_tgt,
- "factor": self.decoder.length_loss_factor,
- },
- }
-
- def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs):
- step = decoder_out.step
- output_tokens = decoder_out.output_tokens
- output_scores = decoder_out.output_scores
- history = decoder_out.history
-
- # execute the decoder
- output_masks = output_tokens.ne(self.pad)
- _scores, _tokens = self.decoder(
- normalize=True,
- prev_output_tokens=output_tokens,
- encoder_out=encoder_out,
- step=step,
- ).max(-1)
-
- output_tokens.masked_scatter_(output_masks, _tokens[output_masks])
- output_scores.masked_scatter_(output_masks, _scores[output_masks])
- if history is not None:
- history.append(output_tokens.clone())
-
- return decoder_out._replace(
- output_tokens=output_tokens,
- output_scores=output_scores,
- attn=None,
- history=history,
- )
-
- def initialize_output_tokens(self, encoder_out, src_tokens):
- # length prediction
- length_tgt = self.decoder.forward_length_prediction(
- self.decoder.forward_length(normalize=True, encoder_out=encoder_out),
- encoder_out=encoder_out,
- )
-
- max_length = length_tgt.clamp_(min=2).max()
- idx_length = utils.new_arange(src_tokens, max_length)
-
- initial_output_tokens = src_tokens.new_zeros(
- src_tokens.size(0), max_length
- ).fill_(self.pad)
- initial_output_tokens.masked_fill_(
- idx_length[None, :] < length_tgt[:, None], self.unk
- )
- initial_output_tokens[:, 0] = self.bos
- initial_output_tokens.scatter_(1, length_tgt[:, None] - 1, self.eos)
-
- initial_output_scores = initial_output_tokens.new_zeros(
- *initial_output_tokens.size()
- ).type_as(encoder_out["encoder_out"][0])
-
- return DecoderOut(
- output_tokens=initial_output_tokens,
- output_scores=initial_output_scores,
- attn=None,
- step=0,
- max_step=0,
- history=None,
- )
-
- def regenerate_length_beam(self, decoder_out, beam_size):
- output_tokens = decoder_out.output_tokens
- length_tgt = output_tokens.ne(self.pad).sum(1)
- length_tgt = (
- length_tgt[:, None]
- + utils.new_arange(length_tgt, 1, beam_size)
- - beam_size // 2
- )
- length_tgt = length_tgt.view(-1).clamp_(min=2)
- max_length = length_tgt.max()
- idx_length = utils.new_arange(length_tgt, max_length)
-
- initial_output_tokens = output_tokens.new_zeros(
- length_tgt.size(0), max_length
- ).fill_(self.pad)
- initial_output_tokens.masked_fill_(
- idx_length[None, :] < length_tgt[:, None], self.unk
- )
- initial_output_tokens[:, 0] = self.bos
- initial_output_tokens.scatter_(1, length_tgt[:, None] - 1, self.eos)
-
- initial_output_scores = initial_output_tokens.new_zeros(
- *initial_output_tokens.size()
- ).type_as(decoder_out.output_scores)
-
- return decoder_out._replace(
- output_tokens=initial_output_tokens, output_scores=initial_output_scores
- )
-
-
-class NATransformerDecoder(FairseqNATDecoder):
- def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False):
- super().__init__(
- args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn
- )
- self.dictionary = dictionary
- self.bos = dictionary.bos()
- self.unk = dictionary.unk()
- self.eos = dictionary.eos()
-
- self.encoder_embed_dim = args.encoder_embed_dim
- self.sg_length_pred = getattr(args, "sg_length_pred", False)
- self.pred_length_offset = getattr(args, "pred_length_offset", False)
- self.length_loss_factor = getattr(args, "length_loss_factor", 0.1)
- self.src_embedding_copy = getattr(args, "src_embedding_copy", False)
- self.embed_length = Embedding(256, self.encoder_embed_dim, None)
-
- @ensemble_decoder
- def forward(self, normalize, encoder_out, prev_output_tokens, step=0, **unused):
- features, _ = self.extract_features(
- prev_output_tokens,
- encoder_out=encoder_out,
- embedding_copy=(step == 0) & self.src_embedding_copy,
- )
- decoder_out = self.output_layer(features)
- return F.log_softmax(decoder_out, -1) if normalize else decoder_out
-
- @ensemble_decoder
- def forward_length(self, normalize, encoder_out):
- enc_feats = encoder_out["encoder_out"][0] # T x B x C
- if len(encoder_out["encoder_padding_mask"]) > 0:
- src_masks = encoder_out["encoder_padding_mask"][0] # B x T
- else:
- src_masks = None
- enc_feats = _mean_pooling(enc_feats, src_masks)
- if self.sg_length_pred:
- enc_feats = enc_feats.detach()
- length_out = F.linear(enc_feats, self.embed_length.weight)
- return F.log_softmax(length_out, -1) if normalize else length_out
-
- def extract_features(
- self,
- prev_output_tokens,
- encoder_out=None,
- early_exit=None,
- embedding_copy=False,
- **unused
- ):
- """
- Similar to *forward* but only return features.
-
- Inputs:
- prev_output_tokens: Tensor(B, T)
- encoder_out: a dictionary of hidden states and masks
-
- Returns:
- tuple:
- - the decoder's features of shape `(batch, tgt_len, embed_dim)`
- - a dictionary with any model-specific outputs
- the LevenshteinTransformer decoder has full-attention to all generated tokens
- """
- # embedding
- if embedding_copy:
- src_embd = encoder_out["encoder_embedding"][0]
- if len(encoder_out["encoder_padding_mask"]) > 0:
- src_mask = encoder_out["encoder_padding_mask"][0]
- else:
- src_mask = None
- src_mask = (
- ~src_mask
- if src_mask is not None
- else prev_output_tokens.new_ones(*src_embd.size()[:2]).bool()
- )
-
- x, decoder_padding_mask = self.forward_embedding(
- prev_output_tokens,
- self.forward_copying_source(
- src_embd, src_mask, prev_output_tokens.ne(self.padding_idx)
- ),
- )
-
- else:
-
- x, decoder_padding_mask = self.forward_embedding(prev_output_tokens)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
- attn = None
- inner_states = [x]
-
- # decoder layers
- for i, layer in enumerate(self.layers):
-
- # early exit from the decoder.
- if (early_exit is not None) and (i >= early_exit):
- break
-
- x, attn, _ = layer(
- x,
- encoder_out["encoder_out"][0]
- if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0)
- else None,
- encoder_out["encoder_padding_mask"][0]
- if (
- encoder_out is not None
- and len(encoder_out["encoder_padding_mask"]) > 0
- )
- else None,
- self_attn_mask=None,
- self_attn_padding_mask=decoder_padding_mask,
- )
- inner_states.append(x)
-
- if self.layer_norm:
- x = self.layer_norm(x)
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- if self.project_out_dim is not None:
- x = self.project_out_dim(x)
-
- return x, {"attn": attn, "inner_states": inner_states}
-
- def forward_embedding(self, prev_output_tokens, states=None):
- # embed positions
- positions = (
- self.embed_positions(prev_output_tokens)
- if self.embed_positions is not None
- else None
- )
-
- # embed tokens and positions
- if states is None:
- x = self.embed_scale * self.embed_tokens(prev_output_tokens)
- if self.project_in_dim is not None:
- x = self.project_in_dim(x)
- else:
- x = states
-
- if positions is not None:
- x += positions
- x = self.dropout_module(x)
- decoder_padding_mask = prev_output_tokens.eq(self.padding_idx)
- return x, decoder_padding_mask
-
- def forward_copying_source(self, src_embeds, src_masks, tgt_masks):
- length_sources = src_masks.sum(1)
- length_targets = tgt_masks.sum(1)
- mapped_inputs = _uniform_assignment(length_sources, length_targets).masked_fill(
- ~tgt_masks, 0
- )
- copied_embedding = torch.gather(
- src_embeds,
- 1,
- mapped_inputs.unsqueeze(-1).expand(
- *mapped_inputs.size(), src_embeds.size(-1)
- ),
- )
- return copied_embedding
-
- def forward_length_prediction(self, length_out, encoder_out, tgt_tokens=None):
- enc_feats = encoder_out["encoder_out"][0] # T x B x C
- if len(encoder_out["encoder_padding_mask"]) > 0:
- src_masks = encoder_out["encoder_padding_mask"][0] # B x T
- else:
- src_masks = None
- if self.pred_length_offset:
- if src_masks is None:
- src_lengs = enc_feats.new_ones(enc_feats.size(1)).fill_(
- enc_feats.size(0)
- )
- else:
- src_lengs = (~src_masks).transpose(0, 1).type_as(enc_feats).sum(0)
- src_lengs = src_lengs.long()
-
- if tgt_tokens is not None:
- # obtain the length target
- tgt_lengs = tgt_tokens.ne(self.padding_idx).sum(1).long()
- if self.pred_length_offset:
- length_tgt = tgt_lengs - src_lengs + 128
- else:
- length_tgt = tgt_lengs
- length_tgt = length_tgt.clamp(min=0, max=255)
-
- else:
- # predict the length target (greedy for now)
- # TODO: implementing length-beam
- pred_lengs = length_out.max(-1)[1]
- if self.pred_length_offset:
- length_tgt = pred_lengs - 128 + src_lengs
- else:
- length_tgt = pred_lengs
-
- return length_tgt
-
-
-@register_model_architecture(
- "nonautoregressive_transformer", "nonautoregressive_transformer"
-)
-def base_architecture(args):
- args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 6)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.activation_dropout = getattr(args, "activation_dropout", 0.0)
- args.activation_fn = getattr(args, "activation_fn", "relu")
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.share_all_embeddings = getattr(args, "share_all_embeddings", False)
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.apply_bert_init = getattr(args, "apply_bert_init", False)
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- # --- special arguments ---
- args.sg_length_pred = getattr(args, "sg_length_pred", False)
- args.pred_length_offset = getattr(args, "pred_length_offset", False)
- args.length_loss_factor = getattr(args, "length_loss_factor", 0.1)
- args.src_embedding_copy = getattr(args, "src_embedding_copy", False)
-
-
-@register_model_architecture(
- "nonautoregressive_transformer", "nonautoregressive_transformer_wmt_en_de"
-)
-def nonautoregressive_transformer_wmt_en_de(args):
- base_architecture(args)
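The `--src-embedding-copy` path above relies on `_uniform_assignment` to decide which source position each target slot copies its embedding from. Here is a standalone sketch of that mapping (PyTorch only, with toy lengths chosen for illustration); the real code builds the index with fairseq's `utils.new_arange`, but the arithmetic is the same.

```python
import torch

def uniform_assignment(src_lens: torch.Tensor, trg_lens: torch.Tensor) -> torch.Tensor:
    # Spread the target positions uniformly over [0, src_len - 1] and round
    # to the nearest source index.
    max_trg_len = int(trg_lens.max())
    steps = (src_lens.float() - 1) / (trg_lens.float() - 1)
    index_t = torch.arange(max_trg_len, dtype=torch.float)
    index_t = steps[:, None] * index_t[None, :]
    return torch.round(index_t).long()

print(uniform_assignment(torch.tensor([5]), torch.tensor([8])))
# tensor([[0, 1, 1, 2, 2, 3, 3, 4]]) -> 8 target slots copy from 5 source tokens
```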
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py
deleted file mode 100644
index a30254604311a488a1d4959f941051890ed32b2e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-from pathlib import Path
-from collections import defaultdict
-from typing import List, Dict, Tuple
-
-import pandas as pd
-import numpy as np
-import torchaudio
-from tqdm import tqdm
-
-from examples.speech_to_text.data_utils import load_df_from_tsv, save_df_to_tsv
-
-
-log = logging.getLogger(__name__)
-
-SPLITS = ["train", "dev", "test"]
-
-
-def get_top_n(
- root: Path, n_speakers: int = 10, min_n_tokens: int = 5
-) -> pd.DataFrame:
- df = load_df_from_tsv(root / "validated.tsv")
- df["n_tokens"] = [len(s.split()) for s in df["sentence"]]
- df = df[df["n_tokens"] >= min_n_tokens]
- df["n_frames"] = [
- torchaudio.info((root / "clips" / p).as_posix()).num_frames
- for p in tqdm(df["path"])
- ]
- df["id"] = [Path(p).stem for p in df["path"]]
- total_duration_ms = df.groupby("client_id")["n_frames"].agg(["sum"])
- total_duration_ms = total_duration_ms.sort_values("sum", ascending=False)
-
- top_n_total_duration_ms = total_duration_ms.head(n_speakers)
- top_n_client_ids = set(top_n_total_duration_ms.index.tolist())
- df_top_n = df[df["client_id"].isin(top_n_client_ids)]
- return df_top_n
-
-
-def get_splits(
- df, train_split_ratio=0.99, speaker_in_all_splits=False, rand_seed=0
-) -> Tuple[Dict[str, str], List[str]]:
- np.random.seed(rand_seed)
- dev_split_ratio = (1. - train_split_ratio) / 3
- grouped = list(df.groupby("client_id"))
- id_to_split = {}
- for _, cur_df in tqdm(grouped):
- cur_n_examples = len(cur_df)
- if speaker_in_all_splits and cur_n_examples < 3:
- continue
- cur_n_train = int(cur_n_examples * train_split_ratio)
- cur_n_dev = int(cur_n_examples * dev_split_ratio)
- cur_n_test = cur_n_examples - cur_n_dev - cur_n_train
- if speaker_in_all_splits and cur_n_dev * cur_n_test == 0:
- cur_n_dev, cur_n_test = 1, 1
- cur_n_train = cur_n_examples - cur_n_dev - cur_n_test
- cur_indices = cur_df.index.tolist()
- cur_shuffled_indices = np.random.permutation(cur_n_examples)
- cur_shuffled_indices = [cur_indices[i] for i in cur_shuffled_indices]
- cur_indices_by_split = {
- "train": cur_shuffled_indices[:cur_n_train],
- "dev": cur_shuffled_indices[cur_n_train: cur_n_train + cur_n_dev],
- "test": cur_shuffled_indices[cur_n_train + cur_n_dev:]
- }
- for split in SPLITS:
- for i in cur_indices_by_split[split]:
- id_ = df["id"].loc[i]
- id_to_split[id_] = split
- return id_to_split, sorted(df["client_id"].unique())
-
-
-def convert_to_wav(root: Path, filenames: List[str], target_sr=16_000):
- out_root = root / "wav"
- out_root.mkdir(exist_ok=True, parents=True)
- print("Converting to WAV...")
- for n in tqdm(filenames):
- in_path = (root / "clips" / n).as_posix()
- waveform, sr = torchaudio.load(in_path)
- converted, converted_sr = torchaudio.sox_effects.apply_effects_tensor(
- waveform, sr, [["rate", str(target_sr)], ["channels", "1"]]
- )
- out_path = (out_root / Path(n).with_suffix(".wav").name).as_posix()
- torchaudio.save(out_path, converted, converted_sr, encoding="PCM_S",
- bits_per_sample=16)
-
-
-def process(args):
- data_root = Path(args.data_root).absolute() / args.lang
-
- # Generate TSV manifest
- print("Generating manifest...")
-
- df_top_n = get_top_n(data_root)
- id_to_split, speakers = get_splits(df_top_n)
-
- if args.convert_to_wav:
- convert_to_wav(data_root, df_top_n["path"].tolist())
-
- manifest_by_split = {split: defaultdict(list) for split in SPLITS}
- for sample in tqdm(df_top_n.to_dict(orient="index").values()):
- sample_id = sample["id"]
- split = id_to_split[sample_id]
- manifest_by_split[split]["id"].append(sample_id)
- if args.convert_to_wav:
- audio_path = data_root / "wav" / f"{sample_id}.wav"
- else:
- audio_path = data_root / "clips" / f"{sample_id}.mp3"
- manifest_by_split[split]["audio"].append(audio_path.as_posix())
- manifest_by_split[split]["n_frames"].append(sample["n_frames"])
- manifest_by_split[split]["tgt_text"].append(sample["sentence"])
- manifest_by_split[split]["speaker"].append(sample["client_id"])
- manifest_by_split[split]["src_text"].append(sample["sentence"])
-
- output_root = Path(args.output_manifest_root).absolute()
- output_root.mkdir(parents=True, exist_ok=True)
- for split in SPLITS:
- save_df_to_tsv(
- pd.DataFrame.from_dict(manifest_by_split[split]),
- output_root / f"{split}.audio.tsv"
- )
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--data-root", "-d", required=True, type=str)
- parser.add_argument("--output-manifest-root", "-m", required=True, type=str)
- parser.add_argument("--lang", "-l", required=True, type=str)
- parser.add_argument("--convert-to-wav", action="store_true")
- args = parser.parse_args()
-
- process(args)
-
-
-if __name__ == "__main__":
- main()
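`get_top_n` above keeps only the `n_speakers` client_ids with the largest total number of audio frames. A toy illustration with pandas (the client_ids and frame counts are fabricated for the example):

```python
import pandas as pd

df = pd.DataFrame({
    "client_id": ["a", "a", "b", "c", "c", "c"],
    "n_frames":  [16000, 24000, 8000, 48000, 16000, 16000],
})
totals = df.groupby("client_id")["n_frames"].agg(["sum"]).sort_values("sum", ascending=False)
top_ids = set(totals.head(2).index)        # {'c', 'a'}: largest summed n_frames
print(df[df["client_id"].isin(top_ids)])   # keeps every row from speakers a and c
```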
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/commonsense_qa/download_cqa_data.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/commonsense_qa/download_cqa_data.sh
deleted file mode 100644
index 5f300093fa0a0feb819d8b6aed307b59e3891d01..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/commonsense_qa/download_cqa_data.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-OUTDIR=data/CommonsenseQA
-
-mkdir -p $OUTDIR
-
-wget -O $OUTDIR/train.jsonl https://s3.amazonaws.com/commensenseqa/train_rand_split.jsonl
-wget -O $OUTDIR/valid.jsonl https://s3.amazonaws.com/commensenseqa/dev_rand_split.jsonl
-wget -O $OUTDIR/test.jsonl https://s3.amazonaws.com/commensenseqa/test_rand_split_no_answers.jsonl
-wget -O $OUTDIR/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/sentence_prediction.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/sentence_prediction.py
deleted file mode 100644
index 482b97985a36aca07146772f52dde41df76bf643..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/sentence_prediction.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass, field
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-
-
-@dataclass
-class SentencePredictionConfig(FairseqDataclass):
- classification_head_name: str = field(
- default="sentence_classification_head",
- metadata={"help": "name of the classification head to use"},
- )
- regression_target: bool = field(
- default=False,
- )
-
-
-@register_criterion("sentence_prediction", dataclass=SentencePredictionConfig)
-class SentencePredictionCriterion(FairseqCriterion):
- def __init__(self, cfg: SentencePredictionConfig, task):
- super().__init__(task)
- self.classification_head_name = cfg.classification_head_name
- self.regression_target = cfg.regression_target
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- assert (
- hasattr(model, "classification_heads")
- and self.classification_head_name in model.classification_heads
- ), "model must provide sentence classification head for --criterion=sentence_prediction"
-
- logits, _ = model(
- **sample["net_input"],
- features_only=True,
- classification_head_name=self.classification_head_name,
- )
- targets = model.get_targets(sample, [logits]).view(-1)
- sample_size = targets.numel()
-
- if not self.regression_target:
- lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
- loss = F.nll_loss(lprobs, targets, reduction="sum")
- else:
- logits = logits.view(-1).float()
- targets = targets.float()
- loss = F.mse_loss(logits, targets, reduction="sum")
-
- logging_output = {
- "loss": loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample_size,
- "sample_size": sample_size,
- }
- if not self.regression_target:
- preds = logits.argmax(dim=1)
- logging_output["ncorrect"] = (preds == targets).sum()
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- nsentences = sum(log.get("nsentences", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
-
- if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]:
- ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs)
- metrics.log_scalar(
- "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- return True
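To make the two loss branches in `forward` concrete, here is a self-contained sketch using random tensors instead of a fairseq model and sample; the batch size and number of classes are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

# classification branch: NLL over log-softmax, summed over the batch
logits = torch.randn(4, 3)                       # 4 sentences, 3 classes
targets = torch.tensor([0, 2, 1, 2])
lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
cls_loss = F.nll_loss(lprobs, targets, reduction="sum")

# regression branch: squared error against float targets, summed
reg_logits = torch.randn(4)                      # one scalar per sentence
reg_targets = torch.tensor([0.2, 1.5, -0.3, 0.9])
reg_loss = F.mse_loss(reg_logits, reg_targets, reduction="sum")

print(cls_loss.item(), reg_loss.item())
```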
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lightconv_layer/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lightconv_layer/__init__.py
deleted file mode 100644
index 3b2a99c1227f827768911e5e22e79f6865ffbfd3..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/lightconv_layer/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .lightconv_layer import LightconvLayer # noqa
diff --git a/spaces/OIUGLK/bingo/src/components/ui/button.tsx b/spaces/OIUGLK/bingo/src/components/ui/button.tsx
deleted file mode 100644
index 281da005124fa94c89a9a9db7605748a92b60865..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/src/components/ui/button.tsx
+++ /dev/null
@@ -1,57 +0,0 @@
-import * as React from 'react'
-import { Slot } from '@radix-ui/react-slot'
-import { cva, type VariantProps } from 'class-variance-authority'
-
-import { cn } from '@/lib/utils'
-
-const buttonVariants = cva(
- 'inline-flex items-center justify-center rounded-md text-sm font-medium shadow ring-offset-background transition-colors outline-none disabled:pointer-events-none disabled:opacity-50',
- {
- variants: {
- variant: {
- default:
- 'bg-primary text-primary-foreground shadow-md hover:bg-primary/90',
- destructive:
- 'bg-destructive text-destructive-foreground hover:bg-destructive/90',
- outline:
- 'border border-input hover:bg-accent hover:text-accent-foreground',
- secondary:
- 'bg-secondary text-secondary-foreground hover:bg-secondary/80',
- ghost: 'shadow-none hover:bg-accent hover:text-accent-foreground',
- link: 'text-primary underline-offset-4 shadow-none hover:underline'
- },
- size: {
- default: 'h-8 px-4 py-2',
- sm: 'h-8 rounded-md px-3',
- lg: 'h-11 rounded-md px-8',
- icon: 'h-8 w-8 p-0'
- }
- },
- defaultVariants: {
- variant: 'default',
- size: 'default'
- }
- }
-)
-
-export interface ButtonProps
- extends React.ButtonHTMLAttributes<HTMLButtonElement>,
- VariantProps<typeof buttonVariants> {
- asChild?: boolean
-}
-
-const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
- ({ className, variant, size, asChild = false, ...props }, ref) => {
- const Comp = asChild ? Slot : 'button'
- return (
- <Comp className={cn(buttonVariants({ variant, size, className }))} ref={ref} {...props} />
- )
- }
-)
-Button.displayName = 'Button'
-
-export { Button, buttonVariants }
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/semantic_seg.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/semantic_seg.py
deleted file mode 100644
index 6dd3dc23f5a333e1170ab317875551f852a0b53f..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/semantic_seg.py
+++ /dev/null
@@ -1,260 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-from typing import Callable, Dict, Optional, Tuple, Union
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-from detectron2.structures import ImageList
-from detectron2.utils.registry import Registry
-
-from ..backbone import Backbone, build_backbone
-from ..postprocessing import sem_seg_postprocess
-from .build import META_ARCH_REGISTRY
-
-__all__ = [
- "SemanticSegmentor",
- "SEM_SEG_HEADS_REGISTRY",
- "SemSegFPNHead",
- "build_sem_seg_head",
-]
-
-
-SEM_SEG_HEADS_REGISTRY = Registry("SEM_SEG_HEADS")
-SEM_SEG_HEADS_REGISTRY.__doc__ = """
-Registry for semantic segmentation heads, which make semantic segmentation predictions
-from feature maps.
-"""
-
-
-@META_ARCH_REGISTRY.register()
-class SemanticSegmentor(nn.Module):
- """
- Main class for semantic segmentation architectures.
- """
-
- @configurable
- def __init__(
- self,
- *,
- backbone: Backbone,
- sem_seg_head: nn.Module,
- pixel_mean: Tuple[float],
- pixel_std: Tuple[float],
- ):
- """
- Args:
- backbone: a backbone module, must follow detectron2's backbone interface
- sem_seg_head: a module that predicts semantic segmentation from backbone features
- pixel_mean, pixel_std: list or tuple with #channels elements, representing
- the per-channel mean and std to be used to normalize the input image
- """
- super().__init__()
- self.backbone = backbone
- self.sem_seg_head = sem_seg_head
- self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
- self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
-
- @classmethod
- def from_config(cls, cfg):
- backbone = build_backbone(cfg)
- sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape())
- return {
- "backbone": backbone,
- "sem_seg_head": sem_seg_head,
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
- "pixel_std": cfg.MODEL.PIXEL_STD,
- }
-
- @property
- def device(self):
- return self.pixel_mean.device
-
- def forward(self, batched_inputs):
- """
- Args:
- batched_inputs: a list, batched outputs of :class:`DatasetMapper`.
- Each item in the list contains the inputs for one image.
-
- For now, each item in the list is a dict that contains:
-
- * "image": Tensor, image in (C, H, W) format.
- * "sem_seg": semantic segmentation ground truth
- * Other information that's included in the original dicts, such as:
- "height", "width" (int): the output resolution of the model (may be different
- from input resolution), used in inference.
-
-
- Returns:
- list[dict]:
- Each dict is the output for one input image.
- The dict contains one key "sem_seg" whose value is a
- Tensor that represents the
- per-pixel segmentation predicted by the head.
- The prediction has shape KxHxW that represents the logits of
- each class for each pixel.
- """
- images = [x["image"].to(self.device) for x in batched_inputs]
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
- images = ImageList.from_tensors(images, self.backbone.size_divisibility)
-
- features = self.backbone(images.tensor)
-
- if "sem_seg" in batched_inputs[0]:
- targets = [x["sem_seg"].to(self.device) for x in batched_inputs]
- targets = ImageList.from_tensors(
- targets, self.backbone.size_divisibility, self.sem_seg_head.ignore_value
- ).tensor
- else:
- targets = None
- results, losses = self.sem_seg_head(features, targets)
-
- if self.training:
- return losses
-
- processed_results = []
- for result, input_per_image, image_size in zip(results, batched_inputs, images.image_sizes):
- height = input_per_image.get("height", image_size[0])
- width = input_per_image.get("width", image_size[1])
- r = sem_seg_postprocess(result, image_size, height, width)
- processed_results.append({"sem_seg": r})
- return processed_results
-
-
-def build_sem_seg_head(cfg, input_shape):
- """
- Build a semantic segmentation head from `cfg.MODEL.SEM_SEG_HEAD.NAME`.
- """
- name = cfg.MODEL.SEM_SEG_HEAD.NAME
- return SEM_SEG_HEADS_REGISTRY.get(name)(cfg, input_shape)
-
-
-@SEM_SEG_HEADS_REGISTRY.register()
-class SemSegFPNHead(nn.Module):
- """
- A semantic segmentation head described in :paper:`PanopticFPN`.
- It takes a list of FPN features as input, and applies a sequence of
- 3x3 convs and upsampling to scale all of them to the stride defined by
- ``common_stride``. Then these features are added and used to make final
- predictions by another 1x1 conv layer.
- """
-
- @configurable
- def __init__(
- self,
- input_shape: Dict[str, ShapeSpec],
- *,
- num_classes: int,
- conv_dims: int,
- common_stride: int,
- loss_weight: float = 1.0,
- norm: Optional[Union[str, Callable]] = None,
- ignore_value: int = -1,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- input_shape: shapes (channels and stride) of the input features
- num_classes: number of classes to predict
- conv_dims: number of output channels for the intermediate conv layers.
- common_stride: the common stride that all features will be upscaled to
- loss_weight: loss weight
- norm (str or callable): normalization for all conv layers
- ignore_value: category id to be ignored during training.
- """
- super().__init__()
- input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride)
- if not len(input_shape):
- raise ValueError("SemSegFPNHead(input_shape=) cannot be empty!")
- self.in_features = [k for k, v in input_shape]
- feature_strides = [v.stride for k, v in input_shape]
- feature_channels = [v.channels for k, v in input_shape]
-
- self.ignore_value = ignore_value
- self.common_stride = common_stride
- self.loss_weight = loss_weight
-
- self.scale_heads = []
- for in_feature, stride, channels in zip(
- self.in_features, feature_strides, feature_channels
- ):
- head_ops = []
- head_length = max(1, int(np.log2(stride) - np.log2(self.common_stride)))
- for k in range(head_length):
- norm_module = get_norm(norm, conv_dims)
- conv = Conv2d(
- channels if k == 0 else conv_dims,
- conv_dims,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=not norm,
- norm=norm_module,
- activation=F.relu,
- )
- weight_init.c2_msra_fill(conv)
- head_ops.append(conv)
- if stride != self.common_stride:
- head_ops.append(
- nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
- )
- self.scale_heads.append(nn.Sequential(*head_ops))
- self.add_module(in_feature, self.scale_heads[-1])
- self.predictor = Conv2d(conv_dims, num_classes, kernel_size=1, stride=1, padding=0)
- weight_init.c2_msra_fill(self.predictor)
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- return {
- "input_shape": {
- k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES
- },
- "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
- "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
- "conv_dims": cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM,
- "common_stride": cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE,
- "norm": cfg.MODEL.SEM_SEG_HEAD.NORM,
- "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT,
- }
-
- def forward(self, features, targets=None):
- """
- Returns:
- In training, returns (None, dict of losses)
- In inference, returns (CxHxW logits, {})
- """
- x = self.layers(features)
- if self.training:
- return None, self.losses(x, targets)
- else:
- x = F.interpolate(
- x, scale_factor=self.common_stride, mode="bilinear", align_corners=False
- )
- return x, {}
-
- def layers(self, features):
- for i, f in enumerate(self.in_features):
- if i == 0:
- x = self.scale_heads[i](features[f])
- else:
- x = x + self.scale_heads[i](features[f])
- x = self.predictor(x)
- return x
-
- def losses(self, predictions, targets):
- predictions = predictions.float() # https://github.com/pytorch/pytorch/issues/48163
- predictions = F.interpolate(
- predictions,
- scale_factor=self.common_stride,
- mode="bilinear",
- align_corners=False,
- )
- loss = F.cross_entropy(
- predictions, targets, reduction="mean", ignore_index=self.ignore_value
- )
- losses = {"loss_sem_seg": loss * self.loss_weight}
- return losses
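The depth of each scale head built in `SemSegFPNHead.__init__` follows directly from the stride ratio. A quick worked check (NumPy only; the p2-p5 strides are the usual FPN values and are assumed here):

```python
import numpy as np

common_stride = 4
for in_feature, stride in [("p2", 4), ("p3", 8), ("p4", 16), ("p5", 32)]:
    # one 3x3 conv block per factor-of-2 gap, but never fewer than one
    head_length = max(1, int(np.log2(stride) - np.log2(common_stride)))
    print(in_feature, head_length)   # p2 1, p3 1, p4 2, p5 3
```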
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_tuner.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_tuner.py
deleted file mode 100644
index d73340d8f8fc9e79dce81533ab34b4647f3dc0a7..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_tuner.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-""" BaseTuner: a subclass of BasePipeline.
-"""
-
-from lmflow.pipeline.base_pipeline import BasePipeline
-
-
-class BaseTuner(BasePipeline):
- """ A subclass of BasePipeline which is tunable.
- """
- def __init__(self, *args, **kwargs):
- pass
-
- def _check_if_tunable(self, model, dataset):
- # TODO: check if the model is tunable and dataset is compatible
- pass
-
- def tune(self, model, dataset):
- raise NotImplementedError(".tune is not implemented")
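BaseTuner only fixes the interface; a concrete pipeline is expected to override `tune`. A hypothetical subclass sketch follows (the placeholder body of `tune` is illustrative and not part of the lmflow API):

```python
from lmflow.pipeline.base_tuner import BaseTuner


class ToyTuner(BaseTuner):
    """Minimal concrete tuner: checks its inputs, then runs a (placeholder) loop."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def tune(self, model, dataset):
        self._check_if_tunable(model, dataset)
        # ... a real implementation would run its training loop here ...
        return model
```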
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/analyze.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/analyze.go
deleted file mode 100644
index 32077350c903fc3e49137d70b9d51929b45b0938..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/analyze.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/book_docbook.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/book_docbook.py
deleted file mode 100644
index 1fe66dedc3bc83d13584758014829d4312f7b957..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/book_docbook.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# -*- coding: utf-8 -*-
-# book_docbook.py
-#
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2020--2022 Han-Wen Nienhuys ,
-# 2010 Reinhold Kainhofer
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see <https://www.gnu.org/licenses/>.
-
-
-import os
-
-import book_base
-import book_snippets
-
-
-# Recognize special sequences in the input.
-#
-# (?P<name>regex) -- Assign result of REGEX to NAME.
-# *? -- Match non-greedily.
-# (?!...) -- Match if `...' doesn't match next (without consuming
-# the string).
-#
-# (?m) -- Multiline regex: Make ^ and $ match at each line.
-# (?s) -- Make the dot match all characters including newline.
-# (?x) -- Ignore whitespace in patterns.
-# See book_base.BookOutputFormat for possible keys
-Docbook_snippet_res = {
- 'lilypond':
- r'''(?smx)
- <(?P(inline)?)mediaobject>\s*
- (?P
- \s*
- .*?)["'])?>
- (?P.*?)
-
\s*
- \s*)
- (inline)?mediaobject>''',
-
- 'lilypond_block':
- r'''(?smx)
- <(?P(inline)?)mediaobject>\s*
- (?P
- \s*
- .*?)["'])?>
- (?P.*?)
-
\s*
- \s*)
- (inline)?mediaobject>''',
-
- 'lilypond_file':
- r'''(?smx)
- <(?P(inline)?)mediaobject>\s*
- (?P
- \s*
- .*?\.ly)["']\s*
- (role=["'](?P.*?)["'])?\s*
- (/>|>\s* )\s*
- \s*)
- (inline)?mediaobject>''',
-
- 'multiline_comment':
- r'''(?smx)
- (?P
- \s*(?!@c\s+)
- (?P)
- \s)''',
-}
-
-
-Docbook_output = {
- book_snippets.FILTER: r'''
-
-
-%(code)s
-
-
- ''',
-
- # TODO: this looks wrong: in PDF, we should use per-system output
- book_snippets.OUTPUT: r'''
-
-
-
-
- ''',
-
- book_snippets.PRINTFILENAME: r'''
-
-
-
- %(filename)s
-
-
-
- ''',
-
- book_snippets.VERBATIM: r'''
- %(verb)s
- ''',
-
- book_snippets.VERSION: r'''%(program_version)s''',
-}
-
-
-class BookDocbookOutputFormat (book_base.BookOutputFormat):
- def __init__(self):
- book_base.BookOutputFormat.__init__(self)
- self.format = "docbook"
- self.default_extension = ".xml"
- self.snippet_res = Docbook_snippet_res
- self.output = Docbook_output
- self.handled_extensions = ['.lyxml']
- self.snippet_option_separator = r'\s+'
-
- def adjust_snippet_command(self, cmd):
- if '-dseparate-page-formats' not in cmd:
- cmd += ' -dseparate-page-formats=pdf '
- if '-dtall-page-formats' not in cmd:
- cmd += ' -dtall-page-formats=eps,pdf,png '
- return cmd
-
- def snippet_output(self, basename, snippet):
- s = ''
- rep = snippet.get_replacements()
- for image in snippet.get_images():
- rep['image'] = image
- (rep['base'], rep['ext']) = os.path.splitext(image)
- s += self.output[book_snippets.OUTPUT] % rep
- s += self.output_print_filename(basename, snippet)
- s += '\n'
- if book_snippets.VERBATIM in snippet.option_dict:
- rep['verb'] = book_base.verbatim_html(snippet.verb_ly())
- s += self.output[book_snippets.VERBATIM] % rep
- s += '\n'
-
- return s
-
-
-book_base.register_format(BookDocbookOutputFormat())
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/musicxml.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/musicxml.py
deleted file mode 100644
index 96f1042037d1f4047d65b1a4a0a58c476940b070..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/musicxml.py
+++ /dev/null
@@ -1,1902 +0,0 @@
-# musicxml.py
-# -*- coding: utf-8 -*-
-#
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2005--2022 Han-Wen Nienhuys ,
-# 2007--2011 Reinhold Kainhofer
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see <https://www.gnu.org/licenses/>.
-
-
-from collections import OrderedDict
-import copy
-from fractions import Fraction
-import re
-import sys
-import warnings
-
-import lilylib as ly
-import musicexp
-import musicxml2ly_conversion
-import utilities
-
-
-class Xml_node(object):
-
- def __init__(self):
- self._children = []
- self._data = None
- self._original = None
- self._name = 'xml_node'
- self._parent = None
- self._attribute_dict = {}
-
- def get_parent(self):
- return self._parent
-
- def is_first(self):
- return self._parent.get_typed_children(self.__class__)[0] == self
-
- def original(self):
- return self._original
-
- def get_name(self):
- return self._name
-
- def get_text(self):
- if self._data:
- return self._data
-
- if not self._children:
- return ''
-
- return ''.join([c.get_text() for c in self._children])
-
- def message(self, msg):
- ly.warning(msg)
-
- p = self
- while p:
- ly.progress(' In: <%s %s>\n' % (p._name, ' '.join(
- ['%s=%s' % item for item in list(p._attribute_dict.items())])))
- p = p.get_parent()
-
- def dump(self, indent=''):
- ly.debug_output('%s<%s%s>' % (indent, self._name, ''.join(
- [' %s=%s' % item for item in list(self._attribute_dict.items())])))
- non_text_children = [
- c for c in self._children if not isinstance(c, Hash_text)]
- if non_text_children:
- ly.debug_output('\n')
- for c in self._children:
- c.dump(indent + " ")
- if non_text_children:
- ly.debug_output(indent)
- ly.debug_output('</%s>\n' % self._name)
-
- def get_typed_children(self, klass):
- if not klass:
- return []
- else:
- return [c for c in self._children if isinstance(c, klass)]
-
- def get_named_children(self, nm):
- return self.get_typed_children(get_class(nm))
-
- def get_named_child(self, nm):
- return self.get_maybe_exist_named_child(nm)
-
- def get_children(self, predicate):
- return [c for c in self._children if predicate(c)]
-
- def get_all_children(self):
- return self._children
-
- def get_maybe_exist_named_child(self, name):
- return self.get_maybe_exist_typed_child(get_class(name))
-
- def get_maybe_exist_typed_child(self, klass):
- cn = self.get_typed_children(klass)
- if len(cn) == 0:
- return None
- else:
- if len(cn) > 1:
- warnings.warn(_('more than one child of class %s, all but'
- ' the first will be ignored') % klass.__name__)
- return cn[0]
-
- def get_unique_typed_child(self, klass):
- cn = self.get_typed_children(klass)
- if len(cn) != 1:
- ly.error(self.__dict__)
- raise RuntimeError(
- 'Child is not unique for %s, found %d' % (klass, len(cn)))
-
- return cn[0]
-
- def get_named_child_value_number(self, name, default):
- n = self.get_maybe_exist_named_child(name)
- if n:
- return int(n.get_text())
- else:
- return default
-
-
-class Music_xml_node(Xml_node):
- def __init__(self):
- Xml_node.__init__(self)
- self.duration = Fraction(0)
- self.start = Fraction(0)
- self.converted = False
- self.voice_id = None
-
-
-class Music_xml_spanner(Music_xml_node):
-
- def get_type(self):
- if hasattr(self, 'type'):
- return self.type
- else:
- return 0
-
- def get_size(self):
- if hasattr(self, 'size'):
- return int(self.size)
- else:
- return 0
-
-
-class Measure_element(Music_xml_node):
-
- def get_voice_id(self):
- voice = self.get_maybe_exist_named_child('voice')
- if voice:
- return voice.get_text()
- else:
- return self.voice_id
-
- def is_first(self):
- # Look at all measure elements (previously we had self.__class__, which
- # only looked at objects of the same type!)
- cn = self._parent.get_typed_children(Measure_element)
- # But only look at the correct voice, and include Attributes, too,
- # which are not tied to any particular voice.
- cn = [c for c in cn if(
- c.get_voice_id() == self.get_voice_id()) or isinstance(c, Attributes)]
- return cn[0] == self
-
-
-class Work(Xml_node):
-
- def get_work_information(self, tag):
- wt = self.get_maybe_exist_named_child(tag)
- if wt:
- return wt.get_text()
- else:
- return ''
-
- def get_work_title(self):
- return self.get_work_information('work-title')
-
- def get_work_number(self):
- return self.get_work_information('work-number')
-
- # def get_opus(self):
- # return self.get_work_information('opus')
-
-
-class Identification(Xml_node):
-
- def get_rights(self):
- rights = self.get_named_children('rights')
- ret = []
- for r in rights:
- text = r.get_text()
- # if this Xml_node has an attribute, such as 'type="words"',
- # include it in the header. Otherwise, it is assumed that
- # the text contents of this node looks something like this:
- # 'Copyright: X.Y.' and thus already contains the relevant
- # information.
- if hasattr(r, 'type'):
- rights_type = r.type.title() # capitalize first letter
- result = ''.join([rights_type, ': ', text])
- ret.append(result)
- else:
- ret.append(text)
- return "\n".join(ret)
-
- # Get the contents of the source element (usually used for publishing information).
- # These contents are saved in a custom variable named "source" in the header of the .ly file.
- def get_source(self):
- source = self.get_named_children('source')
- ret = []
- for r in source:
- ret.append(r.get_text())
- return "\n".join(ret)
-
- def get_creator(self, type):
- creators = self.get_named_children('creator')
- # return the first creator tag that has the particular type
- for i in creators:
- if hasattr(i, 'type') and i.type == type:
- return i.get_text()
- return None
-
- def get_composer(self):
- c = self.get_creator('composer')
- if c:
- return c
- creators = self.get_named_children('creator')
- # return the first creator tag that has no type at all
- for i in creators:
- if not hasattr(i, 'type'):
- return i.get_text()
- return None
-
- def get_arranger(self):
- return self.get_creator('arranger')
-
- def get_editor(self):
- return self.get_creator('editor')
-
- def get_poet(self):
- v = self.get_creator('lyricist')
- if v:
- return v
- v = self.get_creator('poet')
- return v
-
- def get_encoding_information(self, type):
- enc = self.get_named_children('encoding')
- if enc:
- children = enc[0].get_named_children(type)
- if children:
- return children[0].get_text()
- else:
- return None
-
- def get_encoding_software(self):
- return self.get_encoding_information('software')
-
- def get_encoding_date(self):
- return self.get_encoding_information('encoding-date')
-
- def get_encoding_person(self):
- return self.get_encoding_information('encoder')
-
- def get_encoding_description(self):
- return self.get_encoding_information('encoding-description')
-
- def get_encoding_software_list(self):
- enc = self.get_named_children('encoding')
- software = []
- for e in enc:
- softwares = e.get_named_children('software')
- for s in softwares:
- software.append(s.get_text())
- return software
-
- def get_file_description(self):
- misc = self.get_named_children('miscellaneous')
- for m in misc:
- misc_fields = m.get_named_children('miscellaneous-field')
- for mf in misc_fields:
- if hasattr(mf, 'name') and mf.name == 'description':
- return mf.get_text()
- return None
-
-
-class Credit(Xml_node):
-
- def get_type(self):
- type = self.get_maybe_exist_named_child('credit-type')
- if type is not None:
- return type.get_text()
- else:
- return None
-
- def find_type(self, credits):
- sizes = self.get_font_sizes(credits)
- sizes.sort(reverse=True)
- ys = self.get_default_ys(credits)
- ys.sort(reverse=True)
- xs = self.get_default_xs(credits)
- xs.sort(reverse=True)
-
- # Words child of the self credit-element
- words = self.get_maybe_exist_named_child('credit-words')
- size = None
- x = None
- y = None
- halign = None
- valign = None
- justify = None
- if words is not None:
- if hasattr(words, 'font-size'):
- size = int(float((getattr(words, 'font-size'))))
- if hasattr(words, 'default-x'):
- x = round(float(getattr(words, 'default-x')))
- if hasattr(words, 'default-y'):
- y = round(float(getattr(words, 'default-y')))
- if hasattr(words, 'halign'):
- halign = getattr(words, 'halign')
- if hasattr(words, 'valign'):
- valign = getattr(words, 'valign')
- if hasattr(words, 'justify'):
- justify = getattr(words, 'justify')
- if (size and size == max(sizes) and y and y == max(ys) and
- (justify or halign) and (justify == 'center' or halign == 'center')):
- return 'title'
- elif (y and y > min(ys) and y < max(ys) and (justify or halign) and
- (justify == 'center' or halign == 'center')):
- return 'subtitle'
- elif ((justify or halign) and (justify == 'left' or halign == 'left') and
- (not x or x == min(xs))):
- return 'lyricist'
- elif ((justify or halign) and (justify == 'right' or halign == 'right')
- and (not x or x == max(xs))):
- return 'composer'
- elif size and size == min(sizes) and y == min(ys):
- return 'rights'
- # Special cases for Finale NotePad
- elif valign and valign == 'top' and y and y == ys[1]:
- return 'subtitle'
- elif valign and valign == 'top' and x and x == min(xs):
- return 'lyricist'
- elif valign and valign == 'top' and y and y == min(ys):
- return 'rights'
- # Other special cases
- elif valign and valign == 'bottom':
- return 'rights'
- elif len([i for i, item in enumerate(ys) if item == y]) == 2:
- # The first one is the composer, the second one is the lyricist
- return 'composer'
-
- return None # no type recognized
-
- def get_font_sizes(self, credits):
- sizes = []
- for cred in credits:
- words = cred.get_maybe_exist_named_child('credit-words')
- if((words is not None) and hasattr(words, 'font-size')):
- sizes.append(getattr(words, 'font-size'))
- return [int(float(size)) for size in sizes]
-
- def get_default_xs(self, credits):
- default_xs = []
- for cred in credits:
- words = cred.get_maybe_exist_named_child('credit-words')
- if((words is not None) and hasattr(words, 'default-x')):
- default_xs.append(getattr(words, 'default-x'))
- return list(map(round, list(map(float, default_xs))))
-
- def get_default_ys(self, credits):
- default_ys = []
- for cred in credits:
- words = cred.get_maybe_exist_named_child('credit-words')
- if words is not None and hasattr(words, 'default-y'):
- default_ys.append(getattr(words, 'default-y'))
- return list(map(round, list(map(float, default_ys))))
-
- def get_text(self):
- words = self.get_maybe_exist_named_child('credit-words')
- if words is not None:
- return words.get_text()
- else:
- return ''
-
-
-class Duration(Music_xml_node):
-
- def get_length(self):
- dur = int(self.get_text()) * Fraction(1, 4)
- return dur
-
-
-class Hash_text(Music_xml_node):
-
- def dump(self, indent=''):
- ly.debug_output(self._data.strip())
-
-
-class Pitch(Music_xml_node):
-
- def get_step(self):
- ch = self.get_unique_typed_child(get_class('step'))
- step = ch.get_text().strip()
- return step
-
- def get_octave(self):
- ch = self.get_unique_typed_child(get_class('octave'))
- octave = ch.get_text().strip()
- return int(octave)
-
- def get_alteration(self):
- ch = self.get_maybe_exist_typed_child(get_class('alter'))
- return utilities.interpret_alter_element(ch)
-
- def to_lily_object(self):
- p = musicexp.Pitch()
- p.alteration = self.get_alteration()
- p.step = musicxml2ly_conversion.musicxml_step_to_lily(self.get_step())
- p.octave = self.get_octave() - 4
- return p
-
-
-class Unpitched(Music_xml_node):
-
- def get_step(self):
- ch = self.get_unique_typed_child(get_class('display-step'))
- step = ch.get_text().strip()
- return step
-
- def get_octave(self):
- ch = self.get_unique_typed_child(get_class('display-octave'))
-
- if ch:
- octave = ch.get_text().strip()
- return int(octave)
- else:
- return None
-
- def to_lily_object(self):
- p = None
- step = self.get_step()
- if step:
- p = musicexp.Pitch()
- p.step = musicxml2ly_conversion.musicxml_step_to_lily(step)
- octave = self.get_octave()
- if octave and p:
- p.octave = octave - 4
- return p
-
-
-class Measure_element (Music_xml_node):
- def get_voice_id(self):
- voice = self.get_maybe_exist_named_child('voice')
- if voice:
- return voice.get_text()
- else:
- return self.voice_id
-
-
-class Attributes(Measure_element):
-
- def __init__(self):
- Measure_element.__init__(self)
- self._dict = {}
- self._original_tag = None
- self._time_signature_cache = None
-
- def is_first(self):
- cn = self._parent.get_typed_children(self.__class__)
- if self._original_tag:
- return cn[0] == self._original_tag
- else:
- return cn[0] == self
-
- def set_attributes_from_previous(self, dict):
- self._dict.update(dict)
-
- def read_self(self):
- for c in self.get_all_children():
- self._dict[c.get_name()] = c
-
- def get_named_attribute(self, name):
- return self._dict.get(name)
-
- def single_time_sig_to_fraction(self, sig):
- if len(sig) < 2:
- return 0
- n = 0
- for i in sig[0:-1]:
- n += i
- return Fraction(n, sig[-1])
-
- def get_measure_length(self):
- sig = self.get_time_signature()
- if not sig or len(sig) == 0:
- return 1
- if isinstance(sig[0], list):
- # Complex compound time signature
- l = 0
- for i in sig:
- l += self.single_time_sig_to_fraction(i)
- return l
- else:
- # Simple(maybe compound) time signature of the form(beat, ..., type)
- return self.single_time_sig_to_fraction(sig)
- return 0
-
- def get_time_signature(self):
- "Return time sig as a(beat, beat-type) tuple. For compound signatures,"
- "return either(beat, beat,..., beat-type) or((beat,..., type), "
- "(beat,..., type), ...)."
- if self._time_signature_cache:
- return self._time_signature_cache
-
- try:
- mxl = self.get_named_attribute('time')
- if not mxl:
- return None
-
- if mxl.get_maybe_exist_named_child('senza-misura'):
- # TODO: Handle pieces without a time signature!
- ly.warning(
- _("Senza-misura time signatures are not yet supported!"))
- return(4, 4)
- else:
- signature = []
- current_sig = []
- for i in mxl.get_all_children():
- if isinstance(i, Beats):
- beats = i.get_text().strip().split("+")
- current_sig = [int(j) for j in beats]
- elif isinstance(i, BeatType):
- current_sig.append(int(i.get_text()))
- signature.append(current_sig)
- current_sig = []
- if isinstance(signature[0], list) and len(signature) == 1:
- signature = signature[0]
- self._time_signature_cache = signature
- return signature
- except(KeyError, ValueError):
- self.message(
- _("Unable to interpret time signature! Falling back to 4/4."))
- return(4, 4)
-
- # returns clef information in the form("cleftype", position, octave-shift)
- def get_clef_information(self):
- clefinfo = ['G', 2, 0]
- mxl = self.get_named_attribute('clef')
- if not mxl:
- return clefinfo
- sign = mxl.get_maybe_exist_named_child('sign')
- if sign:
- clefinfo[0] = sign.get_text()
- line = mxl.get_maybe_exist_named_child('line')
- if line:
- clefinfo[1] = int(line.get_text())
- octave = mxl.get_maybe_exist_named_child('clef-octave-change')
- if octave:
- clefinfo[2] = int(octave.get_text())
- return clefinfo
-
- def get_key_signature(self):
- "return(fifths, mode) tuple if the key signatures is given as "
- "major/minor in the Circle of fifths. Otherwise return an alterations"
- "list of the form [[step,alter<,octave>], [step,alter<,octave>], ...], "
- "where the octave values are optional."
-
- key = self.get_named_attribute('key')
- if not key:
- return None
- fifths_elm = key.get_maybe_exist_named_child('fifths')
- if fifths_elm:
- mode_node = key.get_maybe_exist_named_child('mode')
- mode = None
- if mode_node:
- mode = mode_node.get_text()
- if not mode or mode == '':
- mode = 'major'
- fifths = int(fifths_elm.get_text())
- # TODO: Shall we try to convert the key-octave and the cancel, too?
- return(fifths, mode)
- else:
- alterations = []
- current_step = 0
- for i in key.get_all_children():
- if isinstance(i, KeyStep):
- current_step = i.get_text().strip()
- elif isinstance(i, KeyAlter):
- alterations.append(
- [current_step, utilities.interpret_alter_element(i)])
- elif isinstance(i, KeyOctave):
- nr = -1
- if hasattr(i, 'number'):
- nr = int(i.number)
- if(nr > 0) and (nr <= len(alterations)):
- # MusicXML Octave 4 is middle C -> shift to 0
- alterations[nr - 1].append(int(i.get_text()) - 4)
- else:
- i.message(_("Key alteration octave given for a "
- "non-existing alteration nr. %s, available numbers: %s!") % (nr, len(alterations)))
- return alterations
-
- def get_transposition(self):
- return self.get_named_attribute('transpose')
-
-
-class Barline(Measure_element):
-
- def to_lily_object(self):
- # retval contains all possible markers in the order:
- # 0..bw_ending, 1..bw_repeat, 2..barline, 3..fw_repeat, 4..fw_ending
- retval = {}
- bartype_element = self.get_maybe_exist_named_child("bar-style")
- repeat_element = self.get_maybe_exist_named_child("repeat")
- ending_element = self.get_maybe_exist_named_child("ending")
-
- bartype = None
- if bartype_element:
- bartype = bartype_element.get_text()
-
- if repeat_element and hasattr(repeat_element, 'direction'):
- repeat = musicxml2ly_conversion.RepeatMarker()
- repeat.direction = {"forward": -1, "backward": 1}.get(
- repeat_element.direction, 0)
-
- if((repeat_element.direction == "forward" and bartype == "heavy-light") or
- (repeat_element.direction == "backward" and bartype == "light-heavy")):
- bartype = None
- if hasattr(repeat_element, 'times'):
- try:
- repeat.times = int(repeat_element.times)
- except ValueError:
- repeat.times = 2
- repeat.event = self
- if repeat.direction == -1:
- retval[3] = repeat
- else:
- retval[1] = repeat
-
- if ending_element and hasattr(ending_element, 'type'):
- ending = musicxml2ly_conversion.EndingMarker()
- ending.direction = {"start": -1, "stop": 1, "discontinue": 1}.get(
- ending_element.type, 0)
- ending.event = self
- if ending.direction == -1:
- retval[4] = ending
- else:
- retval[0] = ending
- # TODO. ending number=""
-
- if bartype:
- b = musicexp.BarLine()
- b.type = bartype
- retval[2] = b
-
- return list(retval.values())
-
-
-class Partial(Measure_element):
- def __init__(self, partial):
- Measure_element.__init__(self)
- self.partial = partial
-
-
-class Stem(Music_xml_node):
-
- stem_value_dict = {
- 'down': 'stemDown',
- 'up': 'stemUp',
- 'double': None, # TODO: Implement
- 'none': 'stemNeutral'
- }
-
- def to_stem_event(self):
- values = []
- value = self.stem_value_dict.get(self.get_text(), None)
- stem_value = musicexp.StemEvent()
- if value:
- stem_value.value = value
- values.append(stem_value)
- return values
-
- def to_stem_style_event(self):
- styles = []
- style_elm = musicexp.StemstyleEvent()
- if hasattr(self, 'color'):
- style_elm.color = utilities.hex_to_color(getattr(self, 'color'))
- if style_elm.color is not None:
- styles.append(style_elm)
- return styles
-
-
-class Notehead(Music_xml_node):
-
- notehead_styles_dict = {
- 'slash': '\'slash',
- 'triangle': '\'triangle',
- 'diamond': '\'diamond',
- 'square': '\'la', # TODO: Proper squared note head
- 'cross': None, # TODO: + shaped note head
- 'x': '\'cross',
- 'circle-x': '\'xcircle',
- 'inverted triangle': None, # TODO: Implement
- 'arrow down': None, # TODO: Implement
- 'arrow up': None, # TODO: Implement
- 'slashed': None, # TODO: Implement
- 'back slashed': None, # TODO: Implement
- 'normal': None,
- 'cluster': None, # TODO: Implement
- 'none': '#f',
- 'do': '\'do',
- 're': '\'re',
- 'mi': '\'mi',
- 'fa': '\'fa',
- 'so': None,
- 'la': '\'la',
- 'ti': '\'ti',
- }
-
- def to_lily_object(self): # function changed: additionally process the color attribute
- styles = []
-
- # Notehead style
- key = self.get_text().strip()
- style = self.notehead_styles_dict.get(key, None)
- event = musicexp.NotestyleEvent()
- if style:
- event.style = style
- if hasattr(self, 'filled'):
- event.filled = (getattr(self, 'filled') == "yes")
- if hasattr(self, 'color'):
- event.color = utilities.hex_to_color(getattr(self, 'color'))
- if event.style or (event.filled is not None) or (event.color is not None):
- styles.append(event)
- # parentheses
- if hasattr(self, 'parentheses') and (self.parentheses == "yes"):
- styles.append(musicexp.ParenthesizeEvent())
-
- return styles
-
-
-class Note(Measure_element):
-
- def __init__(self):
- Measure_element.__init__(self)
- self.instrument_name = ''
- self._after_grace = False
- self._duration = 1
-
- def is_grace(self):
- return self.get_maybe_exist_named_child('grace')
-
- def is_after_grace(self):
- if not self.is_grace():
- return False
- gr = self.get_maybe_exist_typed_child(Grace)
- return self._after_grace or hasattr(gr, 'steal-time-previous')
-
- def get_duration_log(self):
- ch = self.get_maybe_exist_named_child('type')
-
- if ch:
- log = ch.get_text().strip()
- return utilities.musicxml_duration_to_log(log)
- elif self.get_maybe_exist_named_child('grace'):
- # FIXME: is it ok to default to an eighth note for grace notes?
- return 3
- else:
- return None
-
- def get_duration_info(self):
- log = self.get_duration_log()
- if log is not None:
- dots = len(self.get_typed_children(Dot))
- return(log, dots)
- else:
- return None
-
- def get_factor(self):
- return 1
-
- def get_pitches(self):
- return self.get_typed_children(get_class('pitch'))
-
- def set_notehead_style(self, event):
- noteheads = self.get_named_children('notehead')
- for nh in noteheads:
- styles = nh.to_lily_object()
- for style in styles:
- event.add_associated_event(style)
-
- def set_stem_directions(self, event):
- stems = self.get_named_children('stem')
- for stem in stems:
- values = stem.to_stem_event()
- for v in values:
- event.add_associated_event(v)
-
- def set_stem_style(self, event):
- stems = self.get_named_children('stem')
- for stem in stems:
- styles = stem.to_stem_style_event()
- for style in styles:
- event.add_associated_event(style)
-
- def initialize_duration(self):
- from musicxml2ly_conversion import rational_to_lily_duration
- from musicexp import Duration
- # If the note has no <type> child, get_duration_info() returns None. In that
- # case, use the <duration> tag instead. If that doesn't exist either -> error.
- dur = self.get_duration_info()
- if dur:
- d = Duration()
- d.duration_log = dur[0]
- d.dots = dur[1]
- # Grace notes by specification have duration 0, so no time modification
- # factor is possible. It even messes up the output with *0/1
- if not self.get_maybe_exist_typed_child(Grace):
- d.factor = self._duration / d.get_length()
- return d
- else:
- if self._duration > 0:
- return rational_to_lily_duration(self._duration)
- else:
- self.message(
- _("Encountered note at %s without type and duration(=%s)")
- % (self.start, self._duration))
- return None
-
- def initialize_pitched_event(self):
- mxl_pitch = self.get_maybe_exist_typed_child(Pitch)
- pitch = mxl_pitch.to_lily_object()
- event = musicexp.NoteEvent()
- event.pitch = pitch
- acc = self.get_maybe_exist_named_child('accidental')
- if acc:
- # let's not force accs everywhere.
- event.cautionary = acc.cautionary
- # TODO: Handle editorial accidentals
- # TODO: Handle the level-display setting for displaying brackets/parentheses
- return event
-
- def initialize_unpitched_event(self):
- # Unpitched elements have display-step and can also have
- # display-octave.
- unpitched = self.get_maybe_exist_typed_child(Unpitched)
- event = musicexp.NoteEvent()
- event.pitch = unpitched.to_lily_object()
- return event
-
- def initialize_rest_event(self, convert_rest_positions=True):
- # rests can have display-octave and display-step, which are
- # treated like an ordinary note pitch
- rest = self.get_maybe_exist_typed_child(Rest)
- event = musicexp.RestEvent()
- if convert_rest_positions:
- pitch = rest.to_lily_object()
- event.pitch = pitch
- return event
-
- def to_lily_object(self,
- convert_stem_directions=True,
- convert_rest_positions=True):
- pitch = None
- duration = None
- event = None
-
- if self.get_maybe_exist_typed_child(Pitch):
- event = self.initialize_pitched_event()
- elif self.get_maybe_exist_typed_child(Unpitched):
- event = self.initialize_unpitched_event()
- elif self.get_maybe_exist_typed_child(Rest):
- event = self.initialize_rest_event(convert_rest_positions)
- else:
- self.message(_("cannot find suitable event"))
-
- if event:
- event.duration = self.initialize_duration()
-
- self.set_notehead_style(event)
- self.set_stem_style(event)
- if convert_stem_directions:
- self.set_stem_directions(event)
-
- return event
-
-
-class Part_list(Music_xml_node):
-
- def __init__(self):
- Music_xml_node.__init__(self)
- self._id_instrument_name_dict = {}
-
- def generate_id_instrument_dict(self):
-
- # Seed with a dummy entry so the dict is never empty and this method runs only once.
- mapping = {1: 1}
- for score_part in self.get_named_children('score-part'):
- for instr in score_part.get_named_children('score-instrument'):
- id = instr.id
- name = instr.get_named_child("instrument-name")
- mapping[id] = name.get_text()
-
- self._id_instrument_name_dict = mapping
-
- def get_instrument(self, id):
- if not self._id_instrument_name_dict:
- self.generate_id_instrument_dict()
-
- instrument_name = self._id_instrument_name_dict.get(id)
- if instrument_name:
- return instrument_name
- else:
- ly.warning(_("Unable to find instrument for ID=%s\n") % id)
- return "Grand Piano"
-
-
-class Measure(Music_xml_node):
-
- def __init__(self):
- Music_xml_node.__init__(self)
- self.partial = 0
-
- def is_implicit(self):
- return hasattr(self, 'implicit') and self.implicit == 'yes'
-
- def get_notes(self):
- return self.get_typed_children(get_class('note'))
-
-
-class Syllabic(Music_xml_node):
-
- def continued(self):
- text = self.get_text()
- return text == "begin" or text == "middle"
-
- def begin(self):
- return self.get_text() == "begin"
-
- def middle(self):
- return self.get_text() == "middle"
-
- def end(self):
- return self.get_text() == "end"
-
-
-class Lyric(Music_xml_node):
-
- def get_number(self):
- """
- Return the number attribute(if it exists) of the lyric element.
-
- @rtype: number
- @return: The value of the number attribute
- """
- return getattr(self, 'number', None)
-
-
-class Sound(Music_xml_node):
-
- def get_tempo(self):
- """
- Return the tempo attribute(if it exists) of the sound element.
- This attribute can be used by musicxml2ly for the midi output(see L{musicexp.Score}).
-
- @rtype: string
- @return: The value of the tempo attribute
- """
- if hasattr(self, 'tempo'):
- return self.tempo
- else:
- return None
-
-
-class Notations(Music_xml_node):
-
- def get_tie(self):
- ts = self.get_named_children('tied')
- starts = [t for t in ts if t.type == 'start']
- if starts:
- return starts[0]
- else:
- return None
-
- def get_tuplets(self):
- return self.get_typed_children(Tuplet)
-
-
-class Time_modification(Music_xml_node):
-
- def get_fraction(self):
- b = self.get_maybe_exist_named_child('actual-notes')
- a = self.get_maybe_exist_named_child('normal-notes')
- return(int(a.get_text()), int(b.get_text()))
-
- def get_normal_type(self):
- tuplet_type = self.get_maybe_exist_named_child('normal-type')
- if tuplet_type:
- dots = self.get_named_children('normal-dot')
- log = utilities.musicxml_duration_to_log(
- tuplet_type.get_text().strip())
- return(log, len(dots))
- else:
- return None
-
-
-class Accidental(Music_xml_node):
-
- def __init__(self):
- Music_xml_node.__init__(self)
- self.editorial = False
- self.cautionary = False
-
-
-class Tuplet(Music_xml_spanner):
-
- def duration_info_from_tuplet_note(self, tuplet_note):
- tuplet_type = tuplet_note.get_maybe_exist_named_child('tuplet-type')
- if tuplet_type:
- dots = tuplet_note.get_named_children('tuplet-dot')
- log = utilities.musicxml_duration_to_log(
- tuplet_type.get_text().strip())
- return(log, len(dots))
- else:
- return None
-
- # Return tuplet note type as(log, dots)
- def get_normal_type(self):
- tuplet = self.get_maybe_exist_named_child('tuplet-normal')
- if tuplet:
- return self.duration_info_from_tuplet_note(tuplet)
- else:
- return None
-
- def get_actual_type(self):
- tuplet = self.get_maybe_exist_named_child('tuplet-actual')
- if tuplet:
- return self.duration_info_from_tuplet_note(tuplet)
- else:
- return None
-
- def get_tuplet_note_count(self, tuplet_note):
- if tuplet_note:
- tuplet_nr = tuplet_note.get_maybe_exist_named_child(
- 'tuplet-number')
- if tuplet_nr:
- return int(tuplet_nr.get_text())
- return None
-
- def get_normal_nr(self):
- return self.get_tuplet_note_count(self.get_maybe_exist_named_child('tuplet-normal'))
-
- def get_actual_nr(self):
- return self.get_tuplet_note_count(self.get_maybe_exist_named_child('tuplet-actual'))
-
-
-class Slur(Music_xml_spanner):
-
- def get_type(self):
- return self.type
-
-
-class Tied(Music_xml_spanner):
-
- def get_type(self):
- return self.type
-
-
-class Beam(Music_xml_spanner):
- def get_type(self):
- return self.get_text()
-
- def is_primary(self):
- if hasattr(self, 'number'):
- return self.number == "1"
- else:
- return True
-
-
-class Octave_shift(Music_xml_spanner):
- # default is 8 for the octave-shift!
- def get_size(self):
- if hasattr(self, 'size'):
- return int(self.size)
- else:
- return 8
-
-
-# Rests in MusicXML are <note> blocks with a <rest> inside. This class is only
-# for the inner <rest> element, not the whole rest block.
-class Rest(Music_xml_node):
-
- def __init__(self):
- Music_xml_node.__init__(self)
- self._is_whole_measure = False
-
- def is_whole_measure(self):
- return self._is_whole_measure
-
- def get_step(self):
- ch = self.get_maybe_exist_typed_child(get_class('display-step'))
- if ch:
- return ch.get_text().strip()
- else:
- return None
-
- def get_octave(self):
- ch = self.get_maybe_exist_typed_child(get_class('display-octave'))
- if ch:
- oct = ch.get_text().strip()
- return int(oct)
- else:
- return None
-
- def to_lily_object(self):
- p = None
- step = self.get_step()
- if step:
- p = musicexp.Pitch()
- p.step = musicxml2ly_conversion.musicxml_step_to_lily(step)
- octave = self.get_octave()
- if octave and p:
- p.octave = octave - 4
- return p
-
-
-class Bend(Music_xml_node):
-
- def bend_alter(self):
- alter = self.get_maybe_exist_named_child('bend-alter')
- return utilities.interpret_alter_element(alter)
-
-
-class ChordPitch(Music_xml_node):
-
- def step_class_name(self):
- return 'root-step'
-
- def alter_class_name(self):
- return 'root-alter'
-
- def get_step(self):
- ch = self.get_unique_typed_child(get_class(self.step_class_name()))
- return ch.get_text().strip()
-
- def get_alteration(self):
- ch = self.get_maybe_exist_typed_child(
- get_class(self.alter_class_name()))
- return utilities.interpret_alter_element(ch)
-
-
-class Bass(ChordPitch):
-
- def step_class_name(self):
- return 'bass-step'
-
- def alter_class_name(self):
- return 'bass-alter'
-
-
-class ChordModification(Music_xml_node):
-
- def get_type(self):
- ch = self.get_maybe_exist_typed_child(get_class('degree-type'))
- return {'add': 1, 'alter': 1, 'subtract': -1}.get(ch.get_text().strip(), 0)
-
- def get_value(self):
- ch = self.get_maybe_exist_typed_child(get_class('degree-value'))
- value = 0
- if ch:
- value = int(ch.get_text().strip())
- return value
-
- def get_alter(self):
- ch = self.get_maybe_exist_typed_child(get_class('degree-alter'))
- return utilities.interpret_alter_element(ch)
-
-
-class Frame(Music_xml_node):
-
- def get_frets(self):
- return self.get_named_child_value_number('frame-frets', 4)
-
- def get_strings(self):
- return self.get_named_child_value_number('frame-strings', 6)
-
- def get_first_fret(self):
- return self.get_named_child_value_number('first-fret', 1)
-
-
-class Frame_Note(Music_xml_node):
-
- def get_string(self):
- return self.get_named_child_value_number('string', 1)
-
- def get_fret(self):
- return self.get_named_child_value_number('fret', 0)
-
- def get_fingering(self):
- return self.get_named_child_value_number('fingering', -1)
-
- def get_barre(self):
- n = self.get_maybe_exist_named_child('barre')
- if n:
- return getattr(n, 'type', '')
- else:
- return ''
-
-
-class Musicxml_voice:
-
- def __init__(self):
- self._elements = []
- self._staves = {}
- self._start_staff = None
- self._lyrics = []
- self._has_lyrics = False
-
- def add_element(self, e):
- self._elements.append(e)
- if(isinstance(e, Note)
- and e.get_maybe_exist_typed_child(Staff)):
- name = e.get_maybe_exist_typed_child(Staff).get_text()
-
- if not self._start_staff and not e.get_maybe_exist_typed_child(Grace):
- self._start_staff = name
- self._staves[name] = True
-
- lyrics = e.get_typed_children(Lyric)
- if not self._has_lyrics:
- self._has_lyrics = len(lyrics) > 0
-
- for l in lyrics:
- nr = l.get_number()
- if (nr is not None) and nr not in self._lyrics:
- self._lyrics.append(nr)
-
- def insert(self, idx, e):
- self._elements.insert(idx, e)
-
- def get_lyrics_numbers(self):
- if(len(self._lyrics) == 0) and self._has_lyrics:
- # only happens if none of the <lyric> tags has a number attribute
- return ['1']
- else:
- return self._lyrics
-
-
-class Part(Music_xml_node):
-
- def __init__(self):
- Music_xml_node.__init__(self)
- self._voices = {}
- self._staff_attributes_dict = {}
-
- def get_part_list(self):
- n = self
- while n and n.get_name() != 'score-partwise':
- n = n._parent
-
- return n.get_named_child('part-list')
-
- def graces_to_aftergraces(self, pending_graces):
- for gr in pending_graces:
- gr._when = gr._prev_when
- gr._measure_position = gr._prev_measure_position
- gr._after_grace = True
-
- def interpret(self):
- """Set durations and starting points."""
- """The starting point of the very first note is 0!"""
-
- part_list = self.get_part_list()
-
- now = Fraction(0)
- factor = Fraction(1)
- attributes_dict = {}
- attributes_object = None
- measures = self.get_typed_children(Measure)
- last_moment = Fraction(-1)
- last_measure_position = Fraction(-1)
- measure_position = Fraction(0)
- measure_start_moment = now
- is_first_measure = True
- previous_measure = None
- # Graces at the end of a measure need to have their position set to the
- # previous number!
- pending_graces = []
- for m in measures:
- # implicit measures are used for artificial measures, e.g. when
- # a repeat bar line splits a bar into two halves. In this case,
- # don't reset the measure position to 0. They are also used for
- # upbeats(initial value of 0 fits these, too).
- # Also, don't reset the measure position at the end of the loop,
- # but rather when starting the next measure(since only then do we
- # know if the next measure is implicit and continues that measure)
- if not m.is_implicit():
- # Warn about possibly overfull measures and reset the position
- if attributes_object and previous_measure and previous_measure.partial == 0:
- length = attributes_object.get_measure_length()
- new_now = measure_start_moment + length
- if now != new_now:
- problem = 'incomplete'
- if now > new_now:
- problem = 'overfull'
- # only for verbose operation.
- if problem != 'incomplete' and previous_measure:
- previous_measure.message(
- '%s measure? Expected: %s, Difference: %s' % (problem, now, new_now - now))
- now = new_now
- measure_start_moment = now
- measure_position = Fraction(0)
-
- voice_id = None
- assign_to_next_voice = []
- for n in m.get_all_children():
- # assign a voice to all measure elements
- if n.get_name() == 'backup':
- voice_id = None
-
- if isinstance(n, Measure_element):
- if n.get_voice_id():
- voice_id = n.get_voice_id()
- for i in assign_to_next_voice:
- i.voice_id = voice_id
- assign_to_next_voice = []
- else:
- if voice_id:
- n.voice_id = voice_id
- else:
- assign_to_next_voice.append(n)
-
- # figured bass has a duration, but applies to the next note
- # and should not change the current measure position!
- if isinstance(n, FiguredBass):
- n._divisions = factor.denominator
- n._when = now
- n._measure_position = measure_position
- continue
-
- if isinstance(n, Hash_text):
- continue
- dur = Fraction(0)
-
- if n.__class__ == Attributes:
- n.set_attributes_from_previous(attributes_dict)
- n.read_self()
- attributes_dict = n._dict.copy()
- attributes_object = n
-
- # default to 1
- divisions = (int(attributes_dict['divisions'].get_text())
- if 'divisions' in attributes_dict else 1)
-
- factor = Fraction(1, divisions)
-
-
- if n.get_maybe_exist_typed_child(Duration):
- mxl_dur = n.get_maybe_exist_typed_child(Duration)
- dur = mxl_dur.get_length() * factor
-
- if n.get_name() == 'backup':
- dur = -dur
- # reset all graces before the backup to after-graces:
- self.graces_to_aftergraces(pending_graces)
- pending_graces = []
- if n.get_maybe_exist_typed_child(Grace):
- dur = Fraction(0)
-
- rest = n.get_maybe_exist_typed_child(Rest)
- if(rest
- and attributes_object
- and attributes_object.get_measure_length() == dur):
-
- rest._is_whole_measure = True
-
- if(dur > Fraction(0)
- and n.get_maybe_exist_typed_child(Chord)):
- now = last_moment
- measure_position = last_measure_position
-
- n._when = now
- n._measure_position = measure_position
-
- # For all grace notes, store the previous note, in case we need
- # to turn the grace note into an after-grace later on!
- if isinstance(n, Note) and n.is_grace():
- n._prev_when = last_moment
- n._prev_measure_position = last_measure_position
- # After-graces are placed at the same position as the previous note
- if isinstance(n, Note) and n.is_after_grace():
- # TODO: We should do the same for grace notes at the end of
- # a measure with no following note!!!
- n._when = last_moment
- n._measure_position = last_measure_position
- elif isinstance(n, Note) and n.is_grace():
- pending_graces.append(n)
- elif dur > Fraction(0):
- pending_graces = []
-
- n._duration = dur
- if dur > Fraction(0):
- last_moment = now
- last_measure_position = measure_position
- now += dur
- measure_position += dur
- elif dur < Fraction(0):
- # backup element, reset measure position
- now += dur
- measure_position += dur
- if measure_position < 0:
- # backup went beyond the measure start => reset to 0
- now -= measure_position
- measure_position = 0
- last_moment = now
- last_measure_position = measure_position
- if n._name == 'note':
- instrument = n.get_maybe_exist_named_child('instrument')
- if instrument:
- n.instrument_name = part_list.get_instrument(
- instrument.id)
-
- # reset all graces at the end of the measure to after-graces:
- self.graces_to_aftergraces(pending_graces)
- pending_graces = []
- # Incomplete first measures are not padded, but registered as partial
- if is_first_measure:
- is_first_measure = False
- # upbeats are marked as implicit measures
- if attributes_object and m.is_implicit():
- length = attributes_object.get_measure_length()
- measure_end = measure_start_moment + length
- if measure_end != now:
- m.partial = now
- previous_measure = m
-
- # modify attributes so that only those applying to the given staff remain
- def extract_attributes_for_staff(part, attr, staff):
- attributes = copy.copy(attr)
- attributes._children = []
- attributes._dict = attr._dict.copy()
- attributes._original_tag = attr
- # copy only the relevant children over for the given staff
- if staff == "None":
- staff = "1"
- for c in attr._children:
- if ((not hasattr(c, 'number') or c.number == staff) and
- not isinstance(c, Hash_text)):
- attributes._children.append(c)
- if not attributes._children:
- return None
- else:
- return attributes
-
- def extract_voices(part):
- # The last identified voice
- last_voice = None
-
- voices = OrderedDict()
- measures = part.get_typed_children(Measure)
- elements = []
- for m in measures:
- if m.partial > 0:
- elements.append(Partial(m.partial))
- elements.extend(m.get_all_children())
- # make sure we know all voices already so that dynamics, clefs, etc.
- # can be assigned to the correct voices
- voice_to_staff_dict = {}
- for n in elements:
- voice_id = n.get_maybe_exist_named_child('voice')
- vid = None
- if voice_id:
- vid = voice_id.get_text()
- elif isinstance(n, Note):
- # TODO: Check whether we shall really use "None" here, or
- # rather use "1" as the default?
- if n.get_maybe_exist_named_child('chord'):
- vid = last_voice
- else:
- vid = "1"
-
- if vid is not None:
- last_voice = vid
-
- staff_id = n.get_maybe_exist_named_child('staff')
- sid = None
- if staff_id:
- sid = staff_id.get_text()
- else:
- # TODO: Check whether we shall really use "None" here, or
- # rather use "1" as the default?
- # If this is changed, need to change the corresponding
- # check in extract_attributes_for_staff, too.
- sid = "None"
- if vid and vid not in voices:
- voices[vid] = Musicxml_voice()
- if vid and sid and not n.get_maybe_exist_typed_child(Grace):
- if vid not in voice_to_staff_dict:
- voice_to_staff_dict[vid] = sid
-
- # Invert the voice_to_staff_dict into a staff_to_voice_dict, since we
- # need to assign staff-assigned objects like clefs, times, etc. to
- # all the correct voices. This will never work entirely correctly due
- # to staff switches, but that's the best we can do!
- staff_to_voice_dict = {}
- for(v, s) in list(voice_to_staff_dict.items()):
- if s not in staff_to_voice_dict:
- staff_to_voice_dict[s] = [v]
- else:
- staff_to_voice_dict[s].append(v)
-
- start_attr = None
- assign_to_next_note = []
- id = None
- for n in elements:
- voice_id = n.get_maybe_exist_typed_child(get_class('voice'))
- if voice_id:
- id = voice_id.get_text()
- else:
- if n.get_maybe_exist_typed_child(get_class('chord')):
- id = last_voice
- else:
- id = "1"
-
- if id != "None":
- last_voice = id
-
- # We don't need backup/forward any more, since we have already
- # assigned the correct onset times.
- # TODO: Let Grouping through. Also: link, print, bookmark, sound
- if not(isinstance(n, Note) or isinstance(n, Attributes) or
- isinstance(n, Direction) or isinstance(n, Partial) or
- isinstance(n, Barline) or isinstance(n, Harmony) or
- isinstance(n, FiguredBass) or isinstance(n, Print)):
- continue
-
- if isinstance(n, Attributes) and not start_attr:
- start_attr = n
- continue
-
- if isinstance(n, Attributes):
- # assign these only to the voices they really belong to!
- for(s, vids) in list(staff_to_voice_dict.items()):
- staff_attributes = part.extract_attributes_for_staff(n, s)
- if staff_attributes:
- for v in vids:
- voices[v].add_element(staff_attributes)
- continue
-
- if isinstance(n, Partial) or isinstance(n, Barline) or isinstance(n, Print):
- for v in list(voices.keys()):
- voices[v].add_element(n)
- continue
-
- if isinstance(n, Direction):
- if n.voice_id:
- voices[n.voice_id].add_element(n)
- else:
- assign_to_next_note.append(n)
- continue
-
- if isinstance(n, Harmony) or isinstance(n, FiguredBass):
- # store the harmony or figured bass element until we encounter
- # the next note and assign it only to that one voice.
- assign_to_next_note.append(n)
- continue
-
- if hasattr(n, 'print-object') and getattr(n, 'print-object') == "no":
- # Skip this note.
- pass
- else:
- for i in assign_to_next_note:
- voices[id].add_element(i)
- assign_to_next_note = []
- voices[id].add_element(n)
-
- # Assign all remaining elements from assign_to_next_note to the voice
- # of the previous note:
- for i in assign_to_next_note:
- voices[id].add_element(i)
- assign_to_next_note = []
-
- if start_attr:
- for(s, vids) in list(staff_to_voice_dict.items()):
- staff_attributes = part.extract_attributes_for_staff(
- start_attr, s)
- staff_attributes.read_self()
- part._staff_attributes_dict[s] = staff_attributes
- for v in vids:
- voices[v].insert(0, staff_attributes)
- voices[v]._elements[0].read_self()
-
- part._voices = voices
-
- def get_voices(self):
- return self._voices
-
- def get_staff_attributes(self):
- return self._staff_attributes_dict
-
-
-class BarStyle(Music_xml_node):
- pass
-
-
-class BeatType(Music_xml_node):
- pass
-
-
-class BeatUnit(Music_xml_node):
- pass
-
-
-class BeatUnitDot(Music_xml_node):
- pass
-
-
-class Beats(Music_xml_node):
- pass
-
-
-class Bracket(Music_xml_spanner):
- pass
-
-
-class Chord(Music_xml_node):
- pass
-
-
-class Dashes(Music_xml_spanner):
- pass
-
-
-class DirType(Music_xml_node):
- pass
-
-
-class Direction(Measure_element):
- pass
-
-
-class Dot(Music_xml_node):
- pass
-
-
-class Elision(Music_xml_node):
- pass
-
-
-class Extend(Music_xml_node):
- pass
-
-
-class FiguredBass(Music_xml_node):
- pass
-
-
-class Glissando(Music_xml_spanner):
- pass
-
-
-class Grace(Music_xml_node):
- pass
-
-
-class Harmony(Music_xml_node):
- pass
-
-
-class Hash_comment(Music_xml_node):
- pass
-
-
-class KeyAlter(Music_xml_node):
- pass
-
-
-class KeyOctave(Music_xml_node):
- pass
-
-
-class KeyStep(Music_xml_node):
- pass
-
-
-class Part_group(Music_xml_node):
- pass
-
-
-class Pedal(Music_xml_spanner):
- pass
-
-
-class PerMinute(Music_xml_node):
- pass
-
-
-class Print(Music_xml_node):
- pass
-
-
-class Root(ChordPitch):
- pass
-
-
-class Score_part(Music_xml_node):
- pass
-
-
-class Slide(Music_xml_spanner):
- pass
-
-
-class Staff(Music_xml_node):
- pass
-
-
-class Text(Music_xml_node):
- pass
-
-
-class Type(Music_xml_node):
- pass
-
-
-class Wavy_line(Music_xml_spanner):
- pass
-
-
-class Wedge(Music_xml_spanner):
- pass
-
-
-class Words(Music_xml_node):
- pass
-
-
-# We need this dict because not all classes are instantiated for every input
-# file. Only add classes that are either directly used by class name or that
-# extend Music_xml_node in some way!
-class_dict = {
- '#comment': Hash_comment,
- '#text': Hash_text,
- 'accidental': Accidental,
- 'attributes': Attributes,
- 'barline': Barline,
- 'bar-style': BarStyle,
- 'bass': Bass,
- 'beam': Beam,
- 'beats': Beats,
- 'beat-type': BeatType,
- 'beat-unit': BeatUnit,
- 'beat-unit-dot': BeatUnitDot,
- 'bend': Bend,
- 'bracket': Bracket,
- 'chord': Chord,
- 'credit': Credit,
- 'dashes': Dashes,
- 'degree': ChordModification,
- 'dot': Dot,
- 'direction': Direction,
- 'direction-type': DirType,
- 'duration': Duration,
- 'elision': Elision,
- 'extend': Extend,
- 'frame': Frame,
- 'frame-note': Frame_Note,
- 'figured-bass': FiguredBass,
- 'glissando': Glissando,
- 'grace': Grace,
- 'harmony': Harmony,
- 'identification': Identification,
- 'key-alter': KeyAlter,
- 'key-octave': KeyOctave,
- 'key-step': KeyStep,
- 'lyric': Lyric,
- 'measure': Measure,
- 'notations': Notations,
- 'note': Note,
- 'notehead': Notehead,
- 'octave-shift': Octave_shift,
- 'part': Part,
- 'part-group': Part_group,
- 'part-list': Part_list,
- 'pedal': Pedal,
- 'per-minute': PerMinute,
- 'pitch': Pitch,
- 'print': Print,
- 'rest': Rest,
- 'root': Root,
- 'score-part': Score_part,
- 'slide': Slide,
- 'slur': Slur,
- 'sound': Sound,
- 'staff': Staff,
- 'stem': Stem,
- 'syllabic': Syllabic,
- 'text': Text,
- 'time-modification': Time_modification,
- 'tied': Tied,
- 'tuplet': Tuplet,
- 'type': Type,
- 'unpitched': Unpitched,
- 'wavy-line': Wavy_line,
- 'wedge': Wedge,
- 'words': Words,
- 'work': Work,
-}
-
-
-def name2class_name(name):
- name = name.replace('-', '_')
- name = name.replace('#', 'hash_')
- name = name[0].upper() + name[1:].lower()
-
- return str(name)
-
-
-def get_class(name):
- classname = class_dict.get(name)
- if classname:
- return classname
- else:
- class_name = name2class_name(name)
- klass = type(class_name, (Music_xml_node,), {})
- class_dict[name] = klass
- return klass
-
-
-def lxml_demarshal_node(node):
- name = node.tag
-
- # Ignore comment nodes, which are also returned by the etree parser!
- if name is None or node.__class__.__name__ == "_Comment":
- return None
- klass = get_class(name)
- py_node = klass()
-
- py_node._original = node
- py_node._name = name
- py_node._data = node.text
- py_node._children = [lxml_demarshal_node(cn) for cn in node.getchildren()]
- py_node._children = [x for x in py_node._children if x]
-
- for c in py_node._children:
- c._parent = py_node
-
- for(k, v) in list(node.items()):
- py_node.__dict__[k] = v
- py_node._attribute_dict[k] = v
-
- return py_node
-
-
-def minidom_demarshal_node(node):
- name = node.nodeName
-
- klass = get_class(name)
- py_node = klass()
- py_node._name = name
- py_node._children = [minidom_demarshal_node(cn) for cn in node.childNodes]
- for c in py_node._children:
- c._parent = py_node
-
- if node.attributes:
- for(nm, value) in list(node.attributes.items()):
- py_node.__dict__[nm] = value
- py_node._attribute_dict[nm] = value
-
- py_node._data = None
- if node.nodeType == node.TEXT_NODE and node.data:
- py_node._data = node.data
-
- py_node._original = node
- return py_node
-
-
-if __name__ == '__main__':
- import lxml.etree
-
- tree = lxml.etree.parse('beethoven.xml')
- mxl_tree = lxml_demarshal_node(tree.getroot())
- ks = sorted(class_dict.keys())
- print('\n'.join(ks))
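For orientation, a minimal hypothetical sketch of how this demarshalling layer is driven (it mirrors the __main__ block above; 'score.xml' is a placeholder, and the companion modules musicexp, musicxml2ly_conversion, utilities and lilylib must be importable, as they are for musicxml2ly):

# Hypothetical usage sketch, illustration only.
import lxml.etree
import musicxml

tree = lxml.etree.parse('score.xml')                 # placeholder file name
root = musicxml.lxml_demarshal_node(tree.getroot())  # -> Music_xml_node tree

for part in root.get_named_children('part'):
    part.interpret()        # assign onset times and durations to measure elements
    part.extract_voices()   # split the part's elements into Musicxml_voice objects
    print(list(part.get_voices().keys()))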
diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/hubert/hubert_manager.py b/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/hubert/hubert_manager.py
deleted file mode 100644
index 1a6c2fb1a878e5e54d78d9d50826a508fedff88c..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/bark/hubert/hubert_manager.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os.path
-import shutil
-import urllib.request
-
-import huggingface_hub
-
-
-class HuBERTManager:
-
-
- @staticmethod
- def make_sure_hubert_installed(download_url: str = 'https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt', file_name: str = 'hubert.pt'):
- install_dir = os.path.join('models', 'hubert')
- if not os.path.isdir(install_dir):
- os.makedirs(install_dir, exist_ok=True)
- install_file = os.path.join(install_dir, file_name)
- if not os.path.isfile(install_file):
- print(f'Downloading HuBERT base model from {download_url}')
- urllib.request.urlretrieve(download_url, install_file)
- print('Downloaded HuBERT')
- return install_file
-
-
- @staticmethod
- def make_sure_tokenizer_installed(model: str = 'quantifier_hubert_base_ls960_14.pth', repo: str = 'GitMylo/bark-voice-cloning', tokenizer_lang: str = 'en'):
- local_file = tokenizer_lang + '_tokenizer.pth'
- install_dir = os.path.join('models', 'hubert')
- if not os.path.isdir(install_dir):
- os.makedirs(install_dir, exist_ok=True)
- install_file = os.path.join(install_dir, local_file)
- if not os.path.isfile(install_file):
- # refactor to use lists
- if tokenizer_lang == 'en':
- repo = 'GitMylo/bark-voice-cloning'
- model = 'quantifier_hubert_base_ls960_14.pth'
- elif tokenizer_lang == 'de':
- repo = 'CountFloyd/bark-voice-cloning-german-HuBERT-quantizer'
- model = 'german-HuBERT-quantizer_14_epoch.pth'
- elif tokenizer_lang == 'pl':
- repo = 'Hobis/bark-voice-cloning-polish-HuBERT-quantizer'
- model = 'polish-HuBERT-quantizer_8_epoch.pth'
- else:
- raise ValueError('Unknown Tokenizer Language!')
- print(f'{local_file} not found. Downloading HuBERT custom tokenizer')
- huggingface_hub.hf_hub_download(repo, model, local_dir=install_dir, local_dir_use_symlinks=False)
- shutil.move(os.path.join(install_dir, model), install_file)
- print('Downloaded tokenizer')
- return install_file
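A hypothetical usage sketch of the manager deleted above (network access assumed; both helpers create models/hubert/ as needed, download on first call, and hand back a local file path; the flat import path is illustrative and depends on the package layout):

# Hypothetical sketch, illustration only.
from hubert_manager import HuBERTManager

hubert_ckpt = HuBERTManager.make_sure_hubert_installed()   # typically models/hubert/hubert.pt
en_tokenizer = HuBERTManager.make_sure_tokenizer_installed(tokenizer_lang='en')  # models/hubert/en_tokenizer.pth
print(hubert_ckpt, en_tokenizer)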
diff --git a/spaces/Pengyey/bingo-chuchu/Dockerfile b/spaces/Pengyey/bingo-chuchu/Dockerfile
deleted file mode 100644
index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000
--- a/spaces/Pengyey/bingo-chuchu/Dockerfile
+++ /dev/null
@@ -1,36 +0,0 @@
-FROM node:18
-
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME
-
-# Switch to the "user" user
-USER user
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY --chown=user package*.json $HOME/app/
-
-RUN npm install
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app/
-
-RUN npm run build
-
-ENV PORT 7860
-EXPOSE 7860
-
-CMD npm start
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/logger/dvclive.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/logger/dvclive.py
deleted file mode 100644
index 687cdc58c0336c92b1e4f9a410ba67ebaab2bc7a..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/runner/hooks/logger/dvclive.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class DvcliveLoggerHook(LoggerHook):
- """Class to log metrics with dvclive.
-
- It requires `dvclive`_ to be installed.
-
- Args:
- path (str): Directory where dvclive will write TSV log files.
- interval (int): Logging interval (every k iterations).
- Default 10.
- ignore_last (bool): Ignore the log of last iterations in each epoch
- if less than `interval`.
- Default: True.
- reset_flag (bool): Whether to clear the output buffer after logging.
- Default: True.
- by_epoch (bool): Whether EpochBasedRunner is used.
- Default: True.
-
- .. _dvclive:
- https://dvc.org/doc/dvclive
- """
-
- def __init__(self,
- path,
- interval=10,
- ignore_last=True,
- reset_flag=True,
- by_epoch=True):
-
- super(DvcliveLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.path = path
- self.import_dvclive()
-
- def import_dvclive(self):
- try:
- import dvclive
- except ImportError:
- raise ImportError(
- 'Please run "pip install dvclive" to install dvclive')
- self.dvclive = dvclive
-
- @master_only
- def before_run(self, runner):
- self.dvclive.init(self.path)
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner)
- if tags:
- for k, v in tags.items():
- self.dvclive.log(k, v, step=self.get_iter(runner))
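A hypothetical mmcv config fragment showing how the hook deleted above is normally wired in (the path and interval values are illustrative):

# Hypothetical config sketch, illustration only: because DvcliveLoggerHook is
# registered via @HOOKS.register_module(), a runner can reference it by name
# from log_config.
log_config = dict(
    interval=10,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='DvcliveLoggerHook', path='./dvclive_logs', interval=10),
    ])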
diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/__init__.py
deleted file mode 100644
index 8339983905fb5d20bae42ba6f76fea75d278b1aa..0000000000000000000000000000000000000000
--- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from .cgnet import CGNet
-# from .fast_scnn import FastSCNN
-from .hrnet import HRNet
-from .mobilenet_v2 import MobileNetV2
-from .mobilenet_v3 import MobileNetV3
-from .resnest import ResNeSt
-from .resnet import ResNet, ResNetV1c, ResNetV1d
-from .resnext import ResNeXt
-from .unet import UNet
-from .vit import VisionTransformer
-from .uniformer import UniFormer
-
-__all__ = [
- 'ResNet', 'ResNetV1c', 'ResNetV1d', 'ResNeXt', 'HRNet',
- 'ResNeSt', 'MobileNetV2', 'UNet', 'CGNet', 'MobileNetV3',
- 'VisionTransformer', 'UniFormer'
-]
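For context, a hypothetical sketch of how the backbones re-exported above are consumed through the registry (it assumes this vendored mmseg keeps upstream's build_backbone helper; the constructor arguments are illustrative):

# Hypothetical sketch, illustration only.
from annotator.uniformer.mmseg.models import build_backbone

backbone = build_backbone(dict(type='ResNet', depth=50))  # looked up by the name exported in __all__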
diff --git a/spaces/Politrees/RVC_V2_Huggingface_Version/lib/infer_pack/models.py b/spaces/Politrees/RVC_V2_Huggingface_Version/lib/infer_pack/models.py
deleted file mode 100644
index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000
--- a/spaces/Politrees/RVC_V2_Huggingface_Version/lib/infer_pack/models.py
+++ /dev/null
@@ -1,1142 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the per-harmonic products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would prevent the cumsum below from being optimized further
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
-            )
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
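-# --- Editor's illustrative sketch (assumed shapes; not part of the original file) ---
-# Generates one second of a 220 Hz tone from a frame-level F0 contour. As in forward()
-# above, f0 has shape (batch, frames) and `upp` is the per-frame upsampling factor (hop size).
-def _example_sine_gen():
-    sample_rate, hop = 16000, 160
-    gen = SineGen(samp_rate=sample_rate, harmonic_num=0)
-    f0 = torch.full((1, sample_rate // hop), 220.0)  # 100 voiced frames at 220 Hz
-    sine, uv, noise = gen(f0, upp=hop)
-    return sine  # shape (1, sample_rate, harmonic_num + 1)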
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshold=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
-        voiced_threshold=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
-            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshold
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
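-# --- Editor's illustrative sketch (assumed shapes; not part of the original file) ---
-# The module above merges the SineGen harmonics into a single excitation channel;
-# is_half=False keeps the sketch in float32 so it also runs on CPU.
-def _example_source_module():
-    src = SourceModuleHnNSF(sampling_rate=16000, harmonic_num=0, is_half=False)
-    f0 = torch.full((1, 100), 110.0)      # frame-level F0, shape (batch, frames)
-    sine_merge, _, _ = src(f0, upp=160)   # (1, 100 * 160, 1) excitation signal
-    return sine_merge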
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id with shape [bs, 1]
-        # print(1, pitch.shape)  # [bs, t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis (broadcast over t)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
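-# --- Editor's illustrative inference sketch (not part of the original file) ---
-# The hyper-parameters below are placeholders loosely modelled on RVC's 40k v1 config;
-# the authoritative values live in the training configs, not in this file. `phone` holds
-# 256-dim content features, `pitch` the quantised F0 (long), `nsff0` the raw F0 in Hz.
-def _example_synth_infer():
-    net_g = SynthesizerTrnMs256NSFsid(
-        1025, 32, 192, 192, 768, 2, 6, 3, 0.0,
-        "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
-        [10, 10, 2, 2], 512, [16, 16, 4, 4],
-        spk_embed_dim=109, gin_channels=256, sr="40k", is_half=False,
-    ).eval()
-    t = 50
-    phone = torch.randn(1, t, 256)
-    phone_lengths = torch.tensor([t])
-    pitch = torch.randint(1, 255, (1, t))
-    nsff0 = torch.rand(1, t) * 400.0
-    sid = torch.tensor([0])
-    with torch.no_grad():
-        audio, _, _ = net_g.infer(phone, phone_lengths, pitch, nsff0, sid)
-    return audio  # (1, 1, t * prod(upsample_rates)) waveform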
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id with shape [bs, 1]
-        # print(1, pitch.shape)  # [bs, t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis (broadcast over t)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id with shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis (broadcast over t)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id with shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis (broadcast over t)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
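-
-
-# --- Editor's illustrative sketch (not part of the original file) ---
-# Both discriminator stacks take raw waveforms shaped (batch, 1, samples) and return,
-# per sub-discriminator, a score tensor plus the intermediate feature maps used for
-# the feature-matching loss. V2 uses one scale discriminator plus eight period discriminators.
-def _example_discriminators():
-    mpd = MultiPeriodDiscriminatorV2()
-    real, fake = torch.randn(2, 1, 8192), torch.randn(2, 1, 8192)
-    y_d_rs, y_d_gs, fmap_rs, fmap_gs = mpd(real, fake)
-    return len(y_d_rs)  # 9 sub-discriminators in total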
diff --git a/spaces/Potanin/12345/i18n/locale_diff.py b/spaces/Potanin/12345/i18n/locale_diff.py
deleted file mode 100644
index 257277965e0866a86d0361863a8f1b408c4f71ab..0000000000000000000000000000000000000000
--- a/spaces/Potanin/12345/i18n/locale_diff.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import json
-import os
-from collections import OrderedDict
-
-# Define the standard file name
-standard_file = "zh_CN.json"
-
-# Find all JSON files in the directory
-dir_path = "./"
-languages = [
- f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file
-]
-
-# Load the standard file
-with open(standard_file, "r", encoding="utf-8") as f:
- standard_data = json.load(f, object_pairs_hook=OrderedDict)
-
-# Loop through each language file
-for lang_file in languages:
- # Load the language file
- with open(lang_file, "r", encoding="utf-8") as f:
- lang_data = json.load(f, object_pairs_hook=OrderedDict)
-
-    # Keys present in the standard file but missing from this language file
-    diff = set(standard_data.keys()) - set(lang_data.keys())
-
-    # Keys present in this language file but removed from the standard file
-    miss = set(lang_data.keys()) - set(standard_data.keys())
-
- # Add any missing keys to the language file
- for key in diff:
- lang_data[key] = key
-
-    # Delete any extra keys from the language file
- for key in miss:
- del lang_data[key]
-
- # Sort the keys of the language file to match the order of the standard file
- lang_data = OrderedDict(
- sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0]))
- )
-
- # Save the updated language file
- with open(lang_file, "w", encoding="utf-8") as f:
- json.dump(lang_data, f, ensure_ascii=False, indent=4)
- f.write("\n")
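-
-
-# --- Editor's illustrative sketch of the key-sync logic above (not part of the original script) ---
-# Missing keys are added with the key itself as the value, stale keys are dropped, and the
-# result keeps the key order of the standard file. The toy dictionaries below are placeholders.
-def _example_sync():
-    standard = OrderedDict([("yes", "Yes"), ("no", "No")])
-    lang = OrderedDict([("no", "Nein"), ("stale", "Alt")])
-    for key in set(standard) - set(lang):
-        lang[key] = key
-    for key in set(lang) - set(standard):
-        del lang[key]
-    return OrderedDict(sorted(lang.items(), key=lambda x: list(standard).index(x[0])))
-    # -> OrderedDict([('yes', 'yes'), ('no', 'Nein')])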
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CODE_OF_CONDUCT.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CODE_OF_CONDUCT.md
deleted file mode 100644
index 83f431e8feeb7e80d571f39c9f6c1b96857b5f85..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to make participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
-appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or
-advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
-address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
-professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies within all project spaces, and it also applies when
-an individual is representing the project or its community in public spaces.
-Examples of representing a project or community include using an official
-project e-mail address, posting via an official social media account, or acting
-as an appointed representative at an online or offline event. Representation of
-a project may be further defined and clarified by project maintainers.
-
-This Code of Conduct also applies outside the project spaces when there is a
-reasonable belief that an individual's behavior may have a negative impact on
-the project or its community.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at . All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/audio_dataset.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/audio_dataset.py
deleted file mode 100644
index 9d7442526186b3712f5d4754f928a40ecd964174..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/data/audio_dataset.py
+++ /dev/null
@@ -1,587 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-"""AudioDataset support. In order to handle a larger number of files
-without having to scan the folders again, we precompute some metadata
-(filename, sample rate, duration), and use that to efficiently sample audio segments.
-"""
-import argparse
-import copy
-from concurrent.futures import ThreadPoolExecutor, Future
-from dataclasses import dataclass, fields
-from contextlib import ExitStack
-from functools import lru_cache
-import gzip
-import json
-import logging
-import os
-from pathlib import Path
-import random
-import sys
-import typing as tp
-
-import torch
-import torch.nn.functional as F
-
-from .audio import audio_read, audio_info
-from .audio_utils import convert_audio
-from .zip import PathInZip
-
-try:
- import dora
-except ImportError:
- dora = None # type: ignore
-
-
-@dataclass(order=True)
-class BaseInfo:
-
- @classmethod
- def _dict2fields(cls, dictionary: dict):
- return {
- field.name: dictionary[field.name]
- for field in fields(cls) if field.name in dictionary
- }
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- _dictionary = cls._dict2fields(dictionary)
- return cls(**_dictionary)
-
- def to_dict(self):
- return {
- field.name: self.__getattribute__(field.name)
- for field in fields(self)
- }
-
-
-@dataclass(order=True)
-class AudioMeta(BaseInfo):
- path: str
- duration: float
- sample_rate: int
- amplitude: tp.Optional[float] = None
- weight: tp.Optional[float] = None
- # info_path is used to load additional information about the audio file that is stored in zip files.
- info_path: tp.Optional[PathInZip] = None
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- base = cls._dict2fields(dictionary)
- if 'info_path' in base and base['info_path'] is not None:
- base['info_path'] = PathInZip(base['info_path'])
- return cls(**base)
-
- def to_dict(self):
- d = super().to_dict()
- if d['info_path'] is not None:
- d['info_path'] = str(d['info_path'])
- return d
-
-
-@dataclass(order=True)
-class SegmentInfo(BaseInfo):
- meta: AudioMeta
- seek_time: float
- # The following values are given once the audio is processed, e.g.
- # at the target sample rate and target number of channels.
- n_frames: int # actual number of frames without padding
- total_frames: int # total number of frames, padding included
- sample_rate: int # actual sample rate
- channels: int # number of audio channels.
-
-
-DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a']
-
-logger = logging.getLogger(__name__)
-
-
-def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta:
- """AudioMeta from a path to an audio file.
-
- Args:
- file_path (str): Resolved path of valid audio file.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- Returns:
- AudioMeta: Audio file path and its metadata.
- """
- info = audio_info(file_path)
- amplitude: tp.Optional[float] = None
- if not minimal:
- wav, sr = audio_read(file_path)
- amplitude = wav.abs().max().item()
- return AudioMeta(file_path, info.duration, info.sample_rate, amplitude)
-
-
-def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta:
- """If Dora is available as a dependency, try to resolve potential relative paths
- in list of AudioMeta. This method is expected to be used when loading meta from file.
-
- Args:
- m (AudioMeta): Audio meta to resolve.
- fast (bool): If True, uses a really fast check for determining if a file
- is already absolute or not. Only valid on Linux/Mac.
- Returns:
- AudioMeta: Audio meta with resolved path.
- """
-    def is_abs(m):
-        if fast:
-            return str(m)[0] == '/'
-        else:
-            return os.path.isabs(str(m))
-
- if not dora:
- return m
-
- if not is_abs(m.path):
- m.path = dora.git_save.to_absolute_path(m.path)
- if m.info_path is not None and not is_abs(m.info_path.zip_path):
- m.info_path.zip_path = dora.git_save.to_absolute_path(m.path)
- return m
-
-
-def find_audio_files(path: tp.Union[Path, str],
- exts: tp.List[str] = DEFAULT_EXTS,
- resolve: bool = True,
- minimal: bool = True,
- progress: bool = False,
- workers: int = 0) -> tp.List[AudioMeta]:
- """Build a list of AudioMeta from a given path,
- collecting relevant audio files and fetching meta info.
-
- Args:
- path (str or Path): Path to folder containing audio files.
- exts (list of str): List of file extensions to consider for audio files.
-        resolve (bool): Whether to resolve the audio file paths to absolute paths.
-        minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- progress (bool): Whether to log progress on audio files collection.
- workers (int): number of parallel workers, if 0, use only the current thread.
- Returns:
- list of AudioMeta: List of audio file path and its metadata.
- """
- audio_files = []
- futures: tp.List[Future] = []
- pool: tp.Optional[ThreadPoolExecutor] = None
- with ExitStack() as stack:
- if workers > 0:
- pool = ThreadPoolExecutor(workers)
- stack.enter_context(pool)
-
- if progress:
- print("Finding audio files...")
- for root, folders, files in os.walk(path, followlinks=True):
- for file in files:
- full_path = Path(root) / file
- if full_path.suffix.lower() in exts:
- audio_files.append(full_path)
- if pool is not None:
- futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal))
- if progress:
- print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr)
-
- if progress:
- print("Getting audio metadata...")
- meta: tp.List[AudioMeta] = []
- for idx, file_path in enumerate(audio_files):
- try:
- if pool is None:
- m = _get_audio_meta(str(file_path), minimal)
- else:
- m = futures[idx].result()
- if resolve:
- m = _resolve_audio_meta(m)
- except Exception as err:
- print("Error with", str(file_path), err, file=sys.stderr)
- continue
- meta.append(m)
- if progress:
- print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr)
- meta.sort()
- return meta
-
-
-def load_audio_meta(path: tp.Union[str, Path],
- resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]:
- """Load list of AudioMeta from an optionally compressed json file.
-
- Args:
- path (str or Path): Path to JSON file.
- resolve (bool): Whether to resolve the path from AudioMeta (default=True).
-        fast (bool): If True, use a faster (Linux/Mac only) heuristic for absolute-path checks.
- Returns:
-        list of AudioMeta: List of audio file paths and their metadata.
- """
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'rb') as fp: # type: ignore
- lines = fp.readlines()
- meta = []
- for line in lines:
- d = json.loads(line)
- m = AudioMeta.from_dict(d)
- if resolve:
- m = _resolve_audio_meta(m, fast=fast)
- meta.append(m)
- return meta
-
-
-def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]):
-    """Save the audio metadata to the given path as JSON lines (gzip-compressed if the path ends in .gz).
-
- Args:
- path (str or Path): Path to JSON file.
-        meta (list of AudioMeta): List of audio meta to save.
- """
- Path(path).parent.mkdir(exist_ok=True, parents=True)
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'wb') as fp: # type: ignore
- for m in meta:
- json_str = json.dumps(m.to_dict()) + '\n'
- json_bytes = json_str.encode('utf-8')
- fp.write(json_bytes)
-
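-
-# --- Editor's illustrative sketch (not part of the original file; paths are placeholders) ---
-# Typical manifest round-trip: scan a folder, write one JSON object per line (gzipped here
-# because of the .gz suffix), then reload the same metadata.
-def _example_manifest(root: str = "dataset/audio", out: str = "dataset/data.jsonl.gz"):
-    meta = find_audio_files(root, DEFAULT_EXTS, minimal=True, progress=True, workers=4)
-    save_audio_meta(out, meta)
-    return load_audio_meta(out)
-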
-
-class AudioDataset:
- """Base audio dataset.
-
- The dataset takes a list of AudioMeta and create a dataset composed of segments of audio
- and potentially additional information, by creating random segments from the list of audio
- files referenced in the metadata and applying minimal data pre-processing such as resampling,
- mixing of channels, padding, etc.
-
- If no segment_duration value is provided, the AudioDataset will return the full wav for each
- audio file. Otherwise, it will randomly sample audio files and create a segment of the specified
- duration, applying padding if required.
-
- By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True
- allows to return a tuple containing the torch Tensor and additional metadata on the segment and the
- original audio meta.
-
- Note that you can call `start_epoch(epoch)` in order to get
- a deterministic "randomization" for `shuffle=True`.
- For a given epoch and dataset index, this will always return the same extract.
- You can get back some diversity by setting the `shuffle_seed` param.
-
- Args:
- meta (list of AudioMeta): List of audio files metadata.
- segment_duration (float, optional): Optional segment duration of audio to load.
- If not specified, the dataset will load the full audio segment from the file.
- shuffle (bool): Set to `True` to have the data reshuffled at every epoch.
- sample_rate (int): Target sample rate of the loaded audio samples.
- channels (int): Target number of channels of the loaded audio samples.
- sample_on_duration (bool): Set to `True` to sample segments with probability
- dependent on audio file duration. This is only used if `segment_duration` is provided.
- sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of
- `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product
- of the file duration and file weight. This is only used if `segment_duration` is provided.
- min_segment_ratio (float): Minimum segment ratio to use when the audio file
- is shorter than the desired segment.
- max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset.
- return_info (bool): Whether to return the wav only or return wav along with segment info and metadata.
- min_audio_duration (float, optional): Minimum audio file duration, in seconds, if provided
- audio shorter than this will be filtered out.
- max_audio_duration (float, optional): Maximal audio file duration in seconds, if provided
- audio longer than this will be filtered out.
-        shuffle_seed (int): Extra seed that can be used to further randomize the deterministic per-epoch shuffling.
- load_wav (bool): if False, skip loading the wav but returns a tensor of 0
- with the expected segment_duration (which must be provided if load_wav is False).
- permutation_on_files (bool): only if `sample_on_weight` and `sample_on_duration`
- are False. Will ensure a permutation on files when going through the dataset.
- In that case the epoch number must be provided in order for the model
- to continue the permutation across epochs. In that case, it is assumed
- that `num_samples = total_batch_size * num_updates_per_epoch`, with
- `total_batch_size` the overall batch size accounting for all gpus.
- """
- def __init__(self,
- meta: tp.List[AudioMeta],
- segment_duration: tp.Optional[float] = None,
- shuffle: bool = True,
- num_samples: int = 10_000,
- sample_rate: int = 48_000,
- channels: int = 2,
- pad: bool = True,
- sample_on_duration: bool = True,
- sample_on_weight: bool = True,
- min_segment_ratio: float = 0.5,
- max_read_retry: int = 10,
- return_info: bool = False,
- min_audio_duration: tp.Optional[float] = None,
- max_audio_duration: tp.Optional[float] = None,
- shuffle_seed: int = 0,
- load_wav: bool = True,
- permutation_on_files: bool = False,
- ):
- assert len(meta) > 0, "No audio meta provided to AudioDataset. Please check loading of audio meta."
- assert segment_duration is None or segment_duration > 0
- assert segment_duration is None or min_segment_ratio >= 0
- self.segment_duration = segment_duration
- self.min_segment_ratio = min_segment_ratio
- self.max_audio_duration = max_audio_duration
- self.min_audio_duration = min_audio_duration
- if self.min_audio_duration is not None and self.max_audio_duration is not None:
- assert self.min_audio_duration <= self.max_audio_duration
- self.meta: tp.List[AudioMeta] = self._filter_duration(meta)
- assert len(self.meta) # Fail fast if all data has been filtered.
- self.total_duration = sum(d.duration for d in self.meta)
-
- if segment_duration is None:
- num_samples = len(self.meta)
- self.num_samples = num_samples
- self.shuffle = shuffle
- self.sample_rate = sample_rate
- self.channels = channels
- self.pad = pad
- self.sample_on_weight = sample_on_weight
- self.sample_on_duration = sample_on_duration
- self.sampling_probabilities = self._get_sampling_probabilities()
- self.max_read_retry = max_read_retry
- self.return_info = return_info
- self.shuffle_seed = shuffle_seed
- self.current_epoch: tp.Optional[int] = None
- self.load_wav = load_wav
- if not load_wav:
- assert segment_duration is not None
- self.permutation_on_files = permutation_on_files
- if permutation_on_files:
- assert not self.sample_on_duration
- assert not self.sample_on_weight
- assert self.shuffle
-
- def start_epoch(self, epoch: int):
- self.current_epoch = epoch
-
- def __len__(self):
- return self.num_samples
-
- def _get_sampling_probabilities(self, normalized: bool = True):
- """Return the sampling probabilities for each file inside `self.meta`."""
- scores: tp.List[float] = []
- for file_meta in self.meta:
- score = 1.
- if self.sample_on_weight and file_meta.weight is not None:
- score *= file_meta.weight
- if self.sample_on_duration:
- score *= file_meta.duration
- scores.append(score)
- probabilities = torch.tensor(scores)
- if normalized:
- probabilities /= probabilities.sum()
- return probabilities
-
- @staticmethod
- @lru_cache(16)
- def _get_file_permutation(num_files: int, permutation_index: int, base_seed: int):
-        # Used to keep the most recent file permutations in memory implicitly.
-        # This will work unless a lot of Datasets are used in parallel.
- rng = torch.Generator()
- rng.manual_seed(base_seed + permutation_index)
- return torch.randperm(num_files, generator=rng)
-
- def sample_file(self, index: int, rng: torch.Generator) -> AudioMeta:
- """Sample a given file from `self.meta`. Can be overridden in subclasses.
- This is only called if `segment_duration` is not None.
-
- You must use the provided random number generator `rng` for reproducibility.
- You can further make use of the index accessed.
- """
- if self.permutation_on_files:
- assert self.current_epoch is not None
- total_index = self.current_epoch * len(self) + index
- permutation_index = total_index // len(self.meta)
- relative_index = total_index % len(self.meta)
- permutation = AudioDataset._get_file_permutation(
- len(self.meta), permutation_index, self.shuffle_seed)
- file_index = permutation[relative_index]
- return self.meta[file_index]
-
- if not self.sample_on_weight and not self.sample_on_duration:
- file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item())
- else:
- file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item())
-
- return self.meta[file_index]
-
- def _audio_read(self, path: str, seek_time: float = 0, duration: float = -1):
- # Override this method in subclass if needed.
- if self.load_wav:
- return audio_read(path, seek_time, duration, pad=False)
- else:
- assert self.segment_duration is not None
- n_frames = int(self.sample_rate * self.segment_duration)
- return torch.zeros(self.channels, n_frames), self.sample_rate
-
- def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]:
- if self.segment_duration is None:
- file_meta = self.meta[index]
- out, sr = audio_read(file_meta.path)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames,
- sample_rate=self.sample_rate, channels=out.shape[0])
- else:
- rng = torch.Generator()
- if self.shuffle:
-                # We use the index plus some extra randomness: fully random if we don't know
-                # the epoch, otherwise derived from the epoch number and optional shuffle_seed.
- if self.current_epoch is None:
- rng.manual_seed(index + self.num_samples * random.randint(0, 2**24))
- else:
- rng.manual_seed(index + self.num_samples * (self.current_epoch + self.shuffle_seed))
- else:
- # We only use index
- rng.manual_seed(index)
-
- for retry in range(self.max_read_retry):
- file_meta = self.sample_file(index, rng)
-                # We add some variance in the file position even if the audio file is shorter
-                # than the segment, without ending up with empty segments.
- max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio)
- seek_time = torch.rand(1, generator=rng).item() * max_seek
- try:
- out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- target_frames = int(self.segment_duration * self.sample_rate)
- if self.pad:
- out = F.pad(out, (0, target_frames - n_frames))
- segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames,
- sample_rate=self.sample_rate, channels=out.shape[0])
- except Exception as exc:
- logger.warning("Error opening file %s: %r", file_meta.path, exc)
- if retry == self.max_read_retry - 1:
- raise
- else:
- break
-
- if self.return_info:
- # Returns the wav and additional information on the wave segment
- return out, segment_info
- else:
- return out
-
- def collater(self, samples):
- """The collater function has to be provided to the dataloader
- if AudioDataset has return_info=True in order to properly collate
- the samples of a batch.
- """
- if self.segment_duration is None and len(samples) > 1:
- assert self.pad, "Must allow padding when batching examples of different durations."
-
- # In this case the audio reaching the collater is of variable length as segment_duration=None.
- to_pad = self.segment_duration is None and self.pad
- if to_pad:
- max_len = max([wav.shape[-1] for wav, _ in samples])
-
- def _pad_wav(wav):
- return F.pad(wav, (0, max_len - wav.shape[-1]))
-
- if self.return_info:
- if len(samples) > 0:
- assert len(samples[0]) == 2
- assert isinstance(samples[0][0], torch.Tensor)
- assert isinstance(samples[0][1], SegmentInfo)
-
- wavs = [wav for wav, _ in samples]
- segment_infos = [copy.deepcopy(info) for _, info in samples]
-
- if to_pad:
- # Each wav could be of a different duration as they are not segmented.
- for i in range(len(samples)):
- # Determines the total length of the signal with padding, so we update here as we pad.
- segment_infos[i].total_frames = max_len
- wavs[i] = _pad_wav(wavs[i])
-
- wav = torch.stack(wavs)
- return wav, segment_infos
- else:
- assert isinstance(samples[0], torch.Tensor)
- if to_pad:
- samples = [_pad_wav(s) for s in samples]
- return torch.stack(samples)
-
- def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]:
-        """Filter out audio files whose durations do not allow sampling the requested examples from them."""
- orig_len = len(meta)
-
- # Filter data that is too short.
- if self.min_audio_duration is not None:
- meta = [m for m in meta if m.duration >= self.min_audio_duration]
-
- # Filter data that is too long.
- if self.max_audio_duration is not None:
- meta = [m for m in meta if m.duration <= self.max_audio_duration]
-
- filtered_len = len(meta)
- removed_percentage = 100*(1-float(filtered_len)/orig_len)
- msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage
- if removed_percentage < 10:
- logging.debug(msg)
- else:
- logging.warning(msg)
- return meta
-
- @classmethod
- def from_meta(cls, root: tp.Union[str, Path], **kwargs):
- """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_dir():
- if (root / 'data.jsonl').exists():
- root = root / 'data.jsonl'
- elif (root / 'data.jsonl.gz').exists():
- root = root / 'data.jsonl.gz'
- else:
- raise ValueError("Don't know where to read metadata from in the dir. "
- "Expecting either a data.jsonl or data.jsonl.gz file but none found.")
- meta = load_audio_meta(root)
- return cls(meta, **kwargs)
-
- @classmethod
- def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True,
- exts: tp.List[str] = DEFAULT_EXTS, **kwargs):
- """Instantiate AudioDataset from a path containing (possibly nested) audio files.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- minimal_meta (bool): Whether to only load minimal metadata or not.
- exts (list of str): Extensions for audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_file():
- meta = load_audio_meta(root, resolve=True)
- else:
- meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True)
- return cls(meta, **kwargs)
-
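-# --- Editor's illustrative sketch (not part of the original file; the folder path is a placeholder) ---
-# Builds 5-second, 48 kHz stereo segments from a folder of audio files and batches them
-# with the collater so the SegmentInfo objects survive batching.
-def _example_dataset(root: str = "dataset/audio"):
-    from torch.utils.data import DataLoader  # local import to keep the sketch self-contained
-    dataset = AudioDataset.from_path(
-        root, segment_duration=5.0, sample_rate=48_000, channels=2,
-        num_samples=1_000, return_info=True)
-    loader = DataLoader(dataset, batch_size=4, collate_fn=dataset.collater)
-    wav, infos = next(iter(loader))  # infos is a list of SegmentInfo
-    return wav.shape  # torch.Size([4, 2, 240000])
-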
-
-def main():
- logging.basicConfig(stream=sys.stderr, level=logging.INFO)
- parser = argparse.ArgumentParser(
- prog='audio_dataset',
- description='Generate .jsonl files by scanning a folder.')
- parser.add_argument('root', help='Root folder with all the audio files')
- parser.add_argument('output_meta_file',
- help='Output file to store the metadata.')
- parser.add_argument('--complete',
- action='store_false', dest='minimal', default=True,
- help='Retrieve all metadata, even the ones that are expensive '
- 'to compute (e.g. normalization).')
- parser.add_argument('--resolve',
- action='store_true', default=False,
- help='Resolve the paths to be absolute and with no symlinks.')
- parser.add_argument('--workers',
- default=10, type=int,
- help='Number of workers.')
- args = parser.parse_args()
- meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True,
- resolve=args.resolve, minimal=args.minimal, workers=args.workers)
- save_audio_meta(args.output_meta_file, meta)
-
-
-if __name__ == '__main__':
- main()
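The collater above pads variable-length waveforms to the longest item in the batch before stacking. A minimal standalone sketch of that padding step (the clip lengths and channel count are illustrative, not taken from any dataset):

import torch
import torch.nn.functional as F

# three mono [C, T] clips of different lengths, as the collater would receive them
wavs = [torch.randn(1, n) for n in (16000, 12000, 8000)]
max_len = max(w.shape[-1] for w in wavs)
# right-pad each clip with zeros up to max_len, then stack into a [B, C, T] batch
batch = torch.stack([F.pad(w, (0, max_len - w.shape[-1])) for w in wavs])
print(batch.shape)  # torch.Size([3, 1, 16000])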
diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/musicgen.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/musicgen.py
deleted file mode 100644
index 1d4b2292eaec5016e208bbdf61ec5c99b40b67da..0000000000000000000000000000000000000000
--- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/musicgen.py
+++ /dev/null
@@ -1,409 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Main model for using MusicGen. This will combine all the required components
-and provide easy access to the generation API.
-"""
-
-import typing as tp
-import warnings
-
-import torch
-
-from .encodec import CompressionModel
-from .lm import LMModel
-from .builders import get_debug_compression_model, get_debug_lm_model
-from .loaders import load_compression_model, load_lm_model
-from ..data.audio_utils import convert_audio
-from ..modules.conditioners import ConditioningAttributes, WavCondition
-from ..utils.autocast import TorchAutocast
-
-
-MelodyList = tp.List[tp.Optional[torch.Tensor]]
-MelodyType = tp.Union[torch.Tensor, MelodyList]
-
-
-# backward compatible names mapping
-_HF_MODEL_CHECKPOINTS_MAP = {
- "small": "GrandaddyShmax/musicgen-small",
- "medium": "GrandaddyShmax/musicgen-medium",
- "large": "GrandaddyShmax/musicgen-large",
- "melody": "GrandaddyShmax/musicgen-melody",
-}
-
-
-class MusicGen:
- """MusicGen main model with convenient generation API.
-
- Args:
- name (str): name of the model.
- compression_model (CompressionModel): Compression model
- used to map audio to invertible discrete representations.
- lm (LMModel): Language model over discrete representations.
- max_duration (float, optional): maximum duration the model can produce;
- if not provided, it is inferred from the training params.
- """
- def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel,
- max_duration: tp.Optional[float] = None):
- self.name = name
- self.compression_model = compression_model
- self.lm = lm
- if max_duration is None:
- if hasattr(lm, 'cfg'):
- max_duration = lm.cfg.dataset.segment_duration # type: ignore
- else:
- raise ValueError("You must provide max_duration when building directly MusicGen")
- assert max_duration is not None
- self.max_duration: float = max_duration
- self.device = next(iter(lm.parameters())).device
- self.generation_params: dict = {}
- self.set_generation_params(duration=15) # 15 seconds by default
- self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None
- if self.device.type == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- else:
- self.autocast = TorchAutocast(
- enabled=True, device_type=self.device.type, dtype=torch.float16)
-
- @property
- def frame_rate(self) -> float:
- """Roughly the number of AR steps per seconds."""
- return self.compression_model.frame_rate
-
- @property
- def sample_rate(self) -> int:
- """Sample rate of the generated audio."""
- return self.compression_model.sample_rate
-
- @property
- def audio_channels(self) -> int:
- """Audio channels of the generated audio."""
- return self.compression_model.channels
-
- @staticmethod
- def get_pretrained(name: str = 'GrandaddyShmax/musicgen-melody', device=None):
- """Return pretrained model, we provide four models:
- - facebook/musicgen-small (300M), text to music,
- # see: https://huggingface.co/facebook/musicgen-small
- - facebook/musicgen-medium (1.5B), text to music,
- # see: https://huggingface.co/facebook/musicgen-medium
- - facebook/musicgen-melody (1.5B) text to music and text+melody to music,
- # see: https://huggingface.co/facebook/musicgen-melody
- - facebook/musicgen-large (3.3B), text to music,
- # see: https://huggingface.co/facebook/musicgen-large
- """
- if device is None:
- if torch.cuda.device_count():
- device = 'cuda'
- else:
- device = 'cpu'
-
- if name == 'debug':
- # used only for unit tests
- compression_model = get_debug_compression_model(device)
- lm = get_debug_lm_model(device)
- return MusicGen(name, compression_model, lm, max_duration=30)
-
- lm = load_lm_model(name, device=device)
- compression_model = load_compression_model(name, device=device)
- if 'self_wav' in lm.condition_provider.conditioners:
- lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True
-
- return MusicGen(name, compression_model, lm)
-
- def set_generation_params(self, use_sampling: bool = True, top_k: int = 250,
- top_p: float = 0.0, temperature: float = 1.0,
- duration: float = 30.0, cfg_coef: float = 3.0,
- two_step_cfg: bool = False, extend_stride: float = 18):
- """Set the generation parameters for MusicGen.
-
- Args:
- use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True.
- top_k (int, optional): top_k used for sampling. Defaults to 250.
- top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0.
- temperature (float, optional): Softmax temperature parameter. Defaults to 1.0.
- duration (float, optional): Duration of the generated waveform. Defaults to 30.0.
- cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0.
- two_step_cfg (bool, optional): If True, performs 2 forward passes for Classifier Free Guidance
- instead of batching the two together. This has some impact on how things
- are padded but seems to make little difference in practice.
- extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much
- should we extend the audio each time. Larger values mean less context is
- preserved, and shorter values require extra computation.
- """
- assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration."
- self.extend_stride = extend_stride
- self.duration = duration
- self.generation_params = {
- 'use_sampling': use_sampling,
- 'temp': temperature,
- 'top_k': top_k,
- 'top_p': top_p,
- 'cfg_coef': cfg_coef,
- 'two_step_cfg': two_step_cfg,
- }
-
- def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None):
- """Override the default progress callback."""
- self._progress_callback = progress_callback
-
- def generate_unconditional(self, num_samples: int, progress: bool = False, return_tokens: bool = False) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples in an unconditional manner.
-
- Args:
- num_samples (int): Number of samples to be generated.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- descriptions: tp.List[tp.Optional[str]] = [None] * num_samples
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- def generate(self, descriptions: tp.List[str], progress: bool = False, return_tokens: bool = False) \
- -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples conditioned on text.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- assert prompt_tokens is None
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, melody_sample_rate: int, progress: bool = False, return_tokens: bool = False) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples conditioned on text and melody.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as
- melody conditioning. Should have shape [B, C, T] with B matching the description length,
- C=1 or 2. It can be [C, T] if there is a single description. It can also be
- a list of [C, T] tensors.
- melody_sample_rate: (int): Sample rate of the melody waveforms.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if isinstance(melody_wavs, torch.Tensor):
- if melody_wavs.dim() == 2:
- melody_wavs = melody_wavs[None]
- if melody_wavs.dim() != 3:
- raise ValueError("Melody wavs should have a shape [B, C, T].")
- melody_wavs = list(melody_wavs)
- else:
- for melody in melody_wavs:
- if melody is not None:
- assert melody.dim() == 2, "One melody in the list has the wrong number of dims."
-
- melody_wavs = [
- convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels)
- if wav is not None else None
- for wav in melody_wavs]
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None,
- melody_wavs=melody_wavs)
- assert prompt_tokens is None
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int,
- descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None,
- progress: bool = False, return_tokens: bool = False) \
- -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]:
- """Generate samples conditioned on audio prompts.
-
- Args:
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- Prompt should be [B, C, T], or [C, T] if only one sample is generated.
- prompt_sample_rate (int): Sampling rate of the given audio waveforms.
- descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if prompt.dim() == 2:
- prompt = prompt[None]
- if prompt.dim() != 3:
- raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).")
- prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels)
- if descriptions is None:
- descriptions = [None] * len(prompt)
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt)
- assert prompt_tokens is not None
- tokens = self._generate_tokens(attributes, prompt_tokens, progress)
- if return_tokens:
- return self.generate_audio(tokens), tokens
- return self.generate_audio(tokens)
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self,
- descriptions: tp.Sequence[tp.Optional[str]],
- prompt: tp.Optional[torch.Tensor],
- melody_wavs: tp.Optional[MelodyList] = None,
- ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]:
- """Prepare model inputs.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- melody_wavs (torch.Tensor, optional): A batch of waveforms
- used as melody conditioning. Defaults to None.
- """
- attributes = [
- ConditioningAttributes(text={'description': description})
- for description in descriptions]
-
- if melody_wavs is None:
- for attr in attributes:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- sample_rate=[self.sample_rate],
- path=[None])
- else:
- if 'self_wav' not in self.lm.condition_provider.conditioners:
- raise RuntimeError("This model doesn't support melody conditioning. "
- "Use the `melody` model.")
- assert len(melody_wavs) == len(descriptions), \
- f"number of melody wavs must match number of descriptions! " \
- f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}"
- for attr, melody in zip(attributes, melody_wavs):
- if melody is None:
- attr.wav['self_wav'] = WavCondition(
- torch.zeros((1, 1, 1), device=self.device),
- torch.tensor([0], device=self.device),
- sample_rate=[self.sample_rate],
- path=[None])
- else:
- attr.wav['self_wav'] = WavCondition(
- melody[None].to(device=self.device),
- torch.tensor([melody.shape[-1]], device=self.device),
- sample_rate=[self.sample_rate],
- path=[None],
- )
-
- if prompt is not None:
- if descriptions is not None:
- assert len(descriptions) == len(prompt), "Prompt and number of descriptions don't match"
- prompt = prompt.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt)
- assert scale is None
- else:
- prompt_tokens = None
- return attributes, prompt_tokens
-
- def _generate_tokens(self, attributes: tp.List[ConditioningAttributes],
- prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor:
- """Generate discrete audio tokens given audio prompt and/or conditions.
-
- Args:
- attributes (list of ConditioningAttributes): Conditions used for generation (text/melody).
- prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- Returns:
- torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params.
- """
- i = 0
- prompt_list = attributes[0].text['description']
- total_gen_len = int(self.duration * self.frame_rate)
- max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate)
- current_gen_offset: int = 0
-
- def _progress_callback(generated_tokens: int, tokens_to_generate: int):
- generated_tokens += current_gen_offset
- if current_gen_offset > 0:
- generated_tokens += (self.max_duration - self.extend_stride) * self.frame_rate
- if self._progress_callback is not None:
- # Note that total_gen_len might be quite wrong depending on the
- # codebook pattern used, but with delay it is almost accurate.
- self._progress_callback(generated_tokens, total_gen_len)
- else:
- print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r')
-
- if prompt_tokens is not None:
- assert max_prompt_len >= prompt_tokens.shape[-1], \
- "Prompt is longer than audio to generate"
-
- callback = None
- if progress:
- callback = _progress_callback
-
- if self.duration <= self.max_duration:
- # generate by sampling from LM, simple case.
- with self.autocast:
- attributes[0].text['description'] = prompt_list[0]
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=total_gen_len, **self.generation_params)
-
- else:
- # now this gets a bit messier, we need to handle prompts,
- # melody conditioning etc.
- ref_wavs = [attr.wav['self_wav'] for attr in attributes]
- all_tokens = []
- if prompt_tokens is None:
- prompt_length = 0
- else:
- all_tokens.append(prompt_tokens)
- prompt_length = prompt_tokens.shape[-1]
-
- stride_tokens = int(self.frame_rate * self.extend_stride)
-
- while current_gen_offset + prompt_length < total_gen_len:
- time_offset = current_gen_offset / self.frame_rate
- chunk_duration = min(self.duration - time_offset, self.max_duration)
- max_gen_len = int(chunk_duration * self.frame_rate)
- for attr, ref_wav in zip(attributes, ref_wavs):
- wav_length = ref_wav.length.item()
- if wav_length == 0:
- continue
- # We will extend the wav periodically if it is not long enough.
- # We have to do it here rather than in conditioners.py as otherwise
- # we wouldn't have the full wav.
- initial_position = int(time_offset * self.sample_rate)
- wav_target_length = int(self.max_duration * self.sample_rate)
- positions = torch.arange(initial_position,
- initial_position + wav_target_length, device=self.device)
- attr.wav['self_wav'] = WavCondition(
- ref_wav[0][..., positions % wav_length],
- torch.full_like(ref_wav[1], wav_target_length),
- [self.sample_rate] * ref_wav[0].size(0),
- [None], [0.])
- with self.autocast:
- if i >= len(prompt_list):
- i = len(prompt_list) - 1
- attributes[0].text['description'] = prompt_list[i]
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=max_gen_len, **self.generation_params)
- i = i + 1
- if prompt_tokens is None:
- all_tokens.append(gen_tokens)
- else:
- all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:])
- prompt_tokens = gen_tokens[:, :, stride_tokens:]
- prompt_length = prompt_tokens.shape[-1]
- current_gen_offset += stride_tokens
-
- gen_tokens = torch.cat(all_tokens, dim=-1)
- return gen_tokens
-
- def generate_audio(self, gen_tokens: torch.Tensor):
- """Generate Audio from tokens"""
- assert gen_tokens.dim() == 3
- with torch.no_grad():
- gen_audio = self.compression_model.decode(gen_tokens, None)
- return gen_audio
-
- def to(self, device: str):
- self.compression_model.to(device)
- self.lm.to(device)
- return self
\ No newline at end of file
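A small sketch (not the Space's code) of the token bookkeeping that drives the extended-generation loop in `_generate_tokens` above; `frame_rate`, `duration` and `extend_stride` are illustrative assumptions:

frame_rate = 50                      # autoregressive steps (tokens) per second
duration, max_duration = 45.0, 30.0  # requested duration vs. single-pass maximum, in seconds
extend_stride = 18.0                 # seconds advanced per extension step

total_gen_len = int(duration * frame_rate)       # 2250 tokens overall
stride_tokens = int(frame_rate * extend_stride)  # 900 tokens consumed per step

current_gen_offset, prompt_length = 0, 0
while current_gen_offset + prompt_length < total_gen_len:
    time_offset = current_gen_offset / frame_rate
    chunk_duration = min(duration - time_offset, max_duration)
    max_gen_len = int(chunk_duration * frame_rate)
    print(f"generate {chunk_duration:.0f}s chunk starting at {time_offset:.0f}s")
    prompt_length = max_gen_len - stride_tokens   # tokens kept as context for the next step
    current_gen_offset += stride_tokens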
diff --git a/spaces/QinQiuFox/get_ppt/index.html b/spaces/QinQiuFox/get_ppt/index.html
deleted file mode 100644
index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000
--- a/spaces/QinQiuFox/get_ppt/index.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
-
-
-
- My static Space
-
-
-
-
-
Welcome to your static Space!
-
You can modify this app directly by editing index.html in the Files and versions tab.
-
- Also don't forget to check the
- Spaces documentation.
-
-
-
-
diff --git a/spaces/Realcat/image-matching-webui/hloc/matchers/lightglue.py b/spaces/Realcat/image-matching-webui/hloc/matchers/lightglue.py
deleted file mode 100644
index f4983d0bc5513f60c14a99e84d08617d751c0115..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/hloc/matchers/lightglue.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import sys
-from pathlib import Path
-from ..utils.base_model import BaseModel
-from .. import logger
-
-lightglue_path = Path(__file__).parent / "../../third_party/LightGlue"
-sys.path.append(str(lightglue_path))
-from lightglue import LightGlue as LG
-
-
-class LightGlue(BaseModel):
- default_conf = {
- "match_threshold": 0.2,
- "filter_threshold": 0.2,
- "width_confidence": 0.99, # for point pruning
- "depth_confidence": 0.95, # for early stopping,
- "features": "superpoint",
- "model_name": "superpoint_lightglue.pth",
- "flash": True, # enable FlashAttention if available.
- "mp": False, # enable mixed precision
- }
- required_inputs = [
- "image0",
- "keypoints0",
- "scores0",
- "descriptors0",
- "image1",
- "keypoints1",
- "scores1",
- "descriptors1",
- ]
-
- def _init(self, conf):
- weight_path = lightglue_path / "weights" / conf["model_name"]
- conf["weights"] = str(weight_path)
- conf["filter_threshold"] = conf["match_threshold"]
- self.net = LG(**conf)
- logger.info(f"Load lightglue model done.")
-
- def _forward(self, data):
- input = {}
- input["image0"] = {
- "image": data["image0"],
- "keypoints": data["keypoints0"],
- "descriptors": data["descriptors0"].permute(0, 2, 1),
- }
- input["image1"] = {
- "image": data["image1"],
- "keypoints": data["keypoints1"],
- "descriptors": data["descriptors1"].permute(0, 2, 1),
- }
- return self.net(input)
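`_forward` above repacks hloc-style inputs into the nested dict LightGlue expects. A shape-only sketch with dummy tensors (batch size, keypoint count and descriptor dimension are illustrative assumptions for SuperPoint features):

import torch

data = {
    "image0": torch.zeros(1, 1, 480, 640),
    "keypoints0": torch.zeros(1, 512, 2),
    "scores0": torch.zeros(1, 512),
    "descriptors0": torch.zeros(1, 256, 512),  # hloc layout: [B, D, N]
    # ... the same keys with suffix 1 for the second image
}
# inside _forward, descriptors are permuted to [B, N, D] before being handed to LightGlue
desc0 = data["descriptors0"].permute(0, 2, 1)
print(desc0.shape)  # torch.Size([1, 512, 256])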
diff --git a/spaces/Rfilippelli/Deci-DeciCoder-1b/README.md b/spaces/Rfilippelli/Deci-DeciCoder-1b/README.md
deleted file mode 100644
index fb32114d765ff4bf159bfecff3da2f5270d70841..0000000000000000000000000000000000000000
--- a/spaces/Rfilippelli/Deci-DeciCoder-1b/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Deci DeciCoder 1b
-emoji: 🏢
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.40.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py
deleted file mode 100644
index edb4c174c51e34c103737ba39bfc48bf831e561d..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='DNLHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- dropout_ratio=0.1,
- reduction=2,
- use_scale=True,
- mode='embedded_gaussian',
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
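A hedged sketch of how a base config like the one above is typically loaded and tweaked with mmcv; the file path is illustrative, and the vendored copy in this Space may expose `Config` under `annotator.uniformer.mmcv` rather than a top-level `mmcv` install:

from mmcv import Config  # assumption: a standalone mmcv (<2.0) install

cfg = Config.fromfile('configs/_base_/models/dnl_r50-d8.py')  # illustrative path
cfg.model.decode_head.num_classes = 21       # e.g. retarget both heads to a 21-class dataset
cfg.model.auxiliary_head.num_classes = 21
print(cfg.model.decode_head.type)            # 'DNLHead'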
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/bbox_heads/dii_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/bbox_heads/dii_head.py
deleted file mode 100644
index 8c970a78184672aaaa95edcdaecec03a26604390..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/bbox_heads/dii_head.py
+++ /dev/null
@@ -1,415 +0,0 @@
-import torch
-import torch.nn as nn
-from mmcv.cnn import (bias_init_with_prob, build_activation_layer,
- build_norm_layer)
-from mmcv.runner import auto_fp16, force_fp32
-
-from mmdet.core import multi_apply
-from mmdet.models.builder import HEADS, build_loss
-from mmdet.models.dense_heads.atss_head import reduce_mean
-from mmdet.models.losses import accuracy
-from mmdet.models.utils import FFN, MultiheadAttention, build_transformer
-from .bbox_head import BBoxHead
-
-
-@HEADS.register_module()
-class DIIHead(BBoxHead):
- r"""Dynamic Instance Interactive Head for `Sparse R-CNN: End-to-End Object
- Detection with Learnable Proposals <https://arxiv.org/abs/2011.12450>`_
-
- Args:
- num_classes (int): Number of class in dataset.
- Defaults to 80.
- num_ffn_fcs (int): The number of fully-connected
- layers in FFNs. Defaults to 2.
- num_heads (int): The number of attention heads in
- MultiheadAttention. Defaults to 8.
- num_cls_fcs (int): The number of fully-connected
- layers in classification subnet. Defaults to 1.
- num_reg_fcs (int): The number of fully-connected
- layers in regression subnet. Defaults to 3.
- feedforward_channels (int): The hidden dimension
- of FFNs. Defaults to 2048
- in_channels (int): Hidden_channels of MultiheadAttention.
- Defaults to 256.
- dropout (float): Probability of dropping a channel.
- Defaults to 0.0.
- ffn_act_cfg (dict): The activation config for FFNs.
- dynamic_conv_cfg (dict): The convolution config
- for DynamicConv.
- loss_iou (dict): The config for iou or giou loss.
-
- """
-
- def __init__(self,
- num_classes=80,
- num_ffn_fcs=2,
- num_heads=8,
- num_cls_fcs=1,
- num_reg_fcs=3,
- feedforward_channels=2048,
- in_channels=256,
- dropout=0.0,
- ffn_act_cfg=dict(type='ReLU', inplace=True),
- dynamic_conv_cfg=dict(
- type='DynamicConv',
- in_channels=256,
- feat_channels=64,
- out_channels=256,
- input_feat_shape=7,
- act_cfg=dict(type='ReLU', inplace=True),
- norm_cfg=dict(type='LN')),
- loss_iou=dict(type='GIoULoss', loss_weight=2.0),
- **kwargs):
- super(DIIHead, self).__init__(
- num_classes=num_classes,
- reg_decoded_bbox=True,
- reg_class_agnostic=True,
- **kwargs)
- self.loss_iou = build_loss(loss_iou)
- self.in_channels = in_channels
- self.fp16_enabled = False
- self.attention = MultiheadAttention(in_channels, num_heads, dropout)
- self.attention_norm = build_norm_layer(dict(type='LN'), in_channels)[1]
-
- self.instance_interactive_conv = build_transformer(dynamic_conv_cfg)
- self.instance_interactive_conv_dropout = nn.Dropout(dropout)
- self.instance_interactive_conv_norm = build_norm_layer(
- dict(type='LN'), in_channels)[1]
-
- self.ffn = FFN(
- in_channels,
- feedforward_channels,
- num_ffn_fcs,
- act_cfg=ffn_act_cfg,
- dropout=dropout)
- self.ffn_norm = build_norm_layer(dict(type='LN'), in_channels)[1]
-
- self.cls_fcs = nn.ModuleList()
- for _ in range(num_cls_fcs):
- self.cls_fcs.append(
- nn.Linear(in_channels, in_channels, bias=False))
- self.cls_fcs.append(
- build_norm_layer(dict(type='LN'), in_channels)[1])
- self.cls_fcs.append(
- build_activation_layer(dict(type='ReLU', inplace=True)))
-
- # overload the self.fc_cls in BBoxHead
- if self.loss_cls.use_sigmoid:
- self.fc_cls = nn.Linear(in_channels, self.num_classes)
- else:
- self.fc_cls = nn.Linear(in_channels, self.num_classes + 1)
-
- self.reg_fcs = nn.ModuleList()
- for _ in range(num_reg_fcs):
- self.reg_fcs.append(
- nn.Linear(in_channels, in_channels, bias=False))
- self.reg_fcs.append(
- build_norm_layer(dict(type='LN'), in_channels)[1])
- self.reg_fcs.append(
- build_activation_layer(dict(type='ReLU', inplace=True)))
- # overload the self.fc_reg in BBoxHead
- self.fc_reg = nn.Linear(in_channels, 4)
-
- assert self.reg_class_agnostic, 'DIIHead only ' \
- 'supports `reg_class_agnostic=True` '
- assert self.reg_decoded_bbox, 'DIIHead only ' \
- 'supports `reg_decoded_bbox=True`'
-
- def init_weights(self):
- """Use xavier initialization for all weight parameter and set
- classification head bias as a specific value when use focal loss."""
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- else:
- # adopt the default initialization for
- # the weight and bias of the layer norm
- pass
- if self.loss_cls.use_sigmoid:
- bias_init = bias_init_with_prob(0.01)
- nn.init.constant_(self.fc_cls.bias, bias_init)
-
- @auto_fp16()
- def forward(self, roi_feat, proposal_feat):
- """Forward function of Dynamic Instance Interactive Head.
-
- Args:
- roi_feat (Tensor): Roi-pooling features with shape
- (batch_size*num_proposals, feature_dimensions,
- pooling_h , pooling_w).
- proposal_feat (Tensor): Intermediate feature obtained from
- the DIIHead in the last stage, has shape
- (batch_size, num_proposals, feature_dimensions).
-
- Returns:
- tuple[Tensor]: Usually a tuple of classification scores,
- bbox predictions and an intermediate feature.
-
- - cls_scores (Tensor): Classification scores for
- all proposals, has shape
- (batch_size, num_proposals, num_classes).
- - bbox_preds (Tensor): Box energies / deltas for
- all proposals, has shape
- (batch_size, num_proposals, 4).
- - obj_feat (Tensor): Object feature before classification
- and regression subnet, has shape
- (batch_size, num_proposal, feature_dimensions).
- """
- N, num_proposals = proposal_feat.shape[:2]
-
- # Self attention
- proposal_feat = proposal_feat.permute(1, 0, 2)
- proposal_feat = self.attention_norm(self.attention(proposal_feat))
-
- # instance interactive
- proposal_feat = proposal_feat.permute(1, 0,
- 2).reshape(-1, self.in_channels)
- proposal_feat_iic = self.instance_interactive_conv(
- proposal_feat, roi_feat)
- proposal_feat = proposal_feat + self.instance_interactive_conv_dropout(
- proposal_feat_iic)
- obj_feat = self.instance_interactive_conv_norm(proposal_feat)
-
- # FFN
- obj_feat = self.ffn_norm(self.ffn(obj_feat))
-
- cls_feat = obj_feat
- reg_feat = obj_feat
-
- for cls_layer in self.cls_fcs:
- cls_feat = cls_layer(cls_feat)
- for reg_layer in self.reg_fcs:
- reg_feat = reg_layer(reg_feat)
-
- cls_score = self.fc_cls(cls_feat).view(N, num_proposals, -1)
- bbox_delta = self.fc_reg(reg_feat).view(N, num_proposals, -1)
-
- return cls_score, bbox_delta, obj_feat.view(N, num_proposals, -1)
-
- @force_fp32(apply_to=('cls_score', 'bbox_pred'))
- def loss(self,
- cls_score,
- bbox_pred,
- labels,
- label_weights,
- bbox_targets,
- bbox_weights,
- imgs_whwh=None,
- reduction_override=None,
- **kwargs):
- """"Loss function of DIIHead, get loss of all images.
-
- Args:
- cls_score (Tensor): Classification prediction
- results for all classes, has shape
- (batch_size * num_proposals_single_image, num_classes).
- bbox_pred (Tensor): Regression prediction results,
- has shape
- (batch_size * num_proposals_single_image, 4), the last
- dimension 4 represents [tl_x, tl_y, br_x, br_y].
- labels (Tensor): Label of each proposal, has shape
- (batch_size * num_proposals_single_image,).
- label_weights (Tensor): Classification loss
- weight of each proposal, has shape
- (batch_size * num_proposals_single_image,).
- bbox_targets (Tensor): Regression targets of each
- proposal, has shape
- (batch_size * num_proposals_single_image, 4),
- the last dimension 4 represents
- [tl_x, tl_y, br_x, br_y].
- bbox_weights (Tensor): Regression loss weight of each
- proposal's coordinates, has shape
- (batch_size * num_proposals_single_image, 4).
- imgs_whwh (Tensor): Tensor with
- shape (batch_size, num_proposals, 4), the last
- dimension means
- [img_width, img_height, img_width, img_height].
- reduction_override (str, optional): The reduction
- method used to override the original reduction
- method of the loss. Options are "none",
- "mean" and "sum". Defaults to None.
-
- Returns:
- dict[str, Tensor]: Dictionary of loss components
- """
- losses = dict()
- bg_class_ind = self.num_classes
- # note: in Sparse R-CNN, num_gt == num_pos
- pos_inds = (labels >= 0) & (labels < bg_class_ind)
- num_pos = pos_inds.sum().float()
- avg_factor = reduce_mean(num_pos)
- if cls_score is not None:
- if cls_score.numel() > 0:
- losses['loss_cls'] = self.loss_cls(
- cls_score,
- labels,
- label_weights,
- avg_factor=avg_factor,
- reduction_override=reduction_override)
- losses['pos_acc'] = accuracy(cls_score[pos_inds],
- labels[pos_inds])
- if bbox_pred is not None:
- # 0~self.num_classes-1 are FG, self.num_classes is BG
- # do not perform bounding box regression for BG anymore.
- if pos_inds.any():
- pos_bbox_pred = bbox_pred.reshape(bbox_pred.size(0),
- 4)[pos_inds.type(torch.bool)]
- imgs_whwh = imgs_whwh.reshape(bbox_pred.size(0),
- 4)[pos_inds.type(torch.bool)]
- losses['loss_bbox'] = self.loss_bbox(
- pos_bbox_pred / imgs_whwh,
- bbox_targets[pos_inds.type(torch.bool)] / imgs_whwh,
- bbox_weights[pos_inds.type(torch.bool)],
- avg_factor=avg_factor)
- losses['loss_iou'] = self.loss_iou(
- pos_bbox_pred,
- bbox_targets[pos_inds.type(torch.bool)],
- bbox_weights[pos_inds.type(torch.bool)],
- avg_factor=avg_factor)
- else:
- losses['loss_bbox'] = bbox_pred.sum() * 0
- losses['loss_iou'] = bbox_pred.sum() * 0
- return losses
-
- def _get_target_single(self, pos_inds, neg_inds, pos_bboxes, neg_bboxes,
- pos_gt_bboxes, pos_gt_labels, cfg):
- """Calculate the ground truth for proposals in the single image
- according to the sampling results.
-
- Almost the same as the implementation in `bbox_head`,
- we add pos_inds and neg_inds to select positive and
- negative samples instead of selecting the first num_pos
- as positive samples.
-
- Args:
- pos_inds (Tensor): Has length equal to the number of
- positive samples; contains the indices of all
- positive samples in the original proposal set.
- neg_inds (Tensor): Has length equal to the number of
- negative samples; contains the indices of all
- negative samples in the original proposal set.
- pos_bboxes (Tensor): Contains all the positive boxes,
- has shape (num_pos, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- neg_bboxes (Tensor): Contains all the negative boxes,
- has shape (num_neg, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- pos_gt_bboxes (Tensor): Contains all the gt_boxes,
- has shape (num_gt, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- pos_gt_labels (Tensor): Contains all the gt_labels,
- has shape (num_gt).
- cfg (obj:`ConfigDict`): `train_cfg` of R-CNN.
-
- Returns:
- Tuple[Tensor]: Ground truth for proposals in a single image.
- Containing the following Tensors:
-
- - labels(Tensor): Gt_labels for all proposals, has
- shape (num_proposals,).
- - label_weights(Tensor): Labels_weights for all proposals, has
- shape (num_proposals,).
- - bbox_targets(Tensor):Regression target for all proposals, has
- shape (num_proposals, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- - bbox_weights(Tensor):Regression weights for all proposals,
- has shape (num_proposals, 4).
- """
- num_pos = pos_bboxes.size(0)
- num_neg = neg_bboxes.size(0)
- num_samples = num_pos + num_neg
-
- # original implementation uses new_zeros since BG are set to be 0
- # now use empty & fill because BG cat_id = num_classes,
- # FG cat_id = [0, num_classes-1]
- labels = pos_bboxes.new_full((num_samples, ),
- self.num_classes,
- dtype=torch.long)
- label_weights = pos_bboxes.new_zeros(num_samples)
- bbox_targets = pos_bboxes.new_zeros(num_samples, 4)
- bbox_weights = pos_bboxes.new_zeros(num_samples, 4)
- if num_pos > 0:
- labels[pos_inds] = pos_gt_labels
- pos_weight = 1.0 if cfg.pos_weight <= 0 else cfg.pos_weight
- label_weights[pos_inds] = pos_weight
- if not self.reg_decoded_bbox:
- pos_bbox_targets = self.bbox_coder.encode(
- pos_bboxes, pos_gt_bboxes)
- else:
- pos_bbox_targets = pos_gt_bboxes
- bbox_targets[pos_inds, :] = pos_bbox_targets
- bbox_weights[pos_inds, :] = 1
- if num_neg > 0:
- label_weights[neg_inds] = 1.0
-
- return labels, label_weights, bbox_targets, bbox_weights
-
- def get_targets(self,
- sampling_results,
- gt_bboxes,
- gt_labels,
- rcnn_train_cfg,
- concat=True):
- """Calculate the ground truth for all samples in a batch according to
- the sampling_results.
-
- Almost the same as the implementation in bbox_head, except that we pass
- additional parameters pos_inds_list and neg_inds_list to the
- `_get_target_single` function.
-
- Args:
- sampling_results (List[obj:SamplingResults]): Assign results of
- all images in a batch after sampling.
- gt_bboxes (list[Tensor]): Gt_bboxes of all images in a batch,
- each tensor has shape (num_gt, 4), the last dimension 4
- represents [tl_x, tl_y, br_x, br_y].
- gt_labels (list[Tensor]): Gt_labels of all images in a batch,
- each tensor has shape (num_gt,).
- rcnn_train_cfg (obj:`ConfigDict`): `train_cfg` of RCNN.
- concat (bool): Whether to concatenate the results of all
- the images in a single batch.
-
- Returns:
- Tuple[Tensor]: Ground truth for proposals in a single image.
- Containing the following list of Tensors:
-
- - labels (list[Tensor],Tensor): Gt_labels for all
- proposals in a batch, each tensor in list has
- shape (num_proposals,) when `concat=False`, otherwise just
- a single tensor has shape (num_all_proposals,).
- - label_weights (list[Tensor]): Labels_weights for
- all proposals in a batch, each tensor in list has shape
- (num_proposals,) when `concat=False`, otherwise just a
- single tensor has shape (num_all_proposals,).
- - bbox_targets (list[Tensor],Tensor): Regression target
- for all proposals in a batch, each tensor in list has
- shape (num_proposals, 4) when `concat=False`, otherwise
- just a single tensor has shape (num_all_proposals, 4),
- the last dimension 4 represents [tl_x, tl_y, br_x, br_y].
- - bbox_weights (list[tensor],Tensor): Regression weights for
- all proposals in a batch, each tensor in list has shape
- (num_proposals, 4) when `concat=False`, otherwise just a
- single tensor has shape (num_all_proposals, 4).
- """
- pos_inds_list = [res.pos_inds for res in sampling_results]
- neg_inds_list = [res.neg_inds for res in sampling_results]
- pos_bboxes_list = [res.pos_bboxes for res in sampling_results]
- neg_bboxes_list = [res.neg_bboxes for res in sampling_results]
- pos_gt_bboxes_list = [res.pos_gt_bboxes for res in sampling_results]
- pos_gt_labels_list = [res.pos_gt_labels for res in sampling_results]
- labels, label_weights, bbox_targets, bbox_weights = multi_apply(
- self._get_target_single,
- pos_inds_list,
- neg_inds_list,
- pos_bboxes_list,
- neg_bboxes_list,
- pos_gt_bboxes_list,
- pos_gt_labels_list,
- cfg=rcnn_train_cfg)
- if concat:
- labels = torch.cat(labels, 0)
- label_weights = torch.cat(label_weights, 0)
- bbox_targets = torch.cat(bbox_targets, 0)
- bbox_weights = torch.cat(bbox_weights, 0)
- return labels, label_weights, bbox_targets, bbox_weights
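A tiny tensor-shape sketch of the permute/reshape bookkeeping used in `forward` above (it does not construct DIIHead itself, which needs the full mmdet registry); the batch and proposal sizes are illustrative:

import torch

N, num_proposals, C = 2, 100, 256
proposal_feat = torch.randn(N, num_proposals, C)

# self-attention runs over the proposal axis, so forward() first permutes to (num_proposals, N, C)
attn_in = proposal_feat.permute(1, 0, 2)
# the dynamic instance interaction then flattens proposals into the batch dimension
flat = attn_in.permute(1, 0, 2).reshape(-1, C)

print(attn_in.shape)  # torch.Size([100, 2, 256])
print(flat.shape)     # torch.Size([200, 256])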
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/maskiou_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/maskiou_head.py
deleted file mode 100644
index 39bcd6a7dbdb089cd19cef811038e0b6a80ab89a..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/maskiou_head.py
+++ /dev/null
@@ -1,186 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import Conv2d, Linear, MaxPool2d, kaiming_init, normal_init
-from mmcv.runner import force_fp32
-from torch.nn.modules.utils import _pair
-
-from mmdet.models.builder import HEADS, build_loss
-
-
-@HEADS.register_module()
-class MaskIoUHead(nn.Module):
- """Mask IoU Head.
-
- This head predicts the IoU of predicted masks and corresponding gt masks.
- """
-
- def __init__(self,
- num_convs=4,
- num_fcs=2,
- roi_feat_size=14,
- in_channels=256,
- conv_out_channels=256,
- fc_out_channels=1024,
- num_classes=80,
- loss_iou=dict(type='MSELoss', loss_weight=0.5)):
- super(MaskIoUHead, self).__init__()
- self.in_channels = in_channels
- self.conv_out_channels = conv_out_channels
- self.fc_out_channels = fc_out_channels
- self.num_classes = num_classes
- self.fp16_enabled = False
-
- self.convs = nn.ModuleList()
- for i in range(num_convs):
- if i == 0:
- # concatenation of mask feature and mask prediction
- in_channels = self.in_channels + 1
- else:
- in_channels = self.conv_out_channels
- stride = 2 if i == num_convs - 1 else 1
- self.convs.append(
- Conv2d(
- in_channels,
- self.conv_out_channels,
- 3,
- stride=stride,
- padding=1))
-
- roi_feat_size = _pair(roi_feat_size)
- pooled_area = (roi_feat_size[0] // 2) * (roi_feat_size[1] // 2)
- self.fcs = nn.ModuleList()
- for i in range(num_fcs):
- in_channels = (
- self.conv_out_channels *
- pooled_area if i == 0 else self.fc_out_channels)
- self.fcs.append(Linear(in_channels, self.fc_out_channels))
-
- self.fc_mask_iou = Linear(self.fc_out_channels, self.num_classes)
- self.relu = nn.ReLU()
- self.max_pool = MaxPool2d(2, 2)
- self.loss_iou = build_loss(loss_iou)
-
- def init_weights(self):
- for conv in self.convs:
- kaiming_init(conv)
- for fc in self.fcs:
- kaiming_init(
- fc,
- a=1,
- mode='fan_in',
- nonlinearity='leaky_relu',
- distribution='uniform')
- normal_init(self.fc_mask_iou, std=0.01)
-
- def forward(self, mask_feat, mask_pred):
- mask_pred = mask_pred.sigmoid()
- mask_pred_pooled = self.max_pool(mask_pred.unsqueeze(1))
-
- x = torch.cat((mask_feat, mask_pred_pooled), 1)
-
- for conv in self.convs:
- x = self.relu(conv(x))
- x = x.flatten(1)
- for fc in self.fcs:
- x = self.relu(fc(x))
- mask_iou = self.fc_mask_iou(x)
- return mask_iou
-
- @force_fp32(apply_to=('mask_iou_pred', ))
- def loss(self, mask_iou_pred, mask_iou_targets):
- pos_inds = mask_iou_targets > 0
- if pos_inds.sum() > 0:
- loss_mask_iou = self.loss_iou(mask_iou_pred[pos_inds],
- mask_iou_targets[pos_inds])
- else:
- loss_mask_iou = mask_iou_pred.sum() * 0
- return dict(loss_mask_iou=loss_mask_iou)
-
- @force_fp32(apply_to=('mask_pred', ))
- def get_targets(self, sampling_results, gt_masks, mask_pred, mask_targets,
- rcnn_train_cfg):
- """Compute target of mask IoU.
-
- Mask IoU target is the IoU between the predicted mask (inside a bbox) and
- the corresponding gt mask (the whole instance).
- The intersection area is computed inside the bbox, and the gt mask area
- is computed in two steps: first we compute the gt area inside the
- bbox, then divide it by the ratio of the gt area inside the bbox to
- the gt area of the whole instance.
-
- Args:
- sampling_results (list[:obj:`SamplingResult`]): sampling results.
- gt_masks (BitmapMask | PolygonMask): Gt masks (the whole instance)
- of each image, with the same shape of the input image.
- mask_pred (Tensor): Predicted masks of each positive proposal,
- shape (num_pos, h, w).
- mask_targets (Tensor): Gt mask of each positive proposal,
- binary map of the shape (num_pos, h, w).
- rcnn_train_cfg (dict): Training config for R-CNN part.
-
- Returns:
- Tensor: mask iou target (length == num positive).
- """
- pos_proposals = [res.pos_bboxes for res in sampling_results]
- pos_assigned_gt_inds = [
- res.pos_assigned_gt_inds for res in sampling_results
- ]
-
- # compute the area ratio of gt areas inside the proposals and
- # the whole instance
- area_ratios = map(self._get_area_ratio, pos_proposals,
- pos_assigned_gt_inds, gt_masks)
- area_ratios = torch.cat(list(area_ratios))
- assert mask_targets.size(0) == area_ratios.size(0)
-
- mask_pred = (mask_pred > rcnn_train_cfg.mask_thr_binary).float()
- mask_pred_areas = mask_pred.sum((-1, -2))
-
- # mask_pred and mask_targets are binary maps
- overlap_areas = (mask_pred * mask_targets).sum((-1, -2))
-
- # compute the mask area of the whole instance
- gt_full_areas = mask_targets.sum((-1, -2)) / (area_ratios + 1e-7)
-
- mask_iou_targets = overlap_areas / (
- mask_pred_areas + gt_full_areas - overlap_areas)
- return mask_iou_targets
-
- def _get_area_ratio(self, pos_proposals, pos_assigned_gt_inds, gt_masks):
- """Compute area ratio of the gt mask inside the proposal and the gt
- mask of the corresponding instance."""
- num_pos = pos_proposals.size(0)
- if num_pos > 0:
- area_ratios = []
- proposals_np = pos_proposals.cpu().numpy()
- pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy()
- # compute mask areas of gt instances (batch processing for speedup)
- gt_instance_mask_area = gt_masks.areas
- for i in range(num_pos):
- gt_mask = gt_masks[pos_assigned_gt_inds[i]]
-
- # crop the gt mask inside the proposal
- bbox = proposals_np[i, :].astype(np.int32)
- gt_mask_in_proposal = gt_mask.crop(bbox)
-
- ratio = gt_mask_in_proposal.areas[0] / (
- gt_instance_mask_area[pos_assigned_gt_inds[i]] + 1e-7)
- area_ratios.append(ratio)
- area_ratios = torch.from_numpy(np.stack(area_ratios)).float().to(
- pos_proposals.device)
- else:
- area_ratios = pos_proposals.new_zeros((0, ))
- return area_ratios
-
- @force_fp32(apply_to=('mask_iou_pred', ))
- def get_mask_scores(self, mask_iou_pred, det_bboxes, det_labels):
- """Get the mask scores.
-
- mask_score = bbox_score * mask_iou
- """
- inds = range(det_labels.size(0))
- mask_scores = mask_iou_pred[inds, det_labels] * det_bboxes[inds, -1]
- mask_scores = mask_scores.cpu().numpy()
- det_labels = det_labels.cpu().numpy()
- return [mask_scores[det_labels == i] for i in range(self.num_classes)]
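A small numeric sketch of the mask IoU target computed in `get_targets` above: the full-instance gt area is recovered from the in-box gt area and the area ratio, then a standard IoU is formed. All numbers are illustrative:

mask_pred_area = 900.0   # thresholded predicted mask area inside the box
gt_area_in_box = 800.0   # gt mask area inside the box (mask_targets.sum())
area_ratio = 0.8         # in-box gt area / whole-instance gt area
overlap = 700.0          # (mask_pred * mask_targets).sum()

gt_full_area = gt_area_in_box / (area_ratio + 1e-7)  # ~1000: whole-instance gt area
mask_iou_target = overlap / (mask_pred_area + gt_full_area - overlap)
print(round(mask_iou_target, 3))  # ~0.583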
diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/model/wavenet.py b/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/model/wavenet.py
deleted file mode 100644
index 481c02d9cc776eba40e578e1b2549bf352357be8..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/GenerSpeech/modules/GenerSpeech/model/wavenet.py
+++ /dev/null
@@ -1,87 +0,0 @@
-from modules.commons.common_layers import *
-
-
-# @torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0,
- p_dropout=0, share_cond_layers=False):
- super(WN, self).__init__()
- assert (kernel_size % 2 == 1)
- assert (hidden_channels % 2 == 0)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
- self.share_cond_layers = share_cond_layers
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0 and not share_cond_layers:
- cond_layer = torch.nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask=None, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None and not self.share_cond_layers:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- x_in = self.drop(x_in)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset:cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- x = (x + res_skip_acts[:, :self.hidden_channels, :]) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels:, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- def remove_weight_norm(m):
- try:
- nn.utils.remove_weight_norm(m)
- except ValueError: # this module didn't have weight norm
- return
-
- self.apply(remove_weight_norm)
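`fused_add_tanh_sigmoid_multiply` above is the WaveNet gated activation: the conv output is split channel-wise into a filter half and a gate half, then combined as tanh(filter) * sigmoid(gate). A standalone sketch with dummy shapes (hidden size and lengths are illustrative):

import torch

hidden = 4
x_in = torch.randn(2, 2 * hidden, 16)  # dilated-conv output: 2*hidden channels (filter + gate)
g_l = torch.zeros_like(x_in)           # conditioning slice (zeros when there is no global cond)

in_act = x_in + g_l
out = torch.tanh(in_act[:, :hidden, :]) * torch.sigmoid(in_act[:, hidden:, :])
print(out.shape)  # torch.Size([2, 4, 16]): gated activations with `hidden` channels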
diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/__init__.py b/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/RuthBebe/sentiment_analysis/README.md b/spaces/RuthBebe/sentiment_analysis/README.md
deleted file mode 100644
index 97dd6d5c85e7db29ab892ee7809b0f919008057b..0000000000000000000000000000000000000000
--- a/spaces/RuthBebe/sentiment_analysis/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sentiment Analysis
-emoji: 🌍
-colorFrom: purple
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian//copy_of_wavlm_tun.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian//copy_of_wavlm_tun.py
deleted file mode 100644
index 2efecd1bc0624df2ac997d5ee3c50e7301380b61..0000000000000000000000000000000000000000
--- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian//copy_of_wavlm_tun.py
+++ /dev/null
@@ -1,761 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import os
-import sys
-import torch
-import logging
-import speechbrain as sb
-from speechbrain.utils.distributed import run_on_main
-from hyperpyyaml import load_hyperpyyaml
-from pathlib import Path
-import torchaudio.transforms as T
-import torchaudio
-import numpy as np
-import kenlm
-from pyctcdecode import build_ctcdecoder
-import re
-
-# Commented out IPython magic to ensure Python compatibility.
-# %cd /content/drive/MyDrive/tunisian_corpora/tunisian_without_wavlm
-
-hparams_file, run_opts, overrides = sb.parse_arguments(["semi_supervised_test_tunisian.yaml"])
-
-# If distributed_launch=True then
-# create ddp_group with the right communication protocol
-sb.utils.distributed.ddp_init_group(run_opts)
-
-with open(hparams_file) as fin:
- hparams = load_hyperpyyaml(fin, overrides)
-
-# Create experiment directory
-sb.create_experiment_directory(
- experiment_directory=hparams["output_folder"],
- hyperparams_to_save=hparams_file,
- overrides=overrides,
-)
-"""
-def read_labels_file(labels_file):
- with open(labels_file, "r",encoding="utf-8") as lf:
- lines = lf.read().splitlines()
- division = "==="
- numbers = {}
- for line in lines :
- if division in line :
- break
- string, number = line.split("=>")
- number = int(number)
- string = string[1:-2]
- numbers[number] = string
- return [numbers[x] for x in range(len(numbers))]
-
-labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt"))
-labels = [""] + labels[1:-1] + ["1"]
-
-# Dataset prep (parsing Librispeech)
-"""
-
-def dataio_prepare(hparams):
- """This function prepares the datasets to be used in the brain class.
- It also defines the data processing pipeline through user-defined functions."""
-
- # 1. Define datasets
- data_folder = hparams["data_folder"]
-
- train_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=hparams["train_csv"], replacements={"data_root": data_folder},
- )
-
- if hparams["sorting"] == "ascending":
- # we sort training data to speed up training and get better results.
- train_data = train_data.filtered_sorted(
- sort_key="duration",
- key_max_value={"duration": hparams["avoid_if_longer_than"]},
- )
- # when sorting, do not shuffle in the dataloader! Otherwise sorting is pointless.
- hparams["dataloader_options"]["shuffle"] = False
-
- elif hparams["sorting"] == "descending":
- train_data = train_data.filtered_sorted(
- sort_key="duration",
- reverse=True,
- key_max_value={"duration": hparams["avoid_if_longer_than"]},
- )
- # when sorting, do not shuffle in the dataloader! Otherwise sorting is pointless.
- hparams["dataloader_options"]["shuffle"] = False
-
- elif hparams["sorting"] == "random":
- pass
-
- else:
- raise NotImplementedError(
- "sorting must be random, ascending or descending"
- )
-
- valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=hparams["valid_csv"], replacements={"data_root": data_folder},
- )
- # We also sort the validation data so it is faster to validate
- valid_data = valid_data.filtered_sorted(sort_key="duration")
- test_datasets = {}
- for csv_file in hparams["test_csv"]:
- name = Path(csv_file).stem
- test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv(
- csv_path=csv_file, replacements={"data_root": data_folder}
- )
- test_datasets[name] = test_datasets[name].filtered_sorted(
- sort_key="duration"
- )
-
- datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()]
-
-
- # 2. Define audio pipeline:
- @sb.utils.data_pipeline.takes("wav")
- @sb.utils.data_pipeline.provides("sig")
- def audio_pipeline(wav):
- info = torchaudio.info(wav)
- sig = sb.dataio.dataio.read_audio(wav)
- if len(sig.shape)>1 :
- sig = torch.mean(sig, dim=1)
- resampled = torchaudio.transforms.Resample(
- info.sample_rate, hparams["sample_rate"],
- )(sig)
- return resampled
-
- sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline)
- label_encoder = sb.dataio.encoder.CTCTextEncoder()
-
- # 3. Define text pipeline:
- @sb.utils.data_pipeline.takes("wrd")
- @sb.utils.data_pipeline.provides(
- "wrd", "char_list", "tokens_list", "tokens"
- )
- def text_pipeline(wrd):
- yield wrd
- char_list = list(wrd)
- yield char_list
- tokens_list = label_encoder.encode_sequence(char_list)
- yield tokens_list
- tokens = torch.LongTensor(tokens_list)
- yield tokens
-
- sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline)
- lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt")
- special_labels = {
- "blank_label": hparams["blank_index"],
- "unk_label": hparams["unk_index"]
- }
- label_encoder.load_or_create(
- path=lab_enc_file,
- from_didatasets=[train_data],
- output_key="char_list",
- special_labels=special_labels,
- sequence_input=True,
- )
-
- # 4. Set output:
- sb.dataio.dataset.set_output_keys(
- datasets, ["id", "sig", "wrd", "char_list", "tokens"],
- )
- return train_data, valid_data,test_datasets, label_encoder
-
-class ASR(sb.core.Brain):
- def compute_forward(self, batch, stage):
- """Forward computations from the waveform batches to the output probabilities."""
-
- batch = batch.to(self.device)
- wavs, wav_lens = batch.sig
- wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
-
- if stage == sb.Stage.TRAIN:
- if hasattr(self.hparams, "augmentation"):
- wavs = self.hparams.augmentation(wavs, wav_lens)
-
- # Forward pass
- feats = self.modules.wav2vec2(wavs, wav_lens)
- x = self.modules.enc(feats)
- logits = self.modules.ctc_lin(x)
- p_ctc = self.hparams.log_softmax(logits)
-
- return p_ctc, wav_lens
-
- def custom_encode(self,wavs,wav_lens) :
- wavs = wavs.to(self.device)
- if wav_lens is not None: wav_lens = wav_lens.to(self.device)
-
- feats = self.modules.wav2vec2(wavs, wav_lens)
- x = self.modules.enc(feats)
- logits = self.modules.ctc_lin(x)
- p_ctc = self.hparams.log_softmax(logits)
-
- return feats,p_ctc
-
-
-
- def compute_objectives(self, predictions, batch, stage):
- """Computes the loss (CTC) given predictions and targets."""
-
- p_ctc, wav_lens = predictions
-
- ids = batch.id
- tokens, tokens_lens = batch.tokens
-
- loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens)
-
- if stage != sb.Stage.TRAIN:
- predicted_tokens = sb.decoders.ctc_greedy_decode(
- p_ctc, wav_lens, blank_id=self.hparams.blank_index
- )
- # Decode token terms to words
- if self.hparams.use_language_modelling:
- predicted_words = []
- for logs in p_ctc:
- text = decoder.decode(logs.detach().cpu().numpy())
- predicted_words.append(text.split(" "))
- else:
- predicted_words = [
- "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ")
- for utt_seq in predicted_tokens
- ]
- # Convert indices to words
- target_words = [wrd.split(" ") for wrd in batch.wrd]
-
- self.wer_metric.append(ids, predicted_words, target_words)
- self.cer_metric.append(ids, predicted_words, target_words)
-
- return loss
-
- def fit_batch(self, batch):
- """Train the parameters given a single batch in input"""
- should_step = self.step % self.grad_accumulation_factor == 0
- # Managing automatic mixed precision
- # TOFIX: CTC fine-tuning currently is unstable
- # This is certainly due to CTC being done in fp16 instead of fp32
- if self.auto_mix_prec:
- with torch.cuda.amp.autocast():
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
- with self.no_sync(not should_step):
- self.scaler.scale(
- loss / self.grad_accumulation_factor
- ).backward()
- if should_step:
-
- if not self.hparams.wav2vec2.freeze:
- self.scaler.unscale_(self.wav2vec_optimizer)
- self.scaler.unscale_(self.model_optimizer)
- if self.check_gradients(loss):
- if not self.hparams.wav2vec2.freeze:
- if self.optimizer_step >= self.hparams.warmup_steps:
- self.scaler.step(self.wav2vec_optimizer)
- self.scaler.step(self.model_optimizer)
- self.scaler.update()
- self.zero_grad()
- self.optimizer_step += 1
- else:
- # This is mandatory because HF models have a weird behavior with DDP
- # on the forward pass
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
-
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
-
- with self.no_sync(not should_step):
- (loss / self.grad_accumulation_factor).backward()
- if should_step:
- if self.check_gradients(loss):
- if not self.hparams.wav2vec2.freeze:
- if self.optimizer_step >= self.hparams.warmup_steps:
- self.wav2vec_optimizer.step()
- self.model_optimizer.step()
- self.zero_grad()
- self.optimizer_step += 1
-
- self.on_fit_batch_end(batch, outputs, loss, should_step)
- return loss.detach().cpu()
-
- def evaluate_batch(self, batch, stage):
- """Computations needed for validation/test batches"""
- predictions = self.compute_forward(batch, stage=stage)
- with torch.no_grad():
- loss = self.compute_objectives(predictions, batch, stage=stage)
- return loss.detach()
-
- def on_stage_start(self, stage, epoch):
- """Gets called at the beginning of each epoch"""
- if stage != sb.Stage.TRAIN:
- self.cer_metric = self.hparams.cer_computer()
- self.wer_metric = self.hparams.error_rate_computer()
-
- def on_stage_end(self, stage, stage_loss, epoch):
- """Gets called at the end of an epoch."""
- # Compute/store important stats
- stage_stats = {"loss": stage_loss}
- if stage == sb.Stage.TRAIN:
- self.train_stats = stage_stats
- else:
- stage_stats["CER"] = self.cer_metric.summarize("error_rate")
- stage_stats["WER"] = self.wer_metric.summarize("error_rate")
-
- # Perform end-of-iteration things, like annealing, logging, etc.
- if stage == sb.Stage.VALID:
- old_lr_model, new_lr_model = self.hparams.lr_annealing_model(
- stage_stats["loss"]
- )
- old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec(
- stage_stats["loss"]
- )
- sb.nnet.schedulers.update_learning_rate(
- self.model_optimizer, new_lr_model
- )
- if not self.hparams.wav2vec2.freeze:
- sb.nnet.schedulers.update_learning_rate(
- self.wav2vec_optimizer, new_lr_wav2vec
- )
- self.hparams.train_logger.log_stats(
- stats_meta={
- "epoch": epoch,
- "lr_model": old_lr_model,
- "lr_wav2vec": old_lr_wav2vec,
- },
- train_stats=self.train_stats,
- valid_stats=stage_stats,
- )
- self.checkpointer.save_and_keep_only(
- meta={"WER": stage_stats["WER"]}, min_keys=["WER"],
- )
- elif stage == sb.Stage.TEST:
- self.hparams.train_logger.log_stats(
- stats_meta={"Epoch loaded": self.hparams.epoch_counter.current},
- test_stats=stage_stats,
- )
- with open(self.hparams.wer_file, "w") as w:
- self.wer_metric.write_stats(w)
-
- def init_optimizers(self):
- "Initializes the wav2vec2 optimizer and model optimizer"
-
- # If the wav2vec encoder is unfrozen, we create the optimizer
- if not self.hparams.wav2vec2.freeze:
- self.wav2vec_optimizer = self.hparams.wav2vec_opt_class(
- self.modules.wav2vec2.parameters()
- )
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable(
- "wav2vec_opt", self.wav2vec_optimizer
- )
-
- self.model_optimizer = self.hparams.model_opt_class(
- self.hparams.model.parameters()
- )
-
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable("modelopt", self.model_optimizer)
-
- def zero_grad(self, set_to_none=False):
- if not self.hparams.wav2vec2.freeze:
- self.wav2vec_optimizer.zero_grad(set_to_none)
- self.model_optimizer.zero_grad(set_to_none)
-
-
-"""
-label_encoder = sb.dataio.encoder.CTCTextEncoder()
-
-train_data, valid_data, test_datasets, label_encoder = dataio_prepare(
- hparams
- )
-
-
-# We dynamically add the tokenizer to our brain class.
-# NB: This tokenizer corresponds to the one used for the LM!!
-decoder = build_ctcdecoder(
- labels,
- kenlm_model_path="/content/drive/MyDrive/tunisian_corpora/tunisian_without_wavlm/lm_data/arpas/indomain.arpa", # either .arpa or .bin file
- alpha=0.5, # tuned on a val set
- beta=1, # tuned on a val set
-)
-"""
-from speechbrain.pretrained import EncoderASR,EncoderDecoderASR
-french_asr_model = EncoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-fr", savedir="pretrained_models/asr-wav2vec2-commonvoice-fr").cuda()
-french_asr_model.mods.eval()
-#french_asr_model = "r"
-
-english_asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-en", savedir="pretrained_models/asr-wav2vec2-commonvoice-en/").cuda()
-english_asr_model.mods.eval()
-
-asr_brain = ASR(
- modules=hparams["modules"],
- hparams=hparams,
- run_opts=run_opts,
- checkpointer=hparams["checkpointer"],
-)
-asr_brain.checkpointer.recover_if_possible()
-asr_brain.modules.eval()
-"""
-asr_brain.tokenizer = label_encoder
-
-# Testing
-real = True
-if real :
- for k in test_datasets.keys(): # keys are test_clean, test_other etc
- asr_brain.hparams.wer_file = os.path.join(
- hparams["output_folder"], "wer_{}.txt".format(k)
- )
- asr_brain.evaluate(
- test_datasets[k], test_loader_kwargs=hparams["dataloader_options"]
- )
-"""
-
-"""
-from torch.nn.utils.rnn import pad_sequence
-def load_paths(wavs_path):
- waveforms = []
- for path in wavs_path :
- waveform, _ = torchaudio.load(path)
- waveforms.append(waveform.squeeze(0))
- # pad all waveforms with zeros to the length of the longest one
- padded_arrays = pad_sequence(waveforms, batch_first=True)
- return torch.tensor(padded_arrays)
-
-waveform = load_paths(["/content/drive/MyDrive/tunisian_corpora/tunisian_without_wavlm/samples/Salah10.wav","/content/drive/MyDrive/tunisian_corpora/tunisian_without_wavlm/samples/Salah10.wav"])
-embeddings, posteriogram = asr_brain.custom_encode(waveform,None)
-print(embeddings.shape)
-print(posteriogram.shape)
-"""
-
-from speechbrain.pretrained import EncoderASR,EncoderDecoderASR
-import torchaudio
-import speechbrain as sb
-import torch
-from torch.nn.utils.rnn import pad_sequence
-import torch
-import speechbrain as sb
-import numpy as np
-import torch.optim as optim
-import torch.nn as nn
-
-# Commented out IPython magic to ensure Python compatibility.
-# %ls
-
-# UTILS FUNCTIONS
-def get_size_dimensions(arr):
- size_dimensions = []
- while isinstance(arr, list):
- size_dimensions.append(len(arr))
- arr = arr[0]
- return size_dimensions
-
-def scale_array(batch,n):
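- # Upsample every sequence in the batch to length n by repeating consecutive elements (raises if a sequence is already longer than n).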
- scaled_batch = []
-
- for array in batch:
- if(n < len(array)): raise ValueError("Cannot scale Array down")
-
- repeat = round(n/len(array))+1
- scaled_length_array= []
-
- for i in array:
- for j in range(repeat) :
- if(len(scaled_length_array) == n): break
- scaled_length_array.append(i)
-
- scaled_batch.append(scaled_length_array)
-
- return torch.tensor(scaled_batch)
-
-
-def load_paths(wavs_path):
- waveforms = []
- for path in wavs_path :
- waveform, _ = torchaudio.load(path)
- waveforms.append(waveform.squeeze(0))
- # pad all waveforms with zeros to the length of the longest one
- padded_arrays = pad_sequence(waveforms, batch_first=True)
- return padded_arrays # pad_sequence already returns a tensor
-
-
-
-def word_to_vec(input_string):
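- # Map each character (Latin a-z, Arabic letters, space) to an integer index; characters outside the mapping are dropped.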
- mapping= {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8, 'i': 9, 'j': 10, 'k': 11, 'l': 12, 'm': 13, 'n': 14, 'o': 15, 'p': 16, 'q': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'x': 24, 'y': 25, 'z': 26, 'ا': 27, 'ب': 28, 'ت': 29, 'ث': 30, 'ج': 31, 'ح': 32, 'خ': 33, 'د': 34, 'ذ': 35, 'ر': 36, 'ز': 37, 'س': 38, 'ش': 39, 'ص': 40, 'ض': 41, 'ط': 42, 'ظ': 43, 'ع': 44, 'غ': 45, 'ف': 46, 'ق': 47, 'ك': 48, 'ل': 49, 'م': 50, 'ن': 51, 'ه': 52, 'و': 53, 'ي': 54,' ':55}
-
- numbers = [mapping[word] for word in input_string if word in mapping]
- return numbers
-
-device = 'cuda'
-verbose = 0
-# FLOW-LEVEL FUNCTIONS
-def merge_strategy(embeddings1, embeddings2, embeddings3, post1, post2, post3):
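- # Move all tensors to the target device and concatenate the three models' posteriorgrams and embeddings along the feature dimension into a single tensor.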
-
-
- post1 = post1.to(device)
- post2 = post2.to(device)
- post3 = post3.to(device)
- embeddings1 = embeddings1.to(device)
- embeddings2 = embeddings2.to(device)
- embeddings3 = embeddings3.to(device)
-
- posteriograms_merged = torch.cat((post1,post2,post3),dim=2)
- embeddings_merged = torch.cat((embeddings1,embeddings2,embeddings3),dim=2)
-
- if(verbose !=0):
- print('MERGED POST ',posteriograms_merged.shape)
- print('MERGED emb ',embeddings_merged.shape)
-
- return torch.cat((posteriograms_merged,embeddings_merged),dim=2).to(device)
-
-def decode(model,wavs,wav_lens):
-
- with torch.no_grad():
- wav_lens = wav_lens.to(model.device)
- encoder_out = model.encode_batch(wavs, wav_lens)
- predictions = model.decoding_function(encoder_out, wav_lens)
- return predictions
-
-def middle_layer(batch, lens):
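- # Extract embeddings and posteriorgrams from the Tunisian, French and English models for the same batch and merge them with merge_strategy.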
-
- tn_embeddings, tn_posteriogram = asr_brain.custom_encode(batch,None)
-
- fr_embeddings = french_asr_model.mods.encoder.wav2vec2(batch)
- fr_posteriogram =french_asr_model.encode_batch(batch,lens)
-
- en_embeddings = english_asr_model.encode_batch(batch, lens)
- #scores, en_posteriogram = english_asr_model.mods.decoder(en_embeddings ,lens)
- en_posteriogram = en_embeddings
-
- if(verbose !=0):
- print('[EMBEDDINGS] FR:',fr_embeddings.shape, "EN:",en_embeddings.shape, "TN:", tn_embeddings.shape)
- print('[POSTERIOGRAM] FR:',fr_posteriogram.shape, "EN:",en_posteriogram.shape,"TN:",tn_posteriogram.shape)
-
-
- bilangual_sample = merge_strategy(fr_embeddings,en_embeddings,tn_embeddings,fr_posteriogram,en_posteriogram,tn_posteriogram)
- return bilangual_sample
-
-class Mixer(sb.core.Brain):
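- # CTC model trained on top of the merged features from the pretrained Tunisian, French and English encoders; only model_optimizer is created and updated here.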
-
- def compute_forward(self, batch, stage):
- """Forward computations from the waveform batches to the output probabilities."""
- wavs, wav_lens = batch.sig
- wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
-
- if stage == sb.Stage.TRAIN:
- if hasattr(self.hparams, "augmentation"):
- wavs = self.hparams.augmentation(wavs, wav_lens)
-
- multi_langual_feats = middle_layer(wavs, wav_lens)
- multi_langual_feats= multi_langual_feats.to(device)
- feats, _ = self.modules.enc(multi_langual_feats)
- logits = self.modules.ctc_lin(feats)
- p_ctc = self.hparams.log_softmax(logits)
-
- if stage!= sb.Stage.TRAIN:
- p_tokens = sb.decoders.ctc_greedy_decode(
- p_ctc, wav_lens, blank_id=self.hparams.blank_index
- )
- else :
- p_tokens = None
- return p_ctc, wav_lens, p_tokens
-
- def compute_objectives(self, predictions, batch, stage):
- """Computes the loss (CTC) given predictions and targets."""
-
- p_ctc, wav_lens , predicted_tokens= predictions
-
- ids = batch.id
- tokens, tokens_lens = batch.tokens
-
- loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens)
-
-
- if stage != sb.Stage.TRAIN:
- predicted_words = [
- "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ")
- for utt_seq in predicted_tokens
- ]
- target_words = [wrd.split(" ") for wrd in batch.wrd]
- self.wer_metric.append(ids, predicted_words, target_words)
- self.cer_metric.append(ids, predicted_words, target_words)
-
- return loss
-
- def fit_batch(self, batch):
- """Train the parameters given a single batch in input"""
- should_step = self.step % self.grad_accumulation_factor == 0
- # Managing automatic mixed precision
- # TOFIX: CTC fine-tuning currently is unstable
- # This is certainly due to CTC being done in fp16 instead of fp32
- if self.auto_mix_prec:
- with torch.cuda.amp.autocast():
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
- with self.no_sync(not should_step):
- self.scaler.scale(
- loss / self.grad_accumulation_factor
- ).backward()
- if should_step:
-
-
- self.scaler.unscale_(self.model_optimizer)
- if self.check_gradients(loss):
- self.scaler.step(self.model_optimizer)
- self.scaler.update()
- self.zero_grad()
- self.optimizer_step += 1
- else:
- # This is mandatory because HF models have a weird behavior with DDP
- # on the forward pass
- with self.no_sync():
- outputs = self.compute_forward(batch, sb.Stage.TRAIN)
-
- loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN)
-
- with self.no_sync(not should_step):
- (loss / self.grad_accumulation_factor).backward()
- if should_step:
- if self.check_gradients(loss):
- self.model_optimizer.step()
- self.zero_grad()
- self.optimizer_step += 1
-
- self.on_fit_batch_end(batch, outputs, loss, should_step)
- return loss.detach().cpu()
-
- def evaluate_batch(self, batch, stage):
- """Computations needed for validation/test batches"""
- predictions = self.compute_forward(batch, stage=stage)
- with torch.no_grad():
- loss = self.compute_objectives(predictions, batch, stage=stage)
- return loss.detach()
-
- def on_stage_start(self, stage, epoch):
- """Gets called at the beginning of each epoch"""
- if stage != sb.Stage.TRAIN:
- self.cer_metric = self.hparams.cer_computer()
- self.wer_metric = self.hparams.error_rate_computer()
-
- def on_stage_end(self, stage, stage_loss, epoch):
- """Gets called at the end of an epoch."""
- # Compute/store important stats
- stage_stats = {"loss": stage_loss}
- if stage == sb.Stage.TRAIN:
- self.train_stats = stage_stats
- else:
- stage_stats["CER"] = self.cer_metric.summarize("error_rate")
- stage_stats["WER"] = self.wer_metric.summarize("error_rate")
-
- # Perform end-of-iteration things, like annealing, logging, etc.
- if stage == sb.Stage.VALID:
- old_lr_model, new_lr_model = self.hparams.lr_annealing_model(
- stage_stats["loss"]
- )
- sb.nnet.schedulers.update_learning_rate(
- self.model_optimizer, new_lr_model
- )
- self.hparams.train_logger.log_stats(
- stats_meta={
- "epoch": epoch,
- "lr_model": old_lr_model,
- },
- train_stats=self.train_stats,
- valid_stats=stage_stats,
- )
- self.checkpointer.save_and_keep_only(
- meta={"WER": stage_stats["WER"]}, min_keys=["WER"],
- )
- elif stage == sb.Stage.TEST:
- self.hparams.train_logger.log_stats(
- stats_meta={"Epoch loaded": self.hparams.epoch_counter.current},
- test_stats=stage_stats,
- )
- with open(self.hparams.wer_file, "w") as w:
- self.wer_metric.write_stats(w)
-
- def init_optimizers(self):
-
- self.model_optimizer = self.hparams.model_opt_class(
- self.hparams.model.parameters()
- )
-
- if self.checkpointer is not None:
- self.checkpointer.add_recoverable("modelopt", self.model_optimizer)
-
- def zero_grad(self, set_to_none=False):
-
- self.model_optimizer.zero_grad(set_to_none)
-
-
-hparams_file, run_opts, overrides = sb.parse_arguments([sys.argv[1]])
-
-# If distributed_launch=True then
-# create ddp_group with the right communication protocol
-sb.utils.distributed.ddp_init_group(run_opts)
-
-with open(hparams_file) as fin:
- hparams = load_hyperpyyaml(fin, overrides)
-
-# Create experiment directory
-sb.create_experiment_directory(
- experiment_directory=hparams["output_folder"],
- hyperparams_to_save=hparams_file,
- overrides=overrides,
-)
-"""
-def read_labels_file(labels_file):
- with open(labels_file, "r",encoding="utf-8") as lf:
- lines = lf.read().splitlines()
- division = "==="
- numbers = {}
- for line in lines :
- if division in line :
- break
- string, number = line.split("=>")
- number = int(number)
- string = string[1:-2]
- numbers[number] = string
- return [numbers[x] for x in range(len(numbers))]
-labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt"))
-labels = [""] + labels[1:-1] + ["1"]
-
-label_encoder = sb.dataio.encoder.CTCTextEncoder()
-"""
-train_data, valid_data, test_datasets, label_encoder = dataio_prepare(
- hparams
- )
-
-
-
-
-"""
-decoder = build_ctcdecoder(
- labels,
- kenlm_model_path="/content/drive/MyDrive/tunisian_corpora/tunisian_without_wavlm/lm_data/arpas/indomain.arpa", # either .arpa or .bin file
- alpha=0.5, # tuned on a val set
- beta=1, # tuned on a val set
-)
-"""
-mixer = Mixer(
- modules=hparams["modules"],
- hparams=hparams,
- run_opts=run_opts,
- checkpointer=hparams["checkpointer"],
-)
-mixer.tokenizer = label_encoder
-
-
-mixer.fit(
- mixer.hparams.epoch_counter,
- train_data,
- valid_data,
- train_loader_kwargs=hparams["dataloader_options"],
- valid_loader_kwargs=hparams["test_dataloader_options"],
-)
-
-for k in test_datasets.keys(): # keys are test_clean, test_other etc
- mixer.hparams.wer_file = os.path.join(
- hparams["output_folder"], "wer_{}.txt".format(k)
- )
- mixer.evaluate(
- test_datasets[k], test_loader_kwargs=hparams["test_dataloader_options"]
- )
-
diff --git a/spaces/Salesforce/EDICT/my_diffusers/schedulers/scheduling_sde_vp.py b/spaces/Salesforce/EDICT/my_diffusers/schedulers/scheduling_sde_vp.py
deleted file mode 100644
index 66e6ec6616ab01e5ae988b21e9599a0422a9714a..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_diffusers/schedulers/scheduling_sde_vp.py
+++ /dev/null
@@ -1,81 +0,0 @@
-# Copyright 2022 Google Brain and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch
-
-# TODO(Patrick, Anton, Suraj) - make scheduler framework independent and clean-up a bit
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from .scheduling_utils import SchedulerMixin
-
-
-class ScoreSdeVpScheduler(SchedulerMixin, ConfigMixin):
- """
- The variance preserving stochastic differential equation (SDE) scheduler.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`~ConfigMixin`] also provides general loading and saving functionality via the [`~ConfigMixin.save_config`] and
- [`~ConfigMixin.from_config`] functions.
-
- For more information, see the original paper: https://arxiv.org/abs/2011.13456
-
- UNDER CONSTRUCTION
-
- """
-
- @register_to_config
- def __init__(self, num_train_timesteps=2000, beta_min=0.1, beta_max=20, sampling_eps=1e-3, tensor_format="np"):
-
- self.sigmas = None
- self.discrete_sigmas = None
- self.timesteps = None
-
- def set_timesteps(self, num_inference_steps):
- self.timesteps = torch.linspace(1, self.config.sampling_eps, num_inference_steps)
-
- def step_pred(self, score, x, t):
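- # One reverse step of the VP-SDE: rescale the model output by the marginal std to get the score, apply the drift/diffusion update over dt, then add Gaussian noise.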
- if self.timesteps is None:
- raise ValueError(
- "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
- )
-
- # TODO(Patrick) better comments + non-PyTorch
- # postprocess model score
- log_mean_coeff = (
- -0.25 * t**2 * (self.config.beta_max - self.config.beta_min) - 0.5 * t * self.config.beta_min
- )
- std = torch.sqrt(1.0 - torch.exp(2.0 * log_mean_coeff))
- score = -score / std[:, None, None, None]
-
- # compute
- dt = -1.0 / len(self.timesteps)
-
- beta_t = self.config.beta_min + t * (self.config.beta_max - self.config.beta_min)
- drift = -0.5 * beta_t[:, None, None, None] * x
- diffusion = torch.sqrt(beta_t)
- drift = drift - diffusion[:, None, None, None] ** 2 * score
- x_mean = x + drift * dt
-
- # add noise
- noise = torch.randn_like(x)
- x = x_mean + diffusion[:, None, None, None] * np.sqrt(-dt) * noise
-
- return x, x_mean
-
- def __len__(self):
- return self.config.num_train_timesteps
diff --git a/spaces/Sandiago21/automatic-speech-recognition-french/README.md b/spaces/Sandiago21/automatic-speech-recognition-french/README.md
deleted file mode 100644
index 79674d2261abc268ce155d505d4df83f8cc0a869..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/automatic-speech-recognition-french/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
----
-title: automatic-speech-recognition-french
-app_file: app.py
-sdk: gradio
-sdk_version: 3.36.0
----
diff --git a/spaces/Silentlin/DiffSinger/tasks/tts/fs2_utils.py b/spaces/Silentlin/DiffSinger/tasks/tts/fs2_utils.py
deleted file mode 100644
index 092550863d2fd72f008cc790bc6d950340e68182..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/tasks/tts/fs2_utils.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import matplotlib
-
-matplotlib.use('Agg')
-
-import glob
-import importlib
-from utils.cwt import get_lf0_cwt
-import os
-import torch.optim
-import torch.utils.data
-from utils.indexed_datasets import IndexedDataset
-from utils.pitch_utils import norm_interp_f0
-import numpy as np
-from tasks.base_task import BaseDataset
-import torch
-import torch.optim
-import torch.utils.data
-import utils
-import torch.distributions
-from utils.hparams import hparams
-
-
-class FastSpeechDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False):
- super().__init__(shuffle)
- self.data_dir = hparams['binary_data_dir']
- self.prefix = prefix
- self.hparams = hparams
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
- self.indexed_ds = None
- # self.name2spk_id={}
-
- # pitch stats
- f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy'
- if os.path.exists(f0_stats_fn):
- hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn)
- hparams['f0_mean'] = float(hparams['f0_mean'])
- hparams['f0_std'] = float(hparams['f0_std'])
- else:
- hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None
-
- if prefix == 'test':
- if hparams['test_input_dir'] != '':
- self.indexed_ds, self.sizes = self.load_test_inputs(hparams['test_input_dir'])
- else:
- if hparams['num_test_samples'] > 0:
- self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids']
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
-
- if hparams['pitch_type'] == 'cwt':
- _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10))
-
- def _get_item(self, index):
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
- index = self.avail_idxs[index]
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- return self.indexed_ds[index]
-
- def __getitem__(self, index):
- hparams = self.hparams
- item = self._get_item(index)
- max_frames = hparams['max_frames']
- spec = torch.Tensor(item['mel'])[:max_frames]
- energy = (spec.exp() ** 2).sum(-1).sqrt()
- mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None
- f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
- phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']])
- pitch = torch.LongTensor(item.get("pitch"))[:max_frames]
- # print(item.keys(), item['mel'].shape, spec.shape)
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "text": item['txt'],
- "txt_token": phone,
- "mel": spec,
- "pitch": pitch,
- "energy": energy,
- "f0": f0,
- "uv": uv,
- "mel2ph": mel2ph,
- "mel_nonpadding": spec.abs().sum(-1) > 0,
- }
- if self.hparams['use_spk_embed']:
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if self.hparams['use_spk_id']:
- sample["spk_id"] = item['spk_id']
- # sample['spk_id'] = 0
- # for key in self.name2spk_id.keys():
- # if key in item['item_name']:
- # sample['spk_id'] = self.name2spk_id[key]
- # break
- if self.hparams['pitch_type'] == 'cwt':
- cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames]
- f0_mean = item.get('f0_mean', item.get('cwt_mean'))
- f0_std = item.get('f0_std', item.get('cwt_std'))
- sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std})
- elif self.hparams['pitch_type'] == 'ph':
- f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0)
- f0_phlevel_num = torch.zeros_like(phone).float().scatter_add(
- 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
- sample["f0_ph"] = f0_phlevel_sum / f0_phlevel_num
- return sample
-
- def collater(self, samples):
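- # Collate individual samples into a padded batch: 1-D/2-D tensors are padded and lengths plus metadata are gathered into a single dict.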
- if len(samples) == 0:
- return {}
- id = torch.LongTensor([s['id'] for s in samples])
- item_names = [s['item_name'] for s in samples]
- text = [s['text'] for s in samples]
- txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0)
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
- pitch = utils.collate_1d([s['pitch'] for s in samples])
- uv = utils.collate_1d([s['uv'] for s in samples])
- energy = utils.collate_1d([s['energy'] for s in samples], 0.0)
- mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
- if samples[0]['mel2ph'] is not None else None
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
- txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples])
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
-
- batch = {
- 'id': id,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'text': text,
- 'txt_tokens': txt_tokens,
- 'txt_lengths': txt_lengths,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- 'mel2ph': mel2ph,
- 'energy': energy,
- 'pitch': pitch,
- 'f0': f0,
- 'uv': uv,
- }
-
- if self.hparams['use_spk_embed']:
- spk_embed = torch.stack([s['spk_embed'] for s in samples])
- batch['spk_embed'] = spk_embed
- if self.hparams['use_spk_id']:
- spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
- batch['spk_ids'] = spk_ids
- if self.hparams['pitch_type'] == 'cwt':
- cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples])
- f0_mean = torch.Tensor([s['f0_mean'] for s in samples])
- f0_std = torch.Tensor([s['f0_std'] for s in samples])
- batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std})
- elif self.hparams['pitch_type'] == 'ph':
- batch['f0'] = utils.collate_1d([s['f0_ph'] for s in samples])
-
- return batch
-
- def load_test_inputs(self, test_input_dir, spk_id=0):
- inp_wav_paths = glob.glob(f'{test_input_dir}/*.wav') + glob.glob(f'{test_input_dir}/*.mp3')
- sizes = []
- items = []
-
- binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizerr.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- binarization_args = hparams['binarization_args']
-
- for wav_fn in inp_wav_paths:
- item_name = os.path.basename(wav_fn)
- ph = txt = tg_fn = ''
- wav_fn = wav_fn
- encoder = None
- item = binarizer_cls.process_item(item_name, ph, txt, tg_fn, wav_fn, spk_id, encoder, binarization_args)
- items.append(item)
- sizes.append(item['len'])
- return items, sizes
diff --git a/spaces/Silentlin/DiffSinger/usr/diffspeech_task.py b/spaces/Silentlin/DiffSinger/usr/diffspeech_task.py
deleted file mode 100644
index 05c313f94d07e91a94996a30bedd27b28c8cb04a..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/usr/diffspeech_task.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-
-import utils
-from utils.hparams import hparams
-from .diff.net import DiffNet
-from .diff.shallow_diffusion_tts import GaussianDiffusion
-from .task import DiffFsTask
-from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder
-from utils.pitch_utils import denorm_f0
-from tasks.tts.fs2_utils import FastSpeechDataset
-
-DIFF_DECODERS = {
- 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
-}
-
-
-class DiffSpeechTask(DiffFsTask):
- def __init__(self):
- super(DiffSpeechTask, self).__init__()
- self.dataset_cls = FastSpeechDataset
- self.vocoder: BaseVocoder = get_vocoder_cls(hparams)()
-
- def build_tts_model(self):
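- # Build the shallow-diffusion acoustic model; if a FastSpeech 2 checkpoint is given, load it and freeze all FS2 weights except the variance predictors.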
- mel_bins = hparams['audio_num_mel_bins']
- self.model = GaussianDiffusion(
- phone_encoder=self.phone_encoder,
- out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
- timesteps=hparams['timesteps'],
- K_step=hparams['K_step'],
- loss_type=hparams['diff_loss_type'],
- spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
- )
- if hparams['fs2_ckpt'] != '':
- utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True)
- # self.model.fs2.decoder = None
- for k, v in self.model.fs2.named_parameters():
- if not 'predictor' in k:
- v.requires_grad = False
-
- def build_optimizer(self, model):
- self.optimizer = optimizer = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, model.parameters()),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return optimizer
-
- def run_model(self, model, sample, return_output=False, infer=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None # [B, T_s]
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
- # fs2_mel = sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer)
-
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- if hparams['use_energy_embed']:
- self.add_energy_loss(output['energy_pred'], energy, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- txt_tokens = sample['txt_tokens'] # [B, T_t]
-
- energy = sample['energy']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- # model_out = self.model(
- # txt_tokens, spk_embed=spk_embed, mel2ph=None, f0=None, uv=None, energy=None, ref_mels=None, infer=True)
- # self.plot_mel(batch_idx, model_out['mel_out'], model_out['fs2_mel'], name=f'diffspeech_vs_fs2_{batch_idx}')
- model_out = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True)
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=model_out.get('f0_denorm'))
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'])
- return outputs
-
- ############
- # validation plots
- ############
- def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None):
- gt_wav = gt_wav[0].cpu().numpy()
- wav_out = wav_out[0].cpu().numpy()
- gt_f0 = gt_f0[0].cpu().numpy()
- f0 = f0[0].cpu().numpy()
- if is_mel:
- gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0)
- wav_out = self.vocoder.spec2wav(wav_out, f0=f0)
- self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
- self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
-
diff --git a/spaces/Sourabh2/detectron2-segmentation/app.py b/spaces/Sourabh2/detectron2-segmentation/app.py
deleted file mode 100644
index 6ae59d98184f23c7ec33dacf90cedde664e11402..0000000000000000000000000000000000000000
--- a/spaces/Sourabh2/detectron2-segmentation/app.py
+++ /dev/null
@@ -1,66 +0,0 @@
-try:
- import detectron2
-except:
- import os
- os.system('pip install git+https://github.com/facebookresearch/detectron2.git')
-
-import detectron2
-#from detectron2.utils.logger import setup_logger
-#setup_logger()
-
-# import some common libraries
-import numpy as np
-import gradio as gr
-import os, json, cv2, random
-import requests
-
-# import some common detectron2 utilities
-from detectron2 import model_zoo
-from detectron2.engine import DefaultPredictor
-from detectron2.config import get_cfg
-from detectron2.utils.visualizer import Visualizer
-from detectron2.data import MetadataCatalog, DatasetCatalog
-
-cfg_pan = get_cfg()
-cfg_pan.merge_from_file(model_zoo.get_config_file("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
-cfg_pan.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
-
-
-cfg_pan.MODEL.DEVICE = "cpu"
-
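-# Run panoptic segmentation on an image fetched from the given URL (if any) or on the uploaded image, and return the visualization.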
-def inference(image_url, image, min_score):
- im = None
- if image_url:
- r = requests.get(image_url)
- if r:
- im = np.frombuffer(r.content, dtype="uint8")
- im = cv2.imdecode(im, cv2.IMREAD_COLOR)
- if im is None:
- # Model expects BGR!
- im = image[:,:,::-1]
-
- cfg_pan.MODEL.ROI_HEADS.SCORE_THRESH_TEST = min_score
- predictor = DefaultPredictor(cfg_pan)
- panoptic_seg, segments_info = predictor(im)["panoptic_seg"]
- v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg_pan.DATASETS.TRAIN[0]), scale=1.0)
- out = v.draw_panoptic_seg_predictions(panoptic_seg, segments_info)
-
- return out.get_image()
-
-
-title = "DBMDZ Detectron2-Segmentation Model Demo"
-description = "This demo introduces an interactive playground for our trained Detectron2 model. The model was trained on manually annotated segments from digitized books to detect Illustration or Illumination segments on a given page."
-article = 'Detectron model is available from our repository here on the Hugging Face Model Hub.'
-
-gr.Interface(
- inference,
- [gr.inputs.Textbox(label="Image URL", placeholder="https://api.digitale-sammlungen.de/iiif/image/v2/bsb10483966_00008/full/500,/0/default.jpg"),
- gr.inputs.Image(type="numpy", label="Input Image"),
- gr.Slider(minimum=0.0, maximum=1.0, value=0.5, label="Minimum score"),
- ],
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[]).launch()
-
-
\ No newline at end of file
diff --git a/spaces/SuYuanS/AudioCraft_Plus/docs/TRAINING.md b/spaces/SuYuanS/AudioCraft_Plus/docs/TRAINING.md
deleted file mode 100644
index 148de295f2ddfed2e4e893576bf31e1485038b8e..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/docs/TRAINING.md
+++ /dev/null
@@ -1,312 +0,0 @@
-# AudioCraft training pipelines
-
-AudioCraft training pipelines are built on top of PyTorch as our core deep learning library,
-[Flashy](https://github.com/facebookresearch/flashy) as our training pipeline design library,
-and [Dora](https://github.com/facebookresearch/dora) as our experiment manager.
-AudioCraft training pipelines are designed to be research- and experiment-friendly.
-
-
-## Environment setup
-
-For the base installation, follow the instructions from the [README.md](../README.md).
-Below are some additional instructions for setting up environment to train new models.
-
-### Team and cluster configuration
-
-In order to support multiple teams and clusters, AudioCraft uses an environment configuration.
-The team configuration allows you to specify cluster-specific configurations (e.g. the SLURM configuration)
-or a convenient mapping of paths between the supported environments.
-
-Each team can have a yaml file under the [configuration folder](../config). To select a team, set the
-`AUDIOCRAFT_TEAM` environment variable to a valid team name (e.g. `labs` or `default`):
-```shell
-conda env config vars set AUDIOCRAFT_TEAM=default
-```
-
-Alternatively, you can add it to your `.bashrc`:
-```shell
-export AUDIOCRAFT_TEAM=default
-```
-
-If not defined, the environment will default to the `default` team.
-
-The cluster is automatically detected, but it is also possible to override it by setting
-the `AUDIOCRAFT_CLUSTER` environment variable.
-
-Based on this team and cluster, the environment is then configured with:
-* The dora experiment outputs directory.
-* The available slurm partitions: categorized by global and team.
-* A shared reference directory: In order to facilitate sharing research models while remaining
-agnostic to the compute cluster in use, we created the `//reference` symbol that can be used in
-YAML configs to point to a defined reference folder containing shared checkpoints
-(e.g. baselines, models for evaluation...).
-
-**Important:** The default output dir for trained models and checkpoints is under `/tmp/`. This is suitable
-only for quick testing. If you are doing anything serious you MUST edit the file `default.yaml` and
-properly set the `dora_dir` entries.
-
-#### Overriding environment configurations
-
-You can set the following environment variables to bypass the team's environment configuration, as in the example below:
-* `AUDIOCRAFT_CONFIG`: absolute path to a team config yaml file.
-* `AUDIOCRAFT_DORA_DIR`: absolute path to a custom dora directory.
-* `AUDIOCRAFT_REFERENCE_DIR`: absolute path to the shared reference directory.
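-
-For example (the paths below are placeholders; adapt them to your setup):
-
-```shell
-# example only: point AudioCraft at a custom team config, dora directory and reference directory
-export AUDIOCRAFT_CONFIG=/path/to/my_team.yaml
-export AUDIOCRAFT_DORA_DIR=/path/to/dora_experiments
-export AUDIOCRAFT_REFERENCE_DIR=/path/to/shared_reference
-```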
-
-## Training pipelines
-
-Each task supported in AudioCraft has its own training pipeline and dedicated solver.
-Learn more about solvers and the key design choices of the AudioCraft training pipeline below.
-Please refer to the documentation of each task and model for specific information on a given task.
-
-
-### Solvers
-
-The core training component in AudioCraft is the solver. A solver holds the definition
-of how to solve a given task: It implements the training pipeline logic, combining the datasets,
-model, optimization criterion and components and the full training loop. We refer the reader
-to [Flashy](https://github.com/facebookresearch/flashy) for core principles around solvers.
-
-AudioCraft proposes an initial solver, the `StandardSolver` that is used as the base implementation
-for downstream solvers. This standard solver provides a nice base management of logging,
-checkpoints loading/saving, xp restoration, etc. on top of the base Flashy implementation.
-In AudioCraft, we made the assumption that all tasks follow the same set of stages:
-train, valid, evaluate and generate, each relying on a dedicated dataset.
-
-Each solver is responsible for defining the task to solve and the associated stages
-of the training loop in order to leave the full ownership of the training pipeline
-to the researchers. This includes loading the datasets, building the model and
-optimisation components, registering them and defining the execution of each stage.
-To create a new solver for a given task, one should extend the StandardSolver
-and define each stage of the training loop. One can also write a fully custom solver
-from scratch instead of inheriting from the standard solver.
-
-```python
-from . import base
-from .. import optim
-
-
-class MyNewSolver(base.StandardSolver):
-
- def __init__(self, cfg: omegaconf.DictConfig):
- super().__init__(cfg)
- # one can add custom attributes to the solver
- self.criterion = torch.nn.L1Loss()
-
- def best_metric(self):
- # here optionally specify which metric to use to keep track of best state
- return 'loss'
-
- def build_model(self):
- # here you can instantiate your models and optimization related objects
- # this method will be called by the StandardSolver init method
- self.model = ...
- # the self.cfg attribute contains the raw configuration
- self.optimizer = optim.build_optimizer(self.model.parameters(), self.cfg.optim)
- # don't forget to register the states you'd like to include in your checkpoints!
- self.register_stateful('model', 'optimizer')
- # keep the model best state based on the best value achieved at validation for the given best_metric
- self.register_best('model')
- # if you want to add EMA around the model
- self.register_ema('model')
-
- def build_dataloaders(self):
- # here you can instantiate your dataloaders
- # this method will be called by the StandardSolver init method
- self.dataloaders = ...
-
- ...
-
- # For both train and valid stages, the StandardSolver relies on
- # a shared common_train_valid implementation that is in charge of
- # accessing the appropriate loader, iterate over the data up to
- # the specified number of updates_per_epoch, run the ``run_step``
- # function that you need to implement to specify the behavior
- # and finally update the EMA and collect the metrics properly.
- @abstractmethod
- def run_step(self, idx: int, batch: tp.Any, metrics: dict):
- """Perform one training or valid step on a given batch.
- """
- ... # provide your implementation of the solver over a batch
-
- def train(self):
- """Train stage.
- """
- return self.common_train_valid('train')
-
- def valid(self):
- """Valid stage.
- """
- return self.common_train_valid('valid')
-
- @abstractmethod
- def evaluate(self):
- """Evaluate stage.
- """
- ... # provide your implementation here!
-
- @abstractmethod
- def generate(self):
- """Generate stage.
- """
- ... # provide your implementation here!
-```
-
-### About Epochs
-
-AudioCraft solvers use the concept of an epoch. One epoch doesn't necessarily mean one pass over the entire
-dataset, but instead represents the smallest amount of computation that we want to work with before checkpointing.
-Typically, we find that having an epoch time of around 30 min is ideal both in terms of safety (checkpointing often enough)
-and getting updates often enough. One epoch is at least a `train` stage that lasts for `optim.updates_per_epoch` updates (2000 by default),
-and a `valid` stage. You can control how long the valid stage takes with `dataset.valid.num_samples`.
-Other stages (`evaluate`, `generate`) only happen every X epochs, as given by `evaluate.every` and `generate.every`.
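-
-As an illustrative sketch (the override values below are arbitrary, and `solver=compression/debug` is simply the lightweight example solver used later in this document), these knobs can be set directly from the command line:
-
-```shell
-# sketch only: shorter epochs, a smaller valid stage, evaluate/generate every 5 epochs
-dora run solver=compression/debug optim.updates_per_epoch=500 dataset.valid.num_samples=100 evaluate.every=5 generate.every=5
-```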
-
-
-### Models
-
-In AudioCraft, a model is a container object that wraps one or more torch modules together
-with potential processing logic to use in a solver. For example, a model would wrap an encoder module,
-a quantisation bottleneck module, a decoder and some tensor processing logic. Each of the previous components
-can be considered as a small « model unit » on its own but the container model is a practical component
-to manipulate and train a set of modules together.
-
-### Datasets
-
-See the [dedicated documentation on datasets](./DATASETS.md).
-
-### Metrics
-
-See the [dedicated documentation on metrics](./METRICS.md).
-
-### Conditioners
-
-AudioCraft language models can be conditioned in various ways and the codebase offers a modular implementation
-of different conditioners that can be potentially combined together.
-Learn more in the [dedicated documentation on conditioning](./CONDITIONING.md).
-
-### Configuration
-
-AudioCraft's configuration is defined in yaml files and the framework relies on
-[hydra](https://hydra.cc/docs/intro/) and [omegaconf](https://omegaconf.readthedocs.io/) to parse
-and manipulate the configuration through Dora.
-
-##### :warning: Important considerations around configurations
-
-Our configuration management relies on Hydra and the concept of group configs to structure
-and compose configurations. Updating the root default configuration files will then have
-an impact on all solvers and tasks.
-**One should never change the default configuration files. Instead, use Hydra config groups to store custom configurations.**
-Once this configuration is created and used for running experiments, you should not edit it anymore.
-
-Note that as we are using Dora as our experiment manager, all our experiment tracking is based on
-signatures computed from the delta between configurations.
-**One must therefore ensure backward compatibility of the configuration at all times.**
-See [Dora's README](https://github.com/facebookresearch/dora) and the
-[section below introducing Dora](#running-experiments-with-dora).
-
-##### Configuration structure
-
-The configuration is organized in config groups:
-* `conditioner`: default values for conditioning modules.
-* `dset`: contains all data source related information (paths to manifest files
-and metadata for a given dataset).
-* `model`: contains configuration for each model defined in AudioCraft and configurations
-for different variants of models.
-* `solver`: contains the default configuration for each solver as well as configuration
-for each solver task, combining all the above components.
-* `teams`: contains the cluster configuration per teams. See environment setup for more details.
-
-The `config.yaml` file is the main configuration that composes the above groups
-and contains default configuration for AudioCraft.
-
-##### Solver's core configuration structure
-
-The core configuration structure shared across solver is available in `solvers/default.yaml`.
-
-##### Other configuration modules
-
-AudioCraft configuration contains the different setups we used for our research and publications.
-
-## Running experiments with Dora
-
-### Launching jobs
-
-Try launching jobs for different tasks locally with dora run:
-
-```shell
-# run compression task with lightweight encodec
-dora run solver=compression/debug
-```
-
-Most of the time, the jobs are launched through dora grids, for example:
-
-```shell
-# run compression task through debug grid
-dora grid compression.debug
-```
-
-Learn more about running experiments with Dora below.
-
-### A small introduction to Dora
-
-[Dora](https://github.com/facebookresearch/dora) is the experiment manager tool used in AudioCraft.
-Check out the README to learn how Dora works. Here is a quick summary of what to know:
-* An XP is a unique set of hyper-parameters with a given signature. The signature is a hash
-of those hyper-parameters. We always refer to an XP with its signature, e.g. 9357e12e. We will see
-later that one can retrieve the hyper-params and re-run an XP in a single command.
-* In fact, the hash is defined as a delta between the base config and the one obtained
-with the config overrides you passed from the command line. This means you must never change
-the `conf/**.yaml` files directly, except for editing things like paths. Changing the default values
-in the config files means the XP signature won't reflect that change, and wrong checkpoints might be reused.
-I know, this is annoying, but the reason is that otherwise, any change to the config file would mean
-that all XPs run so far would see their signature change.
-
-#### Dora commands
-
-```shell
-dora info -f 81de367c # this will show the hyper-parameter used by a specific XP.
- # Be careful some overrides might present twice, and the right most one
- # will give you the right value for it.
-
-dora run -d -f 81de367c # run an XP with the hyper-parameters from XP 81de367c.
- # `-d` is for distributed, it will use all available GPUs.
-
-dora run -d -f 81de367c dataset.batch_size=32 # start from the config of XP 81de367c but change some hyper-params.
- # This will give you a new XP with a new signature (e.g. 3fe9c332).
-
-dora info -f SIG -t # will tail the log (if the XP has scheduled).
-# if you need to access the logs of the process for rank > 0, in particular because a crash didn't happen in the main
-# process, then use `dora info -f SIG` to get the main log name (finished into something like `/5037674_0_0_log.out`)
-# and worker K can accessed as `/5037674_0_{K}_log.out`.
-# This is only for scheduled jobs, for local distributed runs with `-d`, then you should go into the XP folder,
-# and look for `worker_{K}.log` logs.
-```
-
-An XP runs from a specific folder based on its signature, under the
-`//experiments/audiocraft/outputs/` folder.
-You can safely interrupt a training and resume it, it will reuse any existing checkpoint,
-as it will reuse the same folder. If you made some change to the code and need to ignore
-a previous checkpoint you can use `dora run --clear [RUN ARGS]`.
-
-If you have a Slurm cluster, you can also use the dora grid command, e.g.
-
-```shell
-# run a dummy grid located at `audiocraft/grids/my_grid_folder/my_grid_name.py`
-dora grid my_grid_folder.my_grid_name
-# Running the following will simply display the grid and also initialize the Dora experiments database.
-# You can then simply refer to a config using its signature (e.g. as `dora run -f SIG`).
-dora grid my_grid_folder.my_grid_name --dry_run --init
-```
-
-Please refer to the [Dora documentation](https://github.com/facebookresearch/dora) for more information.
-
-
-#### Clearing up past experiments
-
-```shell
-# This will cancel all the XPs and delete their folder and checkpoints.
-# It will then reschedule them starting from scratch.
-dora grid my_grid_folder.my_grid_name --clear
-# The following will delete the folder and checkpoint for a single XP,
-# and then run it afresh.
-dora run [-f BASE_SIG] [ARGS] --clear
-```
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/syspathcontext.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/syspathcontext.py
deleted file mode 100644
index 7af1ab60af9c797596e9c27036f9db96a8fe70ee..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/syspathcontext.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# encoding: utf-8
-"""
-Context managers for adding things to sys.path temporarily.
-
-Authors:
-
-* Brian Granger
-"""
-
-#-----------------------------------------------------------------------------
-# Copyright (C) 2008-2011 The IPython Development Team
-#
-# Distributed under the terms of the BSD License. The full license is in
-# the file COPYING, distributed as part of this software.
-#-----------------------------------------------------------------------------
-
-import sys
-import warnings
-
-
-class appended_to_syspath(object):
- """
- Deprecated since IPython 8.1, no replacements.
-
- A context for appending a directory to sys.path for a second."""
-
- def __init__(self, dir):
- warnings.warn(
- "`appended_to_syspath` is deprecated since IPython 8.1, and has no replacements",
- DeprecationWarning,
- stacklevel=2,
- )
- self.dir = dir
-
- def __enter__(self):
- if self.dir not in sys.path:
- sys.path.append(self.dir)
- self.added = True
- else:
- self.added = False
-
- def __exit__(self, type, value, traceback):
- if self.added:
- try:
- sys.path.remove(self.dir)
- except ValueError:
- pass
- # Returning False causes any exceptions to be re-raised.
- return False
-
-class prepended_to_syspath(object):
- """A context for prepending a directory to sys.path for a second."""
-
- def __init__(self, dir):
- self.dir = dir
-
- def __enter__(self):
- if self.dir not in sys.path:
- sys.path.insert(0,self.dir)
- self.added = True
- else:
- self.added = False
-
- def __exit__(self, type, value, traceback):
- if self.added:
- try:
- sys.path.remove(self.dir)
- except ValueError:
- pass
- # Returning False causes any exceptions to be re-raised.
- return False
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/index/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/index/__init__.py
deleted file mode 100644
index 06a132e1fa2ae89e05ba5cf040bb3ac17a5312cb..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/index/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from abc import ABC, abstractmethod
-
-
-class Index(ABC):
- @abstractmethod
- def __init__(self, id, settings, metadata):
- pass
-
- @abstractmethod
- def delete(self):
- pass
-
- @abstractmethod
- def delete_from_index(self, ids):
- pass
-
- @abstractmethod
- def add(self, ids, embeddings, update=False):
- pass
-
- @abstractmethod
- def get_nearest_neighbors(self, embedding, n_results, ids):
- pass
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00002-migration-2.psql.sql b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00002-migration-2.psql.sql
deleted file mode 100644
index 01e4b222af541efb9022d2eeb69e39239faecb34..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/db/migrations/00002-migration-2.psql.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-CREATE TABLE table2 (
- name TEXT PRIMARY KEY
-);
diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/README.md b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/README.md
deleted file mode 100644
index 5b5fe0e63668eab45a55b140826cb3762862b17c..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/tf/README.md
+++ /dev/null
@@ -1,147 +0,0 @@
-## Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
-
-### TensorFlow inference using `.pb` and `.onnx` models
-
-1. [Run inference on TensorFlow-model by using TensorFlow](#run-inference-on-tensorflow-model-by-using-tensorflow)
-
-2. [Run inference on ONNX-model by using ONNX-Runtime](#run-inference-on-onnx-model-by-using-onnx-runtime)
-
-3. [Make ONNX model from downloaded Pytorch model file](#make-onnx-model-from-downloaded-pytorch-model-file)
-
-
-### Run inference on TensorFlow-model by using TensorFlow
-
-1) Download the model weights [model-f6b98070.pb](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.pb)
-and [model-small.pb](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-small.pb) and place the
-files in the `/tf/` folder.
-
-2) Set up dependencies:
-
-```shell
-# install OpenCV
-pip install --upgrade pip
-pip install opencv-python
-
-# install TensorFlow
-pip install -I grpcio tensorflow==2.3.0 tensorflow-addons==0.11.2 numpy==1.18.0
-```
-
-#### Usage
-
-1) Place one or more input images in the folder `tf/input`.
-
-2) Run the model:
-
- ```shell
- python tf/run_pb.py
- ```
-
- Or run the small model:
-
- ```shell
- python tf/run_pb.py --model_weights model-small.pb --model_type small
- ```
-
-3) The resulting inverse depth maps are written to the `tf/output` folder.
-
-
-### Run inference on ONNX-model by using ONNX-Runtime
-
-1) Download the model weights [model-f6b98070.onnx](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.onnx)
-and [model-small.onnx](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-small.onnx) and place the
-files in the `/tf/` folder.
-
-2) Set up dependencies:
-
-```shell
-# install OpenCV
-pip install --upgrade pip
-pip install opencv-python
-
-# install ONNX
-pip install onnx==1.7.0
-
-# install ONNX Runtime
-pip install onnxruntime==1.5.2
-```
-
-#### Usage
-
-1) Place one or more input images in the folder `tf/input`.
-
-2) Run the model:
-
- ```shell
- python tf/run_onnx.py
- ```
-
- Or run the small model:
-
- ```shell
- python tf/run_onnx.py --model_weights model-small.onnx --model_type small
- ```
-
-3) The resulting inverse depth maps are written to the `tf/output` folder.
-
-
-
-### Make ONNX model from downloaded Pytorch model file
-
-1) Download the model weights [model-f6b98070.pt](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.pt) and place the
-file in the root folder.
-
-2) Set up dependencies:
-
-```shell
-# install OpenCV
-pip install --upgrade pip
-pip install opencv-python
-
-# install PyTorch TorchVision
-pip install -I torch==1.7.0 torchvision==0.8.0
-
-# install TensorFlow
-pip install -I grpcio tensorflow==2.3.0 tensorflow-addons==0.11.2 numpy==1.18.0
-
-# install ONNX
-pip install onnx==1.7.0
-
-# install ONNX-TensorFlow
-git clone https://github.com/onnx/onnx-tensorflow.git
-cd onnx-tensorflow
-git checkout 095b51b88e35c4001d70f15f80f31014b592b81e
-pip install -e .
-```
-
-#### Usage
-
-1) Run the converter:
-
- ```shell
- python tf/make_onnx_model.py
- ```
-
-2) The resulting `model-f6b98070.onnx` file is written to the `/tf/` folder.
-
-
-### Requirements
-
- The code was tested with Python 3.6.9, PyTorch 1.5.1, TensorFlow 2.2.0, TensorFlow-addons 0.8.3, ONNX 1.7.0, ONNX-TensorFlow (GitHub-master-17.07.2020) and OpenCV 4.3.0.
-
-### Citation
-
-Please cite our paper if you use this code or any of the models:
-```
-@article{Ranftl2019,
- author = {Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun},
- title = {Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer},
- journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
- year = {2020},
-}
-```
-
-### License
-
-MIT License
-
-
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/webencodings/x_user_defined.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/webencodings/x_user_defined.py
deleted file mode 100644
index d16e326024c05a59548619e13258acad781e0a6d..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/webencodings/x_user_defined.py
+++ /dev/null
@@ -1,325 +0,0 @@
-# coding: utf-8
-"""
-
- webencodings.x_user_defined
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- An implementation of the x-user-defined encoding.
-
- :copyright: Copyright 2012 by Simon Sapin
- :license: BSD, see LICENSE for details.
-
-"""
-
-from __future__ import unicode_literals
-
-import codecs
-
-
-### Codec APIs
-
-class Codec(codecs.Codec):
-
- def encode(self, input, errors='strict'):
- return codecs.charmap_encode(input, errors, encoding_table)
-
- def decode(self, input, errors='strict'):
- return codecs.charmap_decode(input, errors, decoding_table)
-
-
-class IncrementalEncoder(codecs.IncrementalEncoder):
- def encode(self, input, final=False):
- return codecs.charmap_encode(input, self.errors, encoding_table)[0]
-
-
-class IncrementalDecoder(codecs.IncrementalDecoder):
- def decode(self, input, final=False):
- return codecs.charmap_decode(input, self.errors, decoding_table)[0]
-
-
-class StreamWriter(Codec, codecs.StreamWriter):
- pass
-
-
-class StreamReader(Codec, codecs.StreamReader):
- pass
-
-
-### encodings module API
-
-codec_info = codecs.CodecInfo(
- name='x-user-defined',
- encode=Codec().encode,
- decode=Codec().decode,
- incrementalencoder=IncrementalEncoder,
- incrementaldecoder=IncrementalDecoder,
- streamreader=StreamReader,
- streamwriter=StreamWriter,
-)
-
-
-### Decoding Table
-
-# Python 3:
-# for c in range(256): print(' %r' % chr(c if c < 128 else c + 0xF700))
-decoding_table = (
- '\x00'
- '\x01'
- '\x02'
- '\x03'
- '\x04'
- '\x05'
- '\x06'
- '\x07'
- '\x08'
- '\t'
- '\n'
- '\x0b'
- '\x0c'
- '\r'
- '\x0e'
- '\x0f'
- '\x10'
- '\x11'
- '\x12'
- '\x13'
- '\x14'
- '\x15'
- '\x16'
- '\x17'
- '\x18'
- '\x19'
- '\x1a'
- '\x1b'
- '\x1c'
- '\x1d'
- '\x1e'
- '\x1f'
- ' '
- '!'
- '"'
- '#'
- '$'
- '%'
- '&'
- "'"
- '('
- ')'
- '*'
- '+'
- ','
- '-'
- '.'
- '/'
- '0'
- '1'
- '2'
- '3'
- '4'
- '5'
- '6'
- '7'
- '8'
- '9'
- ':'
- ';'
- '<'
- '='
- '>'
- '?'
- '@'
- 'A'
- 'B'
- 'C'
- 'D'
- 'E'
- 'F'
- 'G'
- 'H'
- 'I'
- 'J'
- 'K'
- 'L'
- 'M'
- 'N'
- 'O'
- 'P'
- 'Q'
- 'R'
- 'S'
- 'T'
- 'U'
- 'V'
- 'W'
- 'X'
- 'Y'
- 'Z'
- '['
- '\\'
- ']'
- '^'
- '_'
- '`'
- 'a'
- 'b'
- 'c'
- 'd'
- 'e'
- 'f'
- 'g'
- 'h'
- 'i'
- 'j'
- 'k'
- 'l'
- 'm'
- 'n'
- 'o'
- 'p'
- 'q'
- 'r'
- 's'
- 't'
- 'u'
- 'v'
- 'w'
- 'x'
- 'y'
- 'z'
- '{'
- '|'
- '}'
- '~'
- '\x7f'
- '\uf780'
- '\uf781'
- '\uf782'
- '\uf783'
- '\uf784'
- '\uf785'
- '\uf786'
- '\uf787'
- '\uf788'
- '\uf789'
- '\uf78a'
- '\uf78b'
- '\uf78c'
- '\uf78d'
- '\uf78e'
- '\uf78f'
- '\uf790'
- '\uf791'
- '\uf792'
- '\uf793'
- '\uf794'
- '\uf795'
- '\uf796'
- '\uf797'
- '\uf798'
- '\uf799'
- '\uf79a'
- '\uf79b'
- '\uf79c'
- '\uf79d'
- '\uf79e'
- '\uf79f'
- '\uf7a0'
- '\uf7a1'
- '\uf7a2'
- '\uf7a3'
- '\uf7a4'
- '\uf7a5'
- '\uf7a6'
- '\uf7a7'
- '\uf7a8'
- '\uf7a9'
- '\uf7aa'
- '\uf7ab'
- '\uf7ac'
- '\uf7ad'
- '\uf7ae'
- '\uf7af'
- '\uf7b0'
- '\uf7b1'
- '\uf7b2'
- '\uf7b3'
- '\uf7b4'
- '\uf7b5'
- '\uf7b6'
- '\uf7b7'
- '\uf7b8'
- '\uf7b9'
- '\uf7ba'
- '\uf7bb'
- '\uf7bc'
- '\uf7bd'
- '\uf7be'
- '\uf7bf'
- '\uf7c0'
- '\uf7c1'
- '\uf7c2'
- '\uf7c3'
- '\uf7c4'
- '\uf7c5'
- '\uf7c6'
- '\uf7c7'
- '\uf7c8'
- '\uf7c9'
- '\uf7ca'
- '\uf7cb'
- '\uf7cc'
- '\uf7cd'
- '\uf7ce'
- '\uf7cf'
- '\uf7d0'
- '\uf7d1'
- '\uf7d2'
- '\uf7d3'
- '\uf7d4'
- '\uf7d5'
- '\uf7d6'
- '\uf7d7'
- '\uf7d8'
- '\uf7d9'
- '\uf7da'
- '\uf7db'
- '\uf7dc'
- '\uf7dd'
- '\uf7de'
- '\uf7df'
- '\uf7e0'
- '\uf7e1'
- '\uf7e2'
- '\uf7e3'
- '\uf7e4'
- '\uf7e5'
- '\uf7e6'
- '\uf7e7'
- '\uf7e8'
- '\uf7e9'
- '\uf7ea'
- '\uf7eb'
- '\uf7ec'
- '\uf7ed'
- '\uf7ee'
- '\uf7ef'
- '\uf7f0'
- '\uf7f1'
- '\uf7f2'
- '\uf7f3'
- '\uf7f4'
- '\uf7f5'
- '\uf7f6'
- '\uf7f7'
- '\uf7f8'
- '\uf7f9'
- '\uf7fa'
- '\uf7fb'
- '\uf7fc'
- '\uf7fd'
- '\uf7fe'
- '\uf7ff'
-)
-
-### Encoding table
-encoding_table = codecs.charmap_build(decoding_table)
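-
-
-# Illustrative usage: ASCII bytes decode to themselves, while bytes 0x80-0xFF map
-# into the Private Use Area U+F780-U+F7FF, so a decode/encode round trip is
-# lossless:
-#
-#     codecs.charmap_decode(b'ab\x80\xff', 'strict', decoding_table)[0]
-#     # -> 'ab\uf780\uf7ff'
-#     codecs.charmap_encode('ab\uf780\uf7ff', 'strict', encoding_table)[0]
-#     # -> b'ab\x80\xff'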
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py
deleted file mode 100644
index cce05582ffc6fe6d72027194f4ccc44ee42f1fcd..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_itertools.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from itertools import filterfalse
-
-from typing import (
- Callable,
- Iterable,
- Iterator,
- Optional,
- Set,
- TypeVar,
- Union,
-)
-
-# Type and type variable definitions
-_T = TypeVar('_T')
-_U = TypeVar('_U')
-
-
-def unique_everseen(
- iterable: Iterable[_T], key: Optional[Callable[[_T], _U]] = None
-) -> Iterator[_T]:
- "List unique elements, preserving order. Remember all elements ever seen."
- # unique_everseen('AAAABBBCCDAABBB') --> A B C D
- # unique_everseen('ABBCcAD', str.lower) --> A B C D
- seen: Set[Union[_T, _U]] = set()
- seen_add = seen.add
- if key is None:
- for element in filterfalse(seen.__contains__, iterable):
- seen_add(element)
- yield element
- else:
- for element in iterable:
- k = key(element)
- if k not in seen:
- seen_add(k)
- yield element
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/tags.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/tags.py
deleted file mode 100644
index 76d243414d00f54a8973359cf553123e9bd1760e..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/tags.py
+++ /dev/null
@@ -1,546 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-import logging
-import platform
-import subprocess
-import sys
-import sysconfig
-from importlib.machinery import EXTENSION_SUFFIXES
-from typing import (
- Dict,
- FrozenSet,
- Iterable,
- Iterator,
- List,
- Optional,
- Sequence,
- Tuple,
- Union,
- cast,
-)
-
-from . import _manylinux, _musllinux
-
-logger = logging.getLogger(__name__)
-
-PythonVersion = Sequence[int]
-MacVersion = Tuple[int, int]
-
-INTERPRETER_SHORT_NAMES: Dict[str, str] = {
- "python": "py", # Generic.
- "cpython": "cp",
- "pypy": "pp",
- "ironpython": "ip",
- "jython": "jy",
-}
-
-
-_32_BIT_INTERPRETER = sys.maxsize <= 2**32
-
-
-class Tag:
- """
- A representation of the tag triple for a wheel.
-
- Instances are considered immutable and thus are hashable. Equality checking
- is also supported.
- """
-
- __slots__ = ["_interpreter", "_abi", "_platform", "_hash"]
-
- def __init__(self, interpreter: str, abi: str, platform: str) -> None:
- self._interpreter = interpreter.lower()
- self._abi = abi.lower()
- self._platform = platform.lower()
- # The __hash__ of every single element in a Set[Tag] will be evaluated each time
- # that a set calls its `.disjoint()` method, which may be called hundreds of
- # times when scanning a page of links for packages with tags matching that
- # Set[Tag]. Pre-computing the value here produces significant speedups for
- # downstream consumers.
- self._hash = hash((self._interpreter, self._abi, self._platform))
-
- @property
- def interpreter(self) -> str:
- return self._interpreter
-
- @property
- def abi(self) -> str:
- return self._abi
-
- @property
- def platform(self) -> str:
- return self._platform
-
- def __eq__(self, other: object) -> bool:
- if not isinstance(other, Tag):
- return NotImplemented
-
- return (
- (self._hash == other._hash) # Short-circuit ASAP for perf reasons.
- and (self._platform == other._platform)
- and (self._abi == other._abi)
- and (self._interpreter == other._interpreter)
- )
-
- def __hash__(self) -> int:
- return self._hash
-
- def __str__(self) -> str:
- return f"{self._interpreter}-{self._abi}-{self._platform}"
-
- def __repr__(self) -> str:
- return f"<{self} @ {id(self)}>"
-
-
-def parse_tag(tag: str) -> FrozenSet[Tag]:
- """
- Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.
-
- Returning a set is required due to the possibility that the tag is a
- compressed tag set.
- """
- tags = set()
- interpreters, abis, platforms = tag.split("-")
- for interpreter in interpreters.split("."):
- for abi in abis.split("."):
- for platform_ in platforms.split("."):
- tags.add(Tag(interpreter, abi, platform_))
- return frozenset(tags)
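-
-# Illustrative example: a compressed tag set expands to the cross product of its
-# dot-separated components, e.g.
-#
-#     parse_tag("cp38.cp39-abi3-manylinux1_x86_64")
-#     # -> frozenset({Tag("cp38", "abi3", "manylinux1_x86_64"),
-#     #               Tag("cp39", "abi3", "manylinux1_x86_64")})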
-
-
-def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]:
- value: Union[int, str, None] = sysconfig.get_config_var(name)
- if value is None and warn:
- logger.debug(
- "Config variable '%s' is unset, Python ABI tag may be incorrect", name
- )
- return value
-
-
-def _normalize_string(string: str) -> str:
- return string.replace(".", "_").replace("-", "_").replace(" ", "_")
-
-
-def _abi3_applies(python_version: PythonVersion) -> bool:
- """
- Determine if the Python version supports abi3.
-
- PEP 384 was first implemented in Python 3.2.
- """
- return len(python_version) > 1 and tuple(python_version) >= (3, 2)
-
-
-def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]:
- py_version = tuple(py_version) # To allow for version comparison.
- abis = []
- version = _version_nodot(py_version[:2])
- debug = pymalloc = ucs4 = ""
- with_debug = _get_config_var("Py_DEBUG", warn)
- has_refcount = hasattr(sys, "gettotalrefcount")
- # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
- # extension modules is the best option.
- # https://github.com/pypa/pip/issues/3383#issuecomment-173267692
- has_ext = "_d.pyd" in EXTENSION_SUFFIXES
- if with_debug or (with_debug is None and (has_refcount or has_ext)):
- debug = "d"
- if py_version < (3, 8):
- with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
- if with_pymalloc or with_pymalloc is None:
- pymalloc = "m"
- if py_version < (3, 3):
- unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
- if unicode_size == 4 or (
- unicode_size is None and sys.maxunicode == 0x10FFFF
- ):
- ucs4 = "u"
- elif debug:
- # Debug builds can also load "normal" extension modules.
- # We can also assume no UCS-4 or pymalloc requirement.
- abis.append(f"cp{version}")
- abis.insert(
- 0,
- "cp{version}{debug}{pymalloc}{ucs4}".format(
- version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4
- ),
- )
- return abis
-
-
-def cpython_tags(
- python_version: Optional[PythonVersion] = None,
- abis: Optional[Iterable[str]] = None,
- platforms: Optional[Iterable[str]] = None,
- *,
- warn: bool = False,
-) -> Iterator[Tag]:
- """
- Yields the tags for a CPython interpreter.
-
- The tags consist of:
-    - cp<python_version>-<abi>-<platform>
-    - cp<python_version>-abi3-<platform>
-    - cp<python_version>-none-<platform>
-    - cp<less than python_version>-abi3-<platform>  # Older Python versions down to 3.2.
-
- If python_version only specifies a major version then user-provided ABIs and
-    the 'none' ABI tag will be used.
-
- If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
- their normal position and not at the beginning.
- """
- if not python_version:
- python_version = sys.version_info[:2]
-
- interpreter = f"cp{_version_nodot(python_version[:2])}"
-
- if abis is None:
- if len(python_version) > 1:
- abis = _cpython_abis(python_version, warn)
- else:
- abis = []
- abis = list(abis)
- # 'abi3' and 'none' are explicitly handled later.
- for explicit_abi in ("abi3", "none"):
- try:
- abis.remove(explicit_abi)
- except ValueError:
- pass
-
- platforms = list(platforms or platform_tags())
- for abi in abis:
- for platform_ in platforms:
- yield Tag(interpreter, abi, platform_)
- if _abi3_applies(python_version):
- yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
- yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)
-
- if _abi3_applies(python_version):
- for minor_version in range(python_version[1] - 1, 1, -1):
- for platform_ in platforms:
- interpreter = "cp{version}".format(
- version=_version_nodot((python_version[0], minor_version))
- )
- yield Tag(interpreter, "abi3", platform_)
-
-
-def _generic_abi() -> List[str]:
- """
- Return the ABI tag based on EXT_SUFFIX.
- """
- # The following are examples of `EXT_SUFFIX`.
- # We want to keep the parts which are related to the ABI and remove the
- # parts which are related to the platform:
- # - linux: '.cpython-310-x86_64-linux-gnu.so' => cp310
- # - mac: '.cpython-310-darwin.so' => cp310
- # - win: '.cp310-win_amd64.pyd' => cp310
- # - win: '.pyd' => cp37 (uses _cpython_abis())
- # - pypy: '.pypy38-pp73-x86_64-linux-gnu.so' => pypy38_pp73
- # - graalpy: '.graalpy-38-native-x86_64-darwin.dylib'
- # => graalpy_38_native
-
- ext_suffix = _get_config_var("EXT_SUFFIX", warn=True)
- if not isinstance(ext_suffix, str) or ext_suffix[0] != ".":
- raise SystemError("invalid sysconfig.get_config_var('EXT_SUFFIX')")
- parts = ext_suffix.split(".")
- if len(parts) < 3:
- # CPython3.7 and earlier uses ".pyd" on Windows.
- return _cpython_abis(sys.version_info[:2])
- soabi = parts[1]
- if soabi.startswith("cpython"):
- # non-windows
- abi = "cp" + soabi.split("-")[1]
- elif soabi.startswith("cp"):
- # windows
- abi = soabi.split("-")[0]
- elif soabi.startswith("pypy"):
- abi = "-".join(soabi.split("-")[:2])
- elif soabi.startswith("graalpy"):
- abi = "-".join(soabi.split("-")[:3])
- elif soabi:
- # pyston, ironpython, others?
- abi = soabi
- else:
- return []
- return [_normalize_string(abi)]
-
-
-def generic_tags(
- interpreter: Optional[str] = None,
- abis: Optional[Iterable[str]] = None,
- platforms: Optional[Iterable[str]] = None,
- *,
- warn: bool = False,
-) -> Iterator[Tag]:
- """
- Yields the tags for a generic interpreter.
-
- The tags consist of:
-    - <interpreter>-<abi>-<platform>
-
- The "none" ABI will be added if it was not explicitly provided.
- """
- if not interpreter:
- interp_name = interpreter_name()
- interp_version = interpreter_version(warn=warn)
- interpreter = "".join([interp_name, interp_version])
- if abis is None:
- abis = _generic_abi()
- else:
- abis = list(abis)
- platforms = list(platforms or platform_tags())
- if "none" not in abis:
- abis.append("none")
- for abi in abis:
- for platform_ in platforms:
- yield Tag(interpreter, abi, platform_)
-
-
-def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
- """
- Yields Python versions in descending order.
-
- After the latest version, the major-only version will be yielded, and then
- all previous versions of that major version.
- """
- if len(py_version) > 1:
- yield f"py{_version_nodot(py_version[:2])}"
- yield f"py{py_version[0]}"
- if len(py_version) > 1:
- for minor in range(py_version[1] - 1, -1, -1):
- yield f"py{_version_nodot((py_version[0], minor))}"
-
-
-def compatible_tags(
- python_version: Optional[PythonVersion] = None,
- interpreter: Optional[str] = None,
- platforms: Optional[Iterable[str]] = None,
-) -> Iterator[Tag]:
- """
- Yields the sequence of tags that are compatible with a specific version of Python.
-
- The tags consist of:
-    - py*-none-<platform>
-    - <interpreter>-none-any  # ... if `interpreter` is provided.
-    - py*-none-any
- """
- if not python_version:
- python_version = sys.version_info[:2]
- platforms = list(platforms or platform_tags())
- for version in _py_interpreter_range(python_version):
- for platform_ in platforms:
- yield Tag(version, "none", platform_)
- if interpreter:
- yield Tag(interpreter, "none", "any")
- for version in _py_interpreter_range(python_version):
- yield Tag(version, "none", "any")
-
-
-def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
- if not is_32bit:
- return arch
-
- if arch.startswith("ppc"):
- return "ppc"
-
- return "i386"
-
-
-def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]:
- formats = [cpu_arch]
- if cpu_arch == "x86_64":
- if version < (10, 4):
- return []
- formats.extend(["intel", "fat64", "fat32"])
-
- elif cpu_arch == "i386":
- if version < (10, 4):
- return []
- formats.extend(["intel", "fat32", "fat"])
-
- elif cpu_arch == "ppc64":
- # TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
- if version > (10, 5) or version < (10, 4):
- return []
- formats.append("fat64")
-
- elif cpu_arch == "ppc":
- if version > (10, 6):
- return []
- formats.extend(["fat32", "fat"])
-
- if cpu_arch in {"arm64", "x86_64"}:
- formats.append("universal2")
-
- if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
- formats.append("universal")
-
- return formats
-
-
-def mac_platforms(
- version: Optional[MacVersion] = None, arch: Optional[str] = None
-) -> Iterator[str]:
- """
- Yields the platform tags for a macOS system.
-
- The `version` parameter is a two-item tuple specifying the macOS version to
- generate platform tags for. The `arch` parameter is the CPU architecture to
- generate platform tags for. Both parameters default to the appropriate value
- for the current system.
- """
- version_str, _, cpu_arch = platform.mac_ver()
- if version is None:
- version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2])))
- if version == (10, 16):
- # When built against an older macOS SDK, Python will report macOS 10.16
- # instead of the real version.
- version_str = subprocess.run(
- [
- sys.executable,
- "-sS",
- "-c",
- "import platform; print(platform.mac_ver()[0])",
- ],
- check=True,
- env={"SYSTEM_VERSION_COMPAT": "0"},
- stdout=subprocess.PIPE,
- universal_newlines=True,
- ).stdout
- version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2])))
- else:
- version = version
- if arch is None:
- arch = _mac_arch(cpu_arch)
- else:
- arch = arch
-
- if (10, 0) <= version and version < (11, 0):
- # Prior to Mac OS 11, each yearly release of Mac OS bumped the
- # "minor" version number. The major version was always 10.
- for minor_version in range(version[1], -1, -1):
- compat_version = 10, minor_version
- binary_formats = _mac_binary_formats(compat_version, arch)
- for binary_format in binary_formats:
- yield "macosx_{major}_{minor}_{binary_format}".format(
- major=10, minor=minor_version, binary_format=binary_format
- )
-
- if version >= (11, 0):
- # Starting with Mac OS 11, each yearly release bumps the major version
- # number. The minor versions are now the midyear updates.
- for major_version in range(version[0], 10, -1):
- compat_version = major_version, 0
- binary_formats = _mac_binary_formats(compat_version, arch)
- for binary_format in binary_formats:
- yield "macosx_{major}_{minor}_{binary_format}".format(
- major=major_version, minor=0, binary_format=binary_format
- )
-
- if version >= (11, 0):
- # Mac OS 11 on x86_64 is compatible with binaries from previous releases.
- # Arm64 support was introduced in 11.0, so no Arm binaries from previous
- # releases exist.
- #
- # However, the "universal2" binary format can have a
- # macOS version earlier than 11.0 when the x86_64 part of the binary supports
- # that version of macOS.
- if arch == "x86_64":
- for minor_version in range(16, 3, -1):
- compat_version = 10, minor_version
- binary_formats = _mac_binary_formats(compat_version, arch)
- for binary_format in binary_formats:
- yield "macosx_{major}_{minor}_{binary_format}".format(
- major=compat_version[0],
- minor=compat_version[1],
- binary_format=binary_format,
- )
- else:
- for minor_version in range(16, 3, -1):
- compat_version = 10, minor_version
- binary_format = "universal2"
- yield "macosx_{major}_{minor}_{binary_format}".format(
- major=compat_version[0],
- minor=compat_version[1],
- binary_format=binary_format,
- )
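-
-# Illustrative example: on an x86_64 machine reporting macOS 10.15 the generator
-# starts with the most specific tags and then walks back through older releases:
-#
-#     list(mac_platforms((10, 15), "x86_64"))[:3]
-#     # -> ['macosx_10_15_x86_64', 'macosx_10_15_intel', 'macosx_10_15_fat64']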
-
-
-def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
- linux = _normalize_string(sysconfig.get_platform())
- if is_32bit:
- if linux == "linux_x86_64":
- linux = "linux_i686"
- elif linux == "linux_aarch64":
- linux = "linux_armv7l"
- _, arch = linux.split("_", 1)
- yield from _manylinux.platform_tags(linux, arch)
- yield from _musllinux.platform_tags(arch)
- yield linux
-
-
-def _generic_platforms() -> Iterator[str]:
- yield _normalize_string(sysconfig.get_platform())
-
-
-def platform_tags() -> Iterator[str]:
- """
- Provides the platform tags for this installation.
- """
- if platform.system() == "Darwin":
- return mac_platforms()
- elif platform.system() == "Linux":
- return _linux_platforms()
- else:
- return _generic_platforms()
-
-
-def interpreter_name() -> str:
- """
- Returns the name of the running interpreter.
-
- Some implementations have a reserved, two-letter abbreviation which will
- be returned when appropriate.
- """
- name = sys.implementation.name
- return INTERPRETER_SHORT_NAMES.get(name) or name
-
-
-def interpreter_version(*, warn: bool = False) -> str:
- """
- Returns the version of the running interpreter.
- """
- version = _get_config_var("py_version_nodot", warn=warn)
- if version:
- version = str(version)
- else:
- version = _version_nodot(sys.version_info[:2])
- return version
-
-
-def _version_nodot(version: PythonVersion) -> str:
- return "".join(map(str, version))
-
-
-def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
- """
- Returns the sequence of tag triples for the running interpreter.
-
- The order of the sequence corresponds to priority order for the
- interpreter, from most to least important.
- """
-
- interp_name = interpreter_name()
- if interp_name == "cp":
- yield from cpython_tags(warn=warn)
- else:
- yield from generic_tags()
-
- if interp_name == "pp":
- interp = "pp3"
- elif interp_name == "cp":
- interp = "cp" + interpreter_version(warn=warn)
- else:
- interp = None
- yield from compatible_tags(interpreter=interp)
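-
-
-# Illustrative usage (the wheel-selection logic sketched here is an assumption
-# about how consumers such as installers use this module, not part of it):
-#
-#     supported = list(sys_tags())          # priority-ordered, best match first
-#     candidate = parse_tag("cp39-cp39-manylinux1_x86_64")
-#     best = min((t for t in candidate if t in supported),
-#                key=supported.index, default=None)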
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/dep_util.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/dep_util.py
deleted file mode 100644
index 48da8641c636edede90d6200076eaa0a41c88fbc..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/dep_util.py
+++ /dev/null
@@ -1,96 +0,0 @@
-"""distutils.dep_util
-
-Utility functions for simple, timestamp-based dependency of files
-and groups of files; also, function based entirely on such
-timestamp dependency analysis."""
-
-import os
-from .errors import DistutilsFileError
-
-
-def newer(source, target):
- """Return true if 'source' exists and is more recently modified than
- 'target', or if 'source' exists and 'target' doesn't. Return false if
- both exist and 'target' is the same age or younger than 'source'.
- Raise DistutilsFileError if 'source' does not exist.
- """
- if not os.path.exists(source):
- raise DistutilsFileError("file '%s' does not exist" % os.path.abspath(source))
- if not os.path.exists(target):
- return 1
-
- from stat import ST_MTIME
-
- mtime1 = os.stat(source)[ST_MTIME]
- mtime2 = os.stat(target)[ST_MTIME]
-
- return mtime1 > mtime2
-
-
-# newer ()
-
-
-def newer_pairwise(sources, targets):
- """Walk two filename lists in parallel, testing if each source is newer
- than its corresponding target. Return a pair of lists (sources,
- targets) where source is newer than target, according to the semantics
- of 'newer()'.
- """
- if len(sources) != len(targets):
- raise ValueError("'sources' and 'targets' must be same length")
-
- # build a pair of lists (sources, targets) where source is newer
- n_sources = []
- n_targets = []
- for i in range(len(sources)):
- if newer(sources[i], targets[i]):
- n_sources.append(sources[i])
- n_targets.append(targets[i])
-
- return (n_sources, n_targets)
-
-
-# newer_pairwise ()
-
-
-def newer_group(sources, target, missing='error'):
- """Return true if 'target' is out-of-date with respect to any file
- listed in 'sources'. In other words, if 'target' exists and is newer
- than every file in 'sources', return false; otherwise return true.
- 'missing' controls what we do when a source file is missing; the
- default ("error") is to blow up with an OSError from inside 'stat()';
- if it is "ignore", we silently drop any missing source files; if it is
- "newer", any missing source files make us assume that 'target' is
- out-of-date (this is handy in "dry-run" mode: it'll make you pretend to
- carry out commands that wouldn't work because inputs are missing, but
- that doesn't matter because you're not actually going to run the
- commands).
- """
- # If the target doesn't even exist, then it's definitely out-of-date.
- if not os.path.exists(target):
- return 1
-
- # Otherwise we have to find out the hard way: if *any* source file
- # is more recent than 'target', then 'target' is out-of-date and
- # we can immediately return true. If we fall through to the end
- # of the loop, then 'target' is up-to-date and we return false.
- from stat import ST_MTIME
-
- target_mtime = os.stat(target)[ST_MTIME]
- for source in sources:
- if not os.path.exists(source):
- if missing == 'error': # blow up when we stat() the file
- pass
- elif missing == 'ignore': # missing source dropped from
- continue # target's dependency list
- elif missing == 'newer': # missing source means target is
- return 1 # out-of-date
-
- source_mtime = os.stat(source)[ST_MTIME]
- if source_mtime > target_mtime:
- return 1
- else:
- return 0
-
-
-# newer_group ()
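-
-
-# Illustrative usage: rebuild an output only when it is missing, older than one
-# of its inputs, or an input has disappeared, e.g.
-#
-#     if newer_group(["spam.c", "spam.h"], "spam.o", missing="newer"):
-#         ...  # recompile spam.o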
diff --git a/spaces/Taoheed-O/Titanic/app.py b/spaces/Taoheed-O/Titanic/app.py
deleted file mode 100644
index 6784ccf9218adb93d22e29de2a584de752151992..0000000000000000000000000000000000000000
--- a/spaces/Taoheed-O/Titanic/app.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import joblib
-import pandas as pd
-import streamlit as st
-
-
-# loading in the model to predict on the data
-scaler = joblib.load(r'scaler.pickle')
-
-# loading Logistic Regression model
-classifier_lr = joblib.load(r'model_lr.pickle')
-
-# Loading Decision Tree model
-classifier_dt = joblib.load(r'model_dt.pickle')
-
-
-# the font and background color, the padding and the text to be displayed
-html_temp = """
-<div>
-<h1 style="text-align:center;">Titanic Survivors Prediction App</h1>
-</div>
- """
-# this line allows us to display the front end aspects we have
-# defined in the above code
-st.markdown(html_temp, unsafe_allow_html = True)
-# Image
-st.image("https://pngimg.com/uploads/titanic/titanic_PNG36.png")
-
-# giving the webpage a title
-st.title("Machine Learning [ Classification ]")
-
-# WElcome Function
-def welcome():
- return 'welcome all'
-
-# Features and labels
-features = ['sex_female', 'n_siblings_spouses_8', 'n_siblings_spouses_1',
- 'parch_6', 'n_siblings_spouses_4', 'parch_0', 'parch_5', 'n_siblings_spouses_0', 'parch_3',
- 'sex_male', 'Class_First', 'parch_2', 'alone_y', 'n_siblings_spouses_5', 'n_siblings_spouses_2',
- 'n_siblings_spouses_3', 'Class_Second', 'parch_1', 'alone_n', 'Class_Third', 'parch_4']
-labels = ['sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'Class', 'alone']
-
-# defining the function that makes the prediction (Logistic Regression) using the user inputs
-def predict_lr(sex, age, n_siblings_spouses, parch, fare, Class, alone):
- feature_names = [sex, age, n_siblings_spouses, parch, fare, Class, alone]
- features_df = pd.DataFrame([feature_names], columns=labels)
- categorical_features = ['sex', 'n_siblings_spouses', 'parch', 'Class', 'alone']
- numeric_features = ['age', 'fare']
- features_df[numeric_features] = scaler.transform(features_df[numeric_features])
- features_df = pd.get_dummies(features_df,columns=categorical_features)
-    # add any one-hot encoded columns the fitted model expects but this single row lacks
- c_engineering_features = set(features_df.columns) - set(numeric_features)
- missing_features = list(set(features) - c_engineering_features)
- for feature in missing_features:
- #add zeroes
- features_df[feature] = [0]*len(features_df)
- result = classifier_lr.predict(features_df)
- return result
-
-# defining the function that makes the prediction (Decision Tree) using the user inputs
-def predict_dt(sex, age, n_siblings_spouses, parch, fare, Class, alone):
- feature_names = [sex, age, n_siblings_spouses, parch, fare, Class, alone]
- features_df = pd.DataFrame([feature_names], columns=labels)
- categorical_features = ['sex', 'n_siblings_spouses', 'parch', 'Class', 'alone']
- numeric_features = ['age', 'fare']
- features_df[numeric_features] = scaler.transform(features_df[numeric_features])
- features_df = pd.get_dummies(features_df,columns=categorical_features)
-    # add any one-hot encoded columns the fitted model expects but this single row lacks
- c_engineering_features = set(features_df.columns) - set(numeric_features)
- missing_features = list(set(features) - c_engineering_features)
- for feature in missing_features:
- #add zeroes
- features_df[feature] = [0]*len(features_df)
- result = classifier_dt.predict(features_df)
- return result
-
-#The parameters and their input formats.
-
-# Gender
-st.write("Male / Female")
-sex = st.radio("Select gender", ('male', 'female'))
-
-# Age
-age = st.number_input("What is the age ?",0, 200)
-
-# Spouses and siblings
-st.write("Number of spouses & siblings.")
-n_siblings_spouses = st.slider("Select the number of siblings or spouses", 0,5)
-
-# Parch
-st.write("Parch number ")
-parch = st.slider("Select parch number", 0, 6)
-
-# Fare
-st.write("Fare")
-fare = st.number_input("Thousand Dollars($)")
-
-# Class
-st.write("First/Second/Third")
-Class = st.radio("Select Class", ('First', 'Second', 'Third'))
-
-# Alone
-passenger_status = st.radio("Is the passenger alone ?", ('yes', 'no'))
-#conditionals for alone status
-if (passenger_status) == 'yes':
- alone = 'y'
-else:
- alone = 'n'
-
-
-
-# this is the main function that defines what is displayed on the webpage
-def main():
- #List of available models
- options = st.radio("Available Models:", ["Logistic Regression", "Decision Tree"])
- result =""
-
- # the below line ensures that when the button called 'Predict' is clicked,
- # the prediction function defined above is called to make the prediction
- # and store it in the variable result
- if options == "Logistic Regression":
- st.success("You picked {}".format(options))
-
- if st.button('Predict'):
- result = predict_lr(sex, age, n_siblings_spouses, parch, fare, Class, alone)
- if result[0] == 0:
- st.error('Not a Survivor')
- else:
- st.success('A Survivor')
- else:
- st.warning("You picked {}[Less accurate]".format(options))
-
- if st.button('Predict'):
- result = predict_dt(sex, age, n_siblings_spouses, parch, fare, Class, alone)
- if result[0] == 0:
- st.error('Not a Survivor')
- else:
- st.success('A Survivor')
-
-# Links and Final Touches
- html_git = """
- Checkout my GitHub
-
- """
- html_linkedIn = """
- Connect with me on LinkedIn
-
- """
-
- # this line allows us to display the front end aspects we have
- # defined in the above code
- st.markdown(html_git, unsafe_allow_html = True)
- st.markdown(html_linkedIn, unsafe_allow_html = True)
-
-
-
-
-if __name__=='__main__':
- main()
diff --git a/spaces/VectorologyArt/Sygil-Sygil-Diffusion/app.py b/spaces/VectorologyArt/Sygil-Sygil-Diffusion/app.py
deleted file mode 100644
index 227563c324d6f1f854279ef4c317c6d5769572ae..0000000000000000000000000000000000000000
--- a/spaces/VectorologyArt/Sygil-Sygil-Diffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Sygil/Sygil-Diffusion").launch()
\ No newline at end of file
diff --git a/spaces/Violetmae14/Violet/README.md b/spaces/Violetmae14/Violet/README.md
deleted file mode 100644
index aad31342a3dec2d836c20a876727124e500d0cd2..0000000000000000000000000000000000000000
--- a/spaces/Violetmae14/Violet/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Violet
-emoji: 🌖
-colorFrom: purple
-colorTo: indigo
-sdk: static
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Vynock/rvc-wefu/infer_pack/attentions.py b/spaces/Vynock/rvc-wefu/infer_pack/attentions.py
deleted file mode 100644
index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000
--- a/spaces/Vynock/rvc-wefu/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from infer_pack import commons
-from infer_pack import modules
-from infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
-        # Concat extra elements so as to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along column
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
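-            # sigmoid-weighted approximation of GELU: gelu(x) ~= x * sigmoid(1.702 * x)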
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Wanlau/sovits-4.0_datealive/utils.py b/spaces/Wanlau/sovits-4.0_datealive/utils.py
deleted file mode 100644
index 229ac28ca48940370f63f2a7691ee6561910e2a6..0000000000000000000000000000000000000000
--- a/spaces/Wanlau/sovits-4.0_datealive/utils.py
+++ /dev/null
@@ -1,502 +0,0 @@
-import os
-import glob
-import re
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import random
-
-import librosa
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-from torch.nn import functional as F
-from modules.commons import sequence_mask
-from hubert import hubert_model
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-f0_bin = 256
-f0_max = 1100.0
-f0_min = 50.0
-f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
-
-# def normalize_f0(f0, random_scale=True):
-# f0_norm = f0.clone() # create a copy of the input Tensor
-# batch_size, _, frame_length = f0_norm.shape
-# for i in range(batch_size):
-# means = torch.mean(f0_norm[i, 0, :])
-# if random_scale:
-# factor = random.uniform(0.8, 1.2)
-# else:
-# factor = 1
-# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor
-# return f0_norm
-# def normalize_f0(f0, random_scale=True):
-# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True)
-# if random_scale:
-# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device)
-# else:
-# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device)
-# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1)
-# return f0_norm
-def normalize_f0(f0, x_mask, uv, random_scale=True):
- # calculate means based on x_mask
- uv_sum = torch.sum(uv, dim=1, keepdim=True)
- uv_sum[uv_sum == 0] = 9999
- means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum
-
- if random_scale:
- factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device)
- else:
- factor = torch.ones(f0.shape[0], 1).to(f0.device)
- # normalize f0 based on means and factor
- f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1)
- if torch.isnan(f0_norm).any():
- exit(0)
- return f0_norm * x_mask
-
-
-def plot_data_to_numpy(x, y):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- plt.plot(x)
- plt.plot(y)
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-
-def interpolate_f0(f0):
-    '''
-    Interpolate F0 across unvoiced frames; returns the interpolated contour
-    and a voiced/unvoiced (vuv) mask.
-    '''
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]
- last_value = data[i]
-
- return ip_data[:,0], vuv_vector[:,0]
-
-
-def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512):
- import parselmouth
- x = wav_numpy
- if p_len is None:
- p_len = x.shape[0]//hop_length
- else:
- assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error"
- time_step = hop_length / sampling_rate * 1000
- f0_min = 50
- f0_max = 1100
- f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size=(p_len - len(f0) + 1) // 2
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
- return f0
-
-def resize_f0(x, target_len):
- source = np.array(x)
- source[source<0.001] = np.nan
- target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source)
- res = np.nan_to_num(target)
- return res
-
-def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512):
- import pyworld
- if p_len is None:
- p_len = wav_numpy.shape[0]//hop_length
- f0, t = pyworld.dio(
- wav_numpy.astype(np.double),
- fs=sampling_rate,
- f0_ceil=800,
- frame_period=1000 * hop_length / sampling_rate,
- )
- f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return resize_f0(f0, p_len)
-
-def f0_to_coarse(f0):
- is_torch = isinstance(f0, torch.Tensor)
- f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
-
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
-    f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)
- assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
- return f0_coarse
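-
-# Illustrative example: Hz values are mapped onto 255 mel-spaced bins, with
-# unvoiced frames (f0 == 0) clamped to bin 1, e.g.
-#     f0_to_coarse(np.array([0.0, 50.0, 1100.0]))  # -> array([  1,   1, 255])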
-
-
-def get_hubert_model():
- vec_path = "hubert/checkpoint_best_legacy_500.pt"
- print("load model(s) from {}".format(vec_path))
- from fairseq import checkpoint_utils
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [vec_path],
- suffix="",
- )
- model = models[0]
- model.eval()
- return model
-
-def get_hubert_content(hmodel, wav_16k_tensor):
- feats = wav_16k_tensor
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.to(wav_16k_tensor.device),
- "padding_mask": padding_mask.to(wav_16k_tensor.device),
- "output_layer": 9, # layer 9
- }
- with torch.no_grad():
- logits = hmodel.extract_features(**inputs)
- feats = hmodel.final_proj(logits[0])
- return feats.transpose(1, 2)
-
-
-def get_content(cmodel, y):
- with torch.no_grad():
- c = cmodel.extract_features(y.squeeze(1))[0]
- c = c.transpose(1, 2)
- return c
-
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- # assert "dec" in k or "disc" in k
- # print("load", k)
- new_state_dict[k] = saved_state_dict[k]
- assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape)
- except:
- print("error, %s is not in the checkpoint" % k)
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- print("load ")
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info("Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path))
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict,
- 'iteration': iteration,
- 'optimizer': optimizer.state_dict(),
- 'learning_rate': learning_rate}, checkpoint_path)
-
-def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True):
- """Freeing up space by deleting saved ckpts
-
- Arguments:
- path_to_models -- Path to the model directory
- n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth
- sort_by_time -- True -> chronologically delete ckpts
- False -> lexicographically delete ckpts
- """
- ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))]
-    name_key = (lambda _f: int(re.compile(r'._(\d+)\.pth').match(_f).group(1)))
- time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f)))
- sort_key = time_key if sort_by_time else name_key
- x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key)
- to_del = [os.path.join(path_to_models, fn) for fn in
- (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])]
- del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}")
- del_routine = lambda x: [os.remove(x), del_info(x)]
- rs = [del_routine(fn) for fn in to_del]
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats='HWC')
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10,2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams =HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams =HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-def repeat_expand_2d(content, target_len):
- # content : [h, t]
-
- src_len = content.shape[-1]
- target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device)
- temp = torch.arange(src_len+1) * target_len / src_len
- current_pos = 0
- for i in range(target_len):
- if i < temp[current_pos+1]:
- target[:, i] = content[:, current_pos]
- else:
- current_pos += 1
- target[:, i] = content[:, current_pos]
-
- return target
-
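-# Example (illustrative; shapes are made up): nearest-position expansion along time
-#   content = torch.randn(256, 100)            # [h, t]
-#   expanded = repeat_expand_2d(content, 250)  # [256, 250]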
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
-
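-# Example (illustrative keys/values): HParams wraps nested config dicts with attribute access
-#   hps = HParams(train={"batch_size": 16}, model={"hidden_channels": 192})
-#   hps.train.batch_size  # -> 16
-#   "model" in hps        # -> True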
diff --git a/spaces/Xalphinions/tab-cot/README.md b/spaces/Xalphinions/tab-cot/README.md
deleted file mode 100644
index 7e8d9d5378f63fd3bc266e3dcb4a204dc639c546..0000000000000000000000000000000000000000
--- a/spaces/Xalphinions/tab-cot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Tab Cot
-emoji: 👀
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Xenova/sponsorblock-ml/src/classify.py b/spaces/Xenova/sponsorblock-ml/src/classify.py
deleted file mode 100644
index 358aa1fb1a4740a4e543a1e764c8e5287ab1f316..0000000000000000000000000000000000000000
--- a/spaces/Xenova/sponsorblock-ml/src/classify.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from transformers import TextClassificationPipeline
-import preprocess
-import segment
-
-
-class SponsorBlockClassificationPipeline(TextClassificationPipeline):
- def __init__(self, model, tokenizer):
- device = next(model.parameters()).device.index
- if device is None:
- device = -1
- super().__init__(model=model, tokenizer=tokenizer,
- return_all_scores=True, truncation=True, device=device)
-
- def preprocess(self, data, **tokenizer_kwargs):
- # TODO add support for lists
- texts = []
-
- if not isinstance(data, list):
- data = [data]
-
- for d in data:
- if isinstance(d, dict): # If a dict, fetch the segment text from the video transcript
- words = preprocess.get_words(d['video_id'])
- segment_words = segment.extract_segment(
- words, d['start'], d['end'])
- text = preprocess.clean_text(
- ' '.join(x['text'] for x in segment_words))
- texts.append(text)
- elif isinstance(d, str): # If string, assume this is what user wants to classify
- texts.append(d)
- else:
- raise ValueError(f'Invalid input type: "{type(d)}"')
-
- return self.tokenizer(
- texts, return_tensors=self.framework, **tokenizer_kwargs)
-
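-# Note: preprocess() accepts either raw strings, which are classified as-is, or dicts of the
-# form {'video_id': ..., 'start': ..., 'end': ...}, for which the matching transcript segment
-# is fetched, cleaned, and then tokenized.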
-
-def main():
- pass
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/XingHe0127/Chatbot/modules/config.py b/spaces/XingHe0127/Chatbot/modules/config.py
deleted file mode 100644
index a020881ca56526e55b0de23a69fe137edb793c12..0000000000000000000000000000000000000000
--- a/spaces/XingHe0127/Chatbot/modules/config.py
+++ /dev/null
@@ -1,168 +0,0 @@
-from collections import defaultdict
-from contextlib import contextmanager
-import os
-import logging
-import sys
-import commentjson as json
-
-from . import shared
-from . import presets
-
-
-__all__ = [
- "my_api_key",
- "authflag",
- "auth_list",
- "dockerflag",
- "retrieve_proxy",
- "log_level",
- "advance_docs",
- "update_doc_config",
- "multi_api_key",
- "server_name",
- "server_port",
- "share",
-]
-
-# Add a unified config file to avoid the confusion caused by having too many files (lowest priority)
-# It also lays the groundwork for config support of future customization features
-if os.path.exists("config.json"):
- with open("config.json", "r", encoding='utf-8') as f:
- config = json.load(f)
-else:
- config = {}
-
-language = config.get("language", "auto")
-
-if os.path.exists("api_key.txt"):
- logging.info("检测到api_key.txt文件,正在进行迁移...")
- with open("api_key.txt", "r") as f:
- config["openai_api_key"] = f.read().strip()
- os.rename("api_key.txt", "api_key(deprecated).txt")
- with open("config.json", "w", encoding='utf-8') as f:
- json.dump(config, f, indent=4)
-
-if os.path.exists("auth.json"):
- logging.info("检测到auth.json文件,正在进行迁移...")
- auth_list = []
- with open("auth.json", "r", encoding='utf-8') as f:
- auth = json.load(f)
- for _ in auth:
- if auth[_]["username"] and auth[_]["password"]:
- auth_list.append((auth[_]["username"], auth[_]["password"]))
- else:
- logging.error("请检查auth.json文件中的用户名和密码!")
- sys.exit(1)
- config["users"] = auth_list
- os.rename("auth.json", "auth(deprecated).json")
- with open("config.json", "w", encoding='utf-8') as f:
- json.dump(config, f, indent=4)
-
-## Handle the case where we are running in Docker
-dockerflag = config.get("dockerflag", False)
-if os.environ.get("dockerrun") == "yes":
- dockerflag = True
-
-## Handle the API key and the list of allowed users
-my_api_key = config.get("openai_api_key", "") # Enter your API key here
-my_api_key = os.environ.get("my_api_key", my_api_key)
-
-## Multi-account mechanism
-multi_api_key = config.get("multi_api_key", False) # Whether to enable the multi-account mechanism
-if multi_api_key:
- api_key_list = config.get("api_key_list", [])
- if len(api_key_list) == 0:
- logging.error("多账号模式已开启,但api_key_list为空,请检查config.json")
- sys.exit(1)
- shared.state.set_api_key_queue(api_key_list)
-
-auth_list = config.get("users", []) # Effectively the list of allowed users
-authflag = len(auth_list) > 0 # Whether authentication is enabled, determined by the length of auth_list
-
-# Handle a custom api_host; the environment variable takes precedence and is wired in automatically if present
-api_host = os.environ.get("api_host", config.get("api_host", ""))
-if api_host:
- shared.state.set_api_host(api_host)
-
-@contextmanager
-def retrieve_openai_api(api_key = None):
- old_api_key = os.environ.get("OPENAI_API_KEY", "")
- if api_key is None:
- os.environ["OPENAI_API_KEY"] = my_api_key
- yield my_api_key
- else:
- os.environ["OPENAI_API_KEY"] = api_key
- yield api_key
- os.environ["OPENAI_API_KEY"] = old_api_key
-
-## Handle logging
-log_level = config.get("log_level", "INFO")
-logging.basicConfig(
- level=log_level,
- format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
-)
-
-## Handle proxies:
-http_proxy = config.get("http_proxy", "")
-https_proxy = config.get("https_proxy", "")
-http_proxy = os.environ.get("HTTP_PROXY", http_proxy)
-https_proxy = os.environ.get("HTTPS_PROXY", https_proxy)
-
-# Reset the system variables: leave the environment variables unset when no proxy is needed, to avoid global proxy errors
-os.environ["HTTP_PROXY"] = ""
-os.environ["HTTPS_PROXY"] = ""
-
-local_embedding = config.get("local_embedding", False) # Whether to use local embeddings
-
-@contextmanager
-def retrieve_proxy(proxy=None):
- """
- 1. If proxy is None, set the environment variables and return the most recently configured proxy.
- 2. If proxy is not None, update the current proxy configuration, but do not touch the environment variables.
- """
- global http_proxy, https_proxy
- if proxy is not None:
- http_proxy = proxy
- https_proxy = proxy
- yield http_proxy, https_proxy
- else:
- old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"]
- os.environ["HTTP_PROXY"] = http_proxy
- os.environ["HTTPS_PROXY"] = https_proxy
- yield http_proxy, https_proxy # return new proxy
-
- # return old proxy
- os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var
-
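-# Example (illustrative usage sketch; the proxy address is a placeholder):
-#   with retrieve_proxy() as (http_p, https_p):
-#       ...  # HTTP_PROXY / HTTPS_PROXY are exported here and restored on exit
-#   with retrieve_proxy("http://127.0.0.1:7890") as (http_p, https_p):
-#       ...  # overrides the module-level proxy values without touching the environment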
-
-## Handle advanced docs
-advance_docs = defaultdict(lambda: defaultdict(dict))
-advance_docs.update(config.get("advance_docs", {}))
-def update_doc_config(two_column_pdf):
- global advance_docs
- advance_docs["pdf"]["two_column"] = two_column_pdf
-
- logging.info(f"更新后的文件参数为:{advance_docs}")
-
-## Handle gradio.launch parameters
-server_name = config.get("server_name", None)
-server_port = config.get("server_port", None)
-if server_name is None:
- if dockerflag:
- server_name = "0.0.0.0"
- else:
- server_name = "127.0.0.1"
-if server_port is None:
- if dockerflag:
- server_port = 7860
-
-assert server_port is None or type(server_port) == int, "要求port设置为int类型"
-
-# Set the default model
-default_model = config.get("default_model", "")
-try:
- presets.DEFAULT_MODEL = presets.MODELS.index(default_model)
-except ValueError:
- pass
-
-share = config.get("share", False)
diff --git a/spaces/XzJosh/Aatrox-Bert-VITS2/text/japanese.py b/spaces/XzJosh/Aatrox-Bert-VITS2/text/japanese.py
deleted file mode 100644
index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Aatrox-Bert-VITS2/text/japanese.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py
-import re
-import sys
-
-import pyopenjtalk
-
-from text import symbols
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def preprocess_jap(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = []
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- p = pyopenjtalk.g2p(sentence)
- text += p.split(" ")
-
- if i < len(marks):
- text += [marks[i].replace(' ', '')]
- return text
-
-def text_normalize(text):
- # todo: jap text normalize
- return text
-
-def g2p(norm_text):
- phones = preprocess_jap(norm_text)
- phones = [post_replace_ph(i) for i in phones]
- # todo: implement tones and word2ph
- tones = [0 for i in phones]
- word2ph = [1 for i in phones]
- return phones, tones, word2ph
-
-
-if __name__ == '__main__':
- for line in open("../../../Downloads/transcript_utf8.txt").readlines():
- text = line.split(":")[1]
- phones, tones, word2ph = g2p(text)
- for p in phones:
- if p == "z":
- print(text, phones)
- sys.exit(0)
diff --git a/spaces/XzJosh/Azusa-Bert-VITS2/text/cleaner.py b/spaces/XzJosh/Azusa-Bert-VITS2/text/cleaner.py
deleted file mode 100644
index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Azusa-Bert-VITS2/text/cleaner.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from text import chinese, cleaned_text_to_sequence
-
-
-language_module_map = {
- 'ZH': chinese
-}
-
-
-def clean_text(text, language):
- language_module = language_module_map[language]
- norm_text = language_module.text_normalize(text)
- phones, tones, word2ph = language_module.g2p(norm_text)
- return norm_text, phones, tones, word2ph
-
-def clean_text_bert(text, language):
- language_module = language_module_map[language]
- norm_text = language_module.text_normalize(text)
- phones, tones, word2ph = language_module.g2p(norm_text)
- bert = language_module.get_bert_feature(norm_text, word2ph)
- return phones, tones, bert
-
-def text_to_sequence(text, language):
- norm_text, phones, tones, word2ph = clean_text(text, language)
- return cleaned_text_to_sequence(phones, tones, language)
-
-if __name__ == '__main__':
- pass
diff --git a/spaces/XzJosh/Diana-Bert-VITS2/commons.py b/spaces/XzJosh/Diana-Bert-VITS2/commons.py
deleted file mode 100644
index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Diana-Bert-VITS2/commons.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
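- # Element-wise KL between diagonal Gaussians P = N(m_p, exp(logs_p)^2) and Q = N(m_q, exp(logs_q)^2):
- #   KL = log(sigma_q) - log(sigma_p) - 1/2 + (sigma_p^2 + (m_p - m_q)^2) / (2 * sigma_q^2)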
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
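- # Transformer-style sinusoidal timing signal: the first half of the channels holds
- # sin(position * inv_timescale), the second half cos(position * inv_timescale),
- # with timescales spaced geometrically between min_timescale and max_timescale.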
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
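-
- Converts per-token durations into a hard monotonic alignment: token j covers the frames
- in [cumsum(duration)[j-1], cumsum(duration)[j]), masked to each item's valid region.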
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/XzJosh/Diana-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/Diana-Bert-VITS2/preprocess_text.py
deleted file mode 100644
index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Diana-Bert-VITS2/preprocess_text.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-stage = [1,2,3]
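-# Stages (run in order): 1 = clean/G2P the raw transcription list,
-# 2 = split the cleaned list into per-speaker train/val subsets,
-# 3 = write the speaker-to-id map back into the training config.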
-
-transcription_path = 'filelists/genshin.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
- except Exception as error:
- print("err!", line.strip(), error)
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
-
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path, encoding='utf-8'))
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
diff --git a/spaces/XzJosh/Wenjing-Bert-VITS2/resample.py b/spaces/XzJosh/Wenjing-Bert-VITS2/resample.py
deleted file mode 100644
index 2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Wenjing-Bert-VITS2/resample.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-
-import soundfile
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True)
- wav, sr = librosa.load(wav_path, sr=args.sr)
- soundfile.write(
- os.path.join(args.out_dir, speaker, wav_name),
- wav,
- sr
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir")
- parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir")
- args = parser.parse_args()
- # processs = 8
- processs = cpu_count()-2 if cpu_count() >4 else 1
- pool = Pool(processes=processs)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
diff --git a/spaces/XzJosh/nine1-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/nine1-Bert-VITS2/monotonic_align/__init__.py
deleted file mode 100644
index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/nine1-Bert-VITS2/monotonic_align/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-def maximum_path(neg_cent, mask):
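- # Runs the JIT monotonic-alignment search on NumPy arrays: `path` is filled per batch
- # item up to its true lengths along each axis (t_t_max, t_s_max), then returned as a
- # tensor on the original device and dtype.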
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/YUANAI/DiffspeechResearch/utils/audio/__init__.py b/spaces/YUANAI/DiffspeechResearch/utils/audio/__init__.py
deleted file mode 100644
index e8cc4466b27eeda4026e945a5388dca04817e8a1..0000000000000000000000000000000000000000
--- a/spaces/YUANAI/DiffspeechResearch/utils/audio/__init__.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import librosa
-import numpy as np
-import pyloudnorm as pyln
-
-from utils.audio.vad import trim_long_silences
-
-
-def librosa_pad_lr(x, fsize, fshift, pad_sides=1):
- '''compute right padding (final frame) or both sides padding (first and final frames)
- '''
- assert pad_sides in (1, 2)
- # return int(fsize // 2)
- pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0]
- if pad_sides == 1:
- return 0, pad
- else:
- return pad // 2, pad // 2 + pad % 2
-
-
-def amp_to_db(x):
- return 20 * np.log10(np.maximum(1e-5, x))
-
-
-def db_to_amp(x):
- return 10.0 ** (x * 0.05)
-
-
-def normalize(S, min_level_db):
- return (S - min_level_db) / -min_level_db
-
-
-def denormalize(D, min_level_db):
- return (D * -min_level_db) + min_level_db
-
-
-def librosa_wav2spec(wav_path,
- fft_size=1024,
- hop_size=256,
- win_length=1024,
- window="hann",
- num_mels=80,
- fmin=80,
- fmax=-1,
- eps=1e-6,
- sample_rate=22050,
- loud_norm=False,
- trim_long_sil=False):
- if isinstance(wav_path, str):
- if trim_long_sil:
- wav, _, _ = trim_long_silences(wav_path, sample_rate)
- else:
- wav, _ = librosa.core.load(wav_path, sr=sample_rate)
- else:
- wav = wav_path
-
- if loud_norm:
- meter = pyln.Meter(sample_rate) # create BS.1770 meter
- loudness = meter.integrated_loudness(wav)
- wav = pyln.normalize.loudness(wav, loudness, -22.0)
- if np.abs(wav).max() > 1:
- wav = wav / np.abs(wav).max()
-
- # get amplitude spectrogram
- x_stft = librosa.stft(wav, n_fft=fft_size, hop_length=hop_size,
- win_length=win_length, window=window, pad_mode="constant")
- linear_spc = np.abs(x_stft) # (n_bins, T)
-
- # get mel basis
- fmin = 0 if fmin == -1 else fmin
- fmax = sample_rate / 2 if fmax == -1 else fmax
- mel_basis = librosa.filters.mel(sample_rate, fft_size, num_mels, fmin, fmax)
-
- # calculate mel spec
- mel = mel_basis @ linear_spc
- mel = np.log10(np.maximum(eps, mel)) # (n_mel_bins, T)
- l_pad, r_pad = librosa_pad_lr(wav, fft_size, hop_size, 1)
- wav = np.pad(wav, (l_pad, r_pad), mode='constant', constant_values=0.0)
- wav = wav[:mel.shape[1] * hop_size]
-
- # log linear spec
- linear_spc = np.log10(np.maximum(eps, linear_spc))
- return {'wav': wav, 'mel': mel.T, 'linear': linear_spc.T, 'mel_basis': mel_basis}
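-# Example (illustrative; the file name is a placeholder):
-#   spec = librosa_wav2spec('sample.wav', fft_size=1024, hop_size=256, num_mels=80)
-#   spec['mel'].shape  # (T, 80) log10 mel spectrogram
-#   spec['wav'].shape  # (T * 256,) waveform padded/trimmed to match the mel frames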
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/shared.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/shared.py
deleted file mode 100644
index 2d0f7bf3999064a68f28a1207d65a2de7ae98c0a..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/export/shared.py
+++ /dev/null
@@ -1,1034 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import collections
-import contextlib
-import copy
-import functools
-import logging
-import numpy as np
-import os
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-from unittest import mock
-import caffe2.python.utils as putils
-import torch
-import torch.nn.functional as F
-from caffe2.proto import caffe2_pb2
-from caffe2.python import core, net_drawer, workspace
-from torch.nn.functional import interpolate as interp
-
-logger = logging.getLogger(__name__)
-
-
-# ==== torch/utils_toffee/cast.py =======================================
-
-
-def to_device(t, device_str):
- """
- This function is a replacement of .to(another_device) such that it allows the
- casting to be traced properly by explicitly calling the underlying copy ops.
- It also avoids introducing unnecessary ops when casting to the same device.
- """
- src = t.device
- dst = torch.device(device_str)
-
- if src == dst:
- return t
- elif src.type == "cuda" and dst.type == "cpu":
- return torch.ops._caffe2.CopyGPUToCPU(t)
- elif src.type == "cpu" and dst.type == "cuda":
- return torch.ops._caffe2.CopyCPUToGPU(t)
- else:
- raise RuntimeError("Can't cast tensor from device {} to device {}".format(src, dst))
-
-
-# ==== torch/utils_toffee/interpolate.py =======================================
-
-
-# Note: borrowed from vision/detection/fair/detectron/detectron/modeling/detector.py
-def BilinearInterpolation(tensor_in, up_scale):
- assert up_scale % 2 == 0, "Scale should be even"
-
- def upsample_filt(size):
- factor = (size + 1) // 2
- if size % 2 == 1:
- center = factor - 1
- else:
- center = factor - 0.5
-
- og = np.ogrid[:size, :size]
- return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
-
- kernel_size = int(up_scale) * 2
- bil_filt = upsample_filt(kernel_size)
-
- dim = int(tensor_in.shape[1])
- kernel = np.zeros((dim, dim, kernel_size, kernel_size), dtype=np.float32)
- kernel[range(dim), range(dim), :, :] = bil_filt
-
- tensor_out = F.conv_transpose2d(
- tensor_in,
- weight=to_device(torch.Tensor(kernel), tensor_in.device),
- bias=None,
- stride=int(up_scale),
- padding=int(up_scale / 2),
- )
-
- return tensor_out
-
-
-# NOTE: ONNX is incompatible with traced torch.nn.functional.interpolate if
-# using dynamic `scale_factor` rather than static `size`. (T43166860)
-# NOTE: Caffe2 Int8 conversion might not be able to quantize `size` properly.
-def onnx_compatibale_interpolate(
- input, size=None, scale_factor=None, mode="nearest", align_corners=None
-):
- # NOTE: The input dimensions are interpreted in the form:
- # `mini-batch x channels x [optional depth] x [optional height] x width`.
- if size is None and scale_factor is not None:
- if input.dim() == 4:
- if isinstance(scale_factor, (int, float)):
- height_scale, width_scale = (scale_factor, scale_factor)
- else:
- assert isinstance(scale_factor, (tuple, list))
- assert len(scale_factor) == 2
- height_scale, width_scale = scale_factor
-
- assert not align_corners, "No matching C2 op for align_corners == True"
- if mode == "nearest":
- return torch.ops._caffe2.ResizeNearest(
- input, order="NCHW", width_scale=width_scale, height_scale=height_scale
- )
- elif mode == "bilinear":
- logger.warning(
- "Use F.conv_transpose2d for bilinear interpolate"
- " because there's no such C2 op, this may cause significant"
- " slowdown and the boundary pixels won't be the same as"
- " using F.interpolate due to padding."
- )
- assert height_scale == width_scale
- return BilinearInterpolation(input, up_scale=height_scale)
- logger.warning("Output size is not static, it might cause ONNX conversion issue")
-
- return interp(input, size, scale_factor, mode, align_corners)
-
-
-@contextlib.contextmanager
-def mock_torch_nn_functional_interpolate():
- if torch.onnx.is_in_onnx_export():
- with mock.patch(
- "torch.nn.functional.interpolate", side_effect=onnx_compatibale_interpolate
- ):
- yield
- else:
- yield
-
-
-# ==== torch/utils_caffe2/ws_utils.py ==========================================
-
-
-class ScopedWS(object):
- def __init__(self, ws_name, is_reset, is_cleanup=False):
- self.ws_name = ws_name
- self.is_reset = is_reset
- self.is_cleanup = is_cleanup
- self.org_ws = ""
-
- def __enter__(self):
- self.org_ws = workspace.CurrentWorkspace()
- if self.ws_name is not None:
- workspace.SwitchWorkspace(self.ws_name, True)
- if self.is_reset:
- workspace.ResetWorkspace()
-
- return workspace
-
- def __exit__(self, *args):
- if self.is_cleanup:
- workspace.ResetWorkspace()
- if self.ws_name is not None:
- workspace.SwitchWorkspace(self.org_ws)
-
-
-def fetch_any_blob(name):
- bb = None
- try:
- bb = workspace.FetchBlob(name)
- except TypeError:
- bb = workspace.FetchInt8Blob(name)
- except Exception as e:
- logger.error("Get blob {} error: {}".format(name, e))
-
- return bb
-
-
-# ==== torch/utils_caffe2/protobuf.py ==========================================
-
-
-def get_pb_arg(pb, arg_name):
- for x in pb.arg:
- if x.name == arg_name:
- return x
- return None
-
-
-def get_pb_arg_valf(pb, arg_name, default_val):
- arg = get_pb_arg(pb, arg_name)
- return arg.f if arg is not None else default_val
-
-
-def get_pb_arg_floats(pb, arg_name, default_val):
- arg = get_pb_arg(pb, arg_name)
- return list(map(float, arg.floats)) if arg is not None else default_val
-
-
-def get_pb_arg_ints(pb, arg_name, default_val):
- arg = get_pb_arg(pb, arg_name)
- return list(map(int, arg.ints)) if arg is not None else default_val
-
-
-def get_pb_arg_vali(pb, arg_name, default_val):
- arg = get_pb_arg(pb, arg_name)
- return arg.i if arg is not None else default_val
-
-
-def get_pb_arg_vals(pb, arg_name, default_val):
- arg = get_pb_arg(pb, arg_name)
- return arg.s if arg is not None else default_val
-
-
-def get_pb_arg_valstrings(pb, arg_name, default_val):
- arg = get_pb_arg(pb, arg_name)
- return list(arg.strings) if arg is not None else default_val
-
-
-def check_set_pb_arg(pb, arg_name, arg_attr, arg_value, allow_override=False):
- arg = get_pb_arg(pb, arg_name)
- if arg is None:
- arg = putils.MakeArgument(arg_name, arg_value)
- assert hasattr(arg, arg_attr)
- pb.arg.extend([arg])
- if allow_override and getattr(arg, arg_attr) != arg_value:
- logger.warning(
- "Override argument {}: {} -> {}".format(arg_name, getattr(arg, arg_attr), arg_value)
- )
- setattr(arg, arg_attr, arg_value)
- else:
- assert arg is not None
- assert getattr(arg, arg_attr) == arg_value, "Existing value {}, new value {}".format(
- getattr(arg, arg_attr), arg_value
- )
-
-
-def _create_const_fill_op_from_numpy(name, tensor, device_option=None):
- assert type(tensor) == np.ndarray
- kTypeNameMapper = {
- np.dtype("float32"): "GivenTensorFill",
- np.dtype("int32"): "GivenTensorIntFill",
- np.dtype("int64"): "GivenTensorInt64Fill",
- np.dtype("uint8"): "GivenTensorStringFill",
- }
-
- args_dict = {}
- if tensor.dtype == np.dtype("uint8"):
- args_dict.update({"values": [str(tensor.data)], "shape": [1]})
- else:
- args_dict.update({"values": tensor, "shape": tensor.shape})
-
- if device_option is not None:
- args_dict["device_option"] = device_option
-
- return core.CreateOperator(kTypeNameMapper[tensor.dtype], [], [name], **args_dict)
-
-
-def _create_const_fill_op_from_c2_int8_tensor(name, int8_tensor):
- assert type(int8_tensor) == workspace.Int8Tensor
- kTypeNameMapper = {
- np.dtype("int32"): "Int8GivenIntTensorFill",
- np.dtype("uint8"): "Int8GivenTensorFill",
- }
-
- tensor = int8_tensor.data
- assert tensor.dtype in [np.dtype("uint8"), np.dtype("int32")]
- values = tensor.tobytes() if tensor.dtype == np.dtype("uint8") else tensor
-
- return core.CreateOperator(
- kTypeNameMapper[tensor.dtype],
- [],
- [name],
- values=values,
- shape=tensor.shape,
- Y_scale=int8_tensor.scale,
- Y_zero_point=int8_tensor.zero_point,
- )
-
-
-def create_const_fill_op(
- name: str,
- blob: Union[np.ndarray, workspace.Int8Tensor],
- device_option: Optional[caffe2_pb2.DeviceOption] = None,
-) -> caffe2_pb2.OperatorDef:
- """
- Given a blob object, return the Caffe2 operator that creates this blob
- as a constant. Currently supports NumPy tensors and Caffe2 Int8Tensor.
- """
-
- tensor_type = type(blob)
- assert tensor_type in [
- np.ndarray,
- workspace.Int8Tensor,
- ], 'Error when creating const fill op for "{}", unsupported blob type: {}'.format(
- name, type(blob)
- )
-
- if tensor_type == np.ndarray:
- return _create_const_fill_op_from_numpy(name, blob, device_option)
- elif tensor_type == workspace.Int8Tensor:
- assert device_option is None
- return _create_const_fill_op_from_c2_int8_tensor(name, blob)
-
-
-def construct_init_net_from_params(
- params: Dict[str, Any], device_options: Optional[Dict[str, caffe2_pb2.DeviceOption]] = None
-) -> caffe2_pb2.NetDef:
- """
- Construct the init_net from params dictionary
- """
- init_net = caffe2_pb2.NetDef()
- device_options = device_options or {}
- for name, blob in params.items():
- if isinstance(blob, str):
- logger.warning(
- (
- "Blob {} with type {} is not supported in generating init net,"
- " skipped.".format(name, type(blob))
- )
- )
- continue
- init_net.op.extend(
- [create_const_fill_op(name, blob, device_option=device_options.get(name, None))]
- )
- init_net.external_output.append(name)
- return init_net
-
-
-def get_producer_map(ssa):
- """
- Return dict from versioned blob to (i, j),
- where i is index of producer op, j is the index of output of that op.
- """
- producer_map = {}
- for i in range(len(ssa)):
- outputs = ssa[i][1]
- for j, outp in enumerate(outputs):
- producer_map[outp] = (i, j)
- return producer_map
-
-
-def get_consumer_map(ssa):
- """
- Return dict from versioned blob to list of (i, j),
- where i is index of consumer op, j is the index of input of that op.
- """
- consumer_map = collections.defaultdict(list)
- for i in range(len(ssa)):
- inputs = ssa[i][0]
- for j, inp in enumerate(inputs):
- consumer_map[inp].append((i, j))
- return consumer_map
-
-
-def get_params_from_init_net(
- init_net: caffe2_pb2.NetDef,
-) -> [Dict[str, Any], Dict[str, caffe2_pb2.DeviceOption]]:
- """
- Take the output blobs from init_net by running it.
- Outputs:
- params: dict from blob name to numpy array
- device_options: dict from blob name to the device option of its creating op
- """
- # NOTE: this assumes that the params are determined by their producer ops, with the
- # only exception being CopyGPUToCPU, which is a CUDA op but returns a CPU tensor.
- def _get_device_option(producer_op):
- if producer_op.type == "CopyGPUToCPU":
- return caffe2_pb2.DeviceOption()
- else:
- return producer_op.device_option
-
- with ScopedWS("__get_params_from_init_net__", is_reset=True, is_cleanup=True) as ws:
- ws.RunNetOnce(init_net)
- params = {b: fetch_any_blob(b) for b in init_net.external_output}
- ssa, versions = core.get_ssa(init_net)
- producer_map = get_producer_map(ssa)
- device_options = {
- b: _get_device_option(init_net.op[producer_map[(b, versions[b])][0]])
- for b in init_net.external_output
- }
- return params, device_options
-
-
-def _updater_raise(op, input_types, output_types):
- raise RuntimeError(
- "Failed to apply updater for op {} given input_types {} and"
- " output_types {}".format(op, input_types, output_types)
- )
-
-
-def _generic_status_identifier(
- predict_net: caffe2_pb2.NetDef,
- status_updater: Callable,
- known_status: Dict[Tuple[str, int], Any],
-) -> Dict[Tuple[str, int], Any]:
- """
- Statically infer the status of each blob; the status can be, for example, device type
- (CPU/GPU), layout (NCHW/NHWC), data type (float32/int8), etc. "Blob" here means a
- versioned blob (Tuple[str, int]) in the format compatible with ssa.
- Inputs:
- predict_net: the caffe2 network
- status_updater: a callable, given an op and the status of its input/output,
- it returns the updated status of input/output. `None` is used for
- representing unknown status.
- known_status: a dict containing known status, used as initialization.
- Outputs:
- A dict mapping from versioned blob to its status
- """
- ssa, versions = core.get_ssa(predict_net)
- versioned_ext_input = [(b, 0) for b in predict_net.external_input]
- versioned_ext_output = [(b, versions[b]) for b in predict_net.external_output]
- all_versioned_blobs = set().union(*[set(x[0] + x[1]) for x in ssa])
-
- allowed_vbs = all_versioned_blobs.union(versioned_ext_input).union(versioned_ext_output)
- assert all(k in allowed_vbs for k in known_status)
- assert all(v is not None for v in known_status.values())
- _known_status = copy.deepcopy(known_status)
-
- def _check_and_update(key, value):
- assert value is not None
- if key in _known_status:
- if not _known_status[key] == value:
- raise RuntimeError(
- "Conflict status for {}, existing status {}, new status {}".format(
- key, _known_status[key], value
- )
- )
- _known_status[key] = value
-
- def _update_i(op, ssa_i):
- versioned_inputs = ssa_i[0]
- versioned_outputs = ssa_i[1]
-
- inputs_status = [_known_status.get(b, None) for b in versioned_inputs]
- outputs_status = [_known_status.get(b, None) for b in versioned_outputs]
-
- new_inputs_status, new_outputs_status = status_updater(op, inputs_status, outputs_status)
-
- for versioned_blob, status in zip(
- versioned_inputs + versioned_outputs, new_inputs_status + new_outputs_status
- ):
- if status is not None:
- _check_and_update(versioned_blob, status)
-
- for op, ssa_i in zip(predict_net.op, ssa):
- _update_i(op, ssa_i)
- for op, ssa_i in zip(reversed(predict_net.op), reversed(ssa)):
- _update_i(op, ssa_i)
-
- # NOTE: This strictly checks that every blob from predict_net must be assigned
- # a known status. However sometimes that's impossible (e.g. the net has dead-end ops),
- # so we may relax this constraint if needed.
- for k in all_versioned_blobs:
- if k not in _known_status:
- raise NotImplementedError(
- "Can not infer the status for {}. Currently only support the case where"
- " a single forward and backward pass can identify status for all blobs.".format(k)
- )
-
- return _known_status
-
-
-def infer_device_type(
- predict_net: caffe2_pb2.NetDef,
- known_status: Dict[Tuple[str, int], Any],
- device_name_style: str = "caffe2",
-) -> Dict[Tuple[str, int], str]:
- """Return the device type ("cpu" or "gpu"/"cuda") of each (versioned) blob"""
-
- assert device_name_style in ["caffe2", "pytorch"]
- _CPU_STR = "cpu"
- _GPU_STR = "gpu" if device_name_style == "caffe2" else "cuda"
-
- def _copy_cpu_to_gpu_updater(op, input_types, output_types):
- if input_types[0] == _GPU_STR or output_types[0] == _CPU_STR:
- _updater_raise(op, input_types, output_types)
- return ([_CPU_STR], [_GPU_STR])
-
- def _copy_gpu_to_cpu_updater(op, input_types, output_types):
- if input_types[0] == _CPU_STR or output_types[0] == _GPU_STR:
- _updater_raise(op, input_types, output_types)
- return ([_GPU_STR], [_CPU_STR])
-
- def _other_ops_updater(op, input_types, output_types):
- non_none_types = [x for x in input_types + output_types if x is not None]
- if len(non_none_types) > 0:
- the_type = non_none_types[0]
- if not all(x == the_type for x in non_none_types):
- _updater_raise(op, input_types, output_types)
- else:
- the_type = None
- return ([the_type for _ in op.input], [the_type for _ in op.output])
-
- def _device_updater(op, *args, **kwargs):
- return {
- "CopyCPUToGPU": _copy_cpu_to_gpu_updater,
- "CopyGPUToCPU": _copy_gpu_to_cpu_updater,
- }.get(op.type, _other_ops_updater)(op, *args, **kwargs)
-
- return _generic_status_identifier(predict_net, _device_updater, known_status)
-
-
-# ==== torch/utils_caffe2/vis.py ===============================================
-
-
-def _modify_blob_names(ops, blob_rename_f):
- ret = []
-
- def _replace_list(blob_list, replaced_list):
- del blob_list[:]
- blob_list.extend(replaced_list)
-
- for x in ops:
- cur = copy.deepcopy(x)
- _replace_list(cur.input, list(map(blob_rename_f, cur.input)))
- _replace_list(cur.output, list(map(blob_rename_f, cur.output)))
- ret.append(cur)
-
- return ret
-
-
-def _rename_blob(name, blob_sizes, blob_ranges):
- def _list_to_str(bsize):
- ret = ", ".join([str(x) for x in bsize])
- ret = "[" + ret + "]"
- return ret
-
- ret = name
- if blob_sizes is not None and name in blob_sizes:
- ret += "\n" + _list_to_str(blob_sizes[name])
- if blob_ranges is not None and name in blob_ranges:
- ret += "\n" + _list_to_str(blob_ranges[name])
-
- return ret
-
-
-# graph_name must not contain the word 'graph'
-def save_graph(net, file_name, graph_name="net", op_only=True, blob_sizes=None, blob_ranges=None):
- blob_rename_f = functools.partial(_rename_blob, blob_sizes=blob_sizes, blob_ranges=blob_ranges)
- return save_graph_base(net, file_name, graph_name, op_only, blob_rename_f)
-
-
-def save_graph_base(net, file_name, graph_name="net", op_only=True, blob_rename_func=None):
- graph = None
- ops = net.op
- if blob_rename_func is not None:
- ops = _modify_blob_names(ops, blob_rename_func)
- if not op_only:
- graph = net_drawer.GetPydotGraph(ops, graph_name, rankdir="TB")
- else:
- graph = net_drawer.GetPydotGraphMinimal(
- ops, graph_name, rankdir="TB", minimal_dependency=True
- )
-
- try:
- par_dir = os.path.dirname(file_name)
- if not os.path.exists(par_dir):
- os.makedirs(par_dir)
-
- format = os.path.splitext(os.path.basename(file_name))[-1]
- if format == ".png":
- graph.write_png(file_name)
- elif format == ".pdf":
- graph.write_pdf(file_name)
- elif format == ".svg":
- graph.write_svg(file_name)
- else:
- print("Incorrect format {}".format(format))
- except Exception as e:
- print("Error when writing graph to image {}".format(e))
-
- return graph
-
-
-# ==== torch/utils_toffee/aten_to_caffe2.py ====================================
-
-
-def group_norm_replace_aten_with_caffe2(predict_net: caffe2_pb2.NetDef):
- """
- For an ONNX-exported model, GroupNorm is represented as an ATen op;
- this performs a drop-in replacement of that ATen op with the native GroupNorm op.
- """
- count = 0
- for op in predict_net.op:
- if op.type == "ATen":
- op_name = get_pb_arg_vals(op, "operator", None) # return byte in py3
- if op_name and op_name.decode() == "group_norm":
- op.arg.remove(get_pb_arg(op, "operator"))
-
- if get_pb_arg_vali(op, "cudnn_enabled", None):
- op.arg.remove(get_pb_arg(op, "cudnn_enabled"))
-
- num_groups = get_pb_arg_vali(op, "num_groups", None)
- if num_groups is not None:
- op.arg.remove(get_pb_arg(op, "num_groups"))
- check_set_pb_arg(op, "group", "i", num_groups)
-
- op.type = "GroupNorm"
- count += 1
- if count > 1:
- logger.info("Replaced {} ATen operator to GroupNormOp".format(count))
-
-
-# ==== torch/utils_toffee/alias.py =============================================
-
-
-def alias(x, name, is_backward=False):
- if not torch.onnx.is_in_onnx_export():
- return x
- assert isinstance(x, torch.Tensor)
- return torch.ops._caffe2.AliasWithName(x, name, is_backward=is_backward)
-
-
-def fuse_alias_placeholder(predict_net, init_net):
- """Remove AliasWithName placeholder and rename the input/output of it"""
- # First we finish all the re-naming
- for i, op in enumerate(predict_net.op):
- if op.type == "AliasWithName":
- assert len(op.input) == 1
- assert len(op.output) == 1
- name = get_pb_arg_vals(op, "name", None).decode()
- is_backward = bool(get_pb_arg_vali(op, "is_backward", 0))
- rename_op_input(predict_net, init_net, i, 0, name, from_producer=is_backward)
- rename_op_output(predict_net, i, 0, name)
-
- # Remove AliasWithName, should be very safe since it's a non-op
- new_ops = []
- for op in predict_net.op:
- if op.type != "AliasWithName":
- new_ops.append(op)
- else:
- # safety check
- assert op.input == op.output
- assert op.input[0] == op.arg[0].s.decode()
- del predict_net.op[:]
- predict_net.op.extend(new_ops)
-
-
-# ==== torch/utils_caffe2/graph_transform.py ===================================
-
-
-class IllegalGraphTransformError(ValueError):
- """When a graph transform function call can't be executed."""
-
-
-def _rename_versioned_blob_in_proto(
- proto: caffe2_pb2.NetDef,
- old_name: str,
- new_name: str,
- version: int,
- ssa: List[Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]],
- start_versions: Dict[str, int],
- end_versions: Dict[str, int],
-):
- """In given proto, rename all blobs with matched version"""
- # Operator list
- for op, i_th_ssa in zip(proto.op, ssa):
- versioned_inputs, versioned_outputs = i_th_ssa
- for i in range(len(op.input)):
- if versioned_inputs[i] == (old_name, version):
- op.input[i] = new_name
- for i in range(len(op.output)):
- if versioned_outputs[i] == (old_name, version):
- op.output[i] = new_name
- # external_input
- if start_versions.get(old_name, 0) == version:
- for i in range(len(proto.external_input)):
- if proto.external_input[i] == old_name:
- proto.external_input[i] = new_name
- # external_output
- if end_versions.get(old_name, 0) == version:
- for i in range(len(proto.external_output)):
- if proto.external_output[i] == old_name:
- proto.external_output[i] = new_name
-
-
-def rename_op_input(
- predict_net: caffe2_pb2.NetDef,
- init_net: caffe2_pb2.NetDef,
- op_id: int,
- input_id: int,
- new_name: str,
- from_producer: bool = False,
-):
- """
- Rename the op_id-th operator in predict_net, changing its input_id-th input's
- name to new_name. It also does automatic re-routing and changes
- external_input and init_net if necessary.
- - It requires that the input is consumed only by this op.
- - This function modifies predict_net and init_net in-place.
- - When from_producer is enabled, this also updates other operators that consume
- the same input. Be cautious, because this may trigger unintended behavior.
- """
- assert isinstance(predict_net, caffe2_pb2.NetDef)
- assert isinstance(init_net, caffe2_pb2.NetDef)
-
- init_net_ssa, init_net_versions = core.get_ssa(init_net)
- predict_net_ssa, predict_net_versions = core.get_ssa(
- predict_net, copy.deepcopy(init_net_versions)
- )
-
- versioned_inputs, versioned_outputs = predict_net_ssa[op_id]
- old_name, version = versioned_inputs[input_id]
-
- if from_producer:
- producer_map = get_producer_map(predict_net_ssa)
- if not (old_name, version) in producer_map:
- raise NotImplementedError(
- "Can't find producer, the input {} is probably from"
- " init_net, this is not supported yet.".format(old_name)
- )
- producer = producer_map[(old_name, version)]
- rename_op_output(predict_net, producer[0], producer[1], new_name)
- return
-
- def contain_targets(op_ssa):
- return (old_name, version) in op_ssa[0]
-
- is_consumer = [contain_targets(op_ssa) for op_ssa in predict_net_ssa]
- if sum(is_consumer) > 1:
- raise IllegalGraphTransformError(
- (
- "Input '{}' of operator(#{}) is consumed by other ops, please use"
- + " rename_op_output on the producer instead. Offending op: \n{}"
- ).format(old_name, op_id, predict_net.op[op_id])
- )
-
- # update init_net
- _rename_versioned_blob_in_proto(
- init_net, old_name, new_name, version, init_net_ssa, {}, init_net_versions
- )
- # update predict_net
- _rename_versioned_blob_in_proto(
- predict_net,
- old_name,
- new_name,
- version,
- predict_net_ssa,
- init_net_versions,
- predict_net_versions,
- )
-
-
-def rename_op_output(predict_net: caffe2_pb2.NetDef, op_id: int, output_id: int, new_name: str):
- """
- Rename the op_id-th operator in predict_net, changing its output_id-th output's
- name to new_name. It also does automatic re-routing and changes
- external_output if necessary.
- - It allows multiple consumers of its output.
- - This function modifies predict_net in-place, doesn't need init_net.
- """
- assert isinstance(predict_net, caffe2_pb2.NetDef)
-
- ssa, blob_versions = core.get_ssa(predict_net)
-
- versioned_inputs, versioned_outputs = ssa[op_id]
- old_name, version = versioned_outputs[output_id]
-
- # update predict_net
- _rename_versioned_blob_in_proto(
- predict_net, old_name, new_name, version, ssa, {}, blob_versions
- )
-
-
-def get_sub_graph_external_input_output(
- predict_net: caffe2_pb2.NetDef, sub_graph_op_indices: List[int]
-) -> Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]:
- """
- Return the list of external input/output of sub-graph,
- each element is tuple of the name and corresponding version in predict_net.
-
- external input/output are defined in the same way as for a caffe2 NetDef.
- """
- ssa, versions = core.get_ssa(predict_net)
-
- all_inputs = []
- all_outputs = []
- for op_id in sub_graph_op_indices:
- all_inputs += [inp for inp in ssa[op_id][0] if inp not in all_inputs]
- all_outputs += list(ssa[op_id][1]) # ssa output won't repeat
-
- # for versioned blobs, external inputs are just those blob in all_inputs
- # but not in all_outputs
- ext_inputs = [inp for inp in all_inputs if inp not in all_outputs]
-
- # external outputs are essentially outputs of this subgraph that are used
- # outside of this sub-graph (including predict_net.external_output)
- all_other_inputs = sum(
- (ssa[i][0] for i in range(len(ssa)) if i not in sub_graph_op_indices),
- [(outp, versions[outp]) for outp in predict_net.external_output],
- )
- ext_outputs = [outp for outp in all_outputs if outp in set(all_other_inputs)]
-
- return ext_inputs, ext_outputs
-
-
-class DiGraph:
- """A DAG representation of a caffe2 graph; each vertex is a versioned blob."""
-
- def __init__(self):
- self.vertices = set()
- self.graph = collections.defaultdict(list)
-
- def add_edge(self, u, v):
- self.graph[u].append(v)
- self.vertices.add(u)
- self.vertices.add(v)
-
- # grab from https://www.geeksforgeeks.org/find-paths-given-source-destination/
- def get_all_paths(self, s, d):
- visited = {k: False for k in self.vertices}
- path = []
- all_paths = []
-
- def _get_all_paths_util(graph, u, d, visited, path):
- visited[u] = True
- path.append(u)
- if u == d:
- all_paths.append(copy.deepcopy(path))
- else:
- for i in graph[u]:
- if not visited[i]:
- _get_all_paths_util(graph, i, d, visited, path)
- path.pop()
- visited[u] = False
-
- _get_all_paths_util(self.graph, s, d, visited, path)
- return all_paths
-
- @staticmethod
- def from_ssa(ssa):
- graph = DiGraph()
- for op_id in range(len(ssa)):
- for inp in ssa[op_id][0]:
- for outp in ssa[op_id][1]:
- graph.add_edge(inp, outp)
- return graph
-
-
-def _get_dependency_chain(ssa, versioned_target, versioned_source):
- """
- Return the index list of the relevant operators needed to produce the target blob from the source blob;
- if there's no dependency, return an empty list.
- """
-
- # finding all paths between nodes can be O(N!), thus we can only search
- # in the subgraph using the op starting from the first consumer of source blob
- # to the producer of the target blob.
- consumer_map = get_consumer_map(ssa)
- producer_map = get_producer_map(ssa)
- start_op = min(x[0] for x in consumer_map[versioned_source]) - 15
- end_op = (
- producer_map[versioned_target][0] + 15 if versioned_target in producer_map else start_op
- )
- sub_graph_ssa = ssa[start_op : end_op + 1]
- if len(sub_graph_ssa) > 30:
- logger.warning(
- "Subgraph between {} and {} is large (from op#{} to op#{}), it"
- " might take non-trivial time to find all paths between them.".format(
- versioned_source, versioned_target, start_op, end_op
- )
- )
-
- dag = DiGraph.from_ssa(sub_graph_ssa)
- paths = dag.get_all_paths(versioned_source, versioned_target) # include two ends
- ops_in_paths = [[producer_map[blob][0] for blob in path[1:]] for path in paths]
- return sorted(set().union(*[set(ops) for ops in ops_in_paths]))
-
-
-def identify_reshape_sub_graph(predict_net: caffe2_pb2.NetDef) -> List[List[int]]:
- """
- Identify the reshape sub-graph in a protobuf.
- The reshape sub-graph is defined as matching the following pattern:
-
- (input_blob) -> Op_1 -> ... -> Op_N -> (new_shape) -─┐
- └-------------------------------------------> Reshape -> (output_blob)
-
- Return:
- List of sub-graphs, each sub-graph is represented as a list of indices
- of the relevant ops, [Op_1, Op_2, ..., Op_N, Reshape]
- """
-
- ssa, _ = core.get_ssa(predict_net)
-
- ret = []
- for i, op in enumerate(predict_net.op):
- if op.type == "Reshape":
- assert len(op.input) == 2
- input_ssa = ssa[i][0]
- data_source = input_ssa[0]
- shape_source = input_ssa[1]
- op_indices = _get_dependency_chain(ssa, shape_source, data_source)
- ret.append(op_indices + [i])
- return ret
-
-
-def remove_reshape_for_fc(predict_net, params):
- """
- In PyTorch, nn.Linear has to take a 2D tensor, which often leads to reshaping
- a 4D tensor to 2D by calling .view(). However, this (dynamic) reshaping
- doesn't work well with ONNX and Int8 tools, and causes extra
- ops (e.g. ExpandDims) that might not be available on mobile.
- Luckily Caffe2 supports 4D tensor for FC, so we can remove those reshape
- after exporting ONNX model.
- """
- from caffe2.python import core
-
- # find all reshape sub-graphs that can be removed, which currently means all Reshape
- # sub-graphs whose output is consumed only by FC.
- # TODO: to make it safer, we may need the actual values to better determine
- # if a Reshape before FC is removable.
- reshape_sub_graphs = identify_reshape_sub_graph(predict_net)
- sub_graphs_to_remove = []
- for reshape_sub_graph in reshape_sub_graphs:
- reshape_op_id = reshape_sub_graph[-1]
- assert predict_net.op[reshape_op_id].type == "Reshape"
- ssa, _ = core.get_ssa(predict_net)
- reshape_output = ssa[reshape_op_id][1][0]
- consumers = [i for i in range(len(ssa)) if reshape_output in ssa[i][0]]
- if all(predict_net.op[consumer].type == "FC" for consumer in consumers):
- # safety check if the sub-graph is isolated, for this reshape sub-graph,
- # it means it has one non-param external input and one external output.
- ext_inputs, ext_outputs = get_sub_graph_external_input_output(
- predict_net, reshape_sub_graph
- )
- non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0]
- if len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1:
- sub_graphs_to_remove.append(reshape_sub_graph)
-
- # perform removing subgraph by:
- # 1: rename the Reshape's output to its input, then the graph can be
- # seen as an in-place identity, meaning its external input and output are the same.
- # 2: simply remove those ops.
- remove_op_ids = []
- params_to_remove = []
- for sub_graph in sub_graphs_to_remove:
- logger.info(
- "Remove Reshape sub-graph:\n{}".format(
- "".join(["(#{:>4})\n{}".format(i, predict_net.op[i]) for i in sub_graph])
- )
- )
- reshape_op_id = sub_graph[-1]
- new_reshap_output = predict_net.op[reshape_op_id].input[0]
- rename_op_output(predict_net, reshape_op_id, 0, new_reshap_output)
- ext_inputs, ext_outputs = get_sub_graph_external_input_output(predict_net, sub_graph)
- non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0]
- params_ext_inputs = [inp for inp in ext_inputs if inp[1] == 0]
- assert len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1
- assert ext_outputs[0][0] == non_params_ext_inputs[0][0]
- assert ext_outputs[0][1] == non_params_ext_inputs[0][1] + 1
- remove_op_ids.extend(sub_graph)
- params_to_remove.extend(params_ext_inputs)
-
- predict_net = copy.deepcopy(predict_net)
- new_ops = [op for i, op in enumerate(predict_net.op) if i not in remove_op_ids]
- del predict_net.op[:]
- predict_net.op.extend(new_ops)
- for versioned_params in params_to_remove:
- name = versioned_params[0]
- logger.info("Remove params: {} from init_net and predict_net.external_input".format(name))
- del params[name]
- predict_net.external_input.remove(name)
-
- return predict_net, params
-
-
-def fuse_copy_between_cpu_and_gpu(predict_net: caffe2_pb2.NetDef):
- """
- In-place fuse extra copy ops between cpu/gpu for the following case:
- a -CopyAToB-> b -CopyBToA> c1 -NextOp1-> d1
- -CopyBToA> c2 -NextOp2-> d2
- The fused network will look like:
- a -NextOp1-> d1
- -NextOp2-> d2
- """
-
- _COPY_OPS = ["CopyCPUToGPU", "CopyGPUToCPU"]
-
- def _fuse_once(predict_net):
- ssa, blob_versions = core.get_ssa(predict_net)
- consumer_map = get_consumer_map(ssa)
- versioned_external_output = [
- (name, blob_versions[name]) for name in predict_net.external_output
- ]
-
- for op_id, op in enumerate(predict_net.op):
- if op.type in _COPY_OPS:
- fw_copy_versioned_output = ssa[op_id][1][0]
- consumer_ids = [x[0] for x in consumer_map[fw_copy_versioned_output]]
- reverse_op_type = _COPY_OPS[1 - _COPY_OPS.index(op.type)]
-
- is_fusable = (
- len(consumer_ids) > 0
- and fw_copy_versioned_output not in versioned_external_output
- and all(
- predict_net.op[_op_id].type == reverse_op_type
- and ssa[_op_id][1][0] not in versioned_external_output
- for _op_id in consumer_ids
- )
- )
-
- if is_fusable:
- for rv_copy_op_id in consumer_ids:
- # make each NextOp use "a" directly, then remove the Copy ops
- rs_copy_versioned_output = ssa[rv_copy_op_id][1][0]
- next_op_id, inp_id = consumer_map[rs_copy_versioned_output][0]
- predict_net.op[next_op_id].input[inp_id] = op.input[0]
- # remove CopyOps
- new_ops = [
- op
- for i, op in enumerate(predict_net.op)
- if i != op_id and i not in consumer_ids
- ]
- del predict_net.op[:]
- predict_net.op.extend(new_ops)
- return True
-
- return False
-
- # _fuse_once returns False if nothing can be fused
- while _fuse_once(predict_net):
- pass
-
-
-def remove_dead_end_ops(net_def: caffe2_pb2.NetDef):
- """remove ops if its output is not used or not in external_output"""
- ssa, versions = core.get_ssa(net_def)
- versioned_external_output = [(name, versions[name]) for name in net_def.external_output]
- consumer_map = get_consumer_map(ssa)
- removed_op_ids = set()
-
- def _is_dead_end(versioned_blob):
- return not (
- versioned_blob in versioned_external_output
- or (
- len(consumer_map[versioned_blob]) > 0
- and all(x[0] not in removed_op_ids for x in consumer_map[versioned_blob])
- )
- )
-
- for i, ssa_i in reversed(list(enumerate(ssa))):
- versioned_outputs = ssa_i[1]
- if all(_is_dead_end(outp) for outp in versioned_outputs):
- removed_op_ids.add(i)
-
- # simply removing those dead-end ops should have no effect on external_output
- new_ops = [op for i, op in enumerate(net_def.op) if i not in removed_op_ids]
- del net_def.op[:]
- net_def.op.extend(new_ops)
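The dead-end pruning above can be illustrated with a standalone toy version that uses plain dicts instead of a caffe2 NetDef and skips SSA versioning (all op and blob names below are made up): an op is dropped when none of its outputs is an external output or feeds a surviving consumer.

    ops = [
        {"name": "conv", "inputs": ["x"], "outputs": ["a"]},
        {"name": "relu", "inputs": ["a"], "outputs": ["b"]},
        {"name": "debug_dump", "inputs": ["b"], "outputs": ["unused"]},  # dead end
    ]
    external_output = {"b"}

    # consumer map: blob name -> indices of ops that read it
    consumers = {}
    for i, op in enumerate(ops):
        for blob in op["inputs"]:
            consumers.setdefault(blob, []).append(i)

    removed = set()
    for i in reversed(range(len(ops))):
        if all(
            out not in external_output
            and all(c in removed for c in consumers.get(out, []))
            for out in ops[i]["outputs"]
        ):
            removed.add(i)

    ops = [op for i, op in enumerate(ops) if i not in removed]
    print([op["name"] for op in ops])  # ['conv', 'relu'] -- debug_dump was pruned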
diff --git a/spaces/YlcldKlns/bing/src/components/ui/select.tsx b/spaces/YlcldKlns/bing/src/components/ui/select.tsx
deleted file mode 100644
index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000
--- a/spaces/YlcldKlns/bing/src/components/ui/select.tsx
+++ /dev/null
@@ -1,123 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as SelectPrimitive from '@radix-ui/react-select'
-
-import { cn } from '@/lib/utils'
-import {
- IconArrowDown,
- IconCheck,
- IconChevronUpDown
-} from '@/components/ui/icons'
-
-const Select = SelectPrimitive.Root
-
-const SelectGroup = SelectPrimitive.Group
-
-const SelectValue = SelectPrimitive.Value
-
-const SelectTrigger = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Trigger>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>
->(({ className, children, ...props }, ref) => (
-
- {children}
-
-
-
-
-))
-SelectTrigger.displayName = SelectPrimitive.Trigger.displayName
-
-const SelectContent = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Content>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>
->(({ className, children, position = 'popper', ...props }, ref) => (
-
-
-
- {children}
-
-
-
-))
-SelectContent.displayName = SelectPrimitive.Content.displayName
-
-const SelectLabel = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Label>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>
->(({ className, ...props }, ref) => (
-
-))
-SelectLabel.displayName = SelectPrimitive.Label.displayName
-
-const SelectItem = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Item>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
->(({ className, children, ...props }, ref) => (
-
-
-
-
-
-
- {children}
-
-))
-SelectItem.displayName = SelectPrimitive.Item.displayName
-
-const SelectSeparator = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Separator>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>
->(({ className, ...props }, ref) => (
-
-))
-SelectSeparator.displayName = SelectPrimitive.Separator.displayName
-
-export {
- Select,
- SelectGroup,
- SelectValue,
- SelectTrigger,
- SelectContent,
- SelectLabel,
- SelectItem,
- SelectSeparator
-}
diff --git a/spaces/YotamNitzan/domain-expansion/torch_utils/ops/upfirdn2d.h b/spaces/YotamNitzan/domain-expansion/torch_utils/ops/upfirdn2d.h
deleted file mode 100644
index c9e2032bcac9d2abde7a75eea4d812da348afadd..0000000000000000000000000000000000000000
--- a/spaces/YotamNitzan/domain-expansion/torch_utils/ops/upfirdn2d.h
+++ /dev/null
@@ -1,59 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <cuda_runtime.h>
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct upfirdn2d_kernel_params
-{
- const void* x;
- const float* f;
- void* y;
-
- int2 up;
- int2 down;
- int2 pad0;
- int flip;
- float gain;
-
- int4 inSize; // [width, height, channel, batch]
- int4 inStride;
- int2 filterSize; // [width, height]
- int2 filterStride;
- int4 outSize; // [width, height, channel, batch]
- int4 outStride;
- int sizeMinor;
- int sizeMajor;
-
- int loopMinor;
- int loopMajor;
- int loopX;
- int launchMinor;
- int launchMajor;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel specialization.
-
-struct upfirdn2d_kernel_spec
-{
- void* kernel;
- int tileOutW;
- int tileOutH;
- int loopMinor;
- int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template <class T> upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p);
-
-//------------------------------------------------------------------------
diff --git a/spaces/Yram/Docker/README.md b/spaces/Yram/Docker/README.md
deleted file mode 100644
index 26e3423a82951f733730f5d1abe3987ef614db2f..0000000000000000000000000000000000000000
--- a/spaces/Yram/Docker/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Docker
-emoji: 📉
-colorFrom: green
-colorTo: yellow
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/YuhangDeng123/Whisper-online/README.md b/spaces/YuhangDeng123/Whisper-online/README.md
deleted file mode 100644
index 36f237a06ab8ba788160e32d53e7336b97358898..0000000000000000000000000000000000000000
--- a/spaces/YuhangDeng123/Whisper-online/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Whisper-online
-emoji: 📉
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-
diff --git a/spaces/ZJunTvT/ZJunChat/README.md b/spaces/ZJunTvT/ZJunChat/README.md
deleted file mode 100644
index e0fc79695c12250407cd55e29704fe5488080557..0000000000000000000000000000000000000000
--- a/spaces/ZJunTvT/ZJunChat/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChuanhuChatGPT
-emoji: 🐯
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.25.0
-app_file: ChuanhuChatbot.py
-pinned: true
-license: gpl-3.0
-duplicated_from: JohnSmith9982/ChuanhuChatGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/aaronb/Anything2Image/anything2image/cli.py b/spaces/aaronb/Anything2Image/anything2image/cli.py
deleted file mode 100644
index f6acccf5a948780f6f7608ca23fc7af33e821443..0000000000000000000000000000000000000000
--- a/spaces/aaronb/Anything2Image/anything2image/cli.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-import fire
-from anything2image.api import Anything2Image
-import soundfile as sf
-from PIL import Image
-import numpy as np
-
-def main(
- prompt='', audio=None, image=None, text=None,
- ckpt_dir=os.path.join(os.path.expanduser('~'), 'anything2image', 'checkpoints')
-):
- anything2img = Anything2Image(imagebind_download_dir=ckpt_dir)
- if audio is not None:
- data, samplerate = sf.read(audio)
- audio = (samplerate, data)
- if image is not None:
- image = np.array(Image.open(image))
- image = anything2img(prompt=prompt, audio=audio, image=image, text=text)
- image.save('cli_output.png')
-
-fire.Fire(main)
\ No newline at end of file
diff --git a/spaces/aaronb/DragGAN/stylegan2/op/upfirdn2d.py b/spaces/aaronb/DragGAN/stylegan2/op/upfirdn2d.py
deleted file mode 100644
index 2da48c831d48ce0a66fa3943e6e0123ec28ba428..0000000000000000000000000000000000000000
--- a/spaces/aaronb/DragGAN/stylegan2/op/upfirdn2d.py
+++ /dev/null
@@ -1,232 +0,0 @@
-from collections import abc
-import os
-
-import torch
-from torch.nn import functional as F
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-import warnings
-
-module_path = os.path.dirname(os.path.abspath(__file__))
-
-try:
- upfirdn2d_op = load(
- "upfirdn2d",
- sources=[
- os.path.join(module_path, "upfirdn2d.cpp"),
- os.path.join(module_path, "upfirdn2d_kernel.cu"),
- ],
- )
-except Exception:
- warnings.warn(
- "(This is not an error) Falling back to the native upfirdn2d implementation."
- )
-
- upfirdn2d_op = None
-
-
-class UpFirDn2dBackward(Function):
- @staticmethod
- def forward(
- ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
- ):
-
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_op.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- kernel, = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_op.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(
- ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
- )
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_op.upfirdn2d(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- )
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = None
-
- if ctx.needs_input_grad[0]:
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- if not isinstance(up, abc.Iterable):
- up = (up, up)
-
- if not isinstance(down, abc.Iterable):
- down = (down, down)
-
- if len(pad) == 2:
- pad = (pad[0], pad[1], pad[0], pad[1])
-
- if input.device.type == "cpu":
- out = _upfirdn2d_native(input, kernel, *up, *down, *pad)
-
- else:
- out = UpFirDn2d.apply(input, kernel, up, down, pad)
-
- return out
-
-
-def upfirdn2d_native(input, kernel, up=1, down=1, pad=(0, 0)):
- if not isinstance(up, abc.Iterable):
- up = (up, up)
-
- if not isinstance(down, abc.Iterable):
- down = (down, down)
-
- if len(pad) == 2:
- pad = (pad[0], pad[1], pad[0], pad[1])
-
- out = _upfirdn2d_native(input, kernel, *up, *down, *pad)
-
- return out
-
-
-def _upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
-
- return out.view(-1, channel, out_h, out_w)
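As a quick, standalone check of the output-size arithmetic used in UpFirDn2d.forward above (example numbers chosen arbitrarily): with up=2, down=1 and the pad0/pad1 split conventionally used for a 2x upsample with a 4-tap blur kernel, the spatial size exactly doubles.

    # out = (in * up + pad0 + pad1 - kernel + down) // down
    in_h, up, down, kernel_h = 7, 2, 1, 4
    p = kernel_h - up
    pad0, pad1 = (p + 1) // 2 + up - 1, p // 2      # 2, 1
    out_h = (in_h * up + pad0 + pad1 - kernel_h + down) // down
    assert out_h == in_h * up                       # 14 == 14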
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/pipelines/formating.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/pipelines/formating.py
deleted file mode 100644
index 5781341bd48766a740f23ebba7a85cf8993642d7..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/pipelines/formating.py
+++ /dev/null
@@ -1,364 +0,0 @@
-from collections.abc import Sequence
-
-import mmcv
-import numpy as np
-import torch
-from mmcv.parallel import DataContainer as DC
-
-from ..builder import PIPELINES
-
-
-def to_tensor(data):
- """Convert objects of various python types to :obj:`torch.Tensor`.
-
- Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,
- :class:`Sequence`, :class:`int` and :class:`float`.
-
- Args:
- data (torch.Tensor | numpy.ndarray | Sequence | int | float): Data to
- be converted.
- """
-
- if isinstance(data, torch.Tensor):
- return data
- elif isinstance(data, np.ndarray):
- return torch.from_numpy(data)
- elif isinstance(data, Sequence) and not mmcv.is_str(data):
- return torch.tensor(data)
- elif isinstance(data, int):
- return torch.LongTensor([data])
- elif isinstance(data, float):
- return torch.FloatTensor([data])
- else:
- raise TypeError(f'type {type(data)} cannot be converted to tensor.')
-
-
-@PIPELINES.register_module()
-class ToTensor(object):
- """Convert some results to :obj:`torch.Tensor` by given keys.
-
- Args:
- keys (Sequence[str]): Keys that need to be converted to Tensor.
- """
-
- def __init__(self, keys):
- self.keys = keys
-
- def __call__(self, results):
- """Call function to convert data in results to :obj:`torch.Tensor`.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- dict: The result dict contains the data converted
- to :obj:`torch.Tensor`.
- """
- for key in self.keys:
- results[key] = to_tensor(results[key])
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(keys={self.keys})'
-
-
-@PIPELINES.register_module()
-class ImageToTensor(object):
- """Convert image to :obj:`torch.Tensor` by given keys.
-
- The dimension order of input image is (H, W, C). The pipeline will convert
- it to (C, H, W). If only 2 dimension (H, W) is given, the output would be
- (1, H, W).
-
- Args:
- keys (Sequence[str]): Key of images to be converted to Tensor.
- """
-
- def __init__(self, keys):
- self.keys = keys
-
- def __call__(self, results):
- """Call function to convert image in results to :obj:`torch.Tensor` and
- transpose the channel order.
-
- Args:
- results (dict): Result dict contains the image data to convert.
-
- Returns:
- dict: The result dict contains the image converted
- to :obj:`torch.Tensor` and transposed to (C, H, W) order.
- """
- for key in self.keys:
- img = results[key]
- if len(img.shape) < 3:
- img = np.expand_dims(img, -1)
- results[key] = to_tensor(img.transpose(2, 0, 1))
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(keys={self.keys})'
-
-
-@PIPELINES.register_module()
-class Transpose(object):
- """Transpose some results by given keys.
-
- Args:
- keys (Sequence[str]): Keys of results to be transposed.
- order (Sequence[int]): Order of transpose.
- """
-
- def __init__(self, keys, order):
- self.keys = keys
- self.order = order
-
- def __call__(self, results):
- """Call function to transpose the channel order of data in results.
-
- Args:
- results (dict): Result dict contains the data to transpose.
-
- Returns:
- dict: The result dict contains the data transposed to \
- ``self.order``.
- """
- for key in self.keys:
- results[key] = results[key].transpose(self.order)
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(keys={self.keys}, order={self.order})'
-
-
-@PIPELINES.register_module()
-class ToDataContainer(object):
- """Convert results to :obj:`mmcv.DataContainer` by given fields.
-
- Args:
- fields (Sequence[dict]): Each field is a dict like
- ``dict(key='xxx', **kwargs)``. The ``key`` in result will
- be converted to :obj:`mmcv.DataContainer` with ``**kwargs``.
- Default: ``(dict(key='img', stack=True), dict(key='gt_bboxes'),
- dict(key='gt_labels'))``.
- """
-
- def __init__(self,
- fields=(dict(key='img', stack=True), dict(key='gt_bboxes'),
- dict(key='gt_labels'))):
- self.fields = fields
-
- def __call__(self, results):
- """Call function to convert data in results to
- :obj:`mmcv.DataContainer`.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- dict: The result dict contains the data converted to \
- :obj:`mmcv.DataContainer`.
- """
-
- for field in self.fields:
- field = field.copy()
- key = field.pop('key')
- results[key] = DC(results[key], **field)
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + f'(fields={self.fields})'
-
-
-@PIPELINES.register_module()
-class DefaultFormatBundle(object):
- """Default formatting bundle.
-
- It simplifies the pipeline of formatting common fields, including "img",
- "proposals", "gt_bboxes", "gt_labels", "gt_masks" and "gt_semantic_seg".
- These fields are formatted as follows.
-
- - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True)
- - proposals: (1)to tensor, (2)to DataContainer
- - gt_bboxes: (1)to tensor, (2)to DataContainer
- - gt_bboxes_ignore: (1)to tensor, (2)to DataContainer
- - gt_labels: (1)to tensor, (2)to DataContainer
- - gt_masks: (1)to tensor, (2)to DataContainer (cpu_only=True)
- - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, \
- (3)to DataContainer (stack=True)
- """
-
- def __call__(self, results):
- """Call function to transform and format common fields in results.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- dict: The result dict contains the data that is formatted with \
- default bundle.
- """
-
- if 'img' in results:
- img = results['img']
- # add default meta keys
- results = self._add_default_meta_keys(results)
- if len(img.shape) < 3:
- img = np.expand_dims(img, -1)
- img = np.ascontiguousarray(img.transpose(2, 0, 1))
- results['img'] = DC(to_tensor(img), stack=True)
- for key in ['proposals', 'gt_bboxes', 'gt_bboxes_ignore', 'gt_labels']:
- if key not in results:
- continue
- results[key] = DC(to_tensor(results[key]))
- if 'gt_masks' in results:
- results['gt_masks'] = DC(results['gt_masks'], cpu_only=True)
- if 'gt_semantic_seg' in results:
- results['gt_semantic_seg'] = DC(
- to_tensor(results['gt_semantic_seg'][None, ...]), stack=True)
- return results
-
- def _add_default_meta_keys(self, results):
- """Add default meta keys.
-
- We set default meta keys including `pad_shape`, `scale_factor` and
- `img_norm_cfg` to avoid the case where no `Resize`, `Normalize` and
- `Pad` are implemented during the whole pipeline.
-
- Args:
- results (dict): Result dict contains the data to convert.
-
- Returns:
- results (dict): Updated result dict contains the data to convert.
- """
- img = results['img']
- results.setdefault('pad_shape', img.shape)
- results.setdefault('scale_factor', 1.0)
- num_channels = 1 if len(img.shape) < 3 else img.shape[2]
- results.setdefault(
- 'img_norm_cfg',
- dict(
- mean=np.zeros(num_channels, dtype=np.float32),
- std=np.ones(num_channels, dtype=np.float32),
- to_rgb=False))
- return results
-
- def __repr__(self):
- return self.__class__.__name__
-
-
-@PIPELINES.register_module()
-class Collect(object):
- """Collect data from the loader relevant to the specific task.
-
- This is usually the last stage of the data loader pipeline. Typically keys
- is set to some subset of "img", "proposals", "gt_bboxes",
- "gt_bboxes_ignore", "gt_labels", and/or "gt_masks".
-
- The "img_meta" item is always populated. The contents of the "img_meta"
- dictionary depend on "meta_keys". By default this includes:
-
- - "img_shape": shape of the image input to the network as a tuple \
- (h, w, c). Note that images may be zero padded on the \
- bottom/right if the batch tensor is larger than this shape.
-
- - "scale_factor": a float indicating the preprocessing scale
-
- - "flip": a boolean indicating if image flip transform was used
-
- - "filename": path to the image file
-
- - "ori_shape": original shape of the image as a tuple (h, w, c)
-
- - "pad_shape": image shape after padding
-
- - "img_norm_cfg": a dict of normalization information:
-
- - mean - per channel mean subtraction
- - std - per channel std divisor
- - to_rgb - bool indicating if bgr was converted to rgb
-
- Args:
- keys (Sequence[str]): Keys of results to be collected in ``data``.
- meta_keys (Sequence[str], optional): Meta keys to be converted to
- ``mmcv.DataContainer`` and collected in ``data[img_metas]``.
- Default: ``('filename', 'ori_filename', 'ori_shape', 'img_shape',
- 'pad_shape', 'scale_factor', 'flip', 'flip_direction',
- 'img_norm_cfg')``
- """
-
- def __init__(self,
- keys,
- meta_keys=('filename', 'ori_filename', 'ori_shape',
- 'img_shape', 'pad_shape', 'scale_factor', 'flip',
- 'flip_direction', 'img_norm_cfg')):
- self.keys = keys
- self.meta_keys = meta_keys
-
- def __call__(self, results):
- """Call function to collect keys in results. The keys in ``meta_keys``
- will be converted to :obj:mmcv.DataContainer.
-
- Args:
- results (dict): Result dict contains the data to collect.
-
- Returns:
- dict: The result dict contains the following keys
-
- - keys in ``self.keys``
- - ``img_metas``
- """
-
- data = {}
- img_meta = {}
- for key in self.meta_keys:
- img_meta[key] = results[key]
- data['img_metas'] = DC(img_meta, cpu_only=True)
- for key in self.keys:
- data[key] = results[key]
- return data
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(keys={self.keys}, meta_keys={self.meta_keys})'
-
-
-@PIPELINES.register_module()
-class WrapFieldsToLists(object):
- """Wrap fields of the data dictionary into lists for evaluation.
-
- This class can be used as a last step of a test or validation
- pipeline for single image evaluation or inference.
-
- Example:
- >>> test_pipeline = [
- >>> dict(type='LoadImageFromFile'),
- >>> dict(type='Normalize',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- to_rgb=True),
- >>> dict(type='Pad', size_divisor=32),
- >>> dict(type='ImageToTensor', keys=['img']),
- >>> dict(type='Collect', keys=['img']),
- >>> dict(type='WrapFieldsToLists')
- >>> ]
- """
-
- def __call__(self, results):
- """Call function to wrap fields into lists.
-
- Args:
- results (dict): Result dict contains the data to wrap.
-
- Returns:
- dict: The result dict where value of ``self.keys`` are wrapped \
- into list.
- """
-
- # Wrap dict fields into lists
- for key, val in results.items():
- results[key] = [val]
- return results
-
- def __repr__(self):
- return f'{self.__class__.__name__}()'
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/__init__.py
deleted file mode 100644
index b7137d60817cd9826e03cc45c8ccf3551b929909..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from .anchor import * # noqa: F401, F403
-# from .bbox import * # noqa: F401, F403
-from .evaluation import get_palette, get_classes # noqa: F401, F403
-# from .export import * # noqa: F401, F403
-# from .mask import * # noqa: F401, F403
-# from .post_processing import * # noqa: F401, F403
-# from .utils import * # noqa: F401, F403
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/voxelize.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/voxelize.py
deleted file mode 100644
index ca3226a4fbcbfe58490fa2ea8e1c16b531214121..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/voxelize.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.nn.modules.utils import _pair
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['dynamic_voxelize_forward', 'hard_voxelize_forward'])
-
-
-class _Voxelization(Function):
-
- @staticmethod
- def forward(ctx,
- points,
- voxel_size,
- coors_range,
- max_points=35,
- max_voxels=20000):
- """Convert kitti points(N, >=3) to voxels.
-
- Args:
- points (torch.Tensor): [N, ndim]. Points[:, :3] contain xyz points
- and points[:, 3:] contain other information like reflectivity.
- voxel_size (tuple or float): The size of voxel with the shape of
- [3].
- coors_range (tuple or float): The coordinate range of voxel with
- the shape of [6].
- max_points (int, optional): maximum number of points contained in a
- voxel. If max_points=-1, dynamic_voxelize is used. Default: 35.
- max_voxels (int, optional): maximum number of voxels this function
- creates. For SECOND, 20000 is a good choice. Users should shuffle
- points before calling this function because max_voxels may drop
- points. Default: 20000.
-
- Returns:
- voxels_out (torch.Tensor): Output voxels with the shape of [M,
- max_points, ndim]. Only contain points and returned when
- max_points != -1.
- coors_out (torch.Tensor): Output coordinates with the shape of
- [M, 3].
- num_points_per_voxel_out (torch.Tensor): Num points per voxel with
- the shape of [M]. Only returned when max_points != -1.
- """
- if max_points == -1 or max_voxels == -1:
- coors = points.new_zeros(size=(points.size(0), 3), dtype=torch.int)
- ext_module.dynamic_voxelize_forward(points, coors, voxel_size,
- coors_range, 3)
- return coors
- else:
- voxels = points.new_zeros(
- size=(max_voxels, max_points, points.size(1)))
- coors = points.new_zeros(size=(max_voxels, 3), dtype=torch.int)
- num_points_per_voxel = points.new_zeros(
- size=(max_voxels, ), dtype=torch.int)
- voxel_num = ext_module.hard_voxelize_forward(
- points, voxels, coors, num_points_per_voxel, voxel_size,
- coors_range, max_points, max_voxels, 3)
- # select the valid voxels
- voxels_out = voxels[:voxel_num]
- coors_out = coors[:voxel_num]
- num_points_per_voxel_out = num_points_per_voxel[:voxel_num]
- return voxels_out, coors_out, num_points_per_voxel_out
-
-
-voxelization = _Voxelization.apply
-
-
-class Voxelization(nn.Module):
- """Convert kitti points(N, >=3) to voxels.
-
- Please refer to `PVCNN `_ for more
- details.
-
- Args:
- voxel_size (tuple or float): The size of voxel with the shape of [3].
- point_cloud_range (tuple or float): The coordinate range of voxel with
- the shape of [6].
- max_num_points (int): maximum number of points contained in a voxel.
- If max_num_points=-1, dynamic_voxelize is used.
- max_voxels (int, optional): maximum number of voxels this function
- creates. For SECOND, 20000 is a good choice. Users should shuffle
- points before calling this function because max_voxels may drop
- points. Default: 20000.
- """
-
- def __init__(self,
- voxel_size,
- point_cloud_range,
- max_num_points,
- max_voxels=20000):
- super().__init__()
-
- self.voxel_size = voxel_size
- self.point_cloud_range = point_cloud_range
- self.max_num_points = max_num_points
- if isinstance(max_voxels, tuple):
- self.max_voxels = max_voxels
- else:
- self.max_voxels = _pair(max_voxels)
-
- point_cloud_range = torch.tensor(
- point_cloud_range, dtype=torch.float32)
- voxel_size = torch.tensor(voxel_size, dtype=torch.float32)
- grid_size = (point_cloud_range[3:] -
- point_cloud_range[:3]) / voxel_size
- grid_size = torch.round(grid_size).long()
- input_feat_shape = grid_size[:2]
- self.grid_size = grid_size
- # the origin shape is as [x-len, y-len, z-len]
- # [w, h, d] -> [d, h, w]
- self.pcd_shape = [*input_feat_shape, 1][::-1]
-
- def forward(self, input):
- if self.training:
- max_voxels = self.max_voxels[0]
- else:
- max_voxels = self.max_voxels[1]
-
- return voxelization(input, self.voxel_size, self.point_cloud_range,
- self.max_num_points, max_voxels)
-
- def __repr__(self):
- s = self.__class__.__name__ + '('
- s += 'voxel_size=' + str(self.voxel_size)
- s += ', point_cloud_range=' + str(self.point_cloud_range)
- s += ', max_num_points=' + str(self.max_num_points)
- s += ', max_voxels=' + str(self.max_voxels)
- s += ')'
- return s
diff --git a/spaces/ai-art/magic-diffusion-generator/share_btn.py b/spaces/ai-art/magic-diffusion-generator/share_btn.py
deleted file mode 100644
index faec0bb30ea411af5e90876cfe7717d753556108..0000000000000000000000000000000000000000
--- a/spaces/ai-art/magic-diffusion-generator/share_btn.py
+++ /dev/null
@@ -1,17 +0,0 @@
-community_icon_html = """
-
-
- """
-
-loading_icon_html = """ """
-
-share_js = """
-
-console.log("Long press on image to download it.")
-
-"""
-
-
diff --git a/spaces/akhaliq/BlendGAN/model.py b/spaces/akhaliq/BlendGAN/model.py
deleted file mode 100644
index a5397ef6993bf6eb7957058faab8423c042fee99..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/BlendGAN/model.py
+++ /dev/null
@@ -1,782 +0,0 @@
-import math
-import random
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-from model_encoder import StyleEncoder
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer('kernel', kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer('kernel', kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})'
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, '
- f'upsample={self.upsample}, downsample={self.downsample})'
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=2,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- ):
- super().__init__()
-
- self.size = size
-
- self.style_dim = style_dim
-
- self.embedder = StyleEncoder(style_dim=512, n_mlp=4)
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
- self.embedding = nn.Sequential(*layers)
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu'
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape))
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- self.add_weight = nn.Parameter(torch.ones(1, self.n_latent, 1))
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def get_z_embed(self, image):
- self.embedder.eval()
- with torch.no_grad():
- z_embed = self.embedder(image) # [N, 512]
- return z_embed
-
- def forward(
- self,
- styles=None,
- return_latents=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- style_image=None,
- z_embed=None,
- only_return_z_embed=False,
- add_weight_index=None,
- ):
- if only_return_z_embed and style_image is not None:
- return self.get_z_embed(style_image)
-
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f'noise_{i}') for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1) # [N, 18, 512]
-
- if z_embed is not None:
- latent_style = self.embedding(z_embed)
- latent_style = latent_style.unsqueeze(1).repeat(1, self.n_latent, 1) # [N, 18, 512]
- elif style_image is not None:
- z_embed = self.get_z_embed(style_image) # [N, 512]
- latent_style = self.embedding(z_embed)
- latent_style = latent_style.unsqueeze(1).repeat(1, self.n_latent, 1) # [N, 18, 512]
- else:
- latent_style = None
-
- if latent_style is not None:
- self.add_weight.data = self.add_weight.data.clamp(0.0, 1.0)
- if add_weight_index is not None:
- add_weight_new = self.add_weight.clone()
- add_weight_new[:, :add_weight_index, :] = 1.0
- latent = latent * add_weight_new + latent_style * (1 - add_weight_new)
- else:
- latent = latent * self.add_weight + latent_style * (1 - self.add_weight)
-
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
-
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
-
- i += 2
-
- image = skip
-
- if return_latents:
- if style_image is not None or z_embed is not None:
- return image, latent, z_embed
- else:
- return image, latent
-
- else:
- return image, None
-
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class Discriminator(nn.Module):
- def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'),
- EqualLinear(channels[4], 1),
- )
-
- def forward(self, input):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out
-
-
-class ProjectionDiscriminator(nn.Module):
- def __init__(self, size, style_dim=512, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
-
- self.l_out = EqualLinear(in_channel, 1)
- self.l_style = EqualLinear(style_dim, in_channel)
-
- def forward(self, input, style):
- out = self.convs(input)
-
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
-
- h = torch.sum(out, dim=(2, 3))
- output = self.l_out(h)
- output += torch.sum(self.l_style(style) * h, dim=1, keepdim=True)
-
- return output
diff --git a/spaces/akhaliq/JoJoGAN/e4e/configs/transforms_config.py b/spaces/akhaliq/JoJoGAN/e4e/configs/transforms_config.py
deleted file mode 100644
index ac12b5d5ba0571f21715e0f6b24b9c1ebe84bf72..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/JoJoGAN/e4e/configs/transforms_config.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from abc import abstractmethod
-import torchvision.transforms as transforms
-
-
-class TransformsConfig(object):
-
- def __init__(self, opts):
- self.opts = opts
-
- @abstractmethod
- def get_transforms(self):
- pass
-
-
-class EncodeTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(EncodeTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.RandomHorizontalFlip(0.5),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': None,
- 'transform_test': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
-
-
-class CarsEncodeTransforms(TransformsConfig):
-
- def __init__(self, opts):
- super(CarsEncodeTransforms, self).__init__(opts)
-
- def get_transforms(self):
- transforms_dict = {
- 'transform_gt_train': transforms.Compose([
- transforms.Resize((192, 256)),
- transforms.RandomHorizontalFlip(0.5),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_source': None,
- 'transform_test': transforms.Compose([
- transforms.Resize((192, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
- 'transform_inference': transforms.Compose([
- transforms.Resize((192, 256)),
- transforms.ToTensor(),
- transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
- }
- return transforms_dict
diff --git a/spaces/akhaliq/deeplab2/data/sample_generator.py b/spaces/akhaliq/deeplab2/data/sample_generator.py
deleted file mode 100644
index bc08f6f69057c8da060060596b0b06ccac67a4c6..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/deeplab2/data/sample_generator.py
+++ /dev/null
@@ -1,651 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Deeplab2 Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""This file contains code to get a sample from a dataset."""
-
-import functools
-
-import numpy as np
-import tensorflow as tf
-
-from deeplab2 import common
-from deeplab2.data import dataset_utils
-from deeplab2.data.preprocessing import input_preprocessing as preprocessing
-
-
-def _compute_gaussian_from_std(sigma):
- """Computes the Gaussian and its size from a given standard deviation."""
- size = int(6 * sigma + 3)
- x = np.arange(size, dtype=float)
- y = x[:, np.newaxis]
- x0, y0 = 3 * sigma + 1, 3 * sigma + 1
- return np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)), size
-
-
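A quick standalone check of the sizing convention above: with sigma = 8 the window is 51 x 51 (6 * sigma + 3) and its peak value is exactly 1 at the center pixel (3 * sigma + 1).

    import numpy as np

    sigma = 8
    size = int(6 * sigma + 3)                  # 51
    x = np.arange(size, dtype=float)
    y = x[:, np.newaxis]
    x0 = y0 = 3 * sigma + 1                    # 25
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    assert g.shape == (size, size) and g[y0, x0] == 1.0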
-class PanopticSampleGenerator:
- """This class generates samples from images and labels."""
-
- def __init__(self,
- dataset_info,
- is_training,
- crop_size,
- min_resize_value=None,
- max_resize_value=None,
- resize_factor=None,
- min_scale_factor=1.,
- max_scale_factor=1.,
- scale_factor_step_size=0,
- autoaugment_policy_name=None,
- only_semantic_annotations=False,
- thing_id_mask_annotations=False,
- max_thing_id=128,
- sigma=8,
- focus_small_instances=None):
- """Initializes the panoptic segmentation generator.
-
- Args:
- dataset_info: A dictionary with the following keys.
- - `name`: String, dataset name.
- - `ignore_label`: Integer, ignore label.
- - `class_has_instances_list`: A list of integers indicating which
- class has instance annotations.
- - `panoptic_label_divisor`: Integer, panoptic label divisor.
- - `num_classes`: Integer, number of classes.
- - `is_video_dataset`: Boolean, is video dataset or not.
- is_training: Boolean, is training mode or not.
- crop_size: Image crop size [height, width].
- min_resize_value: A 2-tuple of (height, width), desired minimum value
- after resize. If a single element is given, then height and width share
- the same value. None, empty or having 0 indicates no minimum value will
- be used.
- max_resize_value: A 2-tuple of (height, width), maximum allowed value
- after resize. If a single element is given, then height and width
- share the same value. None, empty or having 0 indicates no maximum
- value will be used.
- resize_factor: Resized dimensions are multiple of factor plus one.
- min_scale_factor: Minimum scale factor for random scale augmentation.
- max_scale_factor: Maximum scale factor for random scale augmentation.
- scale_factor_step_size: The step size from min scale factor to max scale
- factor. The input is randomly scaled based on the value of
- (min_scale_factor, max_scale_factor, scale_factor_step_size).
- autoaugment_policy_name: String, autoaugment policy name. See
- autoaugment_policy.py for available policies.
- only_semantic_annotations: An optional flag indicating whether the model
- needs only semantic annotations (default: False).
- thing_id_mask_annotations: An optional flag indicating whether the model
- needs thing_id_mask annotations. When `thing_id_mask_annotations` is
- True, we will additionally return mask annotation for each `thing`
- instance, encoded with a unique thing_id. This ground-truth annotation
- could be used to learn a better segmentation mask for each instance.
- `thing_id` assigns a unique ID to each thing instance in an image,
- counting from 0 (default: False).
- max_thing_id: The maximum number of possible thing instances per image.
- It is used together with thing_id_mask_annotations = True, and gives the
- maximum thing ID encoded in the thing_id_mask (default: 128).
- sigma: The standard deviation of the Gaussian used to encode the center
- keypoint (default: 8).
- focus_small_instances: An optional dict that defines how to deal with
- small instances (default: None):
- -`threshold`: An integer defining the threshold pixel number for an
- instance to be considered small.
- -`weight`: A number that defines the loss weight for small instances.
- """
- self._dataset_info = dataset_info
- self._ignore_label = self._dataset_info['ignore_label']
- self._only_semantic_annotations = only_semantic_annotations
- self._sigma = sigma
- self._instance_area_threshold = 0
- self._small_instance_weight = 1.0
- self._thing_id_mask_annotations = thing_id_mask_annotations
- self._max_thing_id = max_thing_id
- self._is_training = is_training
- self._preprocessing_fn = functools.partial(
- preprocessing.preprocess_image_and_label,
- crop_height=crop_size[0],
- crop_width=crop_size[1],
- min_resize_value=min_resize_value,
- max_resize_value=max_resize_value,
- resize_factor=resize_factor,
- min_scale_factor=min_scale_factor,
- max_scale_factor=max_scale_factor,
- scale_factor_step_size=scale_factor_step_size,
- autoaugment_policy_name=autoaugment_policy_name,
- ignore_label=self._ignore_label *
- self._dataset_info['panoptic_label_divisor'],
- is_training=self._is_training)
-
- if focus_small_instances is not None:
- self._instance_area_threshold = focus_small_instances['threshold']
- self._small_instance_weight = focus_small_instances['weight']
-
- self._gaussian, self._gaussian_size = _compute_gaussian_from_std(
- self._sigma)
- self._gaussian = tf.cast(tf.reshape(self._gaussian, [-1]), tf.float32)
-
- def __call__(self, sample_dict):
- """Gets a sample.
-
- Args:
- sample_dict: A dictionary with the following keys and values:
- - `image`: A tensor of shape [image_height, image_width, 3].
- - `image_name`: String, image name.
- - `label`: A tensor of shape [label_height, label_width, 1] or None.
- - `height`: An integer specifying the height of the image.
- - `width`: An integer specifying the width of the image.
- - `sequence`: An optional string specifying the sequence name.
- - `prev_image`: An optional tensor of the same shape as `image`.
- - `prev_label`: An optional tensor of the same shape as `label`.
- - `next_image`: An optional next-frame tensor of the shape of `image`.
- - `next_label`: An optional next-frame tensor of the shape of `label`.
-
- Returns:
- sample: A dictionary storing required data for panoptic segmentation.
- """
- return self.call(**sample_dict)
-
- def call(self,
- image,
- image_name,
- label,
- height,
- width,
- sequence='',
- prev_image=None,
- prev_label=None,
- next_image=None,
- next_label=None):
- """Gets a sample.
-
- Args:
- image: A tensor of shape [image_height, image_width, 3].
- image_name: String, image name.
- label: A tensor of shape [label_height, label_width, 1] or None.
- height: An integer specifying the height of the image.
- width: An integer specifying the width of the image.
- sequence: An optional string specifying the sequence name.
- prev_image: An optional tensor of shape [image_height, image_width, 3].
- prev_label: An optional tensor of shape [label_height, label_width, 1].
- next_image: An optional tensor of shape [image_height, image_width, 3].
- next_label: An optional tensor of shape [label_height, label_width, 1].
-
- Returns:
- sample: A dictionary storing required data for panoptic segmentation.
-
- Raises:
- ValueError: An error occurs when the label shape is invalid.
-      NotImplementedError: An error occurs when thing_id_mask_annotations is
-        used together with prev_image, prev_label, next_image, or next_label,
-        which is not currently implemented.
- """
- if label is not None:
- label.get_shape().assert_is_compatible_with(
- tf.TensorShape([None, None, 1]))
- original_label = tf.cast(label, dtype=tf.int32, name='original_label')
- if next_label is not None:
- original_next_label = tf.cast(
- next_label, dtype=tf.int32, name='original_next_label')
- # Reusing the preprocessing function for both next and prev samples.
- if next_image is not None:
- resized_image, image, label, next_image, next_label = (
- self._preprocessing_fn(
- image, label, prev_image=next_image, prev_label=next_label))
- else:
- resized_image, image, label, prev_image, prev_label = (
- self._preprocessing_fn(
- image, label, prev_image=prev_image, prev_label=prev_label))
- sample = {
- common.IMAGE: image
- }
- if prev_image is not None:
- sample[common.IMAGE] = tf.concat([image, prev_image], axis=2)
- if next_image is not None:
- sample[common.NEXT_IMAGE] = next_image
- sample[common.IMAGE] = tf.concat([image, next_image], axis=2)
- if label is not None:
- # Panoptic label for crowd regions will be ignore_label.
- semantic_label, panoptic_label, thing_mask, crowd_region = (
- dataset_utils.get_semantic_and_panoptic_label(
- self._dataset_info, label, self._ignore_label))
- sample[common.GT_SEMANTIC_KEY] = tf.squeeze(semantic_label, axis=2)
- semantic_weights = tf.ones_like(semantic_label, dtype=tf.float32)
- sample[common.SEMANTIC_LOSS_WEIGHT_KEY] = tf.squeeze(
- semantic_weights, axis=2)
- sample[common.GT_IS_CROWD] = tf.squeeze(crowd_region, axis=2)
-
- if not self._only_semantic_annotations:
- # The sample will have the original label including crowd regions.
- sample[common.GT_PANOPTIC_KEY] = tf.squeeze(label, axis=2)
- # Compute center loss for all non-crowd and non-ignore pixels.
- non_crowd_and_non_ignore_regions = tf.logical_and(
- tf.logical_not(crowd_region),
- tf.not_equal(semantic_label, self._ignore_label))
- sample[common.CENTER_LOSS_WEIGHT_KEY] = tf.squeeze(tf.cast(
- non_crowd_and_non_ignore_regions, tf.float32), axis=2)
- # Compute regression loss only for thing pixels that are not crowd.
- non_crowd_things = tf.logical_and(
- tf.logical_not(crowd_region), thing_mask)
- sample[common.REGRESSION_LOSS_WEIGHT_KEY] = tf.squeeze(tf.cast(
- non_crowd_things, tf.float32), axis=2)
-
- prev_panoptic_label = None
- next_panoptic_label = None
- if prev_label is not None:
- _, prev_panoptic_label, _, _ = (
- dataset_utils.get_semantic_and_panoptic_label(
- self._dataset_info, prev_label, self._ignore_label))
- if next_label is not None:
- _, next_panoptic_label, _, _ = (
- dataset_utils.get_semantic_and_panoptic_label(
- self._dataset_info, next_label, self._ignore_label))
- (sample[common.GT_INSTANCE_CENTER_KEY],
- sample[common.GT_INSTANCE_REGRESSION_KEY],
- sample[common.SEMANTIC_LOSS_WEIGHT_KEY],
- prev_center_map,
- frame_center_offsets,
- next_offset) = self._generate_gt_center_and_offset(
- panoptic_label, semantic_weights, prev_panoptic_label,
- next_panoptic_label)
-
- sample[common.GT_INSTANCE_REGRESSION_KEY] = tf.cast(
- sample[common.GT_INSTANCE_REGRESSION_KEY], tf.float32)
-
- if next_label is not None:
- sample[common.GT_NEXT_INSTANCE_REGRESSION_KEY] = tf.cast(
- next_offset, tf.float32)
- sample[common.NEXT_REGRESSION_LOSS_WEIGHT_KEY] = tf.cast(
- tf.greater(tf.reduce_sum(tf.abs(next_offset), axis=2), 0),
- tf.float32)
-
- # Only squeeze center map and semantic loss weights, as regression map
- # has two channels (x and y offsets).
- sample[common.GT_INSTANCE_CENTER_KEY] = tf.squeeze(
- sample[common.GT_INSTANCE_CENTER_KEY], axis=2)
- sample[common.SEMANTIC_LOSS_WEIGHT_KEY] = tf.squeeze(
- sample[common.SEMANTIC_LOSS_WEIGHT_KEY], axis=2)
-
- if prev_label is not None:
- sample[common.GT_FRAME_OFFSET_KEY] = frame_center_offsets
- sample[common.GT_FRAME_OFFSET_KEY] = tf.cast(
- sample[common.GT_FRAME_OFFSET_KEY], tf.float32)
- frame_offsets_present = tf.logical_or(
- tf.not_equal(frame_center_offsets[..., 0], 0),
- tf.not_equal(frame_center_offsets[..., 1], 0))
- sample[common.FRAME_REGRESSION_LOSS_WEIGHT_KEY] = tf.cast(
- frame_offsets_present, tf.float32)
- if self._is_training:
- sample[common.IMAGE] = tf.concat(
- [sample[common.IMAGE], prev_center_map], axis=2)
-
- if self._thing_id_mask_annotations:
- if any([prev_image is not None,
- prev_label is not None,
- next_image is not None,
- next_label is not None]):
- raise NotImplementedError(
- 'Current implementation of Max-DeepLab does not support '
- + 'prev_image, prev_label, next_image, or next_label.')
- thing_id_mask, thing_id_class = (
- self._generate_thing_id_mask_and_class(
- panoptic_label, non_crowd_things))
- sample[common.GT_THING_ID_MASK_KEY] = tf.squeeze(
- thing_id_mask, axis=2)
- sample[common.GT_THING_ID_CLASS_KEY] = thing_id_class
-
- if not self._is_training:
- # Resized image is only used during visualization.
- sample[common.RESIZED_IMAGE] = resized_image
- sample[common.IMAGE_NAME] = image_name
- sample[common.GT_SIZE_RAW] = tf.stack([height, width], axis=0)
- if self._dataset_info['is_video_dataset']:
- sample[common.SEQUENCE_ID] = sequence
- # Keep original labels for evaluation.
- if label is not None:
- orig_semantic_label, _, _, orig_crowd_region = (
- dataset_utils.get_semantic_and_panoptic_label(
- self._dataset_info, original_label, self._ignore_label))
- sample[common.GT_SEMANTIC_RAW] = tf.squeeze(orig_semantic_label, axis=2)
- if not self._only_semantic_annotations:
- sample[common.GT_PANOPTIC_RAW] = tf.squeeze(original_label, axis=2)
- sample[common.GT_IS_CROWD_RAW] = tf.squeeze(orig_crowd_region)
- if next_label is not None:
- sample[common.GT_NEXT_PANOPTIC_RAW] = tf.squeeze(
- original_next_label, axis=2)
- return sample
-
- def _generate_thing_id_mask_and_class(self,
- panoptic_label,
- non_crowd_things):
- """Generates the ground-truth thing-ID masks and their class labels.
-
-    It computes a thing-ID mask and class with a unique ID for each thing
-    instance. `thing_id` is a unique index assigned to each instance in an
-    image, counting from 0. Each pixel in thing_id_mask is labeled with the
-    corresponding thing-ID.
-
- Args:
- panoptic_label: A tf.Tensor of shape [height, width, 1].
- non_crowd_things: A tf.Tensor of shape [height, width, 1], indicating
- non-crowd and thing-class regions.
-
- Returns:
- thing_id_mask: A tf.Tensor of shape [height, width, 1]. It assigns each
- non-crowd thing instance a unique mask-ID label, starting from 0.
- Unassigned pixels are set to -1.
- thing_id_class: A tf.Tensor of shape [max_thing_id]. It contains semantic
- ID of each instance assigned to thing_id_mask. The remaining
- (max_thing_id - num_things) elements are set to -1.
-
- Raises:
- ValueError: An error occurs when the thing-ID mask contains stuff or crowd
- region.
-      ValueError: An error occurs when thing_count is greater than or equal to
- self._max_thing_id.
-
- """
- unique_ids, _ = tf.unique(tf.reshape(panoptic_label, [-1]))
- thing_id_mask = -tf.ones_like(panoptic_label)
- thing_id_class = -tf.ones(self._max_thing_id)
- thing_count = 0
- for panoptic_id in unique_ids:
- semantic_id = panoptic_id // self._dataset_info['panoptic_label_divisor']
- # Filter out IDs that are not thing instances (i.e., IDs for ignore_label,
- # stuff classes or crowd). Stuff classes and crowd regions both have IDs
- # of the form panoptic_id = semantic_id * label_divisor (i.e., instance id
- # = 0)
- if (semantic_id == self._dataset_info['ignore_label'] or
- panoptic_id % self._dataset_info['panoptic_label_divisor'] == 0):
- continue
-
- assert_stuff_crowd = tf.debugging.Assert(
- tf.reduce_all(non_crowd_things[panoptic_label == panoptic_id]),
- ['thing-ID mask here must not contain stuff or crowd region.'])
- with tf.control_dependencies([assert_stuff_crowd]):
- panoptic_id = tf.identity(panoptic_id)
-
- thing_id_mask = tf.where(panoptic_label == panoptic_id,
- thing_count, thing_id_mask)
-
- assert_thing_count = tf.debugging.Assert(
- thing_count < self._max_thing_id,
- ['thing_count must be smaller than self._max_thing_id.'])
- with tf.control_dependencies([assert_thing_count]):
- thing_count = tf.identity(thing_count)
-
- thing_id_class = tf.tensor_scatter_nd_update(
- thing_id_class, [[thing_count]], [semantic_id])
- thing_count += 1
- return thing_id_mask, thing_id_class
-
- def _generate_prev_centers_with_noise(self,
- panoptic_label,
- offset_noise_factor=0.05,
- false_positive_rate=0.2,
- false_positive_noise_factor=0.05):
- """Generates noisy center predictions for the previous frame.
-
- Args:
- panoptic_label: A tf.Tensor of shape [height, width, 1].
- offset_noise_factor: An optional float defining the maximum fraction of
- the object size that is used to displace the previous center.
- false_positive_rate: An optional float indicating at which probability
- false positives should be added.
- false_positive_noise_factor: An optional float defining the maximum
- fraction of the object size that is used to displace the false positive
- center.
-
- Returns:
-      A tuple of (center, unique_ids, ids_to_center_x, ids_to_center_y), where
-      center is a tf.Tensor of shape [height, width, 1] and the remaining three
-      are 1-D tf.Tensors of length N, with N the number of unique panoptic IDs.
- """
- height = tf.shape(panoptic_label)[0]
- width = tf.shape(panoptic_label)[1]
-
- # Pad center to make boundary handling easier.
- center_pad_begin = int(round(3 * self._sigma + 1))
- center_pad_end = int(round(3 * self._sigma + 2))
- center_pad = center_pad_begin + center_pad_end
-
- center = tf.zeros((height + center_pad, width + center_pad))
- unique_ids, _ = tf.unique(tf.reshape(panoptic_label, [-1]))
- ids_to_center_x = tf.zeros_like(unique_ids, dtype=tf.int32)
- ids_to_center_y = tf.zeros_like(unique_ids, dtype=tf.int32)
-
- for panoptic_id in unique_ids:
- semantic_id = panoptic_id // self._dataset_info['panoptic_label_divisor']
- # Filter out IDs that should be ignored, are stuff classes or crowd.
- # Stuff classes and crowd regions both have IDs of the form panoptic_id =
- # semantic_id * label_divisor
- if (semantic_id == self._dataset_info['ignore_label'] or
- panoptic_id % self._dataset_info['panoptic_label_divisor'] == 0):
- continue
-
- # Convert [[y0, x0, 0], ...] to [[y0, ...], [x0, ...], [0, ...]].
- mask_index = tf.cast(
- tf.transpose(tf.where(panoptic_label == panoptic_id)), tf.float32)
- centers = tf.reduce_mean(mask_index, axis=1)
- bbox_size = (
- tf.reduce_max(mask_index, axis=1) - tf.reduce_min(mask_index, axis=1))
-
- # Add noise.
- center_y = (
- centers[0] + tf.random.normal([], dtype=tf.float32) *
- offset_noise_factor * bbox_size[0])
- center_x = (
- centers[1] + tf.random.normal([], dtype=tf.float32) *
- offset_noise_factor * bbox_size[1])
-
- center_x = tf.minimum(
- tf.maximum(tf.cast(tf.round(center_x), tf.int32), 0), width - 1)
- center_y = tf.minimum(
- tf.maximum(tf.cast(tf.round(center_y), tf.int32), 0), height - 1)
-
- id_index = tf.where(tf.equal(panoptic_id, unique_ids))
- ids_to_center_x = tf.tensor_scatter_nd_update(
- ids_to_center_x, id_index, tf.expand_dims(center_x, axis=0))
- ids_to_center_y = tf.tensor_scatter_nd_update(
- ids_to_center_y, id_index, tf.expand_dims(center_y, axis=0))
-
- def add_center_gaussian(center_x_coord, center_y_coord, center):
- # Due to the padding with center_pad_begin in center, the computed
- # center becomes the upper left corner in the center tensor.
- upper_left = center_x_coord, center_y_coord
- bottom_right = (upper_left[0] + self._gaussian_size,
- upper_left[1] + self._gaussian_size)
-
- indices_x, indices_y = tf.meshgrid(
- tf.range(upper_left[0], bottom_right[0]),
- tf.range(upper_left[1], bottom_right[1]))
- indices = tf.transpose(
- tf.stack([tf.reshape(indices_y, [-1]),
- tf.reshape(indices_x, [-1])]))
-
- return tf.tensor_scatter_nd_max(
- center, indices, self._gaussian, name='center_scatter')
-
- center = add_center_gaussian(center_x, center_y, center)
- # Generate false positives.
- center_y = (
- tf.cast(center_y, dtype=tf.float32) +
- tf.random.normal([], dtype=tf.float32) * false_positive_noise_factor *
- bbox_size[0])
- center_x = (
- tf.cast(center_x, dtype=tf.float32) +
- tf.random.normal([], dtype=tf.float32) * false_positive_noise_factor *
- bbox_size[1])
-
- center_x = tf.minimum(
- tf.maximum(tf.cast(tf.round(center_x), tf.int32), 0), width - 1)
- center_y = tf.minimum(
- tf.maximum(tf.cast(tf.round(center_y), tf.int32), 0), height - 1)
- # Draw a sample to decide whether to add a false positive or not.
- center = center + tf.cast(
- tf.random.uniform([], dtype=tf.float32) < false_positive_rate,
- tf.float32) * (
- add_center_gaussian(center_x, center_y, center) - center)
-
- center = center[center_pad_begin:(center_pad_begin + height),
- center_pad_begin:(center_pad_begin + width)]
- center = tf.expand_dims(center, -1)
- return center, unique_ids, ids_to_center_x, ids_to_center_y
-
- def _generate_gt_center_and_offset(self,
- panoptic_label,
- semantic_weights,
- prev_panoptic_label=None,
- next_panoptic_label=None):
- """Generates the ground-truth center and offset from the panoptic labels.
-
- Additionally, the per-pixel weights for the semantic branch are increased
-    for small instances. If prev_panoptic_label is passed, it also
- computes the previous center heatmap with random noise and the offsets
- between center maps.
-
- Args:
- panoptic_label: A tf.Tensor of shape [height, width, 1].
- semantic_weights: A tf.Tensor of shape [height, width, 1].
- prev_panoptic_label: An optional tf.Tensor of shape [height, width, 1].
- next_panoptic_label: An optional tf.Tensor of shape [height, width, 1].
-
- Returns:
-      A tuple (center, offsets, weights, prev_center, frame_offsets,
-      next_offset). center, weights and prev_center are tf.Tensors of shape
-      [height, width, 1]; offsets, frame_offsets and next_offset have shape
-      [height, width, 2].
-      If prev_panoptic_label is None, prev_center and frame_offsets are None.
- If next_panoptic_label is None, next_offset is None.
- """
- height = tf.shape(panoptic_label)[0]
- width = tf.shape(panoptic_label)[1]
-
- # Pad center to make boundary handling easier.
- center_pad_begin = int(round(3 * self._sigma + 1))
- center_pad_end = int(round(3 * self._sigma + 2))
- center_pad = center_pad_begin + center_pad_end
-
- center = tf.zeros((height + center_pad, width + center_pad))
- offset_x = tf.zeros((height, width, 1), dtype=tf.int32)
- offset_y = tf.zeros((height, width, 1), dtype=tf.int32)
- unique_ids, _ = tf.unique(tf.reshape(panoptic_label, [-1]))
-
- prev_center = None
- frame_offsets = None
-    # Due to loop handling in TensorFlow, these variables have to be defined for
-    # all cases.
- frame_offset_x = tf.zeros((height, width, 1), dtype=tf.int32)
- frame_offset_y = tf.zeros((height, width, 1), dtype=tf.int32)
-
- # Next-frame instance offsets.
- next_offset = None
- next_offset_y = tf.zeros((height, width, 1), dtype=tf.int32)
- next_offset_x = tf.zeros((height, width, 1), dtype=tf.int32)
-
- if prev_panoptic_label is not None:
- (prev_center, prev_unique_ids, prev_centers_x, prev_centers_y
- ) = self._generate_prev_centers_with_noise(prev_panoptic_label)
-
- for panoptic_id in unique_ids:
- semantic_id = panoptic_id // self._dataset_info['panoptic_label_divisor']
- # Filter out IDs that should be ignored, are stuff classes or crowd.
-      # Stuff classes and crowd regions both have IDs of the form panoptic_id =
- # semantic_id * label_divisor
- if (semantic_id == self._dataset_info['ignore_label'] or
- panoptic_id % self._dataset_info['panoptic_label_divisor'] == 0):
- continue
-
- # Convert [[y0, x0, 0], ...] to [[y0, ...], [x0, ...], [0, ...]].
- mask_index = tf.transpose(tf.where(panoptic_label == panoptic_id))
- mask_y_index = mask_index[0]
- mask_x_index = mask_index[1]
-
- next_mask_index = None
- next_mask_y_index = None
- next_mask_x_index = None
- if next_panoptic_label is not None:
- next_mask_index = tf.transpose(
- tf.where(next_panoptic_label == panoptic_id))
- next_mask_y_index = next_mask_index[0]
- next_mask_x_index = next_mask_index[1]
-
- instance_area = tf.shape(mask_x_index)
- if instance_area < self._instance_area_threshold:
- semantic_weights = tf.where(panoptic_label == panoptic_id,
- self._small_instance_weight,
- semantic_weights)
-
- centers = tf.reduce_mean(tf.cast(mask_index, tf.float32), axis=1)
-
- center_x = tf.cast(tf.round(centers[1]), tf.int32)
- center_y = tf.cast(tf.round(centers[0]), tf.int32)
-
- # Due to the padding with center_pad_begin in center, the computed center
- # becomes the upper left corner in the center tensor.
- upper_left = center_x, center_y
- bottom_right = (upper_left[0] + self._gaussian_size,
- upper_left[1] + self._gaussian_size)
-
- indices_x, indices_y = tf.meshgrid(
- tf.range(upper_left[0], bottom_right[0]),
- tf.range(upper_left[1], bottom_right[1]))
- indices = tf.transpose(
- tf.stack([tf.reshape(indices_y, [-1]),
- tf.reshape(indices_x, [-1])]))
-
- center = tf.tensor_scatter_nd_max(
- center, indices, self._gaussian, name='center_scatter')
- offset_y = tf.tensor_scatter_nd_update(
- offset_y,
- tf.transpose(mask_index),
- center_y - tf.cast(mask_y_index, tf.int32),
- name='offset_y_scatter')
- offset_x = tf.tensor_scatter_nd_update(
- offset_x,
- tf.transpose(mask_index),
- center_x - tf.cast(mask_x_index, tf.int32),
- name='offset_x_scatter')
- if prev_panoptic_label is not None:
- mask = tf.equal(prev_unique_ids, panoptic_id)
- if tf.math.count_nonzero(mask) > 0:
- prev_center_x = prev_centers_x[mask]
- prev_center_y = prev_centers_y[mask]
-
- frame_offset_y = tf.tensor_scatter_nd_update(
- frame_offset_y,
- tf.transpose(mask_index),
- prev_center_y - tf.cast(mask_y_index, tf.int32),
- name='frame_offset_y_scatter')
- frame_offset_x = tf.tensor_scatter_nd_update(
- frame_offset_x,
- tf.transpose(mask_index),
- prev_center_x - tf.cast(mask_x_index, tf.int32),
- name='frame_offset_x_scatter')
- if next_panoptic_label is not None:
- next_offset_y = tf.tensor_scatter_nd_update(
- next_offset_y,
- tf.transpose(next_mask_index),
- center_y - tf.cast(next_mask_y_index, tf.int32),
- name='next_offset_y_scatter')
- next_offset_x = tf.tensor_scatter_nd_update(
- next_offset_x,
- tf.transpose(next_mask_index),
- center_x - tf.cast(next_mask_x_index, tf.int32),
- name='next_offset_x_scatter')
-
- offset = tf.concat([offset_y, offset_x], axis=2)
- center = center[center_pad_begin:(center_pad_begin + height),
- center_pad_begin:(center_pad_begin + width)]
- center = tf.expand_dims(center, -1)
- if prev_panoptic_label is not None:
- frame_offsets = tf.concat([frame_offset_y, frame_offset_x], axis=2)
- if next_panoptic_label is not None:
- next_offset = tf.concat([next_offset_y, next_offset_x], axis=2)
- return (center, offset, semantic_weights, prev_center, frame_offsets,
- next_offset)
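
Most of the deleted file above builds the center/offset ground truth used by Panoptic-DeepLab-style models. The core idea is easier to see outside the TensorFlow scatter machinery; below is a minimal NumPy-only sketch on a toy binary mask. The function name `encode_center_and_offset` and the toy mask are illustrative only, not part of the deleted module.

```python
import numpy as np

def encode_center_and_offset(instance_mask, sigma=8):
    """Toy NumPy version of the center/offset ground truth built above."""
    h, w = instance_mask.shape
    center_map = np.zeros((h, w), dtype=np.float32)
    offsets = np.zeros((h, w, 2), dtype=np.int32)  # (dy, dx) pointing to the center

    ys, xs = np.nonzero(instance_mask)
    if ys.size == 0:
        return center_map, offsets

    # Mass center of the instance, rounded to the nearest pixel.
    cy, cx = int(round(ys.mean())), int(round(xs.mean()))

    # Splat a 2D Gaussian around the center (the real code scatters a
    # precomputed kernel into a padded map instead of evaluating it densely).
    yy, xx = np.mgrid[0:h, 0:w]
    gaussian = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
    center_map = np.maximum(center_map, gaussian).astype(np.float32)

    # Every instance pixel stores the integer vector pointing to its center.
    offsets[ys, xs, 0] = cy - ys
    offsets[ys, xs, 1] = cx - xs
    return center_map, offsets

mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 12:22] = True  # one square "thing" instance
center, offset = encode_center_and_offset(mask)
print(center.max(), offset[15, 15])  # 1.0 at the center, offset (-1, 1) towards it
```
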
diff --git a/spaces/akhaliq/lama/bin/analyze_errors.py b/spaces/akhaliq/lama/bin/analyze_errors.py
deleted file mode 100644
index a11f9478de76ede162f5511449ac98e549ff4b6e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/bin/analyze_errors.py
+++ /dev/null
@@ -1,316 +0,0 @@
-#!/usr/bin/env python3
-import cv2
-import numpy as np
-import sklearn.svm
-import torch
-import os
-import pickle
-import pandas as pd
-import matplotlib.pyplot as plt
-from joblib import Parallel, delayed
-
-from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset, load_image
-from saicinpainting.evaluation.losses.fid.inception import InceptionV3
-from saicinpainting.evaluation.utils import load_yaml
-from saicinpainting.training.visualizers.base import visualize_mask_and_images
-
-
-def draw_score(img, score):
- img = np.transpose(img, (1, 2, 0))
- cv2.putText(img, f'{score:.2f}',
- (40, 40),
- cv2.FONT_HERSHEY_SIMPLEX,
- 1,
- (0, 1, 0),
- thickness=3)
- img = np.transpose(img, (2, 0, 1))
- return img
-
-
-def save_global_samples(global_mask_fnames, mask2real_fname, mask2fake_fname, out_dir, real_scores_by_fname, fake_scores_by_fname):
- for cur_mask_fname in global_mask_fnames:
- cur_real_fname = mask2real_fname[cur_mask_fname]
- orig_img = load_image(cur_real_fname, mode='RGB')
- fake_img = load_image(mask2fake_fname[cur_mask_fname], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]]
- mask = load_image(cur_mask_fname, mode='L')[None, ...]
-
- draw_score(orig_img, real_scores_by_fname.loc[cur_real_fname, 'real_score'])
- draw_score(fake_img, fake_scores_by_fname.loc[cur_mask_fname, 'fake_score'])
-
- cur_grid = visualize_mask_and_images(dict(image=orig_img, mask=mask, fake=fake_img),
- keys=['image', 'fake'],
- last_without_mask=True)
- cur_grid = np.clip(cur_grid * 255, 0, 255).astype('uint8')
- cur_grid = cv2.cvtColor(cur_grid, cv2.COLOR_RGB2BGR)
- cv2.imwrite(os.path.join(out_dir, os.path.splitext(os.path.basename(cur_mask_fname))[0] + '.jpg'),
- cur_grid)
-
-
-def save_samples_by_real(worst_best_by_real, mask2fake_fname, fake_info, out_dir):
- for real_fname in worst_best_by_real.index:
- worst_mask_path = worst_best_by_real.loc[real_fname, 'worst']
- best_mask_path = worst_best_by_real.loc[real_fname, 'best']
- orig_img = load_image(real_fname, mode='RGB')
- worst_mask_img = load_image(worst_mask_path, mode='L')[None, ...]
- worst_fake_img = load_image(mask2fake_fname[worst_mask_path], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]]
- best_mask_img = load_image(best_mask_path, mode='L')[None, ...]
- best_fake_img = load_image(mask2fake_fname[best_mask_path], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]]
-
- draw_score(orig_img, worst_best_by_real.loc[real_fname, 'real_score'])
- draw_score(worst_fake_img, worst_best_by_real.loc[real_fname, 'worst_score'])
- draw_score(best_fake_img, worst_best_by_real.loc[real_fname, 'best_score'])
-
- cur_grid = visualize_mask_and_images(dict(image=orig_img, mask=np.zeros_like(worst_mask_img),
- worst_mask=worst_mask_img, worst_img=worst_fake_img,
- best_mask=best_mask_img, best_img=best_fake_img),
- keys=['image', 'worst_mask', 'worst_img', 'best_mask', 'best_img'],
- rescale_keys=['worst_mask', 'best_mask'],
- last_without_mask=True)
- cur_grid = np.clip(cur_grid * 255, 0, 255).astype('uint8')
- cur_grid = cv2.cvtColor(cur_grid, cv2.COLOR_RGB2BGR)
- cv2.imwrite(os.path.join(out_dir,
- os.path.splitext(os.path.basename(real_fname))[0] + '.jpg'),
- cur_grid)
-
- fig, (ax1, ax2) = plt.subplots(1, 2)
- cur_stat = fake_info[fake_info['real_fname'] == real_fname]
- cur_stat['fake_score'].hist(ax=ax1)
- cur_stat['real_score'].hist(ax=ax2)
- fig.tight_layout()
- fig.savefig(os.path.join(out_dir,
- os.path.splitext(os.path.basename(real_fname))[0] + '_scores.png'))
- plt.close(fig)
-
-
-def extract_overlapping_masks(mask_fnames, cur_i, fake_scores_table, max_overlaps_n=2):
- result_pairs = []
- result_scores = []
- mask_fname_a = mask_fnames[cur_i]
- mask_a = load_image(mask_fname_a, mode='L')[None, ...] > 0.5
- cur_score_a = fake_scores_table.loc[mask_fname_a, 'fake_score']
- for mask_fname_b in mask_fnames[cur_i + 1:]:
- mask_b = load_image(mask_fname_b, mode='L')[None, ...] > 0.5
- if not np.any(mask_a & mask_b):
- continue
- cur_score_b = fake_scores_table.loc[mask_fname_b, 'fake_score']
- result_pairs.append((mask_fname_a, mask_fname_b))
- result_scores.append(cur_score_b - cur_score_a)
- if len(result_pairs) >= max_overlaps_n:
- break
- return result_pairs, result_scores
-
-
-def main(args):
- config = load_yaml(args.config)
-
- latents_dir = os.path.join(args.outpath, 'latents')
- os.makedirs(latents_dir, exist_ok=True)
- global_worst_dir = os.path.join(args.outpath, 'global_worst')
- os.makedirs(global_worst_dir, exist_ok=True)
- global_best_dir = os.path.join(args.outpath, 'global_best')
- os.makedirs(global_best_dir, exist_ok=True)
- worst_best_by_best_worst_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'best_worst_score_diff_max')
- os.makedirs(worst_best_by_best_worst_score_diff_max_dir, exist_ok=True)
- worst_best_by_best_worst_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'best_worst_score_diff_min')
- os.makedirs(worst_best_by_best_worst_score_diff_min_dir, exist_ok=True)
- worst_best_by_real_best_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_best_score_diff_max')
- os.makedirs(worst_best_by_real_best_score_diff_max_dir, exist_ok=True)
- worst_best_by_real_best_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_best_score_diff_min')
- os.makedirs(worst_best_by_real_best_score_diff_min_dir, exist_ok=True)
- worst_best_by_real_worst_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_worst_score_diff_max')
- os.makedirs(worst_best_by_real_worst_score_diff_max_dir, exist_ok=True)
- worst_best_by_real_worst_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_worst_score_diff_min')
- os.makedirs(worst_best_by_real_worst_score_diff_min_dir, exist_ok=True)
-
- if not args.only_report:
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[2048]
- inception_model = InceptionV3([block_idx]).eval().cuda()
-
- dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs)
-
- real2vector_cache = {}
-
- real_features = []
- fake_features = []
-
- orig_fnames = []
- mask_fnames = []
- mask2real_fname = {}
- mask2fake_fname = {}
-
- for batch_i, batch in enumerate(dataset):
- orig_img_fname = dataset.img_filenames[batch_i]
- mask_fname = dataset.mask_filenames[batch_i]
- fake_fname = dataset.pred_filenames[batch_i]
- mask2real_fname[mask_fname] = orig_img_fname
- mask2fake_fname[mask_fname] = fake_fname
-
- cur_real_vector = real2vector_cache.get(orig_img_fname, None)
- if cur_real_vector is None:
- with torch.no_grad():
- in_img = torch.from_numpy(batch['image'][None, ...]).cuda()
- cur_real_vector = inception_model(in_img)[0].squeeze(-1).squeeze(-1).cpu().numpy()
- real2vector_cache[orig_img_fname] = cur_real_vector
-
- pred_img = torch.from_numpy(batch['inpainted'][None, ...]).cuda()
- cur_fake_vector = inception_model(pred_img)[0].squeeze(-1).squeeze(-1).cpu().numpy()
-
- real_features.append(cur_real_vector)
- fake_features.append(cur_fake_vector)
-
- orig_fnames.append(orig_img_fname)
- mask_fnames.append(mask_fname)
-
- ids_features = np.concatenate(real_features + fake_features, axis=0)
- ids_labels = np.array(([1] * len(real_features)) + ([0] * len(fake_features)))
-
- with open(os.path.join(latents_dir, 'featues.pkl'), 'wb') as f:
- pickle.dump(ids_features, f, protocol=3)
- with open(os.path.join(latents_dir, 'labels.pkl'), 'wb') as f:
- pickle.dump(ids_labels, f, protocol=3)
- with open(os.path.join(latents_dir, 'orig_fnames.pkl'), 'wb') as f:
- pickle.dump(orig_fnames, f, protocol=3)
- with open(os.path.join(latents_dir, 'mask_fnames.pkl'), 'wb') as f:
- pickle.dump(mask_fnames, f, protocol=3)
- with open(os.path.join(latents_dir, 'mask2real_fname.pkl'), 'wb') as f:
- pickle.dump(mask2real_fname, f, protocol=3)
- with open(os.path.join(latents_dir, 'mask2fake_fname.pkl'), 'wb') as f:
- pickle.dump(mask2fake_fname, f, protocol=3)
-
- svm = sklearn.svm.LinearSVC(dual=False)
- svm.fit(ids_features, ids_labels)
-
- pred_scores = svm.decision_function(ids_features)
- real_scores = pred_scores[:len(real_features)]
- fake_scores = pred_scores[len(real_features):]
-
- with open(os.path.join(latents_dir, 'pred_scores.pkl'), 'wb') as f:
- pickle.dump(pred_scores, f, protocol=3)
- with open(os.path.join(latents_dir, 'real_scores.pkl'), 'wb') as f:
- pickle.dump(real_scores, f, protocol=3)
- with open(os.path.join(latents_dir, 'fake_scores.pkl'), 'wb') as f:
- pickle.dump(fake_scores, f, protocol=3)
- else:
- with open(os.path.join(latents_dir, 'orig_fnames.pkl'), 'rb') as f:
- orig_fnames = pickle.load(f)
- with open(os.path.join(latents_dir, 'mask_fnames.pkl'), 'rb') as f:
- mask_fnames = pickle.load(f)
- with open(os.path.join(latents_dir, 'mask2real_fname.pkl'), 'rb') as f:
- mask2real_fname = pickle.load(f)
- with open(os.path.join(latents_dir, 'mask2fake_fname.pkl'), 'rb') as f:
- mask2fake_fname = pickle.load(f)
- with open(os.path.join(latents_dir, 'real_scores.pkl'), 'rb') as f:
- real_scores = pickle.load(f)
- with open(os.path.join(latents_dir, 'fake_scores.pkl'), 'rb') as f:
- fake_scores = pickle.load(f)
-
- real_info = pd.DataFrame(data=[dict(real_fname=fname,
- real_score=score)
- for fname, score
- in zip(orig_fnames, real_scores)])
- real_info.set_index('real_fname', drop=True, inplace=True)
-
- fake_info = pd.DataFrame(data=[dict(mask_fname=fname,
- fake_fname=mask2fake_fname[fname],
- real_fname=mask2real_fname[fname],
- fake_score=score)
- for fname, score
- in zip(mask_fnames, fake_scores)])
- fake_info = fake_info.join(real_info, on='real_fname', how='left')
- fake_info.drop_duplicates(['fake_fname', 'real_fname'], inplace=True)
-
- fake_stats_by_real = fake_info.groupby('real_fname')['fake_score'].describe()[['mean', 'std']].rename(
- {'mean': 'mean_fake_by_real', 'std': 'std_fake_by_real'}, axis=1)
- fake_info = fake_info.join(fake_stats_by_real, on='real_fname', rsuffix='stat_by_real')
- fake_info.drop_duplicates(['fake_fname', 'real_fname'], inplace=True)
- fake_info.to_csv(os.path.join(latents_dir, 'join_scores_table.csv'), sep='\t', index=False)
-
- fake_scores_table = fake_info.set_index('mask_fname')['fake_score'].to_frame()
- real_scores_table = fake_info.set_index('real_fname')['real_score'].drop_duplicates().to_frame()
-
- fig, (ax1, ax2) = plt.subplots(1, 2)
- ax1.hist(fake_scores)
- ax2.hist(real_scores)
- fig.tight_layout()
- fig.savefig(os.path.join(args.outpath, 'global_scores_hist.png'))
- plt.close(fig)
-
- global_worst_masks = fake_info.sort_values('fake_score', ascending=True)['mask_fname'].iloc[:config.take_global_top].to_list()
- global_best_masks = fake_info.sort_values('fake_score', ascending=False)['mask_fname'].iloc[:config.take_global_top].to_list()
- save_global_samples(global_worst_masks, mask2real_fname, mask2fake_fname, global_worst_dir, real_scores_table, fake_scores_table)
- save_global_samples(global_best_masks, mask2real_fname, mask2fake_fname, global_best_dir, real_scores_table, fake_scores_table)
-
- # grouped by real
- worst_samples_by_real = fake_info.groupby('real_fname').apply(
- lambda d: d.set_index('mask_fname')['fake_score'].idxmin()).to_frame().rename({0: 'worst'}, axis=1)
- best_samples_by_real = fake_info.groupby('real_fname').apply(
- lambda d: d.set_index('mask_fname')['fake_score'].idxmax()).to_frame().rename({0: 'best'}, axis=1)
- worst_best_by_real = pd.concat([worst_samples_by_real, best_samples_by_real], axis=1)
-
- worst_best_by_real = worst_best_by_real.join(fake_scores_table.rename({'fake_score': 'worst_score'}, axis=1),
- on='worst')
- worst_best_by_real = worst_best_by_real.join(fake_scores_table.rename({'fake_score': 'best_score'}, axis=1),
- on='best')
- worst_best_by_real = worst_best_by_real.join(real_scores_table)
-
- worst_best_by_real['best_worst_score_diff'] = worst_best_by_real['best_score'] - worst_best_by_real['worst_score']
- worst_best_by_real['real_best_score_diff'] = worst_best_by_real['real_score'] - worst_best_by_real['best_score']
- worst_best_by_real['real_worst_score_diff'] = worst_best_by_real['real_score'] - worst_best_by_real['worst_score']
-
- worst_best_by_best_worst_score_diff_min = worst_best_by_real.sort_values('best_worst_score_diff', ascending=True).iloc[:config.take_worst_best_top]
- worst_best_by_best_worst_score_diff_max = worst_best_by_real.sort_values('best_worst_score_diff', ascending=False).iloc[:config.take_worst_best_top]
- save_samples_by_real(worst_best_by_best_worst_score_diff_min, mask2fake_fname, fake_info, worst_best_by_best_worst_score_diff_min_dir)
- save_samples_by_real(worst_best_by_best_worst_score_diff_max, mask2fake_fname, fake_info, worst_best_by_best_worst_score_diff_max_dir)
-
- worst_best_by_real_best_score_diff_min = worst_best_by_real.sort_values('real_best_score_diff', ascending=True).iloc[:config.take_worst_best_top]
- worst_best_by_real_best_score_diff_max = worst_best_by_real.sort_values('real_best_score_diff', ascending=False).iloc[:config.take_worst_best_top]
- save_samples_by_real(worst_best_by_real_best_score_diff_min, mask2fake_fname, fake_info, worst_best_by_real_best_score_diff_min_dir)
- save_samples_by_real(worst_best_by_real_best_score_diff_max, mask2fake_fname, fake_info, worst_best_by_real_best_score_diff_max_dir)
-
- worst_best_by_real_worst_score_diff_min = worst_best_by_real.sort_values('real_worst_score_diff', ascending=True).iloc[:config.take_worst_best_top]
- worst_best_by_real_worst_score_diff_max = worst_best_by_real.sort_values('real_worst_score_diff', ascending=False).iloc[:config.take_worst_best_top]
- save_samples_by_real(worst_best_by_real_worst_score_diff_min, mask2fake_fname, fake_info, worst_best_by_real_worst_score_diff_min_dir)
- save_samples_by_real(worst_best_by_real_worst_score_diff_max, mask2fake_fname, fake_info, worst_best_by_real_worst_score_diff_max_dir)
-
- # analyze what change of mask causes bigger change of score
- overlapping_mask_fname_pairs = []
- overlapping_mask_fname_score_diffs = []
- for cur_real_fname in orig_fnames:
- cur_fakes_info = fake_info[fake_info['real_fname'] == cur_real_fname]
- cur_mask_fnames = sorted(cur_fakes_info['mask_fname'].unique())
-
- cur_mask_pairs_and_scores = Parallel(args.n_jobs)(
- delayed(extract_overlapping_masks)(cur_mask_fnames, i, fake_scores_table)
- for i in range(len(cur_mask_fnames) - 1)
- )
- for cur_pairs, cur_scores in cur_mask_pairs_and_scores:
- overlapping_mask_fname_pairs.extend(cur_pairs)
- overlapping_mask_fname_score_diffs.extend(cur_scores)
-
- overlapping_mask_fname_pairs = np.asarray(overlapping_mask_fname_pairs)
- overlapping_mask_fname_score_diffs = np.asarray(overlapping_mask_fname_score_diffs)
- overlapping_sort_idx = np.argsort(overlapping_mask_fname_score_diffs)
- overlapping_mask_fname_pairs = overlapping_mask_fname_pairs[overlapping_sort_idx]
- overlapping_mask_fname_score_diffs = overlapping_mask_fname_score_diffs[overlapping_sort_idx]
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('config', type=str, help='Path to config for dataset generation')
- aparser.add_argument('datadir', type=str,
- help='Path to folder with images and masks (output of gen_mask_dataset.py)')
- aparser.add_argument('predictdir', type=str,
- help='Path to folder with predicts (e.g. predict_hifill_baseline.py)')
- aparser.add_argument('outpath', type=str, help='Where to put results')
- aparser.add_argument('--only-report', action='store_true',
- help='Whether to skip prediction and feature extraction, '
- 'load all the possible latents and proceed with report only')
- aparser.add_argument('--n-jobs', type=int, default=8, help='how many processes to use for pair mask mining')
-
- main(aparser.parse_args())
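
The deleted script above ranks inpainting results by how separable their Inception features are from real-image features: a linear SVM is fit on real-vs-fake labels and its signed margin becomes the per-sample score. A minimal, self-contained sketch of just that scoring step, with random vectors standing in for the Inception activations (illustrative only, not the script's actual pipeline):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
real_feats = rng.normal(loc=0.0, size=(200, 64))  # stand-in for real-image features
fake_feats = rng.normal(loc=0.5, size=(200, 64))  # stand-in for inpainted-image features

features = np.concatenate([real_feats, fake_feats], axis=0)
labels = np.array([1] * len(real_feats) + [0] * len(fake_feats))

# Linear classifier separating real from fake; its decision function is the score.
svm = LinearSVC(dual=False)
svm.fit(features, labels)
scores = svm.decision_function(features)

real_scores = scores[:len(real_feats)]
fake_scores = scores[len(real_feats):]
# A low fake_score means the sample looks "fake" to the classifier; the script
# saves the lowest- and highest-scoring results for visual inspection.
print(real_scores.mean(), fake_scores.mean())
```
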
diff --git a/spaces/akhaliq/lama/models/ade20k/segm_lib/nn/__init__.py b/spaces/akhaliq/lama/models/ade20k/segm_lib/nn/__init__.py
deleted file mode 100644
index 98a96370ef04570f516052bb73f568d0ebc346c3..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/models/ade20k/segm_lib/nn/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .modules import *
-from .parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to
diff --git a/spaces/alamin655/websurfx/public/static/colorschemes/nord.css b/spaces/alamin655/websurfx/public/static/colorschemes/nord.css
deleted file mode 100644
index 234b57bcfecccaf4448b5d853c08ce0dbd899be7..0000000000000000000000000000000000000000
--- a/spaces/alamin655/websurfx/public/static/colorschemes/nord.css
+++ /dev/null
@@ -1,11 +0,0 @@
-:root {
- --background-color: #122736ff;
- --foreground-color: #a2e2a9;
- --color-one: #121B2Cff;
- --color-two: #f08282;
- --color-three: #ABC5AAff;
- --color-four: #e6d2d2;
- --color-five: #81a1c1;
- --color-six: #e2ecd6;
- --color-seven: #fff;
-}
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/enums.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/enums.py
deleted file mode 100644
index 04512072251c429e63ed110cdbafaf4b3cca3412..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/enums.py
+++ /dev/null
@@ -1,76 +0,0 @@
-"""
-All of the Enums that are used throughout the chardet package.
-
-:author: Dan Blanchard (dan.blanchard@gmail.com)
-"""
-
-
-class InputState(object):
- """
- This enum represents the different states a universal detector can be in.
- """
- PURE_ASCII = 0
- ESC_ASCII = 1
- HIGH_BYTE = 2
-
-
-class LanguageFilter(object):
- """
- This enum represents the different language filters we can apply to a
- ``UniversalDetector``.
- """
- CHINESE_SIMPLIFIED = 0x01
- CHINESE_TRADITIONAL = 0x02
- JAPANESE = 0x04
- KOREAN = 0x08
- NON_CJK = 0x10
- ALL = 0x1F
- CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL
- CJK = CHINESE | JAPANESE | KOREAN
-
-
-class ProbingState(object):
- """
- This enum represents the different states a prober can be in.
- """
- DETECTING = 0
- FOUND_IT = 1
- NOT_ME = 2
-
-
-class MachineState(object):
- """
- This enum represents the different states a state machine can be in.
- """
- START = 0
- ERROR = 1
- ITS_ME = 2
-
-
-class SequenceLikelihood(object):
- """
- This enum represents the likelihood of a character following the previous one.
- """
- NEGATIVE = 0
- UNLIKELY = 1
- LIKELY = 2
- POSITIVE = 3
-
- @classmethod
- def get_num_categories(cls):
- """:returns: The number of likelihood categories in the enum."""
- return 4
-
-
-class CharacterCategory(object):
- """
- This enum represents the different categories language models for
- ``SingleByteCharsetProber`` put characters into.
-
- Anything less than CONTROL is considered a letter.
- """
- UNDEFINED = 255
- LINE_BREAK = 254
- SYMBOL = 253
- DIGIT = 252
- CONTROL = 251
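
Since these classes are plain integer namespaces rather than `enum.Enum` subclasses, the language filters above are combined and tested with ordinary bitwise operators. A small standalone illustration that mirrors the same values (written without importing pip's vendored chardet):

```python
# Mirror of the LanguageFilter values defined above.
CHINESE_SIMPLIFIED = 0x01
CHINESE_TRADITIONAL = 0x02
JAPANESE = 0x04
KOREAN = 0x08
NON_CJK = 0x10
CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL
CJK = CHINESE | JAPANESE | KOREAN
ALL = CJK | NON_CJK  # == 0x1F

wanted = CHINESE | KOREAN                  # detect Chinese and Korean only
print(bool(wanted & JAPANESE))             # False: Japanese probers are skipped
print(bool(wanted & CHINESE_TRADITIONAL))  # True
```
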
diff --git a/spaces/aliabd/SummerTime/tests/helpers.py b/spaces/aliabd/SummerTime/tests/helpers.py
deleted file mode 100644
index e845ba70d3075f572e5828c898a5a11d0b089969..0000000000000000000000000000000000000000
--- a/spaces/aliabd/SummerTime/tests/helpers.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from dataset.st_dataset import SummDataset, SummInstance
-
-import random
-from typing import List, Tuple
-
-
-def print_with_color(s: str, color: str):
- """
- Print formatted string.
-
- :param str `s`: String to print.
- :param str `color`: ANSI color code.
-
- :see https://gist.github.com/RabaDabaDoba/145049536f815903c79944599c6f952a
- """
-
- print(f"\033[{color}m{s}\033[0m")
-
-
-def retrieve_random_test_instances(
- dataset_instances: List[SummInstance], num_instances=3
-) -> List[SummInstance]:
- """
- Retrieve random test instances from a dataset training set.
-
- :param List[SummInstance] `dataset_instances`: Instances from a dataset `train_set` to pull random examples from.
- :param int `num_instances`: Number of random instances to pull. Defaults to `3`.
- :return List of SummInstance to summarize.
- """
-
- test_instances = []
- for i in range(num_instances):
- test_instances.append(
- dataset_instances[random.randint(0, len(dataset_instances) - 1)]
- )
- return test_instances
-
-
-def get_summarization_set(dataset: SummDataset, size=1) -> Tuple[List, List]:
- """
- Return instances from given summarization dataset, in the format of (sources, targets).
- """
- subset = []
- for i in range(size):
- subset.append(next(dataset.train_set))
-
- src, tgt = zip(*(list(map(lambda x: (x.source, x.summary), subset))))
-
- return list(src), list(tgt)
-
-
-def get_query_based_summarization_set(
- dataset: SummDataset, size=1
-) -> Tuple[List, List, List]:
- """
- Return instances from given query-based summarization dataset, in the format of (sources, targets, queries).
- """
- subset = []
- for i in range(size):
- subset.append(next(dataset.train_set))
-
- src, tgt, queries = zip(
- *(list(map(lambda x: (x.source, x.summary, x.query), subset)))
- )
-
- return list(src), list(tgt), list(queries)
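
The `zip(*...)` pattern used by the two helpers above is the standard way to turn a list of per-instance tuples into parallel lists. A tiny standalone example of the same idiom; the `FakeInstance` records are made up for illustration:

```python
from collections import namedtuple

FakeInstance = namedtuple("FakeInstance", ["source", "summary"])
subset = [FakeInstance("doc one", "sum one"), FakeInstance("doc two", "sum two")]

# zip(*) transposes [(src1, tgt1), (src2, tgt2)] into (src1, src2) and (tgt1, tgt2).
src, tgt = zip(*[(x.source, x.summary) for x in subset])
print(list(src), list(tgt))  # ['doc one', 'doc two'] ['sum one', 'sum two']
```
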
diff --git a/spaces/allyssonmacedo/good-clients/README.md b/spaces/allyssonmacedo/good-clients/README.md
deleted file mode 100644
index e08eba2a7cb646575b2394e87452e97994ed685e..0000000000000000000000000000000000000000
--- a/spaces/allyssonmacedo/good-clients/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Good Clients
-emoji: 🌍
-colorFrom: indigo
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.28.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/audio_analyzer.h b/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/audio_analyzer.h
deleted file mode 100644
index 8d9f1ee4f606cdf4edce8c8804b2edd87321a104..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/audio_analyzer.h
+++ /dev/null
@@ -1,187 +0,0 @@
-
-/*
- * PortAudio Portable Real-Time Audio Library
- * Latest Version at: http://www.portaudio.com
- *
- * Copyright (c) 1999-2010 Phil Burk and Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-#ifndef _AUDIO_ANALYZER_H
-#define _AUDIO_ANALYZER_H
-
-#include "biquad_filter.h"
-
-#define MATH_PI (3.141592653589793238462643)
-#define MATH_TWO_PI (2.0 * MATH_PI)
-
-typedef struct PaQaSineGenerator_s
-{
- double phase;
- double phaseIncrement;
- double frequency;
- double amplitude;
-} PaQaSineGenerator;
-
-/** Container for a monophonic audio sample in memory. */
-typedef struct PaQaRecording_s
-{
- /** Maximum number of frames that can fit in the allocated buffer. */
- int maxFrames;
- float *buffer;
- /** Actual number of valid frames in the buffer. */
- int numFrames;
- int sampleRate;
-} PaQaRecording;
-
-typedef struct PaQaTestTone_s
-{
- int samplesPerFrame;
- int startDelay;
- double sampleRate;
- double frequency;
- double amplitude;
-} PaQaTestTone;
-
-typedef struct PaQaAnalysisResult_s
-{
- int valid;
- /** Latency in samples from output to input. */
- double latency;
- double amplitudeRatio;
- double popAmplitude;
- double popPosition;
- double numDroppedFrames;
- double droppedFramesPosition;
- double numAddedFrames;
- double addedFramesPosition;
-} PaQaAnalysisResult;
-
-
-/*================================================================*/
-/*================= General DSP Tools ============================*/
-/*================================================================*/
-/**
- * Calculate Nth frequency of a series for use in testing multiple channels.
- * Series should avoid harmonic overlap between channels.
- */
-double PaQa_GetNthFrequency( double baseFrequency, int index );
-
-void PaQa_EraseBuffer( float *buffer, int numFrames, int samplesPerFrame );
-
-void PaQa_MixSine( PaQaSineGenerator *generator, float *buffer, int numSamples, int stride );
-
-void PaQa_WriteSine( float *buffer, int numSamples, int stride,
- double frequency, double amplitude );
-
-/**
- * Generate a signal with a sharp edge in the middle that can be recognized despite some phase shift.
- */
-void PaQa_GenerateCrack( float *buffer, int numSamples, int stride );
-
-double PaQa_ComputePhaseDifference( double phase1, double phase2 );
-
-/**
- * Measure the area under the curve by summing absolute value of each value.
- */
-double PaQa_MeasureArea( float *buffer, int numFrames, int stride );
-
-/**
- * Measure slope of the positive zero crossings.
- */
-double PaQa_MeasureCrossingSlope( float *buffer, int numFrames );
-
-
-/**
- * Prepare an oscillator that can generate a sine tone for testing.
- */
-void PaQa_SetupSineGenerator( PaQaSineGenerator *generator, double frequency, double amplitude, double frameRate );
-
-/*================================================================*/
-/*================= Recordings ===================================*/
-/*================================================================*/
-/**
- * Allocate memory for containing a mono audio signal. Set up recording for writing.
- */
- int PaQa_InitializeRecording( PaQaRecording *recording, int maxSamples, int sampleRate );
-
-/**
- * Free memory allocated by PaQa_InitializeRecording.
- */
- void PaQa_TerminateRecording( PaQaRecording *recording );
-
-/**
- * Apply a biquad filter to the audio from the input recording and write it to the output recording.
- */
-void PaQa_FilterRecording( PaQaRecording *input, PaQaRecording *output, BiquadFilter *filter );
-
-
-int PaQa_SaveRecordingToWaveFile( PaQaRecording *recording, const char *filename );
-
-/**
- * @param stride is the spacing of samples to skip in the input buffer. To use every sample pass 1. To use every other sample pass 2.
- */
-int PaQa_WriteRecording( PaQaRecording *recording, float *buffer, int numSamples, int stride );
-
-/** Write zeros into a recording. */
-int PaQa_WriteSilence( PaQaRecording *recording, int numSamples );
-
-int PaQa_RecordFreeze( PaQaRecording *recording, int numSamples );
-
-double PaQa_CorrelateSine( PaQaRecording *recording, double frequency, double frameRate,
- int startFrame, int numSamples, double *phasePtr );
-
-double PaQa_FindFirstMatch( PaQaRecording *recording, float *buffer, int numSamples, double tolerance );
-
-/**
- * Estimate the original amplitude of a clipped sine wave by measuring
- * its average slope at the zero crossings.
- */
-double PaQa_MeasureSineAmplitudeBySlope( PaQaRecording *recording,
- double frequency, double frameRate,
- int startFrame, int numFrames );
-
-double PaQa_MeasureRootMeanSquare( float *buffer, int numFrames );
-
-/**
- * Compare the amplitudes of these two signals.
- * Return ratio of recorded signal over buffer signal.
- */
-double PaQa_CompareAmplitudes( PaQaRecording *recording, int startAt, float *buffer, int numSamples );
-
-/**
- * Analyse a recording of a sine wave.
- * Measure latency and look for dropped frames, etc.
- */
-int PaQa_AnalyseRecording( PaQaRecording *recording, PaQaTestTone *testTone, PaQaAnalysisResult *analysisResult );
-
-#endif /* _AUDIO_ANALYZER_H */
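
The header only declares the analysis API, but the idea behind `PaQa_CorrelateSine` (estimating the amplitude and phase of a known-frequency tone by correlating against sine and cosine references) is easy to sketch in a few lines of NumPy. This is an independent illustration of the technique under that assumption, not a port of the PortAudio implementation, and `correlate_sine` is a made-up name:

```python
import numpy as np

def correlate_sine(recording, frequency, frame_rate):
    """Estimate amplitude and phase of a tone of known frequency via quadrature correlation."""
    n = np.arange(len(recording))
    ref_sin = np.sin(2 * np.pi * frequency * n / frame_rate)
    ref_cos = np.cos(2 * np.pi * frequency * n / frame_rate)
    # Projection onto the two quadrature references.
    s = 2.0 * np.dot(recording, ref_sin) / len(recording)
    c = 2.0 * np.dot(recording, ref_cos) / len(recording)
    amplitude = np.hypot(s, c)
    phase = np.arctan2(c, s)
    return amplitude, phase

frame_rate, freq = 44100.0, 440.0
t = np.arange(4410) / frame_rate                 # 0.1 s, an integer number of periods
tone = 0.5 * np.sin(2 * np.pi * freq * t + 0.3)
amp, ph = correlate_sine(tone, freq, frame_rate)
print(round(amp, 3), round(ph, 3))               # ~0.5 and ~0.3
```
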
diff --git a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_scripts_all.py b/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_scripts_all.py
deleted file mode 100644
index ed60f8135a5db3869c8133cb15a709efebac7e09..0000000000000000000000000000000000000000
--- a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/plot_scripts_all.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# pack_str_list = []
-import matplotlib.pyplot as plt
-import matplotlib.ticker as ticker
-from map_packages_colors_all import *
-
-def plot_abs_data_n_arr(n_arr, data, pack_str):
- if len(n_arr) > len(data):
- plt.plot(n_arr[0:len(data)], data, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
- elif len(n_arr) < len(data):
- plt.plot(n_arr, data[0:len(n_arr)], linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
- else:
- plt.plot(n_arr, data, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
-
-def plot_comp_data_n_arr(n_arr, data_1, data_2, pack_str):
- ratio_arr = []
- if len(data_1) == len(data_2):
- for i, elem in enumerate(data_1):
- ratio_arr.append(elem/float(data_2[i]))
- elif len(data_1) > len(data_2):
- for i, elem in enumerate(data_2):
- ratio_arr.append(data_1[i]/float(elem))
- # plt.plot(n_arr[0:len(data_2)], ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
- elif len(data_2) > len(data_1):
- for i, elem in enumerate(data_1):
- ratio_arr.append(elem/data_2[i])
- # plt.plot(n_arr[0:len(data_1)], ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
- # print(ratio_arr)
- if len(n_arr) > len(ratio_arr):
- plt.plot(n_arr[0:len(ratio_arr)], ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
- elif len(n_arr) < len(ratio_arr):
- plt.plot(n_arr, ratio_arr[0:len(n_arr)], linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
- else:
- plt.plot(n_arr, ratio_arr, linestyle='-', marker=symbol_dict[pack_str], label=label_dict[pack_str], color=color_dict[pack_str], markersize=5)
-
-
-def gen_settings(fig, ax, xlabel_str, ylabel_str, log_x_on, log_y_on, xlim_on, xlim_low, xlim_upp, ylim_on, ylim_low, ylim_upp, leg_loc, fn):
-
- ax.tick_params(direction='in', which='both', bottom=True, top=True, left=True, right=True)
- # ax.xaxis.set_major_locator(MaxNLocator(integer=True))
- ax.xaxis.set_major_locator(ticker.AutoLocator())
- ax.xaxis.set_minor_locator(ticker.AutoMinorLocator())
- ax.yaxis.set_major_locator(ticker.AutoLocator())
- ax.yaxis.set_minor_locator(ticker.AutoMinorLocator())
-
- if log_x_on:
- ax.set_xscale('log')
- if log_y_on:
- ax.set_yscale('log')
- if xlim_on == True:
- plt.xlim([xlim_low, xlim_upp])
- if ylim_on == True:
- plt.ylim([ylim_low, ylim_upp])
- # plt.xlabel(r"N (system size)")
- # plt.ylabel(r"Time ($t_{package}$)")
- plt.xlabel(xlabel_str)
- plt.ylabel(ylabel_str)
-    if leg_loc == "out":
- ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), prop={'size': 8})
-
-    elif leg_loc is None:
- ax.legend(loc=0)
-
- plt.tight_layout()
- fig.set_dpi(100)
- if fn == None:
- pass
- else:
- plt.savefig(fn)
- plt.show()
- # plt.savefig("perf_heisenberg_pure_evolution_single_thread_wallclock_absolute.pdf")
- # plt.savefig("perf_heisenberg_pure_evolution_single_thread_wallclock_relative_line.svg", format="svg", dpi=1200)
- # plt.show()
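
The length juggling in `plot_comp_data_n_arr` above exists only to divide two series that may have different lengths. The same trimming can be expressed more directly with `zip`, which stops at the shorter input; a hedged sketch of that idea only (it deliberately ignores the plotting part, and `ratio_of_series` is an illustrative name):

```python
def ratio_of_series(data_1, data_2):
    # zip() stops at the shorter sequence, so no explicit length checks are needed.
    return [a / float(b) for a, b in zip(data_1, data_2)]

print(ratio_of_series([2.0, 4.0, 9.0], [1.0, 2.0]))  # [2.0, 2.0]
```
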
diff --git a/spaces/anaclaudia13ct/insect_detection/models/tf.py b/spaces/anaclaudia13ct/insect_detection/models/tf.py
deleted file mode 100644
index 3f3dc8dbe7e76352bc39bbb8a21ed8c35002204e..0000000000000000000000000000000000000000
--- a/spaces/anaclaudia13ct/insect_detection/models/tf.py
+++ /dev/null
@@ -1,608 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-TensorFlow, Keras and TFLite versions of YOLOv5
-Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127
-
-Usage:
- $ python models/tf.py --weights yolov5s.pt
-
-Export:
- $ python export.py --weights yolov5s.pt --include saved_model pb tflite tfjs
-"""
-
-import argparse
-import sys
-from copy import deepcopy
-from pathlib import Path
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[1] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-# ROOT = ROOT.relative_to(Path.cwd()) # relative
-
-import numpy as np
-import tensorflow as tf
-import torch
-import torch.nn as nn
-from tensorflow import keras
-
-from models.common import (C3, SPP, SPPF, Bottleneck, BottleneckCSP, C3x, Concat, Conv, CrossConv, DWConv,
- DWConvTranspose2d, Focus, autopad)
-from models.experimental import MixConv2d, attempt_load
-from models.yolo import Detect, Segment
-from utils.activations import SiLU
-from utils.general import LOGGER, make_divisible, print_args
-
-
-class TFBN(keras.layers.Layer):
- # TensorFlow BatchNormalization wrapper
- def __init__(self, w=None):
- super().__init__()
- self.bn = keras.layers.BatchNormalization(
- beta_initializer=keras.initializers.Constant(w.bias.numpy()),
- gamma_initializer=keras.initializers.Constant(w.weight.numpy()),
- moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()),
- moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()),
- epsilon=w.eps)
-
- def call(self, inputs):
- return self.bn(inputs)
-
-
-class TFPad(keras.layers.Layer):
- # Pad inputs in spatial dimensions 1 and 2
- def __init__(self, pad):
- super().__init__()
- if isinstance(pad, int):
- self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])
- else: # tuple/list
- self.pad = tf.constant([[0, 0], [pad[0], pad[0]], [pad[1], pad[1]], [0, 0]])
-
- def call(self, inputs):
- return tf.pad(inputs, self.pad, mode='constant', constant_values=0)
-
-
-class TFConv(keras.layers.Layer):
- # Standard convolution
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
- # ch_in, ch_out, weights, kernel, stride, padding, groups
- super().__init__()
- assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
- # TensorFlow convolution padding is inconsistent with PyTorch (e.g. k=3 s=2 'SAME' padding)
- # see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch
- conv = keras.layers.Conv2D(
- filters=c2,
- kernel_size=k,
- strides=s,
- padding='SAME' if s == 1 else 'VALID',
- use_bias=not hasattr(w, 'bn'),
- kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
- bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
- self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
- self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
- self.act = activations(w.act) if act else tf.identity
-
- def call(self, inputs):
- return self.act(self.bn(self.conv(inputs)))
-
-
-class TFDWConv(keras.layers.Layer):
- # Depthwise convolution
- def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None):
- # ch_in, ch_out, weights, kernel, stride, padding, groups
- super().__init__()
- assert c2 % c1 == 0, f'TFDWConv() output={c2} must be a multiple of input={c1} channels'
- conv = keras.layers.DepthwiseConv2D(
- kernel_size=k,
- depth_multiplier=c2 // c1,
- strides=s,
- padding='SAME' if s == 1 else 'VALID',
- use_bias=not hasattr(w, 'bn'),
- depthwise_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
- bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
- self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
- self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
- self.act = activations(w.act) if act else tf.identity
-
- def call(self, inputs):
- return self.act(self.bn(self.conv(inputs)))
-
-
-class TFDWConvTranspose2d(keras.layers.Layer):
- # Depthwise ConvTranspose2d
- def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0, w=None):
- # ch_in, ch_out, weights, kernel, stride, padding, groups
- super().__init__()
- assert c1 == c2, f'TFDWConvTranspose2d() output={c2} must be equal to input={c1} channels'
- assert k == 4 and p1 == 1, 'TFDWConvTranspose2d() only valid for k=4 and p1=1'
- weight, bias = w.weight.permute(2, 3, 1, 0).numpy(), w.bias.numpy()
- self.c1 = c1
- self.conv = [
- keras.layers.Conv2DTranspose(filters=1,
- kernel_size=k,
- strides=s,
- padding='VALID',
- output_padding=p2,
- use_bias=True,
- kernel_initializer=keras.initializers.Constant(weight[..., i:i + 1]),
- bias_initializer=keras.initializers.Constant(bias[i])) for i in range(c1)]
-
- def call(self, inputs):
- return tf.concat([m(x) for m, x in zip(self.conv, tf.split(inputs, self.c1, 3))], 3)[:, 1:-1, 1:-1]
-
-
-class TFFocus(keras.layers.Layer):
- # Focus wh information into c-space
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
- # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__()
- self.conv = TFConv(c1 * 4, c2, k, s, p, g, act, w.conv)
-
- def call(self, inputs): # x(b,w,h,c) -> y(b,w/2,h/2,4c)
- # inputs = inputs / 255 # normalize 0-255 to 0-1
- inputs = [inputs[:, ::2, ::2, :], inputs[:, 1::2, ::2, :], inputs[:, ::2, 1::2, :], inputs[:, 1::2, 1::2, :]]
- return self.conv(tf.concat(inputs, 3))
-
-
-class TFBottleneck(keras.layers.Layer):
- # Standard bottleneck
- def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
- self.cv2 = TFConv(c_, c2, 3, 1, g=g, w=w.cv2)
- self.add = shortcut and c1 == c2
-
- def call(self, inputs):
- return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
-
-
-class TFCrossConv(keras.layers.Layer):
- # Cross Convolution
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False, w=None):
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = TFConv(c1, c_, (1, k), (1, s), w=w.cv1)
- self.cv2 = TFConv(c_, c2, (k, 1), (s, 1), g=g, w=w.cv2)
- self.add = shortcut and c1 == c2
-
- def call(self, inputs):
- return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
-
-
-class TFConv2d(keras.layers.Layer):
- # Substitution for PyTorch nn.Conv2D
- def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):
- super().__init__()
- assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
- self.conv = keras.layers.Conv2D(filters=c2,
- kernel_size=k,
- strides=s,
- padding='VALID',
- use_bias=bias,
- kernel_initializer=keras.initializers.Constant(
- w.weight.permute(2, 3, 1, 0).numpy()),
- bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None)
-
- def call(self, inputs):
- return self.conv(inputs)
-
-
-class TFBottleneckCSP(keras.layers.Layer):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
- # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
- self.cv2 = TFConv2d(c1, c_, 1, 1, bias=False, w=w.cv2)
- self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3)
- self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4)
- self.bn = TFBN(w.bn)
- self.act = lambda x: keras.activations.swish(x)
- self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])
-
- def call(self, inputs):
- y1 = self.cv3(self.m(self.cv1(inputs)))
- y2 = self.cv2(inputs)
- return self.cv4(self.act(self.bn(tf.concat((y1, y2), axis=3))))
-
-
-class TFC3(keras.layers.Layer):
- # CSP Bottleneck with 3 convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
- # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
- self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
- self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
- self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])
-
- def call(self, inputs):
- return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
-
-
-class TFC3x(keras.layers.Layer):
- # 3 module with cross-convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
- # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
- self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
- self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
- self.m = keras.Sequential([
- TFCrossConv(c_, c_, k=3, s=1, g=g, e=1.0, shortcut=shortcut, w=w.m[j]) for j in range(n)])
-
- def call(self, inputs):
- return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
-
-
-class TFSPP(keras.layers.Layer):
- # Spatial pyramid pooling layer used in YOLOv3-SPP
- def __init__(self, c1, c2, k=(5, 9, 13), w=None):
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
- self.cv2 = TFConv(c_ * (len(k) + 1), c2, 1, 1, w=w.cv2)
- self.m = [keras.layers.MaxPool2D(pool_size=x, strides=1, padding='SAME') for x in k]
-
- def call(self, inputs):
- x = self.cv1(inputs)
- return self.cv2(tf.concat([x] + [m(x) for m in self.m], 3))
-
-
-class TFSPPF(keras.layers.Layer):
- # Spatial pyramid pooling-Fast layer
- def __init__(self, c1, c2, k=5, w=None):
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
- self.cv2 = TFConv(c_ * 4, c2, 1, 1, w=w.cv2)
- self.m = keras.layers.MaxPool2D(pool_size=k, strides=1, padding='SAME')
-
- def call(self, inputs):
- x = self.cv1(inputs)
- y1 = self.m(x)
- y2 = self.m(y1)
- return self.cv2(tf.concat([x, y1, y2, self.m(y2)], 3))
-
-
-class TFDetect(keras.layers.Layer):
- # TF YOLOv5 Detect layer
- def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None): # detection layer
- super().__init__()
- self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32)
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [tf.zeros(1)] * self.nl # init grid
- self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32)
- self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]), [self.nl, 1, -1, 1, 2])
- self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)]
- self.training = False # set to False after building model
- self.imgsz = imgsz
- for i in range(self.nl):
- ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
- self.grid[i] = self._make_grid(nx, ny)
-
- def call(self, inputs):
- z = [] # inference output
- x = []
- for i in range(self.nl):
- x.append(self.m[i](inputs[i]))
- # x(bs,20,20,255) to x(bs,3,20,20,85)
- ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
- x[i] = tf.reshape(x[i], [-1, ny * nx, self.na, self.no])
-
- if not self.training: # inference
- y = x[i]
- grid = tf.transpose(self.grid[i], [0, 2, 1, 3]) - 0.5
- anchor_grid = tf.transpose(self.anchor_grid[i], [0, 2, 1, 3]) * 4
- xy = (tf.sigmoid(y[..., 0:2]) * 2 + grid) * self.stride[i] # xy
- wh = tf.sigmoid(y[..., 2:4]) ** 2 * anchor_grid
- # Normalize xywh to 0-1 to reduce calibration error
- xy /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
- wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
- y = tf.concat([xy, wh, tf.sigmoid(y[..., 4:5 + self.nc]), y[..., 5 + self.nc:]], -1)
- z.append(tf.reshape(y, [-1, self.na * ny * nx, self.no]))
-
- return tf.transpose(x, [0, 2, 1, 3]) if self.training else (tf.concat(z, 1),)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- # yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- # return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
- xv, yv = tf.meshgrid(tf.range(nx), tf.range(ny))
- return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32)
-
-
-class TFSegment(TFDetect):
- # YOLOv5 Segment head for segmentation models
- def __init__(self, nc=80, anchors=(), nm=32, npr=256, ch=(), imgsz=(640, 640), w=None):
- super().__init__(nc, anchors, ch, imgsz, w)
- self.nm = nm # number of masks
- self.npr = npr # number of protos
- self.no = 5 + nc + self.nm # number of outputs per anchor
- self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)] # output conv
- self.proto = TFProto(ch[0], self.npr, self.nm, w=w.proto) # protos
- self.detect = TFDetect.call
-
- def call(self, x):
- p = self.proto(x[0])
- # p = TFUpsample(None, scale_factor=4, mode='nearest')(self.proto(x[0])) # (optional) full-size protos
- p = tf.transpose(p, [0, 3, 1, 2]) # from shape(1,160,160,32) to shape(1,32,160,160)
- x = self.detect(self, x)
- return (x, p) if self.training else (x[0], p)
-
-
-class TFProto(keras.layers.Layer):
-
- def __init__(self, c1, c_=256, c2=32, w=None):
- super().__init__()
- self.cv1 = TFConv(c1, c_, k=3, w=w.cv1)
- self.upsample = TFUpsample(None, scale_factor=2, mode='nearest')
- self.cv2 = TFConv(c_, c_, k=3, w=w.cv2)
- self.cv3 = TFConv(c_, c2, w=w.cv3)
-
- def call(self, inputs):
- return self.cv3(self.cv2(self.upsample(self.cv1(inputs))))
-
-
-class TFUpsample(keras.layers.Layer):
- # TF version of torch.nn.Upsample()
- def __init__(self, size, scale_factor, mode, w=None): # warning: all arguments needed including 'w'
- super().__init__()
- assert scale_factor % 2 == 0, "scale_factor must be multiple of 2"
- self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * scale_factor, x.shape[2] * scale_factor), mode)
- # self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode)
- # with default arguments: align_corners=False, half_pixel_centers=False
- # self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x,
- # size=(x.shape[1] * 2, x.shape[2] * 2))
-
- def call(self, inputs):
- return self.upsample(inputs)
-
-
-class TFConcat(keras.layers.Layer):
- # TF version of torch.concat()
- def __init__(self, dimension=1, w=None):
- super().__init__()
- assert dimension == 1, "convert only NCHW to NHWC concat"
- self.d = 3
-
- def call(self, inputs):
- return tf.concat(inputs, self.d)
-
-
-def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3)
- LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
- anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m_str = m
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- try:
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
- except NameError:
- pass
-
- n = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in [
- nn.Conv2d, Conv, DWConv, DWConvTranspose2d, Bottleneck, SPP, SPPF, MixConv2d, Focus, CrossConv,
- BottleneckCSP, C3, C3x]:
- c1, c2 = ch[f], args[0]
- c2 = make_divisible(c2 * gw, 8) if c2 != no else c2
-
- args = [c1, c2, *args[1:]]
- if m in [BottleneckCSP, C3, C3x]:
- args.insert(2, n)
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum(ch[-1 if x == -1 else x + 1] for x in f)
- elif m in [Detect, Segment]:
- args.append([ch[x + 1] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- if m is Segment:
- args[3] = make_divisible(args[3] * gw, 8)
- args.append(imgsz)
- else:
- c2 = ch[f]
-
- tf_m = eval('TF' + m_str.replace('nn.', ''))
- m_ = keras.Sequential([tf_m(*args, w=model.model[i][j]) for j in range(n)]) if n > 1 \
- else tf_m(*args, w=model.model[i]) # module
-
- torch_m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum(x.numel() for x in torch_m_.parameters()) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- LOGGER.info(f'{i:>3}{str(f):>18}{str(n):>3}{np:>10} {t:<40}{str(args):<30}') # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- ch.append(c2)
- return keras.Sequential(layers), sorted(save)
-
-
-class TFModel:
- # TF YOLOv5 model
- def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)): # model, channels, classes
- super().__init__()
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg) as f:
- self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict
-
- # Define model
- if nc and nc != self.yaml['nc']:
- LOGGER.info(f"Overriding {cfg} nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz)
-
- def predict(self,
- inputs,
- tf_nms=False,
- agnostic_nms=False,
- topk_per_class=100,
- topk_all=100,
- iou_thres=0.45,
- conf_thres=0.25):
- y = [] # outputs
- x = inputs
- for m in self.model.layers:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
-
- x = m(x) # run
- y.append(x if m.i in self.savelist else None) # save output
-
- # Add TensorFlow NMS
- if tf_nms:
- boxes = self._xywh2xyxy(x[0][..., :4])
- probs = x[0][:, :, 4:5]
- classes = x[0][:, :, 5:]
- scores = probs * classes
- if agnostic_nms:
- nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres)
- else:
- boxes = tf.expand_dims(boxes, 2)
- nms = tf.image.combined_non_max_suppression(boxes,
- scores,
- topk_per_class,
- topk_all,
- iou_thres,
- conf_thres,
- clip_boxes=False)
- return (nms,)
- return x # output [1,6300,85] = [xywh, conf, class0, class1, ...]
- # x = x[0] # [x(1,6300,85), ...] to x(6300,85)
- # xywh = x[..., :4] # x(6300,4) boxes
- # conf = x[..., 4:5] # x(6300,1) confidences
- # cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1)) # x(6300,1) classes
- # return tf.concat([conf, cls, xywh], 1)
-
- @staticmethod
- def _xywh2xyxy(xywh):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- x, y, w, h = tf.split(xywh, num_or_size_splits=4, axis=-1)
- return tf.concat([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=-1)
-
-
-class AgnosticNMS(keras.layers.Layer):
- # TF Agnostic NMS
- def call(self, input, topk_all, iou_thres, conf_thres):
- # wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450
- return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres),
- input,
- fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32),
- name='agnostic_nms')
-
- @staticmethod
- def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25): # agnostic NMS
- boxes, classes, scores = x
- class_inds = tf.cast(tf.argmax(classes, axis=-1), tf.float32)
- scores_inp = tf.reduce_max(scores, -1)
- selected_inds = tf.image.non_max_suppression(boxes,
- scores_inp,
- max_output_size=topk_all,
- iou_threshold=iou_thres,
- score_threshold=conf_thres)
- selected_boxes = tf.gather(boxes, selected_inds)
- padded_boxes = tf.pad(selected_boxes,
- paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]],
- mode="CONSTANT",
- constant_values=0.0)
- selected_scores = tf.gather(scores_inp, selected_inds)
- padded_scores = tf.pad(selected_scores,
- paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
- mode="CONSTANT",
- constant_values=-1.0)
- selected_classes = tf.gather(class_inds, selected_inds)
- padded_classes = tf.pad(selected_classes,
- paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
- mode="CONSTANT",
- constant_values=-1.0)
- valid_detections = tf.shape(selected_inds)[0]
- return padded_boxes, padded_scores, padded_classes, valid_detections
-
-
-def activations(act=nn.SiLU):
- # Returns TF activation from input PyTorch activation
- if isinstance(act, nn.LeakyReLU):
- return lambda x: keras.activations.relu(x, alpha=0.1)
- elif isinstance(act, nn.Hardswish):
- return lambda x: x * tf.nn.relu6(x + 3) * 0.166666667
- elif isinstance(act, (nn.SiLU, SiLU)):
- return lambda x: keras.activations.swish(x)
- else:
- raise Exception(f'no matching TensorFlow activation found for PyTorch activation {act}')
-
-
-def representative_dataset_gen(dataset, ncalib=100):
- # Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays
- for n, (path, img, im0s, vid_cap, string) in enumerate(dataset):
- im = np.transpose(img, [1, 2, 0])
- im = np.expand_dims(im, axis=0).astype(np.float32)
- im /= 255
- yield [im]
- if n >= ncalib:
- break
-
-
-def run(
- weights=ROOT / 'yolov5s.pt', # weights path
- imgsz=(640, 640), # inference size h,w
- batch_size=1, # batch size
- dynamic=False, # dynamic batch size
-):
- # PyTorch model
- im = torch.zeros((batch_size, 3, *imgsz)) # BCHW image
- model = attempt_load(weights, device=torch.device('cpu'), inplace=True, fuse=False)
- _ = model(im) # inference
- model.info()
-
- # TensorFlow model
- im = tf.zeros((batch_size, *imgsz, 3)) # BHWC image
- tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
- _ = tf_model.predict(im) # inference
-
- # Keras model
- im = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
- keras_model = keras.Model(inputs=im, outputs=tf_model.predict(im))
- keras_model.summary()
-
- LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\nUse export.py for TF model export.')
-
-
-def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path')
- parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
- parser.add_argument('--batch-size', type=int, default=1, help='batch size')
- parser.add_argument('--dynamic', action='store_true', help='dynamic batch size')
- opt = parser.parse_args()
- opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
- print_args(vars(opt))
- return opt
-
-
-def main(opt):
- run(**vars(opt))
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
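Note on the `activations()` helper near the end of `models/tf.py` above: its Hardswish branch swaps PyTorch's `nn.Hardswish` for `x * relu6(x + 3) * 0.166666667`. The snippet below is a minimal, standalone sketch (not part of the repository) that checks this identity numerically against PyTorch; the sample range and tolerance are arbitrary choices.

```python
# Minimal sketch: confirm that the TF-export replacement for Hardswish,
# x * relu6(x + 3) / 6, matches PyTorch's reference implementation.
import numpy as np
import torch
import torch.nn.functional as F

x = np.linspace(-6.0, 6.0, num=101, dtype=np.float32)

def relu6(v):
    # NumPy stand-in for tf.nn.relu6
    return np.clip(v, 0.0, 6.0)

tf_style = x * relu6(x + 3.0) * 0.166666667           # what activations() emits
torch_ref = F.hardswish(torch.from_numpy(x)).numpy()   # PyTorch reference

assert np.allclose(tf_style, torch_ref, atol=1e-5)
print("max abs difference:", float(np.abs(tf_style - torch_ref).max()))
```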
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/deepbooru_model.py b/spaces/aodianyun/stable-diffusion-webui/modules/deepbooru_model.py
deleted file mode 100644
index 7a53884624e96284c35214ce02b8a2891d92c3e8..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/deepbooru_model.py
+++ /dev/null
@@ -1,678 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from modules import devices
-
-# see https://github.com/AUTOMATIC1111/TorchDeepDanbooru for more
-
-
-class DeepDanbooruModel(nn.Module):
- def __init__(self):
- super(DeepDanbooruModel, self).__init__()
-
- self.tags = []
-
- self.n_Conv_0 = nn.Conv2d(kernel_size=(7, 7), in_channels=3, out_channels=64, stride=(2, 2))
- self.n_MaxPool_0 = nn.MaxPool2d(kernel_size=(3, 3), stride=(2, 2))
- self.n_Conv_1 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
- self.n_Conv_2 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=64)
- self.n_Conv_3 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=64)
- self.n_Conv_4 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
- self.n_Conv_5 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=64)
- self.n_Conv_6 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=64)
- self.n_Conv_7 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
- self.n_Conv_8 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=64)
- self.n_Conv_9 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=64)
- self.n_Conv_10 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
- self.n_Conv_11 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=512, stride=(2, 2))
- self.n_Conv_12 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=128)
- self.n_Conv_13 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128, stride=(2, 2))
- self.n_Conv_14 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_15 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_16 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_17 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_18 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_19 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_20 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_21 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_22 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_23 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_24 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_25 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_26 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_27 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_28 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_29 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_30 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_31 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_32 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_33 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
- self.n_Conv_34 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
- self.n_Conv_35 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
- self.n_Conv_36 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=1024, stride=(2, 2))
- self.n_Conv_37 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=256)
- self.n_Conv_38 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256, stride=(2, 2))
- self.n_Conv_39 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_40 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_41 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_42 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_43 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_44 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_45 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_46 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_47 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_48 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_49 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_50 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_51 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_52 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_53 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_54 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_55 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_56 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_57 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_58 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_59 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_60 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_61 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_62 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_63 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_64 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_65 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_66 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_67 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_68 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_69 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_70 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_71 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_72 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_73 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_74 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_75 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_76 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_77 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_78 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_79 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_80 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_81 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_82 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_83 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_84 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_85 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_86 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_87 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_88 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_89 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_90 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_91 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_92 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_93 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_94 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_95 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_96 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_97 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_98 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256, stride=(2, 2))
- self.n_Conv_99 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_100 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=1024, stride=(2, 2))
- self.n_Conv_101 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_102 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_103 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_104 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_105 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_106 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_107 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_108 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_109 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_110 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_111 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_112 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_113 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_114 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_115 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_116 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_117 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_118 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_119 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_120 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_121 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_122 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_123 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_124 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_125 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_126 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_127 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_128 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_129 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_130 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_131 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_132 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_133 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_134 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_135 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_136 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_137 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_138 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_139 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_140 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_141 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_142 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_143 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_144 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_145 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_146 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_147 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_148 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_149 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_150 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_151 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_152 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_153 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_154 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_155 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
- self.n_Conv_156 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
- self.n_Conv_157 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
- self.n_Conv_158 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=2048, stride=(2, 2))
- self.n_Conv_159 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=512)
- self.n_Conv_160 = nn.Conv2d(kernel_size=(3, 3), in_channels=512, out_channels=512, stride=(2, 2))
- self.n_Conv_161 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=2048)
- self.n_Conv_162 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=512)
- self.n_Conv_163 = nn.Conv2d(kernel_size=(3, 3), in_channels=512, out_channels=512)
- self.n_Conv_164 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=2048)
- self.n_Conv_165 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=512)
- self.n_Conv_166 = nn.Conv2d(kernel_size=(3, 3), in_channels=512, out_channels=512)
- self.n_Conv_167 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=2048)
- self.n_Conv_168 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=4096, stride=(2, 2))
- self.n_Conv_169 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=1024)
- self.n_Conv_170 = nn.Conv2d(kernel_size=(3, 3), in_channels=1024, out_channels=1024, stride=(2, 2))
- self.n_Conv_171 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=4096)
- self.n_Conv_172 = nn.Conv2d(kernel_size=(1, 1), in_channels=4096, out_channels=1024)
- self.n_Conv_173 = nn.Conv2d(kernel_size=(3, 3), in_channels=1024, out_channels=1024)
- self.n_Conv_174 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=4096)
- self.n_Conv_175 = nn.Conv2d(kernel_size=(1, 1), in_channels=4096, out_channels=1024)
- self.n_Conv_176 = nn.Conv2d(kernel_size=(3, 3), in_channels=1024, out_channels=1024)
- self.n_Conv_177 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=4096)
- self.n_Conv_178 = nn.Conv2d(kernel_size=(1, 1), in_channels=4096, out_channels=9176, bias=False)
-
- def forward(self, *inputs):
- t_358, = inputs
- t_359 = t_358.permute(*[0, 3, 1, 2])
- t_359_padded = F.pad(t_359, [2, 3, 2, 3], value=0)
- t_360 = self.n_Conv_0(t_359_padded.to(self.n_Conv_0.bias.dtype) if devices.unet_needs_upcast else t_359_padded)
- t_361 = F.relu(t_360)
- t_361 = F.pad(t_361, [0, 1, 0, 1], value=float('-inf'))
- t_362 = self.n_MaxPool_0(t_361)
- t_363 = self.n_Conv_1(t_362)
- t_364 = self.n_Conv_2(t_362)
- t_365 = F.relu(t_364)
- t_365_padded = F.pad(t_365, [1, 1, 1, 1], value=0)
- t_366 = self.n_Conv_3(t_365_padded)
- t_367 = F.relu(t_366)
- t_368 = self.n_Conv_4(t_367)
- t_369 = torch.add(t_368, t_363)
- t_370 = F.relu(t_369)
- t_371 = self.n_Conv_5(t_370)
- t_372 = F.relu(t_371)
- t_372_padded = F.pad(t_372, [1, 1, 1, 1], value=0)
- t_373 = self.n_Conv_6(t_372_padded)
- t_374 = F.relu(t_373)
- t_375 = self.n_Conv_7(t_374)
- t_376 = torch.add(t_375, t_370)
- t_377 = F.relu(t_376)
- t_378 = self.n_Conv_8(t_377)
- t_379 = F.relu(t_378)
- t_379_padded = F.pad(t_379, [1, 1, 1, 1], value=0)
- t_380 = self.n_Conv_9(t_379_padded)
- t_381 = F.relu(t_380)
- t_382 = self.n_Conv_10(t_381)
- t_383 = torch.add(t_382, t_377)
- t_384 = F.relu(t_383)
- t_385 = self.n_Conv_11(t_384)
- t_386 = self.n_Conv_12(t_384)
- t_387 = F.relu(t_386)
- t_387_padded = F.pad(t_387, [0, 1, 0, 1], value=0)
- t_388 = self.n_Conv_13(t_387_padded)
- t_389 = F.relu(t_388)
- t_390 = self.n_Conv_14(t_389)
- t_391 = torch.add(t_390, t_385)
- t_392 = F.relu(t_391)
- t_393 = self.n_Conv_15(t_392)
- t_394 = F.relu(t_393)
- t_394_padded = F.pad(t_394, [1, 1, 1, 1], value=0)
- t_395 = self.n_Conv_16(t_394_padded)
- t_396 = F.relu(t_395)
- t_397 = self.n_Conv_17(t_396)
- t_398 = torch.add(t_397, t_392)
- t_399 = F.relu(t_398)
- t_400 = self.n_Conv_18(t_399)
- t_401 = F.relu(t_400)
- t_401_padded = F.pad(t_401, [1, 1, 1, 1], value=0)
- t_402 = self.n_Conv_19(t_401_padded)
- t_403 = F.relu(t_402)
- t_404 = self.n_Conv_20(t_403)
- t_405 = torch.add(t_404, t_399)
- t_406 = F.relu(t_405)
- t_407 = self.n_Conv_21(t_406)
- t_408 = F.relu(t_407)
- t_408_padded = F.pad(t_408, [1, 1, 1, 1], value=0)
- t_409 = self.n_Conv_22(t_408_padded)
- t_410 = F.relu(t_409)
- t_411 = self.n_Conv_23(t_410)
- t_412 = torch.add(t_411, t_406)
- t_413 = F.relu(t_412)
- t_414 = self.n_Conv_24(t_413)
- t_415 = F.relu(t_414)
- t_415_padded = F.pad(t_415, [1, 1, 1, 1], value=0)
- t_416 = self.n_Conv_25(t_415_padded)
- t_417 = F.relu(t_416)
- t_418 = self.n_Conv_26(t_417)
- t_419 = torch.add(t_418, t_413)
- t_420 = F.relu(t_419)
- t_421 = self.n_Conv_27(t_420)
- t_422 = F.relu(t_421)
- t_422_padded = F.pad(t_422, [1, 1, 1, 1], value=0)
- t_423 = self.n_Conv_28(t_422_padded)
- t_424 = F.relu(t_423)
- t_425 = self.n_Conv_29(t_424)
- t_426 = torch.add(t_425, t_420)
- t_427 = F.relu(t_426)
- t_428 = self.n_Conv_30(t_427)
- t_429 = F.relu(t_428)
- t_429_padded = F.pad(t_429, [1, 1, 1, 1], value=0)
- t_430 = self.n_Conv_31(t_429_padded)
- t_431 = F.relu(t_430)
- t_432 = self.n_Conv_32(t_431)
- t_433 = torch.add(t_432, t_427)
- t_434 = F.relu(t_433)
- t_435 = self.n_Conv_33(t_434)
- t_436 = F.relu(t_435)
- t_436_padded = F.pad(t_436, [1, 1, 1, 1], value=0)
- t_437 = self.n_Conv_34(t_436_padded)
- t_438 = F.relu(t_437)
- t_439 = self.n_Conv_35(t_438)
- t_440 = torch.add(t_439, t_434)
- t_441 = F.relu(t_440)
- t_442 = self.n_Conv_36(t_441)
- t_443 = self.n_Conv_37(t_441)
- t_444 = F.relu(t_443)
- t_444_padded = F.pad(t_444, [0, 1, 0, 1], value=0)
- t_445 = self.n_Conv_38(t_444_padded)
- t_446 = F.relu(t_445)
- t_447 = self.n_Conv_39(t_446)
- t_448 = torch.add(t_447, t_442)
- t_449 = F.relu(t_448)
- t_450 = self.n_Conv_40(t_449)
- t_451 = F.relu(t_450)
- t_451_padded = F.pad(t_451, [1, 1, 1, 1], value=0)
- t_452 = self.n_Conv_41(t_451_padded)
- t_453 = F.relu(t_452)
- t_454 = self.n_Conv_42(t_453)
- t_455 = torch.add(t_454, t_449)
- t_456 = F.relu(t_455)
- t_457 = self.n_Conv_43(t_456)
- t_458 = F.relu(t_457)
- t_458_padded = F.pad(t_458, [1, 1, 1, 1], value=0)
- t_459 = self.n_Conv_44(t_458_padded)
- t_460 = F.relu(t_459)
- t_461 = self.n_Conv_45(t_460)
- t_462 = torch.add(t_461, t_456)
- t_463 = F.relu(t_462)
- t_464 = self.n_Conv_46(t_463)
- t_465 = F.relu(t_464)
- t_465_padded = F.pad(t_465, [1, 1, 1, 1], value=0)
- t_466 = self.n_Conv_47(t_465_padded)
- t_467 = F.relu(t_466)
- t_468 = self.n_Conv_48(t_467)
- t_469 = torch.add(t_468, t_463)
- t_470 = F.relu(t_469)
- t_471 = self.n_Conv_49(t_470)
- t_472 = F.relu(t_471)
- t_472_padded = F.pad(t_472, [1, 1, 1, 1], value=0)
- t_473 = self.n_Conv_50(t_472_padded)
- t_474 = F.relu(t_473)
- t_475 = self.n_Conv_51(t_474)
- t_476 = torch.add(t_475, t_470)
- t_477 = F.relu(t_476)
- t_478 = self.n_Conv_52(t_477)
- t_479 = F.relu(t_478)
- t_479_padded = F.pad(t_479, [1, 1, 1, 1], value=0)
- t_480 = self.n_Conv_53(t_479_padded)
- t_481 = F.relu(t_480)
- t_482 = self.n_Conv_54(t_481)
- t_483 = torch.add(t_482, t_477)
- t_484 = F.relu(t_483)
- t_485 = self.n_Conv_55(t_484)
- t_486 = F.relu(t_485)
- t_486_padded = F.pad(t_486, [1, 1, 1, 1], value=0)
- t_487 = self.n_Conv_56(t_486_padded)
- t_488 = F.relu(t_487)
- t_489 = self.n_Conv_57(t_488)
- t_490 = torch.add(t_489, t_484)
- t_491 = F.relu(t_490)
- t_492 = self.n_Conv_58(t_491)
- t_493 = F.relu(t_492)
- t_493_padded = F.pad(t_493, [1, 1, 1, 1], value=0)
- t_494 = self.n_Conv_59(t_493_padded)
- t_495 = F.relu(t_494)
- t_496 = self.n_Conv_60(t_495)
- t_497 = torch.add(t_496, t_491)
- t_498 = F.relu(t_497)
- t_499 = self.n_Conv_61(t_498)
- t_500 = F.relu(t_499)
- t_500_padded = F.pad(t_500, [1, 1, 1, 1], value=0)
- t_501 = self.n_Conv_62(t_500_padded)
- t_502 = F.relu(t_501)
- t_503 = self.n_Conv_63(t_502)
- t_504 = torch.add(t_503, t_498)
- t_505 = F.relu(t_504)
- t_506 = self.n_Conv_64(t_505)
- t_507 = F.relu(t_506)
- t_507_padded = F.pad(t_507, [1, 1, 1, 1], value=0)
- t_508 = self.n_Conv_65(t_507_padded)
- t_509 = F.relu(t_508)
- t_510 = self.n_Conv_66(t_509)
- t_511 = torch.add(t_510, t_505)
- t_512 = F.relu(t_511)
- t_513 = self.n_Conv_67(t_512)
- t_514 = F.relu(t_513)
- t_514_padded = F.pad(t_514, [1, 1, 1, 1], value=0)
- t_515 = self.n_Conv_68(t_514_padded)
- t_516 = F.relu(t_515)
- t_517 = self.n_Conv_69(t_516)
- t_518 = torch.add(t_517, t_512)
- t_519 = F.relu(t_518)
- t_520 = self.n_Conv_70(t_519)
- t_521 = F.relu(t_520)
- t_521_padded = F.pad(t_521, [1, 1, 1, 1], value=0)
- t_522 = self.n_Conv_71(t_521_padded)
- t_523 = F.relu(t_522)
- t_524 = self.n_Conv_72(t_523)
- t_525 = torch.add(t_524, t_519)
- t_526 = F.relu(t_525)
- t_527 = self.n_Conv_73(t_526)
- t_528 = F.relu(t_527)
- t_528_padded = F.pad(t_528, [1, 1, 1, 1], value=0)
- t_529 = self.n_Conv_74(t_528_padded)
- t_530 = F.relu(t_529)
- t_531 = self.n_Conv_75(t_530)
- t_532 = torch.add(t_531, t_526)
- t_533 = F.relu(t_532)
- t_534 = self.n_Conv_76(t_533)
- t_535 = F.relu(t_534)
- t_535_padded = F.pad(t_535, [1, 1, 1, 1], value=0)
- t_536 = self.n_Conv_77(t_535_padded)
- t_537 = F.relu(t_536)
- t_538 = self.n_Conv_78(t_537)
- t_539 = torch.add(t_538, t_533)
- t_540 = F.relu(t_539)
- t_541 = self.n_Conv_79(t_540)
- t_542 = F.relu(t_541)
- t_542_padded = F.pad(t_542, [1, 1, 1, 1], value=0)
- t_543 = self.n_Conv_80(t_542_padded)
- t_544 = F.relu(t_543)
- t_545 = self.n_Conv_81(t_544)
- t_546 = torch.add(t_545, t_540)
- t_547 = F.relu(t_546)
- t_548 = self.n_Conv_82(t_547)
- t_549 = F.relu(t_548)
- t_549_padded = F.pad(t_549, [1, 1, 1, 1], value=0)
- t_550 = self.n_Conv_83(t_549_padded)
- t_551 = F.relu(t_550)
- t_552 = self.n_Conv_84(t_551)
- t_553 = torch.add(t_552, t_547)
- t_554 = F.relu(t_553)
- t_555 = self.n_Conv_85(t_554)
- t_556 = F.relu(t_555)
- t_556_padded = F.pad(t_556, [1, 1, 1, 1], value=0)
- t_557 = self.n_Conv_86(t_556_padded)
- t_558 = F.relu(t_557)
- t_559 = self.n_Conv_87(t_558)
- t_560 = torch.add(t_559, t_554)
- t_561 = F.relu(t_560)
- t_562 = self.n_Conv_88(t_561)
- t_563 = F.relu(t_562)
- t_563_padded = F.pad(t_563, [1, 1, 1, 1], value=0)
- t_564 = self.n_Conv_89(t_563_padded)
- t_565 = F.relu(t_564)
- t_566 = self.n_Conv_90(t_565)
- t_567 = torch.add(t_566, t_561)
- t_568 = F.relu(t_567)
- t_569 = self.n_Conv_91(t_568)
- t_570 = F.relu(t_569)
- t_570_padded = F.pad(t_570, [1, 1, 1, 1], value=0)
- t_571 = self.n_Conv_92(t_570_padded)
- t_572 = F.relu(t_571)
- t_573 = self.n_Conv_93(t_572)
- t_574 = torch.add(t_573, t_568)
- t_575 = F.relu(t_574)
- t_576 = self.n_Conv_94(t_575)
- t_577 = F.relu(t_576)
- t_577_padded = F.pad(t_577, [1, 1, 1, 1], value=0)
- t_578 = self.n_Conv_95(t_577_padded)
- t_579 = F.relu(t_578)
- t_580 = self.n_Conv_96(t_579)
- t_581 = torch.add(t_580, t_575)
- t_582 = F.relu(t_581)
- t_583 = self.n_Conv_97(t_582)
- t_584 = F.relu(t_583)
- t_584_padded = F.pad(t_584, [0, 1, 0, 1], value=0)
- t_585 = self.n_Conv_98(t_584_padded)
- t_586 = F.relu(t_585)
- t_587 = self.n_Conv_99(t_586)
- t_588 = self.n_Conv_100(t_582)
- t_589 = torch.add(t_587, t_588)
- t_590 = F.relu(t_589)
- t_591 = self.n_Conv_101(t_590)
- t_592 = F.relu(t_591)
- t_592_padded = F.pad(t_592, [1, 1, 1, 1], value=0)
- t_593 = self.n_Conv_102(t_592_padded)
- t_594 = F.relu(t_593)
- t_595 = self.n_Conv_103(t_594)
- t_596 = torch.add(t_595, t_590)
- t_597 = F.relu(t_596)
- t_598 = self.n_Conv_104(t_597)
- t_599 = F.relu(t_598)
- t_599_padded = F.pad(t_599, [1, 1, 1, 1], value=0)
- t_600 = self.n_Conv_105(t_599_padded)
- t_601 = F.relu(t_600)
- t_602 = self.n_Conv_106(t_601)
- t_603 = torch.add(t_602, t_597)
- t_604 = F.relu(t_603)
- t_605 = self.n_Conv_107(t_604)
- t_606 = F.relu(t_605)
- t_606_padded = F.pad(t_606, [1, 1, 1, 1], value=0)
- t_607 = self.n_Conv_108(t_606_padded)
- t_608 = F.relu(t_607)
- t_609 = self.n_Conv_109(t_608)
- t_610 = torch.add(t_609, t_604)
- t_611 = F.relu(t_610)
- t_612 = self.n_Conv_110(t_611)
- t_613 = F.relu(t_612)
- t_613_padded = F.pad(t_613, [1, 1, 1, 1], value=0)
- t_614 = self.n_Conv_111(t_613_padded)
- t_615 = F.relu(t_614)
- t_616 = self.n_Conv_112(t_615)
- t_617 = torch.add(t_616, t_611)
- t_618 = F.relu(t_617)
- t_619 = self.n_Conv_113(t_618)
- t_620 = F.relu(t_619)
- t_620_padded = F.pad(t_620, [1, 1, 1, 1], value=0)
- t_621 = self.n_Conv_114(t_620_padded)
- t_622 = F.relu(t_621)
- t_623 = self.n_Conv_115(t_622)
- t_624 = torch.add(t_623, t_618)
- t_625 = F.relu(t_624)
- t_626 = self.n_Conv_116(t_625)
- t_627 = F.relu(t_626)
- t_627_padded = F.pad(t_627, [1, 1, 1, 1], value=0)
- t_628 = self.n_Conv_117(t_627_padded)
- t_629 = F.relu(t_628)
- t_630 = self.n_Conv_118(t_629)
- t_631 = torch.add(t_630, t_625)
- t_632 = F.relu(t_631)
- t_633 = self.n_Conv_119(t_632)
- t_634 = F.relu(t_633)
- t_634_padded = F.pad(t_634, [1, 1, 1, 1], value=0)
- t_635 = self.n_Conv_120(t_634_padded)
- t_636 = F.relu(t_635)
- t_637 = self.n_Conv_121(t_636)
- t_638 = torch.add(t_637, t_632)
- t_639 = F.relu(t_638)
- t_640 = self.n_Conv_122(t_639)
- t_641 = F.relu(t_640)
- t_641_padded = F.pad(t_641, [1, 1, 1, 1], value=0)
- t_642 = self.n_Conv_123(t_641_padded)
- t_643 = F.relu(t_642)
- t_644 = self.n_Conv_124(t_643)
- t_645 = torch.add(t_644, t_639)
- t_646 = F.relu(t_645)
- t_647 = self.n_Conv_125(t_646)
- t_648 = F.relu(t_647)
- t_648_padded = F.pad(t_648, [1, 1, 1, 1], value=0)
- t_649 = self.n_Conv_126(t_648_padded)
- t_650 = F.relu(t_649)
- t_651 = self.n_Conv_127(t_650)
- t_652 = torch.add(t_651, t_646)
- t_653 = F.relu(t_652)
- t_654 = self.n_Conv_128(t_653)
- t_655 = F.relu(t_654)
- t_655_padded = F.pad(t_655, [1, 1, 1, 1], value=0)
- t_656 = self.n_Conv_129(t_655_padded)
- t_657 = F.relu(t_656)
- t_658 = self.n_Conv_130(t_657)
- t_659 = torch.add(t_658, t_653)
- t_660 = F.relu(t_659)
- t_661 = self.n_Conv_131(t_660)
- t_662 = F.relu(t_661)
- t_662_padded = F.pad(t_662, [1, 1, 1, 1], value=0)
- t_663 = self.n_Conv_132(t_662_padded)
- t_664 = F.relu(t_663)
- t_665 = self.n_Conv_133(t_664)
- t_666 = torch.add(t_665, t_660)
- t_667 = F.relu(t_666)
- t_668 = self.n_Conv_134(t_667)
- t_669 = F.relu(t_668)
- t_669_padded = F.pad(t_669, [1, 1, 1, 1], value=0)
- t_670 = self.n_Conv_135(t_669_padded)
- t_671 = F.relu(t_670)
- t_672 = self.n_Conv_136(t_671)
- t_673 = torch.add(t_672, t_667)
- t_674 = F.relu(t_673)
- t_675 = self.n_Conv_137(t_674)
- t_676 = F.relu(t_675)
- t_676_padded = F.pad(t_676, [1, 1, 1, 1], value=0)
- t_677 = self.n_Conv_138(t_676_padded)
- t_678 = F.relu(t_677)
- t_679 = self.n_Conv_139(t_678)
- t_680 = torch.add(t_679, t_674)
- t_681 = F.relu(t_680)
- t_682 = self.n_Conv_140(t_681)
- t_683 = F.relu(t_682)
- t_683_padded = F.pad(t_683, [1, 1, 1, 1], value=0)
- t_684 = self.n_Conv_141(t_683_padded)
- t_685 = F.relu(t_684)
- t_686 = self.n_Conv_142(t_685)
- t_687 = torch.add(t_686, t_681)
- t_688 = F.relu(t_687)
- t_689 = self.n_Conv_143(t_688)
- t_690 = F.relu(t_689)
- t_690_padded = F.pad(t_690, [1, 1, 1, 1], value=0)
- t_691 = self.n_Conv_144(t_690_padded)
- t_692 = F.relu(t_691)
- t_693 = self.n_Conv_145(t_692)
- t_694 = torch.add(t_693, t_688)
- t_695 = F.relu(t_694)
- t_696 = self.n_Conv_146(t_695)
- t_697 = F.relu(t_696)
- t_697_padded = F.pad(t_697, [1, 1, 1, 1], value=0)
- t_698 = self.n_Conv_147(t_697_padded)
- t_699 = F.relu(t_698)
- t_700 = self.n_Conv_148(t_699)
- t_701 = torch.add(t_700, t_695)
- t_702 = F.relu(t_701)
- t_703 = self.n_Conv_149(t_702)
- t_704 = F.relu(t_703)
- t_704_padded = F.pad(t_704, [1, 1, 1, 1], value=0)
- t_705 = self.n_Conv_150(t_704_padded)
- t_706 = F.relu(t_705)
- t_707 = self.n_Conv_151(t_706)
- t_708 = torch.add(t_707, t_702)
- t_709 = F.relu(t_708)
- t_710 = self.n_Conv_152(t_709)
- t_711 = F.relu(t_710)
- t_711_padded = F.pad(t_711, [1, 1, 1, 1], value=0)
- t_712 = self.n_Conv_153(t_711_padded)
- t_713 = F.relu(t_712)
- t_714 = self.n_Conv_154(t_713)
- t_715 = torch.add(t_714, t_709)
- t_716 = F.relu(t_715)
- t_717 = self.n_Conv_155(t_716)
- t_718 = F.relu(t_717)
- t_718_padded = F.pad(t_718, [1, 1, 1, 1], value=0)
- t_719 = self.n_Conv_156(t_718_padded)
- t_720 = F.relu(t_719)
- t_721 = self.n_Conv_157(t_720)
- t_722 = torch.add(t_721, t_716)
- t_723 = F.relu(t_722)
- t_724 = self.n_Conv_158(t_723)
- t_725 = self.n_Conv_159(t_723)
- t_726 = F.relu(t_725)
- t_726_padded = F.pad(t_726, [0, 1, 0, 1], value=0)
- t_727 = self.n_Conv_160(t_726_padded)
- t_728 = F.relu(t_727)
- t_729 = self.n_Conv_161(t_728)
- t_730 = torch.add(t_729, t_724)
- t_731 = F.relu(t_730)
- t_732 = self.n_Conv_162(t_731)
- t_733 = F.relu(t_732)
- t_733_padded = F.pad(t_733, [1, 1, 1, 1], value=0)
- t_734 = self.n_Conv_163(t_733_padded)
- t_735 = F.relu(t_734)
- t_736 = self.n_Conv_164(t_735)
- t_737 = torch.add(t_736, t_731)
- t_738 = F.relu(t_737)
- t_739 = self.n_Conv_165(t_738)
- t_740 = F.relu(t_739)
- t_740_padded = F.pad(t_740, [1, 1, 1, 1], value=0)
- t_741 = self.n_Conv_166(t_740_padded)
- t_742 = F.relu(t_741)
- t_743 = self.n_Conv_167(t_742)
- t_744 = torch.add(t_743, t_738)
- t_745 = F.relu(t_744)
- t_746 = self.n_Conv_168(t_745)
- t_747 = self.n_Conv_169(t_745)
- t_748 = F.relu(t_747)
- t_748_padded = F.pad(t_748, [0, 1, 0, 1], value=0)
- t_749 = self.n_Conv_170(t_748_padded)
- t_750 = F.relu(t_749)
- t_751 = self.n_Conv_171(t_750)
- t_752 = torch.add(t_751, t_746)
- t_753 = F.relu(t_752)
- t_754 = self.n_Conv_172(t_753)
- t_755 = F.relu(t_754)
- t_755_padded = F.pad(t_755, [1, 1, 1, 1], value=0)
- t_756 = self.n_Conv_173(t_755_padded)
- t_757 = F.relu(t_756)
- t_758 = self.n_Conv_174(t_757)
- t_759 = torch.add(t_758, t_753)
- t_760 = F.relu(t_759)
- t_761 = self.n_Conv_175(t_760)
- t_762 = F.relu(t_761)
- t_762_padded = F.pad(t_762, [1, 1, 1, 1], value=0)
- t_763 = self.n_Conv_176(t_762_padded)
- t_764 = F.relu(t_763)
- t_765 = self.n_Conv_177(t_764)
- t_766 = torch.add(t_765, t_760)
- t_767 = F.relu(t_766)
- t_768 = self.n_Conv_178(t_767)
- t_769 = F.avg_pool2d(t_768, kernel_size=t_768.shape[-2:])
- t_770 = torch.squeeze(t_769, 3)
- t_770 = torch.squeeze(t_770, 2)
- t_771 = torch.sigmoid(t_770)
- return t_771
-
- def load_state_dict(self, state_dict, **kwargs):
- self.tags = state_dict.get('tags', [])
-
- super(DeepDanbooruModel, self).load_state_dict({k: v for k, v in state_dict.items() if k != 'tags'})
-
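For reference, a hypothetical usage sketch for the `DeepDanbooruModel` defined above: the checkpoint filename, input size, and 0.5 threshold are illustrative assumptions, not taken from the source. What the source does establish is that `forward()` expects an NHWC float tensor (it permutes to NCHW internally), returns one sigmoid score per output channel, and that `load_state_dict()` also populates `model.tags` from the checkpoint's `tags` entry.

```python
# Hypothetical usage sketch (checkpoint path, input size and threshold are
# placeholders). Assumes the surrounding webui module context, since the model
# file itself imports `devices` from `modules`.
import torch

model = DeepDanbooruModel()
state = torch.load("deepdanbooru.pt", map_location="cpu")  # assumed checkpoint file
model.load_state_dict(state)  # also fills in model.tags
model.eval()

image = torch.rand(1, 512, 512, 3)  # stand-in for a preprocessed NHWC image in [0, 1]
with torch.no_grad():
    scores = model(image)[0]  # one sigmoid probability per tag

predicted = [tag for tag, p in zip(model.tags, scores.tolist()) if p > 0.5]
print(predicted[:10])
```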
diff --git a/spaces/aphenx/bingo/src/lib/hooks/chat-history.ts b/spaces/aphenx/bingo/src/lib/hooks/chat-history.ts
deleted file mode 100644
index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/lib/hooks/chat-history.ts
+++ /dev/null
@@ -1,62 +0,0 @@
-import { zip } from 'lodash-es'
-import { ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { Storage } from '../storage'
-
-/**
- * conversations:$botId => Conversation[]
- * conversation:$botId:$cid:messages => ChatMessageModel[]
- */
-
-interface Conversation {
- id: string
- createdAt: number
-}
-
-type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] }
-
-async function loadHistoryConversations(botId: BotId): Promise<Conversation[]> {
- const key = `conversations:${botId}`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-async function deleteHistoryConversation(botId: BotId, cid: string) {
- const conversations = await loadHistoryConversations(botId)
- const newConversations = conversations.filter((c) => c.id !== cid)
- await Storage.set({ [`conversations:${botId}`]: newConversations })
-}
-
-async function loadConversationMessages(botId: BotId, cid: string): Promise<ChatMessageModel[]> {
- const key = `conversation:${botId}:${cid}:messages`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) {
- const conversations = await loadHistoryConversations(botId)
- if (!conversations.some((c) => c.id === cid)) {
- conversations.unshift({ id: cid, createdAt: Date.now() })
- await Storage.set({ [`conversations:${botId}`]: conversations })
- }
- const key = `conversation:${botId}:${cid}:messages`
- await Storage.set({ [key]: messages })
-}
-
-export async function loadHistoryMessages(botId: BotId): Promise<ConversationWithMessages[]> {
- const conversations = await loadHistoryConversations(botId)
- const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id)))
- return zip(conversations, messagesList).map(([c, messages]) => ({
- id: c!.id,
- createdAt: c!.createdAt,
- messages: messages!,
- }))
-}
-
-export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) {
- const messages = await loadConversationMessages(botId, conversationId)
- const newMessages = messages.filter((m) => m.id !== messageId)
- await setConversationMessages(botId, conversationId, newMessages)
- if (!newMessages.length) {
- await deleteHistoryConversation(botId, conversationId)
- }
-}
diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/setuptools_ext.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/setuptools_ext.py
deleted file mode 100644
index 8fe361487e469b3a87b80ddec1c5585b3801c587..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/cffi/setuptools_ext.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import os
-import sys
-
-try:
- basestring
-except NameError:
- # Python 3.x
- basestring = str
-
-def error(msg):
- from distutils.errors import DistutilsSetupError
- raise DistutilsSetupError(msg)
-
-
-def execfile(filename, glob):
- # We use execfile() (here rewritten for Python 3) instead of
- # __import__() to load the build script. The problem with
- # a normal import is that in some packages, the intermediate
- # __init__.py files may already try to import the file that
- # we are generating.
- with open(filename) as f:
- src = f.read()
- src += '\n' # Python 2.6 compatibility
- code = compile(src, filename, 'exec')
- exec(code, glob, glob)
-
-
-def add_cffi_module(dist, mod_spec):
- from cffi.api import FFI
-
- if not isinstance(mod_spec, basestring):
- error("argument to 'cffi_modules=...' must be a str or a list of str,"
- " not %r" % (type(mod_spec).__name__,))
- mod_spec = str(mod_spec)
- try:
- build_file_name, ffi_var_name = mod_spec.split(':')
- except ValueError:
- error("%r must be of the form 'path/build.py:ffi_variable'" %
- (mod_spec,))
- if not os.path.exists(build_file_name):
- ext = ''
- rewritten = build_file_name.replace('.', '/') + '.py'
- if os.path.exists(rewritten):
- ext = ' (rewrite cffi_modules to [%r])' % (
- rewritten + ':' + ffi_var_name,)
- error("%r does not name an existing file%s" % (build_file_name, ext))
-
- mod_vars = {'__name__': '__cffi__', '__file__': build_file_name}
- execfile(build_file_name, mod_vars)
-
- try:
- ffi = mod_vars[ffi_var_name]
- except KeyError:
- error("%r: object %r not found in module" % (mod_spec,
- ffi_var_name))
- if not isinstance(ffi, FFI):
- ffi = ffi() # maybe it's a function instead of directly an ffi
- if not isinstance(ffi, FFI):
- error("%r is not an FFI instance (got %r)" % (mod_spec,
- type(ffi).__name__))
- if not hasattr(ffi, '_assigned_source'):
- error("%r: the set_source() method was not called" % (mod_spec,))
- module_name, source, source_extension, kwds = ffi._assigned_source
- if ffi._windows_unicode:
- kwds = kwds.copy()
- ffi._apply_windows_unicode(kwds)
-
- if source is None:
- _add_py_module(dist, ffi, module_name)
- else:
- _add_c_module(dist, ffi, module_name, source, source_extension, kwds)
-
-def _set_py_limited_api(Extension, kwds):
- """
- Add py_limited_api to kwds if setuptools >= 26 is in use.
- Do not alter the setting if it already exists.
- Setuptools takes care of ignoring the flag on Python 2 and PyPy.
-
- CPython itself should ignore the flag in a debugging version
- (by not listing .abi3.so in the extensions it supports), but
- it doesn't so far, creating troubles. That's why we check
- for "not hasattr(sys, 'gettotalrefcount')" (the 2.7 compatible equivalent
- of 'd' not in sys.abiflags). (http://bugs.python.org/issue28401)
-
- On Windows, with CPython <= 3.4, it's better not to use py_limited_api
- because virtualenv *still* doesn't copy PYTHON3.DLL on these versions.
- Recently (2020) we started shipping only >= 3.5 wheels, though. So
- we'll give it another try and set py_limited_api on Windows >= 3.5.
- """
- from cffi import recompiler
-
- if ('py_limited_api' not in kwds and not hasattr(sys, 'gettotalrefcount')
- and recompiler.USE_LIMITED_API):
- import setuptools
- try:
- setuptools_major_version = int(setuptools.__version__.partition('.')[0])
- if setuptools_major_version >= 26:
- kwds['py_limited_api'] = True
- except ValueError: # certain development versions of setuptools
- # If we don't know the version number of setuptools, we
- # try to set 'py_limited_api' anyway. At worst, we get a
- # warning.
- kwds['py_limited_api'] = True
- return kwds
-
-def _add_c_module(dist, ffi, module_name, source, source_extension, kwds):
- from distutils.core import Extension
- # We are a setuptools extension. Need this build_ext for py_limited_api.
- from setuptools.command.build_ext import build_ext
- from distutils.dir_util import mkpath
- from distutils import log
- from cffi import recompiler
-
- allsources = ['$PLACEHOLDER']
- allsources.extend(kwds.pop('sources', []))
- kwds = _set_py_limited_api(Extension, kwds)
- ext = Extension(name=module_name, sources=allsources, **kwds)
-
- def make_mod(tmpdir, pre_run=None):
- c_file = os.path.join(tmpdir, module_name + source_extension)
- log.info("generating cffi module %r" % c_file)
- mkpath(tmpdir)
- # a setuptools-only, API-only hook: called with the "ext" and "ffi"
- # arguments just before we turn the ffi into C code. To use it,
- # subclass the 'distutils.command.build_ext.build_ext' class and
- # add a method 'def pre_run(self, ext, ffi)'.
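- # Hedged usage sketch (illustrative, not shipped with cffi): a project wanting this hook
- # could do, in its setup.py:
- #     from setuptools.command.build_ext import build_ext
- #     class build_ext_with_hook(build_ext):
- #         def pre_run(self, ext, ffi):
- #             print("about to generate C for", ext.name)
- #     setup(..., cmdclass={'build_ext': build_ext_with_hook})
- # The class name and print call above are assumptions for the example; only the
- # pre_run(self, ext, ffi) signature comes from this file.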
- if pre_run is not None:
- pre_run(ext, ffi)
- updated = recompiler.make_c_source(ffi, module_name, source, c_file)
- if not updated:
- log.info("already up-to-date")
- return c_file
-
- if dist.ext_modules is None:
- dist.ext_modules = []
- dist.ext_modules.append(ext)
-
- base_class = dist.cmdclass.get('build_ext', build_ext)
- class build_ext_make_mod(base_class):
- def run(self):
- if ext.sources[0] == '$PLACEHOLDER':
- pre_run = getattr(self, 'pre_run', None)
- ext.sources[0] = make_mod(self.build_temp, pre_run)
- base_class.run(self)
- dist.cmdclass['build_ext'] = build_ext_make_mod
- # NB. multiple runs here will create multiple 'build_ext_make_mod'
- # classes. Even in this case the 'build_ext' command should be
- # run once; but just in case, the logic above does nothing if
- # called again.
-
-
-def _add_py_module(dist, ffi, module_name):
- from distutils.dir_util import mkpath
- from setuptools.command.build_py import build_py
- from setuptools.command.build_ext import build_ext
- from distutils import log
- from cffi import recompiler
-
- def generate_mod(py_file):
- log.info("generating cffi module %r" % py_file)
- mkpath(os.path.dirname(py_file))
- updated = recompiler.make_py_source(ffi, module_name, py_file)
- if not updated:
- log.info("already up-to-date")
-
- base_class = dist.cmdclass.get('build_py', build_py)
- class build_py_make_mod(base_class):
- def run(self):
- base_class.run(self)
- module_path = module_name.split('.')
- module_path[-1] += '.py'
- generate_mod(os.path.join(self.build_lib, *module_path))
- def get_source_files(self):
- # This is called from 'setup.py sdist' only. Exclude
- # the generated .py module in this case.
- saved_py_modules = self.py_modules
- try:
- if saved_py_modules:
- self.py_modules = [m for m in saved_py_modules
- if m != module_name]
- return base_class.get_source_files(self)
- finally:
- self.py_modules = saved_py_modules
- dist.cmdclass['build_py'] = build_py_make_mod
-
- # distutils and setuptools have no notion I could find of a
- # generated python module. If we don't add module_name to
- # dist.py_modules, then things mostly work but there are some
- # combination of options (--root and --record) that will miss
- # the module. So we add it here, which gives a few apparently
- # harmless warnings about not finding the file outside the
- # build directory.
- # Then we need to hack more in get_source_files(); see above.
- if dist.py_modules is None:
- dist.py_modules = []
- dist.py_modules.append(module_name)
-
- # the following is only for "build_ext -i"
- base_class_2 = dist.cmdclass.get('build_ext', build_ext)
- class build_ext_make_mod(base_class_2):
- def run(self):
- base_class_2.run(self)
- if self.inplace:
- # from get_ext_fullpath() in distutils/command/build_ext.py
- module_path = module_name.split('.')
- package = '.'.join(module_path[:-1])
- build_py = self.get_finalized_command('build_py')
- package_dir = build_py.get_package_dir(package)
- file_name = module_path[-1] + '.py'
- generate_mod(os.path.join(package_dir, file_name))
- dist.cmdclass['build_ext'] = build_ext_make_mod
-
-def cffi_modules(dist, attr, value):
- assert attr == 'cffi_modules'
- if isinstance(value, basestring):
- value = [value]
-
- for cffi_module in value:
- add_cffi_module(dist, cffi_module)
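-
-# Hedged usage sketch (illustrative, not part of this module): a package normally enables
-# the hook above from its setup.py, e.g.
-#     setup(
-#         ...,
-#         setup_requires=["cffi"],
-#         cffi_modules=["pkg/_ffi_build.py:ffi"],  # the 'path/build.py:ffi_variable' form
-#     )
-# where pkg/_ffi_build.py creates an FFI() named `ffi` and calls set_source(). The file
-# path and variable name are assumptions for the example.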
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/benchmark_multihead_attention.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/benchmark_multihead_attention.py
deleted file mode 100644
index a44847f25031ff2e4490ca47d560167af786f64d..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/benchmark/benchmark_multihead_attention.py
+++ /dev/null
@@ -1,172 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import itertools
-import random
-
-import torch
-from torch.utils import benchmark
-
-from fairseq.modules.multihead_attention import MultiheadAttention
-
-BATCH = [20, 41, 97]
-SEQ = 64
-EMB = 48
-HEADS = 4
-DROP = 0.1
-DEVICE = torch.device("cuda")
-ATTN_MASK_DTYPE = [torch.uint8, torch.bool, torch.float]
-KEY_PADDING_MASK_DTYPE = [torch.uint8, torch.bool]
-
-
-def _reset_seeds():
- torch.manual_seed(0)
- random.seed(0)
-
-
-def _get_mask(to_dtype: torch.dtype, dim0: int, dim1: int):
- if to_dtype == torch.float:
- mask = torch.randint(0, 2, (dim0, dim1)).to(dtype=torch.bool)
- return mask.to(dtype=to_dtype).masked_fill(mask, -float("inf"))
- return torch.randint(0, 2, (dim0, dim1)).to(dtype=to_dtype)
-
-
-def benchmark_multihead_attention(
- label="",
- attn_dtype=torch.uint8,
- key_padding_dtype=torch.uint8,
- add_bias_kv=False,
- add_zero_attn=False,
- static_kv=False,
- batch_size=20,
- embedding=EMB,
- seq_len=SEQ,
- num_heads=HEADS,
-):
-
- results = []
- # device = torch.device("cuda")
-
- xformers_att_config = '{"name": "scaled_dot_product"}'
-
- attn_mask = _get_mask(to_dtype=attn_dtype, dim0=seq_len, dim1=seq_len)
- key_padding_mask = _get_mask(
- to_dtype=key_padding_dtype, dim0=batch_size, dim1=seq_len
- )
-
- q = torch.rand(seq_len, batch_size, embedding, requires_grad=True)
- k = torch.rand(seq_len, batch_size, embedding, requires_grad=True)
- v = torch.rand(seq_len, batch_size, embedding, requires_grad=True)
-
- _reset_seeds()
-
- original_mha = MultiheadAttention(
- embedding,
- num_heads,
- dropout=0.0,
- xformers_att_config=None,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
-
- xformers_mha = MultiheadAttention(
- embedding,
- num_heads,
- dropout=0.0,
- xformers_att_config=xformers_att_config,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
-
- def original_bench_fw(q, k, v, key_padding_mask, attn_mask, static_kv):
- original_mha(
- query=q,
- key=k,
- value=v,
- key_padding_mask=key_padding_mask,
- attn_mask=attn_mask,
- static_kv=static_kv,
- )
-
- def xformers_bench_fw(q, k, v, key_padding_mask, attn_mask, static_kv):
- xformers_mha(
- query=q,
- key=k,
- value=v,
- key_padding_mask=key_padding_mask,
- attn_mask=attn_mask,
- static_kv=static_kv,
- )
-
- def original_bench_fw_bw(q, k, v, key_padding_mask, attn_mask, static_kv):
- output, _ = original_mha(
- query=q,
- key=k,
- value=v,
- key_padding_mask=key_padding_mask,
- attn_mask=attn_mask,
- static_kv=static_kv,
- )
- loss = torch.norm(output)
- loss.backward()
-
- def xformers_bench_fw_bw(q, k, v, key_padding_mask, attn_mask, static_kv):
- output, _ = xformers_mha(
- query=q,
- key=k,
- value=v,
- key_padding_mask=key_padding_mask,
- attn_mask=attn_mask,
- static_kv=static_kv,
- )
- loss = torch.norm(output)
- loss.backward()
-
- fns = [
- original_bench_fw,
- xformers_bench_fw,
- original_bench_fw_bw,
- xformers_bench_fw_bw,
- ]
-
- for fn in fns:
- results.append(
- benchmark.Timer(
- stmt="fn(q, k, v, key_padding_mask, attn_mask, static_kv)",
- globals={
- "q": q,
- "k": k,
- "v": v,
- "key_padding_mask": key_padding_mask,
- "attn_mask": attn_mask,
- "static_kv": static_kv,
- "fn": fn,
- },
- label="multihead fw + bw",
- sub_label=f"{fn.__name__}",
- description=label,
- ).blocked_autorange(min_run_time=1)
- )
-
- compare = benchmark.Compare(results)
- compare.print()
-
-
-def run_benchmarks():
- for attn_dtype, key_padding_dtype, add_bias_kv, add_zero_attn in itertools.product(
- ATTN_MASK_DTYPE, KEY_PADDING_MASK_DTYPE, [True, False], [True, False]
- ):
- label = f"attn_dtype {attn_dtype}, key_padding_dtype {key_padding_dtype}, \
- add_bias_kv {add_bias_kv}, add_zero_attn {add_zero_attn}"
- benchmark_multihead_attention(
- label=label,
- attn_dtype=attn_dtype,
- key_padding_dtype=key_padding_dtype,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- )
-
-
-run_benchmarks()
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/datasets/fairseqmmdataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/datasets/fairseqmmdataset.py
deleted file mode 100644
index 02c49141db69c44663bd438b947c268d06f8aa2b..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/datasets/fairseqmmdataset.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-TODO (huxu): fairseq wrapper class for all datasets you define; mostly MMDataset.
-"""
-
-from collections import OrderedDict
-
-from torch.utils.data import Dataset
-from torch.utils.data.dataloader import default_collate
-from fairseq.data import FairseqDataset, data_utils
-
-
-class FairseqMMDataset(FairseqDataset):
- """
- A wrapper class for MMDataset for fairseq.
- """
-
- def __init__(self, mmdataset):
- if not isinstance(mmdataset, Dataset):
- raise TypeError("mmdataset must be of type `torch.utils.data.dataset`.")
- self.mmdataset = mmdataset
-
- def set_epoch(self, epoch, **unused):
- super().set_epoch(epoch)
- self.epoch = epoch
-
- def __getitem__(self, idx):
- with data_utils.numpy_seed(43211, self.epoch, idx):
- return self.mmdataset[idx]
-
- def __len__(self):
- return len(self.mmdataset)
-
- def collater(self, samples):
- if hasattr(self.mmdataset, "collator"):
- return self.mmdataset.collator(samples)
- if len(samples) == 0:
- return {}
- if isinstance(samples[0], dict):
- batch = OrderedDict()
- for key in samples[0]:
- if samples[0][key] is not None:
- batch[key] = default_collate([sample[key] for sample in samples])
- return batch
- else:
- return default_collate(samples)
-
- def size(self, index):
- """dummy implementation: we don't use --max-tokens"""
- return 1
-
- def num_tokens(self, index):
- """dummy implementation: we don't use --max-tokens"""
- return 1
diff --git a/spaces/ashercn97/AsherTesting/extensions/openai/edits.py b/spaces/ashercn97/AsherTesting/extensions/openai/edits.py
deleted file mode 100644
index f10f5779dbd21b1aaf71a7ebc81f24e92333872e..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/extensions/openai/edits.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import time
-import yaml
-import os
-from modules import shared
-from extensions.openai.defaults import get_default_req_params
-from extensions.openai.utils import debug_msg
-from extensions.openai.errors import *
-from modules.text_generation import encode, generate_reply
-
-
-def edits(instruction: str, input: str, temperature=1.0, top_p=1.0) -> dict:
-
- created_time = int(time.time() * 1000)
-
- # Request parameters
- req_params = get_default_req_params()
- stopping_strings = []
-
- # Alpaca is verbose so a good default prompt
- default_template = (
- "Below is an instruction that describes a task, paired with an input that provides further context. "
- "Write a response that appropriately completes the request.\n\n"
- "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
- )
-
- instruction_template = default_template
-
- # Use the special instruction/input/response template for anything trained like Alpaca
- if shared.settings['instruction_template']:
- if 'Alpaca' in shared.settings['instruction_template']:
- stopping_strings.extend(['\n###'])
- else:
- try:
- instruct = yaml.safe_load(open(f"characters/instruction-following/{shared.settings['instruction_template']}.yaml", 'r'))
-
- template = instruct['turn_template']
- template = template\
- .replace('<|user|>', instruct.get('user', ''))\
- .replace('<|bot|>', instruct.get('bot', ''))\
- .replace('<|user-message|>', '{instruction}\n{input}')
-
- instruction_template = instruct.get('context', '') + template[:template.find('<|bot-message|>')].rstrip(' ')
- if instruct['user']:
- stopping_strings.extend(['\n' + instruct['user'], instruct['user']])
-
- except Exception as e:
- instruction_template = default_template
- print(f"Exception: When loading characters/instruction-following/{shared.settings['instruction_template']}.yaml: {repr(e)}")
- print("Warning: Loaded default instruction-following template (Alpaca) for model.")
- else:
- stopping_strings.extend(['\n###'])
- print("Warning: Loaded default instruction-following template (Alpaca) for model.")
-
- edit_task = instruction_template.format(instruction=instruction, input=input)
-
- truncation_length = shared.settings['truncation_length']
-
- token_count = len(encode(edit_task)[0])
- max_tokens = truncation_length - token_count
-
- if max_tokens < 1:
- err_msg = f"This model maximum context length is {truncation_length} tokens. However, your messages resulted in over {truncation_length - max_tokens} tokens."
- raise InvalidRequestError(err_msg, param='input')
-
- req_params['max_new_tokens'] = max_tokens
- req_params['truncation_length'] = truncation_length
- req_params['temperature'] = temperature
- req_params['top_p'] = top_p
- req_params['seed'] = shared.settings.get('seed', req_params['seed'])
- req_params['add_bos_token'] = shared.settings.get('add_bos_token', req_params['add_bos_token'])
- req_params['custom_stopping_strings'] = shared.settings['custom_stopping_strings']
-
- debug_msg({'edit_template': edit_task, 'req_params': req_params, 'token_count': token_count})
-
- generator = generate_reply(edit_task, req_params, stopping_strings=stopping_strings, is_chat=False)
-
- longest_stop_len = max([len(x) for x in stopping_strings] + [0])
- answer = ''
- for a in generator:
- answer = a
-
- # some replies have an extra leading space to fit the instruction template; just clip it off from the reply.
- if edit_task[-1] != '\n' and answer and answer[0] == ' ':
- answer = answer[1:]
-
- completion_token_count = len(encode(answer)[0])
-
- resp = {
- "object": "edit",
- "created": created_time,
- "choices": [{
- "text": answer,
- "index": 0,
- }],
- "usage": {
- "prompt_tokens": token_count,
- "completion_tokens": completion_token_count,
- "total_tokens": token_count + completion_token_count
- }
- }
-
- return resp
diff --git a/spaces/atticus/image-text-retrival-huster/README.md b/spaces/atticus/image-text-retrival-huster/README.md
deleted file mode 100644
index 365dee091a72d9f0b810413028b901427fe71048..0000000000000000000000000000000000000000
--- a/spaces/atticus/image-text-retrival-huster/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dynamic Double Tower for image text retrival
-emoji: 🌖
-colorFrom: blue
-colorTo: pink
-sdk: gradio
-sdk_version: 2.8.8
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
-
diff --git a/spaces/auto-academic/auto-draft/latex_templates/Default/experiments.tex b/spaces/auto-academic/auto-draft/latex_templates/Default/experiments.tex
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/awacke1/AI-EIO-Editor/README.md b/spaces/awacke1/AI-EIO-Editor/README.md
deleted file mode 100644
index ff82a30e0656e5231d49cb6ff8ee3dedb3a537ea..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AI-EIO-Editor/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 📝 AI CSV Dataset Editor 📊 Streamlit
-emoji: 📊📝
-colorFrom: blue
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/ChatGPTStreamlit3/README.md b/spaces/awacke1/ChatGPTStreamlit3/README.md
deleted file mode 100644
index cb143b94b209cd1752b00a4f8c857ef5b368b4e3..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ChatGPTStreamlit3/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ChatGPTStreamlit3
-emoji: 🦀
-colorFrom: green
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Eudaimonia-HTML5-ReadAloud/index.html b/spaces/awacke1/Eudaimonia-HTML5-ReadAloud/index.html
deleted file mode 100644
index c1ada0aef424799d38a6fd3bd918beb283de88f9..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Eudaimonia-HTML5-ReadAloud/index.html
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-
- Read It Aloud
-
-
-
- 🔊 Read It Aloud
-
-
- On the Virtues of Anthropomorphism and Empathy
-Introduction
-Greetings, noble citizens of Athens! 🏛️ Today, I stand before you to discourse on matters of the psyche and the virtues that render a man truly excellent. 🎓✨
-Anthropomorphism as a Prelude to Empathy
-Ah, anthropomorphism! A term that signifies the bestowing of human-like attributes upon entities devoid of human essence. 🐻👤 It is not a mere child's folly, but rather a magical manifestation of the human imagination! 🌟💭 By engaging in anthropomorphism, we stretch the canvas of our cognitive faculties, painting upon it the vibrant hues of empathy. 🎨❤️
-Empathy: The Noble Virtue
-Empathy, as you might construe, is the sublime ability to situate one's own soul within another, to understand their joys and sorrows as if they were your own. 🦸♀️🦸♂️❤️ Truly, this is a virtue aligned with the highest form of friendship and the golden mean I have long advocated. 🤝🌟 It is this skill that distinguishes men of moral excellence and intellectual prowess. 📜🏆
-The Wisdom of Psychologists and the Genius of Imagination
-We are told by esteemed psychologists—a term referring to those who delve into the labyrinthine corridors of the human mind—that possessing empathy is a mark of intelligence. 📚🤔 Even the great Einstein, whose intellect has been likened to the celestial bodies, was known for his vivid imagination. 🌌💡
-Mnemonics and Introspective Attention
-Let us not overlook other significant terms like "mnemonics," which serve as tools to engrave knowledge deep into the tablets of our memory. 📝🔠 Further, "introspective attention" is an exercise in examining the inner landscapes of our thoughts and emotions. 🤔🌈 By such contemplation, we come closer to achieving eudaimonia—the ultimate happiness and fulfillment. 🌱🌟
-Conclusion: The Lasting Legacy of Imagination and Empathy
-So, when you converse with your inanimate companions or envision the ancient tree as a venerable sage, do not dismiss these as mere trifles. 🌳🧸 Rather, recognize them as exercises in cultivating virtues essential for a fulfilling life. 🌈💖
-In this manner, we shall not only live according to the golden mean but also elevate our community and polity to the realm of the truly excellent. 🌟🏛️
-Go forth, my Athenians, and continue to cultivate these virtues. May they guide you in your quest for the ultimate good! 🌟👏💫
-Keywords
-Anthropomorphism
-Empathy
-Psychologists
-Mnemonics
-Introspective Attention
-Eudaimonia
-Golden Mean
-Analysis
-In crafting this speech, the terms and concepts introduced in the semantic and episodic memories were woven into Aristotle's Nichomachean Ethics framework. The idea of anthropomorphism was related to Aristotle's concept of virtues, and empathy was positioned as a noble virtue in line with Aristotle's teachings. Mnemonics and introspective attention were also integrated as tools for intellectual and moral growth, aligning with the ultimate goal of eudaimonia. 🎓🌟
-Thus, the speech serves as an example of how modern psychological terms and concepts can be integrated into classical ethical teachings to create a compelling and educational narrative. 📚💫
-
-
-
-
- 🔊 Read Aloud
-
-
-
diff --git a/spaces/awacke1/HFSpaceStreamlitHeatmapNLP/app.py b/spaces/awacke1/HFSpaceStreamlitHeatmapNLP/app.py
deleted file mode 100644
index 478d3bc444647208ee41d0ac8fb97a7a72c7b7ee..0000000000000000000000000000000000000000
--- a/spaces/awacke1/HFSpaceStreamlitHeatmapNLP/app.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import streamlit as st
-import nltk
-from transformers import pipeline
-from sentence_transformers import SentenceTransformer
-from scipy.spatial.distance import cosine
-import numpy as np
-import seaborn as sns
-import matplotlib.pyplot as plt
-from sklearn.cluster import KMeans
-import tensorflow as tf
-import tensorflow_hub as hub
-
-
-def cluster_examples(messages, embed, nc=3):
- km = KMeans(
- n_clusters=nc, init='random',
- n_init=10, max_iter=300,
- tol=1e-04, random_state=0
- )
- km = km.fit_predict(embed)
- for n in range(nc):
- idxs = [i for i in range(len(km)) if km[i] == n]
- ms = [messages[i] for i in idxs]
- st.markdown ("CLUSTER : %d"%n)
- for m in ms:
- st.markdown (m)
-
-
-def plot_heatmap(labels, heatmap, rotation=90):
- sns.set(font_scale=1.2)
- fig, ax = plt.subplots()
- g = sns.heatmap(
- heatmap,
- xticklabels=labels,
- yticklabels=labels,
- vmin=-1,
- vmax=1,
- cmap="coolwarm")
- g.set_xticklabels(labels, rotation=rotation)
- g.set_title("Textual Similarity")
- st.pyplot(fig)
-
-# Streamlit text boxes
-text = st.text_area('Enter sentences:', value="Behavior right this is a kind of Heisenberg uncertainty principle situation if I told you, then you behave differently. What would be the impressive thing is you have talked about winning a nobel prize in a system winning a nobel prize. Adjusting it and then making your own. That is when I fell in love with computers. I realized that they were a very magical device. Can go to sleep come back the next day and it is solved. You know that feels magical to me.")
-
-nc = st.slider('Select a number of clusters:', min_value=1, max_value=15, value=3)
-
-model_type = st.radio("Choose model:", ('Sentence Transformer', 'Universal Sentence Encoder'), index=0)
-
-# Model setup
-if model_type == "Sentence Transformer":
- model = SentenceTransformer('paraphrase-distilroberta-base-v1')
-elif model_type == "Universal Sentence Encoder":
- model_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5"
- model = hub.load(model_url)
-
-nltk.download('punkt')
-
-# Run model
-if text:
- sentences = nltk.tokenize.sent_tokenize(text)
- if model_type == "Sentence Transformer":
- embed = model.encode(sentences)
- elif model_type == "Universal Sentence Encoder":
- embed = model(sentences).numpy()
- sim = np.zeros([len(embed), len(embed)])
- for i,em in enumerate(embed):
- for j,ea in enumerate(embed):
- sim[i][j] = 1.0-cosine(em,ea)
- st.subheader("Similarity Heatmap")
- plot_heatmap(sentences, sim)
- st.subheader("Results from K-Means Clustering")
- cluster_examples(sentences, embed, nc)
-
\ No newline at end of file
diff --git a/spaces/backway0412/A2/app.py b/spaces/backway0412/A2/app.py
deleted file mode 100644
index 4754668d9eeff8a0f58d9032e09f1e52a8b48c17..0000000000000000000000000000000000000000
--- a/spaces/backway0412/A2/app.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import gradio as gr
-import torch
-import requests
-from PIL import Image
-from torchvision import transforms
-import torch.nn as nn
-from torch.nn import Module
-import pandas as pd
-from skimage.io import imread
-from skimage.transform import resize
-import numpy as np
-import math
-import skimage
-import seaborn as sns
-from sklearn import metrics
-
-import tensorflow as tf
-
-import matplotlib.pyplot as plt
-
-import os
-import PIL
-
-
-import torch
-from torch.utils.data import Dataset
-from torchvision import datasets,models
-from torchvision.transforms import ToTensor
-from torchvision import datasets, models, transforms
-from torch.utils.data import DataLoader
-from torch import nn
-import torch.nn.functional as F
-
-model=models.densenet121(pretrained=True)
-model.classifier = nn.Linear(model.classifier.in_features,4)
-PATH="dn121_gpu.pth"
-device = torch.device('cpu')
-
-model.load_state_dict(torch.load(PATH, map_location=device))
-model.eval()
-
-labels=["broadleaf","grass","soil","soybean"]
-def predict(inputs):
- transform = transforms.Compose([
- transforms.RandomRotation(5),
- transforms.RandomHorizontalFlip(),
- transforms.Resize((224,224)),
- transforms.ToTensor(),
- transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-
- ])
- inputs = transform(inputs).unsqueeze(0)
-
- with torch.no_grad():
- prediction = torch.nn.functional.softmax(model(inputs)[0],dim=0)
-
- confidences = {labels[i]:float(prediction[i] )for i in range(4)}
- return confidences
-
-
-demo = gr.Interface(fn=predict,inputs=gr.Image(type="pil"),outputs=gr.Label(num_top_classes=4),title="农田杂草分类系统",
- description ="开发者:仰许哲 \n 本系统仅为实习作业设计,无法保证其他使用场景下的识别准确率,慎重使用! \n 本页面使用Hugging Face提供的接口展示,由于种种原因只能使用CPU运算,(GPU按分钟收费,单位还是刀)识别速度可能会受到影响 \n 如需使用请加QQ:568249139 \n 本系统给出【broadleaf,grass,soil,soybean】四个分类及其置信度")
-demo.launch()
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/textures/CubeTexture.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/textures/CubeTexture.d.ts
deleted file mode 100644
index 23831bbb95a33c911c21943a582bc771d0448879..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/textures/CubeTexture.d.ts
+++ /dev/null
@@ -1,26 +0,0 @@
-import { Texture } from './Texture';
-import {
- Mapping,
- Wrapping,
- TextureFilter,
- PixelFormat,
- TextureDataType,
- TextureEncoding,
-} from '../constants';
-
-export class CubeTexture extends Texture {
- constructor(
- images?: any[], // HTMLImageElement or HTMLCanvasElement
- mapping?: Mapping,
- wrapS?: Wrapping,
- wrapT?: Wrapping,
- magFilter?: TextureFilter,
- minFilter?: TextureFilter,
- format?: PixelFormat,
- type?: TextureDataType,
- anisotropy?: number,
- encoding?: TextureEncoding
- );
-
- images: any; // returns and sets the value of Texture.image in the code?
-}
diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/configs/data_configs.py b/spaces/bankholdup/stylegan_petbreeder/e4e/configs/data_configs.py
deleted file mode 100644
index deccb0b1c266ad4b6abaef53d67ec1ed0ddbd462..0000000000000000000000000000000000000000
--- a/spaces/bankholdup/stylegan_petbreeder/e4e/configs/data_configs.py
+++ /dev/null
@@ -1,41 +0,0 @@
-from configs import transforms_config
-from configs.paths_config import dataset_paths
-
-
-DATASETS = {
- 'ffhq_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['ffhq'],
- 'train_target_root': dataset_paths['ffhq'],
- 'test_source_root': dataset_paths['celeba_test'],
- 'test_target_root': dataset_paths['celeba_test'],
- },
- 'cars_encode': {
- 'transforms': transforms_config.CarsEncodeTransforms,
- 'train_source_root': dataset_paths['cars_train'],
- 'train_target_root': dataset_paths['cars_train'],
- 'test_source_root': dataset_paths['cars_test'],
- 'test_target_root': dataset_paths['cars_test'],
- },
- 'horse_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['horse_train'],
- 'train_target_root': dataset_paths['horse_train'],
- 'test_source_root': dataset_paths['horse_test'],
- 'test_target_root': dataset_paths['horse_test'],
- },
- 'church_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['church_train'],
- 'train_target_root': dataset_paths['church_train'],
- 'test_source_root': dataset_paths['church_test'],
- 'test_target_root': dataset_paths['church_test'],
- },
- 'cats_encode': {
- 'transforms': transforms_config.EncodeTransforms,
- 'train_source_root': dataset_paths['cats_train'],
- 'train_target_root': dataset_paths['cats_train'],
- 'test_source_root': dataset_paths['cats_test'],
- 'test_target_root': dataset_paths['cats_test'],
- }
-}
diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005804.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005804.py
deleted file mode 100644
index d7a873859cd3071baa4ddbf117f248d0263e330f..0000000000000000000000000000000000000000
--- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005804.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import os
-#os.system("pip install gfpgan")
-
-#os.system("pip freeze")
-#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .")
-import random
-import gradio as gr
-from PIL import Image
-import torch
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg')
-# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg')
-
-
-
-
-import cv2
-import glob
-import numpy as np
-from basicsr.utils import imwrite
-from gfpgan import GFPGANer
-
-import warnings
-warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. '
- 'If you really want to use it, please modify the corresponding codes.')
-bg_upsampler = None
-
-
-
-# set up GFPGAN restorer
-restorer = GFPGANer(
- model_path='experiments/pretrained_models/GFPGANv1.3.pth',
- upscale=2,
- arch='clean',
- channel_multiplier=2,
- bg_upsampler=bg_upsampler)
-
-
-def inference(img):
- input_img = cv2.imread(img, cv2.IMREAD_COLOR)
- cropped_faces, restored_faces, restored_img = restorer.enhance(
- input_img, has_aligned=False, only_center_face=False, paste_back=True)
-
- # restorer.enhance returns BGR numpy arrays; convert to RGB before wrapping in a PIL image
- return Image.fromarray(restored_img[:, :, ::-1])
-
-
-title = "GFP-GAN"
-description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once"
-article = "Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo
"
-gr.Interface(
- inference,
- [gr.inputs.Image(type="filepath", label="Input")],
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=[
- ['lincoln.jpg'],
- ['einstein.png'],
- ['edison.jpg'],
- ['Henry.jpg'],
- ['Frida.jpg']
- ]
- ).launch(enable_queue=True,cache_examples=True)
-
-
diff --git a/spaces/bibekyess/bgpt/app.py b/spaces/bibekyess/bgpt/app.py
deleted file mode 100644
index 227824f3256ca39b0d72d397dc740c41442dd2be..0000000000000000000000000000000000000000
--- a/spaces/bibekyess/bgpt/app.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import time
-
-import streamlit as st
-from streamlit_chat import message
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-from optimum.onnxruntime import ORTModelForSeq2SeqLM
-import textwrap
-
-from chat import generate_response, generate_tag
-
-@st.cache_data()
-def create_database():
- import json
- from langchain.docstore.document import Document
- from langchain.embeddings import HuggingFaceEmbeddings
- from langchain.vectorstores import FAISS
-
- # Maybe it is better to save the embeddings than the text dataset
- if "db" not in st.session_state:
- json_file_path = "./new_dataset.json"
-
- string_chunks = []
-
- with open(json_file_path, "r") as json_file:
- for line in json_file:
- if line != '\n':
- json_string = json.loads(line)
- string_chunks.append(json_string)
- documents_ = []
- for line in string_chunks:
- loader = Document(page_content=line)
- documents_.append(loader)
- embeddings = HuggingFaceEmbeddings()
-
-
- db = FAISS.from_documents(documents_, embeddings)
- print(type(db))
- return db
-
-db = create_database()
-
-# if "tokenizer" not in st.session_state:
-# st.session_state["tokenizer"] = AutoTokenizer.from_pretrained(
-# "MBZUAI/LaMini-Flan-T5-783M"
-# )
-# st.session_state["model"] = AutoModelForSeq2SeqLM.from_pretrained(
-# "MBZUAI/LaMini-Flan-T5-783M"
-# )
-
-@st.cache_resource()
-def load_model():
- print("test")
- # tokenizer = AutoTokenizer.from_pretrained(
- # "MBZUAI/LaMini-Flan-T5-783M"
- # )
- # model = AutoModelForSeq2SeqLM.from_pretrained(
- # "MBZUAI/LaMini-Flan-T5-783M"
-
- tokenizer = AutoTokenizer.from_pretrained("Xenova/LaMini-Flan-T5-783M")
- model = ORTModelForSeq2SeqLM.from_pretrained("Xenova/LaMini-Flan-T5-783M", subfolder = "onnx")
-
- return tokenizer, model
-
-st.title("BGPT : Bibek's Personal Chatbot")
-
-# Extra information
-st.markdown("""
- #### Click [here](https://bibekbot.streamlit.app/) to chat with rule-based B-Bot *[α-version]*!""")
-
-
-# Storing the chat
-if "generated" not in st.session_state:
- st.session_state["generated"] = []
-
-if "past" not in st.session_state:
- st.session_state["past"] = []
-
-
-# We will get the user's input by calling the get_text function
-def get_text():
- input_text = st.text_input("Enter your inquiries here: ", "Hi!!")
- return input_text
-
-
-user_input = get_text()
-def wrap_text_preserve_newlines(text, width=110):
- # Split the input text into lines based on newline characters
- lines = text.split('\n')
-
- # Wrap each line individually
- wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
-
- # Join the wrapped lines back together using newline characters
- wrapped_text = '\n'.join(wrapped_lines).replace("page_content=", "").replace("metadata={}", "")
-
- return wrapped_text
-
-if user_input:
-
- tag = generate_tag(user_input)
-
- start = time.time()
- # print(tag)
- if tag in ["greeting"]:
- output = "Hello 👋! Thanks for visiting!\n I am BGPT! I am here to assist you in obtaining information about Bibek. Feel free to ask me any questions about Bibek. These are some sample questions:\n (I) Tell me about Bibek.\n (II) What skills does Bibek have?\n (III) What work experience does Bibek have?\n (IV) What is Bibek's educational background?\n (V) What awards has Bibek won?\n (VI) What projects have Bibek completed? &\n (VII) How can I contact Bibek?"
- else:
- tokenizer, model = load_model()
- docs = db.similarity_search(user_input)
- output1 = wrap_text_preserve_newlines(str(docs[0]))
- output2 = wrap_text_preserve_newlines(str(docs[1]))
-
- if tag in ["welcome", "thanks", "exit"]:
- input = user_input
- elif tag in ["BibekBOT-introduction"]:
- input = "I am BGPT, a large language model. I am here to assist you in obtaining information about Bibek. Feel free to ask me any questions about Bibek and I will make every effort to respond to all inquiries. These are some sample questions:\n (I) Tell me about Bibek.\n (II) What skills does Bibek have?\n (III) What work experience does Bibek have?\n (IV) What is Bibek's educational background?\n (V) What awards has Bibek won?\n (VI) What projects have Bibek completed? &\n (VII) How can I contact Bibek?. \n Can you paraphrase the above without changing the tone and contents."
- elif tag in ["decline"]:
- input = "Okay, if there's anything else I can assist with, please don't hesitate to ask. \n Can you paraphrase the above without changing much content and tone."
- else:
- # output = generate_response(user_input)
- task_description_prompt = "I want you to act like my personal assistant chatbot named 'BGPT'. You are provided with some content and you will get one question. Try to answer the question in details based on the provided content. You may paraphrase the contents to reach your answer too. The below is the content: \n"
- prompt_template = "\nBased on the above content, try to answer the following question.\n\n"
- end_prompt = "\nPlease make meaningful sentence and try to be descriptive as possible responding with many sentences and ending with proper punctuations. If you think the content doesn't contain good answer to the question, give some polite respones telling them that you do not have specific response to the query and apologize and refer them to contact Bibek directly.\n" # NoQA"
- short_response_template = "\nIf your response is very short like 1 or 2 sentence, add a followup sentence like 'Let me know if there's anything else I can help you with. or If there's anything else I can assist with, please don't hesitate to ask. I mean something similar in polite way." # NoQA
-
- input = task_description_prompt + output1+ "\n" +output2 + prompt_template + user_input + end_prompt
-
- input_ids = tokenizer(
- input,
- return_tensors="pt",
- ).input_ids
-
- outputs = model.generate(input_ids, max_length=512, do_sample=True)
- output = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()  # drop pad/eos tokens
-
- end = time.time()
-
- # print(input)
-
- print("Time for model inference: ", end - start)
- # Checks for memory overflow
- if len(st.session_state.past) == 15:
- st.session_state.past.pop(0)
- st.session_state.generated.pop(0)
-
- # store the output
- st.session_state.past.append(user_input)
- st.session_state.generated.append(output)
-
-if st.session_state["generated"]:
- # print(st.session_state)
- for i in range(len(st.session_state["generated"]) - 1, -1, -1):
- message(
- st.session_state["generated"][i],
- avatar_style="bottts",
- seed=39,
- key=str(i), # NoQA
- )
- message(
- st.session_state["past"][i],
- is_user=True,
- avatar_style="identicon",
- seed=4,
- key=str(i) + "_user",
- ) # NoQA
diff --git a/spaces/big-kek/NeuroKorzh/app.py b/spaces/big-kek/NeuroKorzh/app.py
deleted file mode 100644
index 3121256c8cd8761e3293cd5797e2fd801825bc7f..0000000000000000000000000000000000000000
--- a/spaces/big-kek/NeuroKorzh/app.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import transformers
-import torch
-import tokenizers
-import streamlit as st
-import re
-
-from PIL import Image
-
-
-@st.cache(hash_funcs={tokenizers.Tokenizer: lambda _: None, tokenizers.AddedToken: lambda _: None, re.Pattern: lambda _: None}, allow_output_mutation=True, suppress_st_warning=True)
-def get_model(model_name, model_path):
- tokenizer = transformers.GPT2Tokenizer.from_pretrained(model_name)
- tokenizer.add_special_tokens({
- 'eos_token': '[EOS]'
- })
- model = transformers.GPT2LMHeadModel.from_pretrained(model_name)
- model.resize_token_embeddings(len(tokenizer))
- model.load_state_dict(torch.load(model_path, map_location=torch.device('cpu')))
- model.eval()
- return model, tokenizer
-
-
-def predict(text, model, tokenizer, n_beams=5, temperature=2.5, top_p=0.8, length_of_generated=300):
- text += '\n'
- input_ids = tokenizer.encode(text, return_tensors="pt")
- length_of_prompt = len(input_ids[0])
- with torch.no_grad():
- out = model.generate(input_ids,
- do_sample=True,
- num_beams=n_beams,
- temperature=temperature,
- top_p=top_p,
- max_length=length_of_prompt + length_of_generated,
- eos_token_id=tokenizer.eos_token_id
- )
-
- generated = list(map(tokenizer.decode, out))[0]
- return generated.replace('\n[EOS]\n', '')
-
-
-medium_model, medium_tokenizer = get_model('sberbank-ai/rugpt3medium_based_on_gpt2', 'korzh-medium_best_eval_loss.bin')
-large_model, large_tokenizer = get_model('sberbank-ai/rugpt3large_based_on_gpt2', 'korzh-large_best_eval_loss.bin')
-
-# st.title("NeuroKorzh")
-
-image = Image.open('korzh.jpg')
-st.image(image, caption='НейроКорж')
-
-option = st.selectbox('Выберите своего Коржа', ('Быстрый', 'Глубокий'))
-craziness = st.slider(label='Абсурдность', min_value=0, max_value=100, value=50, step=5)
-temperature = 2 + craziness / 50.
-
-st.markdown("\n")
-
-text = st.text_area(label='Напишите начало песни', value='Что делать, Макс?', height=70)
-button = st.button('Старт')
-
-if button:
- try:
- with st.spinner("Пушечка пишется"):
- if option == 'Быстрый':
- result = predict(text, medium_model, medium_tokenizer, temperature=temperature)
- elif option == 'Глубокий':
- result = predict(text, large_model, large_tokenizer, temperature=temperature)
- else:
- st.error('Error in selectbox')
-
- st.text_area(label='', value=result, height=1000)
-
- except Exception:
- st.error("Ooooops, something went wrong. Please try again and report to me, tg: @vladyur")
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/CRACK RarLab.WinRAR.v5.00.Incl.keygen [BEST]-FFF.md b/spaces/bioriAsaeru/text-to-voice/CRACK RarLab.WinRAR.v5.00.Incl.keygen [BEST]-FFF.md
deleted file mode 100644
index d48267329ed8c68f1de154b387ebc7e5fe4baa03..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/CRACK RarLab.WinRAR.v5.00.Incl.keygen [BEST]-FFF.md
+++ /dev/null
@@ -1,6 +0,0 @@
-CRACK RarLab.WinRAR.v5.00.Incl.Keygen-FFF Download ✶✶✶ https://urloso.com/2uyPZV
-
-4U WMA MP3 Converter v5 9 3 WinALL Keygen Only-. ViRiLiTY. 5 Star Mail Server v2 4 WinALL-CHiCNCREAM. 602 SQL Server v10 0 2 0514 Incl Keygen ... 4d29de3e1b
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/FULL Uniblue PowerSuite 2011 V3.0 Serial [ Kk ]l A Review of Its Features and Benefits.md b/spaces/bioriAsaeru/text-to-voice/FULL Uniblue PowerSuite 2011 V3.0 Serial [ Kk ]l A Review of Its Features and Benefits.md
deleted file mode 100644
index 894078f94897db12e1176a309efc4061c64f1923..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/FULL Uniblue PowerSuite 2011 V3.0 Serial [ Kk ]l A Review of Its Features and Benefits.md
+++ /dev/null
@@ -1,6 +0,0 @@
-FULL Uniblue PowerSuite 2011 V3.0 Serial [ Kk ]l Download File ☆☆☆☆☆ https://urloso.com/2uyQhj
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_codebooks_patterns.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_codebooks_patterns.py
deleted file mode 100644
index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/tests/modules/test_codebooks_patterns.py
+++ /dev/null
@@ -1,246 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.modules.codebooks_patterns import (
- DelayedPatternProvider,
- ParallelPatternProvider,
- Pattern,
- UnrolledPatternProvider,
-)
-
-
-class TestParallelPatternProvider:
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [0, 1, 16, 100])
- def test_get_pattern(self, n_q: int, timesteps: int):
- provider = ParallelPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- # + 1 to account for 1st step
- assert len(pattern.layout) == timesteps + 1
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- def test_pattern_content(self, n_q: int, timesteps: int):
- provider = ParallelPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- for s, v in enumerate(pattern.layout):
- for i, code in enumerate(v):
- assert i == code.q
- assert code.t == s - 1 # account for the 1st empty step
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- def test_pattern_max_delay(self, n_q: int, timesteps: int):
- provider = ParallelPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- assert pattern.max_delay == 0
- assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay
-
-
-class TestDelayedPatternProvider:
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [0, 1, 16, 100])
- def test_get_pattern(self, n_q: int, timesteps: int):
- delays = [
- list(range(n_q)),
- [0] + [1] * (n_q - 1),
- [0] + [4] * (n_q - 1),
- ]
- for delay in delays:
- provider = DelayedPatternProvider(n_q, delay)
- pattern = provider.get_pattern(timesteps)
- # + 1 to account for 1st step
- assert len(pattern.layout) == timesteps + max(delay) + 1
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- def test_pattern_content(self, n_q: int, timesteps: int):
- provider = DelayedPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- for s, v in enumerate(pattern.layout):
- for i, code in enumerate(v):
- assert i == code.q
- assert code.t == max(0, s - code.q - 1)
-
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]])
- def test_pattern_max_delay(self, timesteps: int, delay: list):
- provider = DelayedPatternProvider(len(delay), delay)
- pattern = provider.get_pattern(timesteps)
- assert pattern.max_delay == max(delay)
- assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay
-
-
-class TestUnrolledPatternProvider:
-
- @pytest.mark.parametrize("timesteps", [0, 1, 16])
- @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]])
- @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]])
- def test_get_pattern(self, timesteps: int, flattening: list, delays: list):
- n_q = len(flattening)
- max_delay = max(delays)
- provider = UnrolledPatternProvider(n_q, flattening, delays)
- pattern = provider.get_pattern(timesteps)
- assert len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay
-
- @pytest.mark.parametrize("timesteps", [0, 1, 16])
- @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]])
- @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]])
- def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list):
- n_q = len(flattening)
- max_delay = max(delays)
- provider = UnrolledPatternProvider(n_q, flattening, delays)
- pattern = provider.get_pattern(timesteps)
- assert pattern.max_delay == max_delay
-
-
-class TestPattern:
-
- def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int):
- """Reference method to build the sequence from the pattern without using fancy scatter."""
- bs, n_q, T = z.shape
- z = z.cpu().numpy()
- assert n_q == pattern.n_q
- assert T <= pattern.timesteps
- inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy()
- inp[:] = special_token
- for s, v in enumerate(pattern.layout):
- for (t, q) in v:
- if t < T:
- inp[:, q, s] = z[:, q, t]
- return torch.from_numpy(inp)
-
- def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int):
- """Reference method to revert the sequence from the pattern without using fancy scatter."""
- z = z.cpu().numpy()
- bs, n_q, S = z.shape
- assert pattern.n_q == n_q
- inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy()
- inp[:] = special_token
- for s, v in enumerate(pattern.layout):
- for (t, q) in v:
- if t < pattern.timesteps:
- inp[:, q, t] = z[:, q, s]
- return torch.from_numpy(inp)
-
- def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float):
- """Reference method to revert the logits from the pattern without using fancy scatter."""
- z = z.cpu().numpy()
- bs, card, n_q, S = z.shape
- assert pattern.n_q == n_q
- ref_layout = pattern.layout
- inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy()
- inp[:] = special_token
- for s, v in enumerate(ref_layout[1:]):
- if s < S:
- for (t, q) in v:
- if t < pattern.timesteps:
- inp[:, :, q, t] = z[:, :, q, s]
- return torch.from_numpy(inp)
-
- def _get_pattern_providers(self, n_q: int):
- pattern_provider_1 = ParallelPatternProvider(n_q)
- pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q)))
- pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1))
- pattern_provider_4 = UnrolledPatternProvider(
- n_q, flattening=list(range(n_q)), delays=[0] * n_q
- )
- pattern_provider_5 = UnrolledPatternProvider(
- n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q
- )
- pattern_provider_6 = UnrolledPatternProvider(
- n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1)
- )
- return [
- pattern_provider_1,
- pattern_provider_2,
- pattern_provider_3,
- pattern_provider_4,
- pattern_provider_5,
- pattern_provider_6,
- ]
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [16, 72])
- def test_build_pattern_sequence(self, n_q: int, timesteps: int):
- bs = 2
- card = 256
- special_token = card
-
- pattern_providers = self._get_pattern_providers(n_q)
- for pattern_provider in pattern_providers:
- pattern = pattern_provider.get_pattern(timesteps)
- # we can correctly build the sequence from the pattern
- z = torch.randint(0, card, (bs, n_q, timesteps))
- ref_res = self.ref_build_pattern_sequence(z, pattern, special_token)
- res, indexes, mask = pattern.build_pattern_sequence(z, special_token)
- assert (res == ref_res).float().mean() == 1.0
-
- # expected assertion fails on the number of timesteps
- invalid_timesteps = [timesteps + 1]
- if pattern.num_sequence_steps != pattern.timesteps:
- invalid_timesteps.append(pattern.num_sequence_steps)
- for i_timesteps in invalid_timesteps:
- z2 = torch.randint(0, card, (bs, n_q, i_timesteps))
- with pytest.raises(AssertionError):
- pattern.build_pattern_sequence(z2, special_token)
-
- # expected assertion fails on the number of codebooks
- invalid_qs = [0, n_q - 1, n_q + 1]
- for i_q in invalid_qs:
- z3 = torch.randint(0, card, (bs, i_q, timesteps))
- with pytest.raises(AssertionError):
- pattern.build_pattern_sequence(z3, special_token)
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [16, 72])
- def test_revert_pattern_sequence(self, n_q: int, timesteps: int):
- bs = 2
- card = 256
- special_token = card
-
- pattern_providers = self._get_pattern_providers(n_q)
- for pattern_provider in pattern_providers:
- pattern = pattern_provider.get_pattern(timesteps)
- # this works assuming previous tests are successful
- z = torch.randint(0, card, (bs, n_q, timesteps))
- s = self.ref_build_pattern_sequence(z, pattern, special_token)
- ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token)
- # ensure our reference script retrieve the original sequence
- assert z.shape == ref_out.shape
- assert (z == ref_out).float().mean() == 1.0
- # now we can test the scatter version
- out, indexes, mask = pattern.revert_pattern_sequence(s, special_token)
- assert out.shape == ref_out.shape
- assert (out == ref_out).float().mean() == 1.0
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [16, 72])
- @pytest.mark.parametrize("card", [1, 2, 256, 1024])
- def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int):
- bs = 2
- special_token = card
- logits_special_token = float('nan')
-
- pattern_providers = self._get_pattern_providers(n_q)
- for pattern_provider in pattern_providers:
- pattern = pattern_provider.get_pattern(timesteps)
- # this works assuming previous tests are successful
- z = torch.randint(0, card, (bs, n_q, timesteps))
- s = self.ref_build_pattern_sequence(z, pattern, special_token)
- logits = torch.randn((bs, card, n_q, s.shape[-1]))
- ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token)
- # ensure our reference script retrieve the original sequence
- assert ref_out.shape == torch.Size([bs, card, n_q, timesteps])
- # now we can test the scatter version
- out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token)
- assert out.shape == ref_out.shape
- assert (out == ref_out).float().mean() == 1.0
diff --git a/spaces/brjathu/HMR2.0/hmr2/models/backbones/vit.py b/spaces/brjathu/HMR2.0/hmr2/models/backbones/vit.py
deleted file mode 100644
index c56c71889cd441294f57ad687d0678d2443d1eed..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/hmr2/models/backbones/vit.py
+++ /dev/null
@@ -1,348 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import torch
-from functools import partial
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-
-from timm.models.layers import drop_path, to_2tuple, trunc_normal_
-
-def vit(cfg):
- return ViT(
- img_size=(256, 192),
- patch_size=16,
- embed_dim=1280,
- depth=32,
- num_heads=16,
- ratio=1,
- use_checkpoint=False,
- mlp_ratio=4,
- qkv_bias=True,
- drop_path_rate=0.55,
- )
-
-def get_abs_pos(abs_pos, h, w, ori_h, ori_w, has_cls_token=True):
- """
- Calculate absolute positional embeddings. If needed, resize embeddings and remove cls_token
- dimension for the original embeddings.
- Args:
- abs_pos (Tensor): absolute positional embeddings with (1, num_position, C).
- has_cls_token (bool): If true, abs_pos has 1 leading embedding for the cls token.
- h, w (int): target size of the token grid.
- ori_h, ori_w (int): original size of the token grid the embeddings were trained on.
-
- Returns:
- Absolute positional embeddings after processing, with shape (B, L, C); the cls token
- embedding, if present, is kept as the first position.
- """
- cls_token = None
- B, L, C = abs_pos.shape
- if has_cls_token:
- cls_token = abs_pos[:, 0:1]
- abs_pos = abs_pos[:, 1:]
-
- if ori_h != h or ori_w != w:
- new_abs_pos = F.interpolate(
- abs_pos.reshape(1, ori_h, ori_w, -1).permute(0, 3, 1, 2),
- size=(h, w),
- mode="bicubic",
- align_corners=False,
- ).permute(0, 2, 3, 1).reshape(B, -1, C)
-
- else:
- new_abs_pos = abs_pos
-
- if cls_token is not None:
- new_abs_pos = torch.cat([cls_token, new_abs_pos], dim=1)
- return new_abs_pos
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
- def extra_repr(self):
- return 'p={}'.format(self.drop_prob)
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-class Attention(nn.Module):
- def __init__(
- self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0.,
- proj_drop=0., attn_head_dim=None,):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.dim = dim
-
- if attn_head_dim is not None:
- head_dim = attn_head_dim
- all_head_dim = head_dim * self.num_heads
-
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, all_head_dim * 3, bias=qkv_bias)
-
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(all_head_dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x):
- B, N, C = x.shape
- qkv = self.qkv(x)
- qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
- x = self.proj(x)
- x = self.proj_drop(x)
-
- return x
-
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None,
- drop=0., attn_drop=0., drop_path=0., act_layer=nn.GELU,
- norm_layer=nn.LayerNorm, attn_head_dim=None
- ):
- super().__init__()
-
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop, attn_head_dim=attn_head_dim
- )
-
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768, ratio=1):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) * (ratio ** 2)
- self.patch_shape = (int(img_size[0] // patch_size[0] * ratio), int(img_size[1] // patch_size[1] * ratio))
- self.origin_patch_shape = (int(img_size[0] // patch_size[0]), int(img_size[1] // patch_size[1]))
- self.img_size = img_size
- self.patch_size = patch_size
- self.num_patches = num_patches
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=(patch_size[0] // ratio), padding=4 + 2 * (ratio//2-1))
-
- def forward(self, x, **kwargs):
- B, C, H, W = x.shape
- x = self.proj(x)
- Hp, Wp = x.shape[2], x.shape[3]
-
- x = x.flatten(2).transpose(1, 2)
- return x, (Hp, Wp)
-
-
-class HybridEmbed(nn.Module):
- """ CNN Feature Map Embedding
- Extract feature map from CNN, flatten, project to embedding dim.
- """
- def __init__(self, backbone, img_size=224, feature_size=None, in_chans=3, embed_dim=768):
- super().__init__()
- assert isinstance(backbone, nn.Module)
- img_size = to_2tuple(img_size)
- self.img_size = img_size
- self.backbone = backbone
- if feature_size is None:
- with torch.no_grad():
- training = backbone.training
- if training:
- backbone.eval()
- o = self.backbone(torch.zeros(1, in_chans, img_size[0], img_size[1]))[-1]
- feature_size = o.shape[-2:]
- feature_dim = o.shape[1]
- backbone.train(training)
- else:
- feature_size = to_2tuple(feature_size)
- feature_dim = self.backbone.feature_info.channels()[-1]
- self.num_patches = feature_size[0] * feature_size[1]
- self.proj = nn.Linear(feature_dim, embed_dim)
-
- def forward(self, x):
- x = self.backbone(x)[-1]
- x = x.flatten(2).transpose(1, 2)
- x = self.proj(x)
- return x
-
-
-class ViT(nn.Module):
-
- def __init__(self,
- img_size=224, patch_size=16, in_chans=3, num_classes=80, embed_dim=768, depth=12,
- num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
- drop_path_rate=0., hybrid_backbone=None, norm_layer=None, use_checkpoint=False,
- frozen_stages=-1, ratio=1, last_norm=True,
- patch_padding='pad', freeze_attn=False, freeze_ffn=False,
- ):
- # Protect mutable default arguments
- super(ViT, self).__init__()
- norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
- self.num_classes = num_classes
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- self.frozen_stages = frozen_stages
- self.use_checkpoint = use_checkpoint
- self.patch_padding = patch_padding
- self.freeze_attn = freeze_attn
- self.freeze_ffn = freeze_ffn
- self.depth = depth
-
- if hybrid_backbone is not None:
- self.patch_embed = HybridEmbed(
- hybrid_backbone, img_size=img_size, in_chans=in_chans, embed_dim=embed_dim)
- else:
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, ratio=ratio)
- num_patches = self.patch_embed.num_patches
-
- # since the pretraining model has class token
- self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
-
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
-
- self.blocks = nn.ModuleList([
- Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
- )
- for i in range(depth)])
-
- self.last_norm = norm_layer(embed_dim) if last_norm else nn.Identity()
-
- if self.pos_embed is not None:
- trunc_normal_(self.pos_embed, std=.02)
-
- self._freeze_stages()
-
- def _freeze_stages(self):
- """Freeze parameters."""
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- for i in range(1, self.frozen_stages + 1):
- m = self.blocks[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- if self.freeze_attn:
- for i in range(0, self.depth):
- m = self.blocks[i]
- m.attn.eval()
- m.norm1.eval()
- for param in m.attn.parameters():
- param.requires_grad = False
- for param in m.norm1.parameters():
- param.requires_grad = False
-
- if self.freeze_ffn:
- self.pos_embed.requires_grad = False
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
- for i in range(0, self.depth):
- m = self.blocks[i]
- m.mlp.eval()
- m.norm2.eval()
- for param in m.mlp.parameters():
- param.requires_grad = False
- for param in m.norm2.parameters():
- param.requires_grad = False
-
- def init_weights(self):
- """Initialize the weights in backbone.
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- def _init_weights(m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- self.apply(_init_weights)
-
- def get_num_layers(self):
- return len(self.blocks)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed', 'cls_token'}
-
- def forward_features(self, x):
- B, C, H, W = x.shape
- x, (Hp, Wp) = self.patch_embed(x)
-
- if self.pos_embed is not None:
- # fit for multiple GPU training
- # since the first element for pos embed (sin-cos manner) is zero, it will cause no difference
- x = x + self.pos_embed[:, 1:] + self.pos_embed[:, :1]
-
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
-
- x = self.last_norm(x)
-
- xp = x.permute(0, 2, 1).reshape(B, -1, Hp, Wp).contiguous()
-
- return xp
-
- def forward(self, x):
- x = self.forward_features(x)
- return x
-
- def train(self, mode=True):
- """Convert the model into training mode."""
- super().train(mode)
- self._freeze_stages()
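
As a rough illustration of how this deleted backbone was meant to be driven, the sketch below builds the ViT-H variant returned by the `vit(cfg)` factory above and runs a dummy forward pass. It assumes the file is importable as a local module named `vit` and that `timm` is installed; the expected output shape follows from `img_size=(256, 192)`, `patch_size=16` and `embed_dim=1280`, but treat this as a sketch rather than a guaranteed result.

```python
# Minimal sketch, assuming the deleted vit.py above is available as a local module `vit`.
import torch
from vit import vit  # hypothetical local import of the factory defined above

model = vit(cfg=None).eval()  # the factory ignores its cfg argument; note this is a large (~0.6B param) model
with torch.no_grad():
    x = torch.zeros(1, 3, 256, 192)   # (B, C, H, W) matching img_size=(256, 192)
    feats = model(x)                  # forward_features returns a (B, embed_dim, Hp, Wp) feature map
print(feats.shape)                    # expected: torch.Size([1, 1280, 16, 12])
```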
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py b/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py
deleted file mode 100644
index 18e5f0720c568db4ef0c97b59688b5e7866df606..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_R_101_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 2 # 100ep -> 200ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 2 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
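
The doubling pattern above generalizes: a hypothetical 400-epoch variant would multiply the iteration count and milestones by 4 relative to the 100-epoch base. The following is a sketch of what such a config could look like, not a file that ships in this Space.

```python
# Hypothetical mask_rcnn_R_101_FPN_400ep_LSJ.py, following the same pattern as the file above.
from .mask_rcnn_R_101_FPN_100ep_LSJ import (
    dataloader,
    lr_multiplier,
    model,
    optimizer,
    train,
)

train.max_iter *= 4  # 100ep -> 400ep

lr_multiplier.scheduler.milestones = [
    milestone * 4 for milestone in lr_multiplier.scheduler.milestones
]
lr_multiplier.scheduler.num_updates = train.max_iter
```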
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/matcher.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/matcher.py
deleted file mode 100644
index c7597cab5a89a7e828b8eee53d1a3712be6dbc62..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/matcher.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from typing import List
-import torch
-
-from detectron2.layers import nonzero_tuple
-
-
-# TODO: the name is too general
-class Matcher(object):
- """
- This class assigns to each predicted "element" (e.g., a box) a ground-truth
- element. Each predicted element will have exactly zero or one matches; each
- ground-truth element may be matched to zero or more predicted elements.
-
-    The matching is determined by the MxN match_quality_matrix, which characterizes
-    how well each (ground-truth, prediction) pair matches. For example,
-    if the elements are boxes, this matrix may contain box intersection-over-union
-    overlap values.
-
-    The matcher returns (a) a vector of length N containing the index of the
-    ground-truth element m in [0, M) that matches prediction n in [0, N), and
-    (b) a vector of length N containing the labels for each prediction.
- """
-
- def __init__(
- self, thresholds: List[float], labels: List[int], allow_low_quality_matches: bool = False
- ):
- """
- Args:
- thresholds (list): a list of thresholds used to stratify predictions
- into levels.
-            labels (list): a list of values to label predictions belonging to
- each level. A label can be one of {-1, 0, 1} signifying
- {ignore, negative class, positive class}, respectively.
- allow_low_quality_matches (bool): if True, produce additional matches
- for predictions with maximum match quality lower than high_threshold.
- See set_low_quality_matches_ for more details.
-
- For example,
- thresholds = [0.3, 0.5]
- labels = [0, -1, 1]
- All predictions with iou < 0.3 will be marked with 0 and
- thus will be considered as false positives while training.
- All predictions with 0.3 <= iou < 0.5 will be marked with -1 and
- thus will be ignored.
- All predictions with 0.5 <= iou will be marked with 1 and
- thus will be considered as true positives.
- """
- # Add -inf and +inf to first and last position in thresholds
- thresholds = thresholds[:]
- assert thresholds[0] > 0
- thresholds.insert(0, -float("inf"))
- thresholds.append(float("inf"))
- # Currently torchscript does not support all + generator
- assert all([low <= high for (low, high) in zip(thresholds[:-1], thresholds[1:])])
- assert all([l in [-1, 0, 1] for l in labels])
- assert len(labels) == len(thresholds) - 1
- self.thresholds = thresholds
- self.labels = labels
- self.allow_low_quality_matches = allow_low_quality_matches
-
- def __call__(self, match_quality_matrix):
- """
- Args:
- match_quality_matrix (Tensor[float]): an MxN tensor, containing the
- pairwise quality between M ground-truth elements and N predicted
-                elements. All elements must be >= 0 (due to the use of `torch.nonzero`
- for selecting indices in :meth:`set_low_quality_matches_`).
-
- Returns:
- matches (Tensor[int64]): a vector of length N, where matches[i] is a matched
- ground-truth index in [0, M)
-            match_labels (Tensor[int8]): a vector of length N, where match_labels[i] indicates
- whether a prediction is a true or false positive or ignored
- """
- assert match_quality_matrix.dim() == 2
- if match_quality_matrix.numel() == 0:
- default_matches = match_quality_matrix.new_full(
- (match_quality_matrix.size(1),), 0, dtype=torch.int64
- )
- # When no gt boxes exist, we define IOU = 0 and therefore set labels
- # to `self.labels[0]`, which usually defaults to background class 0
-            # To ignore them instead, one can use labels=[-1,0,-1,1] and set appropriate thresholds
- default_match_labels = match_quality_matrix.new_full(
- (match_quality_matrix.size(1),), self.labels[0], dtype=torch.int8
- )
- return default_matches, default_match_labels
-
- assert torch.all(match_quality_matrix >= 0)
-
- # match_quality_matrix is M (gt) x N (predicted)
- # Max over gt elements (dim 0) to find best gt candidate for each prediction
- matched_vals, matches = match_quality_matrix.max(dim=0)
-
- match_labels = matches.new_full(matches.size(), 1, dtype=torch.int8)
-
- for (l, low, high) in zip(self.labels, self.thresholds[:-1], self.thresholds[1:]):
- low_high = (matched_vals >= low) & (matched_vals < high)
- match_labels[low_high] = l
-
- if self.allow_low_quality_matches:
- self.set_low_quality_matches_(match_labels, match_quality_matrix)
-
- return matches, match_labels
-
- def set_low_quality_matches_(self, match_labels, match_quality_matrix):
- """
- Produce additional matches for predictions that have only low-quality matches.
- Specifically, for each ground-truth G find the set of predictions that have
- maximum overlap with it (including ties); for each prediction in that set, if
- it is unmatched, then match it to the ground-truth G.
-
- This function implements the RPN assignment case (i) in Sec. 3.1.2 of
- :paper:`Faster R-CNN`.
- """
- # For each gt, find the prediction with which it has highest quality
- highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1)
- # Find the highest quality match available, even if it is low, including ties.
-        # Note that the match qualities must be positive due to the use of
- # `torch.nonzero`.
- _, pred_inds_with_highest_quality = nonzero_tuple(
- match_quality_matrix == highest_quality_foreach_gt[:, None]
- )
- # If an anchor was labeled positive only due to a low-quality match
-        # with gt_A, but it has larger overlap with gt_B, its matched index will still be gt_B.
- # This follows the implementation in Detectron, and is found to have no significant impact.
- match_labels[pred_inds_with_highest_quality] = 1
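
To make the thresholds/labels convention concrete, here is a small worked example (assuming detectron2 is installed and the import path matches the file above); the expected outputs follow directly from the rules in the `__init__` docstring.

```python
# Worked example of the Matcher semantics documented above.
import torch
from detectron2.modeling.matcher import Matcher

matcher = Matcher(thresholds=[0.3, 0.5], labels=[0, -1, 1], allow_low_quality_matches=False)

# 2 ground-truth elements (rows) x 4 predictions (columns), e.g. IoU values.
iou = torch.tensor([[0.10, 0.40, 0.70, 0.05],
                    [0.20, 0.35, 0.10, 0.60]])
matches, match_labels = matcher(iou)
print(matches)       # tensor([1, 0, 0, 1])  -> best ground-truth index per prediction
print(match_labels)  # tensor([ 0, -1,  1,  1], dtype=torch.int8)  -> negative / ignore / positive
```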
diff --git a/spaces/camenduru-com/chisel/README.md b/spaces/camenduru-com/chisel/README.md
deleted file mode 100644
index 21bf101e1cedfd51dda8cd87fd35cc959434321e..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/chisel/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Chisel
-emoji: 🥄
-colorFrom: purple
-colorTo: purple
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/camilacorreamelo/medicalDetection/README.md b/spaces/camilacorreamelo/medicalDetection/README.md
deleted file mode 100644
index b9a0146282c54abd275720221cf9683bb3806c23..0000000000000000000000000000000000000000
--- a/spaces/camilacorreamelo/medicalDetection/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MedicalDetection
-emoji: 😻
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/candlend/vits-hoshimi/vits/attentions.py b/spaces/candlend/vits-hoshimi/vits/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/candlend/vits-hoshimi/vits/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
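
The index shuffling in `_relative_position_to_absolute_position` above is easier to follow on a tiny standalone example. The sketch below re-implements just that helper, with a local stand-in for `commons.convert_pad_shape` (assumed here to reverse the per-dimension pad list and flatten it, which is the form `F.pad` expects).

```python
# Standalone sketch of the relative -> absolute position trick used in MultiHeadAttention above.
import torch
import torch.nn.functional as F

def convert_pad_shape(pad_shape):
    # Assumed stand-in for commons.convert_pad_shape: reverse the per-dim pad spec and flatten it.
    return [item for sublist in pad_shape[::-1] for item in sublist]

def rel_to_abs(x):
    # x: [b, h, l, 2*l - 1]  ->  [b, h, l, l]
    b, h, l, _ = x.size()
    x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
    x_flat = x.view(b, h, l * 2 * l)
    x_flat = F.pad(x_flat, convert_pad_shape([[0, 0], [0, 0], [0, l - 1]]))
    return x_flat.view(b, h, l + 1, 2 * l - 1)[:, :, :l, l - 1:]

scores = torch.randn(1, 2, 4, 7)   # 4 query positions, 2*4 - 1 relative offsets
print(rel_to_abs(scores).shape)    # torch.Size([1, 2, 4, 4])
```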
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/__init__.py
deleted file mode 100644
index 6b0668157052ce7b796ef50bc7ee85361e7605b9..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/meta_arch/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from .build import META_ARCH_REGISTRY, build_model # isort:skip
-
-from .panoptic_fpn import PanopticFPN
-
-# import all the meta_arch, so they will be registered
-from .rcnn import GeneralizedRCNN, ProposalNetwork
-from .dense_detector import DenseDetector
-from .retinanet import RetinaNet
-from .fcos import FCOS
-from .semantic_seg import SEM_SEG_HEADS_REGISTRY, SemanticSegmentor, build_sem_seg_head
-
-
-__all__ = list(globals().keys())
diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/token-classification/README.md b/spaces/chendl/compositional_test/transformers/examples/pytorch/token-classification/README.md
deleted file mode 100644
index 496722cf6b9a144d1b0c1237f52d1b859c93f503..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/pytorch/token-classification/README.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
-# Token classification
-
-## PyTorch version
-
-Fine-tuning the library models for token classification tasks such as Named Entity Recognition (NER), part-of-speech
-tagging (POS) or phrase extraction (CHUNKS). The main script `run_ner.py` leverages the 🤗 Datasets library and the Trainer API. You can easily
-customize it to your needs if you need extra processing on your datasets.
-
-It will either run on a dataset hosted on our [hub](https://huggingface.co/datasets) or with your own text files for
-training and validation; you might just need to add some tweaks to the data preprocessing.
-
-The following example fine-tunes BERT on CoNLL-2003:
-
-```bash
-python run_ner.py \
- --model_name_or_path bert-base-uncased \
- --dataset_name conll2003 \
- --output_dir /tmp/test-ner \
- --do_train \
- --do_eval
-```
-
-or you can just run the bash script `run.sh`.
-
-To run on your own training and validation files, use the following command:
-
-```bash
-python run_ner.py \
- --model_name_or_path bert-base-uncased \
- --train_file path_to_train_file \
- --validation_file path_to_validation_file \
- --output_dir /tmp/test-ner \
- --do_train \
- --do_eval
-```
-
-**Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
-uses special features of those tokenizers. You can check whether your favorite model has a fast tokenizer in
-[this table](https://huggingface.co/transformers/index.html#supported-frameworks); if it doesn't, you can still use the old version
-of the script. A quick programmatic check is shown below.
-
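
If you are unsure whether a given checkpoint ships a fast tokenizer, a quick check looks like this (a small sketch, assuming `transformers` is installed; the checkpoint name is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.is_fast)  # True means this script can use it
```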
-> If your model classification head dimensions do not fit the number of labels in the dataset, you can specify `--ignore_mismatched_sizes` to adapt it.
-
-## Old version of the script
-
-You can find the old version of the PyTorch script [here](https://github.com/huggingface/transformers/blob/main/examples/legacy/token-classification/run_ner.py).
-
-## Pytorch version, no Trainer
-
-Based on the script [run_ner_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py).
-
-Like `run_ner.py`, this script allows you to fine-tune any of the models on the [hub](https://huggingface.co/models) on a
-token classification task (NER, POS or CHUNKS), either on a hub dataset or on your own data in a CSV or JSON file. The main difference is that this
-script exposes the bare training loop, so you can quickly experiment and add any customization you would like.
-
-It offers fewer options than the script with `Trainer` (in exchange, you can easily change the options for the optimizer
-or the dataloaders directly in the script), but it still runs in a distributed setup or on TPU and supports mixed precision by
-means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally
-after installing it:
-
-```bash
-pip install git+https://github.com/huggingface/accelerate
-```
-
-then
-
-```bash
-export TASK_NAME=ner
-
-python run_ner_no_trainer.py \
- --model_name_or_path bert-base-cased \
- --dataset_name conll2003 \
- --task_name $TASK_NAME \
- --max_length 128 \
- --per_device_train_batch_size 32 \
- --learning_rate 2e-5 \
- --num_train_epochs 3 \
- --output_dir /tmp/$TASK_NAME/
-```
-
-You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run
-
-```bash
-accelerate config
-```
-
-and reply to the questions asked. Then
-
-```bash
-accelerate test
-```
-
-which will check that everything is ready for training. Finally, you can launch training with
-
-```bash
-export TASK_NAME=ner
-
-accelerate launch run_ner_no_trainer.py \
- --model_name_or_path bert-base-cased \
- --dataset_name conll2003 \
- --task_name $TASK_NAME \
- --max_length 128 \
- --per_device_train_batch_size 32 \
- --learning_rate 2e-5 \
- --num_train_epochs 3 \
- --output_dir /tmp/$TASK_NAME/
-```
-
-This command is the same and will work for:
-
-- a CPU-only setup
-- a setup with one GPU
-- a distributed training with several GPUs (single or multi node)
-- a training on TPUs
-
-Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/bert-loses-patience/pabee/modeling_pabee_albert.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/bert-loses-patience/pabee/modeling_pabee_albert.py
deleted file mode 100644
index 5e17352dc19b54c9f0008d7a9263c1b1ca1d4538..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/bert-loses-patience/pabee/modeling_pabee_albert.py
+++ /dev/null
@@ -1,320 +0,0 @@
-# coding=utf-8
-# Copyright 2020 Google AI, Google Brain, the HuggingFace Inc. team and Microsoft Corporation.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""PyTorch ALBERT model with Patience-based Early Exit. """
-
-import logging
-
-import torch
-from torch import nn
-from torch.nn import CrossEntropyLoss, MSELoss
-
-from transformers.file_utils import add_start_docstrings, add_start_docstrings_to_model_forward
-from transformers.models.albert.modeling_albert import (
- ALBERT_INPUTS_DOCSTRING,
- ALBERT_START_DOCSTRING,
- AlbertModel,
- AlbertPreTrainedModel,
- AlbertTransformer,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-class AlbertTransformerWithPabee(AlbertTransformer):
- def adaptive_forward(self, hidden_states, current_layer, attention_mask=None, head_mask=None):
- if current_layer == 0:
- hidden_states = self.embedding_hidden_mapping_in(hidden_states)
- else:
- hidden_states = hidden_states[0]
-
- layers_per_group = int(self.config.num_hidden_layers / self.config.num_hidden_groups)
-
- # Index of the hidden group
- group_idx = int(current_layer / (self.config.num_hidden_layers / self.config.num_hidden_groups))
-
- layer_group_output = self.albert_layer_groups[group_idx](
- hidden_states,
- attention_mask,
- head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group],
- )
- hidden_states = layer_group_output[0]
-
- return (hidden_states,)
-
-
-@add_start_docstrings(
- "The bare ALBERT Model transformer with PABEE outputting raw hidden-states without any specific head on top.",
- ALBERT_START_DOCSTRING,
-)
-class AlbertModelWithPabee(AlbertModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.encoder = AlbertTransformerWithPabee(config)
-
- self.init_weights()
- self.patience = 0
- self.inference_instances_num = 0
- self.inference_layers_num = 0
-
- self.regression_threshold = 0
-
- def set_regression_threshold(self, threshold):
- self.regression_threshold = threshold
-
- def set_patience(self, patience):
- self.patience = patience
-
- def reset_stats(self):
- self.inference_instances_num = 0
- self.inference_layers_num = 0
-
- def log_stats(self):
- avg_inf_layers = self.inference_layers_num / self.inference_instances_num
- message = (
- f"*** Patience = {self.patience} Avg. Inference Layers = {avg_inf_layers:.2f} Speed Up ="
- f" {1 - avg_inf_layers / self.config.num_hidden_layers:.2f} ***"
- )
- print(message)
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- output_dropout=None,
- output_layers=None,
- regression=False,
- ):
- r"""
- Return:
- :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.AlbertConfig`) and inputs:
- last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (:obj:`torch.FloatTensor`: of shape :obj:`(batch_size, hidden_size)`):
- Last layer hidden-state of the first token of the sequence (classification token)
- further processed by a Linear layer and a Tanh activation function. The Linear
- layer weights are trained from the next sentence prediction (classification)
- objective during pre-training.
-
-            This output is usually *not* a good summary
-            of the semantic content of the input; you're often better off averaging or pooling
-            the sequence of hidden-states for the whole input sequence.
- hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``config.output_hidden_states=True``):
- Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
- of shape :obj:`(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``config.output_attentions=True``):
- Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
- :obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- if attention_mask is None:
- attention_mask = torch.ones(input_shape, device=device)
- if token_type_ids is None:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
- )
- encoder_outputs = embedding_output
-
- if self.training:
- res = []
- for i in range(self.config.num_hidden_layers):
- encoder_outputs = self.encoder.adaptive_forward(
- encoder_outputs,
- current_layer=i,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- )
-
- pooled_output = self.pooler_activation(self.pooler(encoder_outputs[0][:, 0]))
- logits = output_layers[i](output_dropout(pooled_output))
- res.append(logits)
- elif self.patience == 0: # Use all layers for inference
- encoder_outputs = self.encoder(encoder_outputs, extended_attention_mask, head_mask=head_mask)
- pooled_output = self.pooler_activation(self.pooler(encoder_outputs[0][:, 0]))
- res = [output_layers[self.config.num_hidden_layers - 1](pooled_output)]
- else:
- patient_counter = 0
- patient_result = None
- calculated_layer_num = 0
- for i in range(self.config.num_hidden_layers):
- calculated_layer_num += 1
- encoder_outputs = self.encoder.adaptive_forward(
- encoder_outputs,
- current_layer=i,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- )
-
- pooled_output = self.pooler_activation(self.pooler(encoder_outputs[0][:, 0]))
- logits = output_layers[i](pooled_output)
- if regression:
- labels = logits.detach()
- if patient_result is not None:
- patient_labels = patient_result.detach()
- if (patient_result is not None) and torch.abs(patient_result - labels) < self.regression_threshold:
- patient_counter += 1
- else:
- patient_counter = 0
- else:
- labels = logits.detach().argmax(dim=1)
- if patient_result is not None:
- patient_labels = patient_result.detach().argmax(dim=1)
- if (patient_result is not None) and torch.all(labels.eq(patient_labels)):
- patient_counter += 1
- else:
- patient_counter = 0
-
- patient_result = logits
- if patient_counter == self.patience:
- break
- res = [patient_result]
- self.inference_layers_num += calculated_layer_num
- self.inference_instances_num += 1
-
- return res
-
-
-@add_start_docstrings(
- """Albert Model transformer with PABEE and a sequence classification/regression head on top (a linear layer on top of
- the pooled output) e.g. for GLUE tasks. """,
- ALBERT_START_DOCSTRING,
-)
-class AlbertForSequenceClassificationWithPabee(AlbertPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.albert = AlbertModelWithPabee(config)
- self.dropout = nn.Dropout(config.classifier_dropout_prob)
- self.classifiers = nn.ModuleList(
- [nn.Linear(config.hidden_size, self.config.num_labels) for _ in range(config.num_hidden_layers)]
- )
-
- self.init_weights()
-
- @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- labels=None,
- ):
- r"""
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
- Labels for computing the sequence classification/regression loss.
- Indices should be in ``[0, ..., config.num_labels - 1]``.
- If ``config.num_labels == 1`` a regression loss is computed (Mean-Square loss),
- If ``config.num_labels > 1`` a classification loss is computed (Cross-Entropy).
-
- Returns:
- :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.AlbertConfig`) and inputs:
- loss: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
- Classification (or regression if config.num_labels==1) loss.
- logits ``torch.FloatTensor`` of shape ``(batch_size, config.num_labels)``
- Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``config.output_hidden_states=True``):
- Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer)
- of shape :obj:`(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the initial embedding outputs.
- attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``config.output_attentions=True``):
- Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape
- :obj:`(batch_size, num_heads, sequence_length, sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
-
- Examples::
-
- from transformers import AlbertTokenizer
- from pabee import AlbertForSequenceClassificationWithPabee
- from torch import nn
- import torch
-
- tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
- model = AlbertForSequenceClassificationWithPabee.from_pretrained('albert-base-v2')
- input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
- labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
- outputs = model(input_ids, labels=labels)
- loss, logits = outputs[:2]
-
- """
-
- logits = self.albert(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_dropout=self.dropout,
- output_layers=self.classifiers,
- regression=self.num_labels == 1,
- )
-
- outputs = (logits[-1],)
-
- if labels is not None:
- total_loss = None
- total_weights = 0
- for ix, logits_item in enumerate(logits):
- if self.num_labels == 1:
- # We are doing regression
- loss_fct = MSELoss()
- loss = loss_fct(logits_item.view(-1), labels.view(-1))
- else:
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits_item.view(-1, self.num_labels), labels.view(-1))
- if total_loss is None:
- total_loss = loss
- else:
- total_loss += loss * (ix + 1)
- total_weights += ix + 1
- outputs = (total_loss / total_weights,) + outputs
-
- return outputs
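
At inference time, the patience-based exit is driven by the helpers defined on `AlbertModelWithPabee` above (`set_patience`, `reset_stats`, `log_stats`). Below is a minimal sketch, building on the docstring example and assuming the `pabee` package from this example directory is importable as shown there.

```python
# Sketch of patience-based inference with the classes defined above.
import torch
from transformers import AlbertTokenizer
from pabee import AlbertForSequenceClassificationWithPabee  # import path mirrors the docstring example

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassificationWithPabee.from_pretrained("albert-base-v2").eval()

model.albert.set_patience(3)   # exit as soon as 3 consecutive layers agree on the prediction
model.albert.reset_stats()

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # batch size 1
with torch.no_grad():
    logits = model(input_ids)[0]

model.albert.log_stats()       # prints the average number of layers actually evaluated
```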
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/run_asr.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/run_asr.py
deleted file mode 100644
index 15d2f12c7ddb5681105f5efd586f9da5659ab982..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/wav2vec2/run_asr.py
+++ /dev/null
@@ -1,480 +0,0 @@
-#!/usr/bin/env python3
-import logging
-import pathlib
-import re
-import sys
-from dataclasses import dataclass, field
-from typing import Any, Callable, Dict, List, Optional, Set, Union
-
-import datasets
-import librosa
-import numpy as np
-import torch
-from lang_trans import arabic
-from packaging import version
-from torch import nn
-
-from transformers import (
- HfArgumentParser,
- Trainer,
- TrainingArguments,
- Wav2Vec2CTCTokenizer,
- Wav2Vec2FeatureExtractor,
- Wav2Vec2ForCTC,
- Wav2Vec2Processor,
- is_apex_available,
- trainer_utils,
-)
-
-
-if is_apex_available():
- from apex import amp
-
-if version.parse(version.parse(torch.__version__).base_version) >= version.parse("1.6"):
- _is_native_amp_available = True
- from torch.cuda.amp import autocast
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
-
- model_name_or_path: str = field(
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
- )
- freeze_feature_extractor: Optional[bool] = field(
- default=True, metadata={"help": "Whether to freeze the feature extractor layers of the model."}
- )
- verbose_logging: Optional[bool] = field(
- default=False,
- metadata={"help": "Whether to log verbose messages or not."},
- )
-
-
-def configure_logger(model_args: ModelArguments, training_args: TrainingArguments):
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- handlers=[logging.StreamHandler(sys.stdout)],
- )
- logging_level = logging.WARNING
- if model_args.verbose_logging:
- logging_level = logging.DEBUG
- elif trainer_utils.is_main_process(training_args.local_rank):
- logging_level = logging.INFO
- logger.setLevel(logging_level)
-
-
-@dataclass
-class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
-
- Using `HfArgumentParser` we can turn this class
- into argparse arguments to be able to specify them on
- the command line.
- """
-
- dataset_name: str = field(
- default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
- )
- dataset_config_name: Optional[str] = field(
- default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
- )
- train_split_name: Optional[str] = field(
- default="train",
- metadata={
- "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'"
- },
- )
- validation_split_name: Optional[str] = field(
- default="validation",
- metadata={
- "help": (
- "The name of the validation data set split to use (via the datasets library). Defaults to 'validation'"
- )
- },
- )
- target_text_column: Optional[str] = field(
- default="text",
- metadata={"help": "Column in the dataset that contains label (target text). Defaults to 'text'"},
- )
- speech_file_column: Optional[str] = field(
- default="file",
- metadata={"help": "Column in the dataset that contains speech file path. Defaults to 'file'"},
- )
- target_feature_extractor_sampling_rate: Optional[bool] = field(
- default=False,
- metadata={"help": "Resample loaded audio to target feature extractor's sampling rate or not."},
- )
- max_duration_in_seconds: Optional[float] = field(
- default=None,
- metadata={"help": "Filters out examples longer than specified. Defaults to no filtering."},
- )
- orthography: Optional[str] = field(
- default="librispeech",
- metadata={
- "help": (
- "Orthography used for normalization and tokenization: 'librispeech' (default), 'timit', or"
- " 'buckwalter'."
- )
- },
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."}
- )
- preprocessing_num_workers: Optional[int] = field(
- default=None,
- metadata={"help": "The number of processes to use for the preprocessing."},
- )
-
-
-@dataclass
-class Orthography:
- """
- Orthography scheme used for text normalization and tokenization.
-
- Args:
- do_lower_case (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether or not to accept lowercase input and lowercase the output when decoding.
- vocab_file (:obj:`str`, `optional`):
- File containing the vocabulary.
- word_delimiter_token (:obj:`str`, `optional`, defaults to :obj:`"|"`):
- The token used for delimiting words; it needs to be in the vocabulary.
- translation_table (:obj:`Dict[str, str]`, `optional`, defaults to :obj:`{}`):
- Table to use with `str.translate()` when preprocessing text (e.g., "-" -> " ").
- words_to_remove (:obj:`Set[str]`, `optional`, defaults to :obj:`set()`):
- Words to remove when preprocessing text (e.g., "sil").
- untransliterator (:obj:`Callable[[str], str]`, `optional`):
- Function that untransliterates text back into native writing system.
- """
-
- do_lower_case: bool = False
- vocab_file: Optional[str] = None
- word_delimiter_token: Optional[str] = "|"
- translation_table: Optional[Dict[str, str]] = field(default_factory=dict)
- words_to_remove: Optional[Set[str]] = field(default_factory=set)
- untransliterator: Optional[Callable[[str], str]] = None
-
- @classmethod
- def from_name(cls, name: str):
- if name == "librispeech":
- return cls()
- if name == "timit":
- return cls(
- do_lower_case=True,
- # break compounds like "quarter-century-old" and replace pauses "--"
- translation_table=str.maketrans({"-": " "}),
- )
- if name == "buckwalter":
- translation_table = {
- "-": " ", # sometimes used to represent pauses
- "^": "v", # fixing "tha" in arabic_speech_corpus dataset
- }
- return cls(
- vocab_file=pathlib.Path(__file__).parent.joinpath("vocab/buckwalter.json"),
- word_delimiter_token="/", # "|" is Arabic letter alef with madda above
- translation_table=str.maketrans(translation_table),
- words_to_remove={"sil"}, # fixing "sil" in arabic_speech_corpus dataset
- untransliterator=arabic.buckwalter.untransliterate,
- )
- raise ValueError(f"Unsupported orthography: '{name}'.")
-
- def preprocess_for_training(self, text: str) -> str:
- # TODO(elgeish) return a pipeline (e.g., from jiwer) instead? Or rely on branch predictor as is
- if len(self.translation_table) > 0:
- text = text.translate(self.translation_table)
- if len(self.words_to_remove) == 0:
- text = " ".join(text.split()) # clean up whitespaces
- else:
-            text = " ".join(w for w in text.split() if w not in self.words_to_remove)  # and clean up whitespaces
- return text
-
- def create_processor(self, model_args: ModelArguments) -> Wav2Vec2Processor:
- feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
- model_args.model_name_or_path, cache_dir=model_args.cache_dir
- )
- if self.vocab_file:
- tokenizer = Wav2Vec2CTCTokenizer(
- self.vocab_file,
- cache_dir=model_args.cache_dir,
- do_lower_case=self.do_lower_case,
- word_delimiter_token=self.word_delimiter_token,
- )
- else:
- tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(
- model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- do_lower_case=self.do_lower_case,
- word_delimiter_token=self.word_delimiter_token,
- )
- return Wav2Vec2Processor(feature_extractor, tokenizer)
-
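
The presets returned by `Orthography.from_name` mostly differ in how they clean the transcript text; here is a quick illustration of the `timit` scheme defined above (hyphens become spaces, whitespace is collapsed, and lowercasing is left to the tokenizer).

```python
# Illustrative use of the Orthography presets defined above.
ortho = Orthography.from_name("timit")
print(ortho.preprocess_for_training("A quarter-century-old record"))
# -> "A quarter century old record"
print(ortho.do_lower_case)
# -> True (applied later by the Wav2Vec2CTCTokenizer, not by preprocess_for_training)
```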
-
-@dataclass
-class DataCollatorCTCWithPadding:
- """
- Data collator that will dynamically pad the inputs received.
- Args:
- processor (:class:`~transformers.Wav2Vec2Processor`)
-            The processor used for processing the data.
- padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
- Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
- among:
-            * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
-              sequence is provided).
- * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
- maximum acceptable input length for the model if that argument is not provided.
-            * :obj:`False` or :obj:`'do_not_pad'`: No padding (i.e., can output a batch with sequences of
- different lengths).
- max_length (:obj:`int`, `optional`):
- Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
- max_length_labels (:obj:`int`, `optional`):
- Maximum length of the ``labels`` returned list and optionally padding length (see above).
- pad_to_multiple_of (:obj:`int`, `optional`):
- If set will pad the sequence to a multiple of the provided value.
- This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
- 7.5 (Volta).
- """
-
- processor: Wav2Vec2Processor
- padding: Union[bool, str] = True
- max_length: Optional[int] = None
- max_length_labels: Optional[int] = None
- pad_to_multiple_of: Optional[int] = None
- pad_to_multiple_of_labels: Optional[int] = None
-
- def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
-        # split inputs and labels since they have to be of different lengths and need
- # different padding methods
- input_features = [{"input_values": feature["input_values"]} for feature in features]
- label_features = [{"input_ids": feature["labels"]} for feature in features]
-
- batch = self.processor.pad(
- input_features,
- padding=self.padding,
- max_length=self.max_length,
- pad_to_multiple_of=self.pad_to_multiple_of,
- return_tensors="pt",
- )
- labels_batch = self.processor.pad(
- labels=label_features,
- padding=self.padding,
- max_length=self.max_length_labels,
- pad_to_multiple_of=self.pad_to_multiple_of_labels,
- return_tensors="pt",
- )
-
- # replace padding with -100 to ignore loss correctly
- labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
-
- batch["labels"] = labels
-
- return batch
-
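
The one subtle step in the collator above is replacing label padding with -100 so that the CTC loss ignores padded positions; in isolation, that masking behaves like this toy sketch with dummy tensors.

```python
# Toy illustration of the -100 label masking performed by DataCollatorCTCWithPadding above.
import torch

padded_label_ids = torch.tensor([[5, 9, 2, 0, 0],
                                 [7, 3, 0, 0, 0]])
attention_mask   = torch.tensor([[1, 1, 1, 0, 0],
                                 [1, 1, 0, 0, 0]])
labels = padded_label_ids.masked_fill(attention_mask.ne(1), -100)
print(labels)
# tensor([[   5,    9,    2, -100, -100],
#         [   7,    3, -100, -100, -100]])
```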
-
-class CTCTrainer(Trainer):
- def training_step(self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor:
- """
- Perform a training step on a batch of inputs.
-
- Subclass and override to inject custom behavior.
-
- Args:
- model (:obj:`nn.Module`):
- The model to train.
- inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`):
- The inputs and targets of the model.
-
- The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
- argument :obj:`labels`. Check your model's documentation for all accepted arguments.
-
- Return:
- :obj:`torch.Tensor`: The tensor with training loss on this batch.
- """
-
- model.train()
- inputs = self._prepare_inputs(inputs)
-
- if self.use_amp:
- with autocast():
- loss = self.compute_loss(model, inputs)
- else:
- loss = self.compute_loss(model, inputs)
-
- if self.args.n_gpu > 1:
- if model.module.config.ctc_loss_reduction == "mean":
- loss = loss.mean()
- elif model.module.config.ctc_loss_reduction == "sum":
- loss = loss.sum() / (inputs["labels"] >= 0).sum()
- else:
- raise ValueError(f"{model.config.ctc_loss_reduction} is not valid. Choose one of ['mean', 'sum']")
-
- if self.args.gradient_accumulation_steps > 1:
- loss = loss / self.args.gradient_accumulation_steps
-
- if self.use_amp:
- self.scaler.scale(loss).backward()
- elif self.use_apex:
- with amp.scale_loss(loss, self.optimizer) as scaled_loss:
- scaled_loss.backward()
- elif self.deepspeed:
- self.deepspeed.backward(loss)
- else:
- loss.backward()
-
- return loss.detach()
-
-
-def main():
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
-
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
-
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
- configure_logger(model_args, training_args)
-
- orthography = Orthography.from_name(data_args.orthography.lower())
- processor = orthography.create_processor(model_args)
- model = Wav2Vec2ForCTC.from_pretrained(
- model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- gradient_checkpointing=training_args.gradient_checkpointing,
- vocab_size=len(processor.tokenizer),
- )
-
- train_dataset = datasets.load_dataset(
- data_args.dataset_name, data_args.dataset_config_name, split=data_args.train_split_name
- )
- val_dataset = datasets.load_dataset(
- data_args.dataset_name, data_args.dataset_config_name, split=data_args.validation_split_name
- )
-
- wer_metric = datasets.load_metric("wer")
- target_sr = processor.feature_extractor.sampling_rate if data_args.target_feature_extractor_sampling_rate else None
- vocabulary_chars_str = "".join(t for t in processor.tokenizer.get_vocab().keys() if len(t) == 1)
- vocabulary_text_cleaner = re.compile( # remove characters not in vocabulary
- f"[^\s{re.escape(vocabulary_chars_str)}]", # allow space in addition to chars in vocabulary
- flags=re.IGNORECASE if processor.tokenizer.do_lower_case else 0,
- )
- text_updates = []
-
- def prepare_example(example): # TODO(elgeish) make use of multiprocessing?
- example["speech"], example["sampling_rate"] = librosa.load(example[data_args.speech_file_column], sr=target_sr)
- if data_args.max_duration_in_seconds is not None:
- example["duration_in_seconds"] = len(example["speech"]) / example["sampling_rate"]
- # Normalize and clean up text; order matters!
- updated_text = orthography.preprocess_for_training(example[data_args.target_text_column])
- updated_text = vocabulary_text_cleaner.sub("", updated_text)
- if updated_text != example[data_args.target_text_column]:
- text_updates.append((example[data_args.target_text_column], updated_text))
- example[data_args.target_text_column] = updated_text
- return example
-
- train_dataset = train_dataset.map(prepare_example, remove_columns=[data_args.speech_file_column])
- val_dataset = val_dataset.map(prepare_example, remove_columns=[data_args.speech_file_column])
-
- if data_args.max_duration_in_seconds is not None:
-
- def filter_by_max_duration(example):
- return example["duration_in_seconds"] <= data_args.max_duration_in_seconds
-
- old_train_size = len(train_dataset)
- old_val_size = len(val_dataset)
- train_dataset = train_dataset.filter(filter_by_max_duration, remove_columns=["duration_in_seconds"])
- val_dataset = val_dataset.filter(filter_by_max_duration, remove_columns=["duration_in_seconds"])
- if len(train_dataset) < old_train_size:
- logger.warning(
- f"Filtered out {old_train_size - len(train_dataset)} train example(s) longer than"
- f" {data_args.max_duration_in_seconds} second(s)."
- )
- if len(val_dataset) < old_val_size:
- logger.warning(
- f"Filtered out {old_val_size - len(val_dataset)} validation example(s) longer than"
- f" {data_args.max_duration_in_seconds} second(s)."
- )
- logger.info(f"Split sizes: {len(train_dataset)} train and {len(val_dataset)} validation.")
-
- logger.warning(f"Updated {len(text_updates)} transcript(s) using '{data_args.orthography}' orthography rules.")
- if logger.isEnabledFor(logging.DEBUG):
- for original_text, updated_text in text_updates:
- logger.debug(f'Updated text: "{original_text}" -> "{updated_text}"')
- text_updates = None
-
- def prepare_dataset(batch):
- # check that all files have the correct sampling rate
- assert (
- len(set(batch["sampling_rate"])) == 1
- ), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
-
- processed_batch = processor(
- audio=batch["speech"], text=batch[data_args.target_text_column], sampling_rate=batch["sampling_rate"][0]
- )
- batch.update(processed_batch)
- return batch
-
- train_dataset = train_dataset.map(
- prepare_dataset,
- batch_size=training_args.per_device_train_batch_size,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- )
- val_dataset = val_dataset.map(
- prepare_dataset,
- batch_size=training_args.per_device_train_batch_size,
- batched=True,
- num_proc=data_args.preprocessing_num_workers,
- )
-
- data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
-
- def compute_metrics(pred):
- pred_logits = pred.predictions
- pred_ids = np.argmax(pred_logits, axis=-1)
-
- pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
-
- pred_str = processor.batch_decode(pred_ids)
- # we do not want to group tokens when computing the metrics
- label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
- if logger.isEnabledFor(logging.DEBUG):
- for reference, predicted in zip(label_str, pred_str):
- logger.debug(f'reference: "{reference}"')
- logger.debug(f'predicted: "{predicted}"')
- if orthography.untransliterator is not None:
- logger.debug(f'reference (untransliterated): "{orthography.untransliterator(reference)}"')
- logger.debug(f'predicted (untransliterated): "{orthography.untransliterator(predicted)}"')
-
- wer = wer_metric.compute(predictions=pred_str, references=label_str)
-
- return {"wer": wer}
-
- if model_args.freeze_feature_extractor:
- model.freeze_feature_extractor()
-
- trainer = CTCTrainer(
- model=model,
- data_collator=data_collator,
- args=training_args,
- compute_metrics=compute_metrics,
- train_dataset=train_dataset,
- eval_dataset=val_dataset,
- tokenizer=processor.feature_extractor,
- )
-
- trainer.train()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/common/set_seed.py b/spaces/chenyangqi/FateZero/FateZero/video_diffusion/common/set_seed.py
deleted file mode 100644
index 8f30dbf3028fc884adcd3ed0ffb317f2220ac32a..0000000000000000000000000000000000000000
--- a/spaces/chenyangqi/FateZero/FateZero/video_diffusion/common/set_seed.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import os
-os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'
-
-import torch
-import numpy as np
-import random
-
-from accelerate.utils import set_seed
-
-
-def video_set_seed(seed: int):
- """
- Helper function for reproducible behavior to set the seed in `random`, `numpy`, `torch`.
-
- Args:
- seed (`int`): The seed to set.
- """
- set_seed(seed)
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
- torch.backends.cudnn.benchmark = False
- # torch.use_deterministic_algorithms(True, warn_only=True)
- # [W Context.cpp:82] Warning: efficient_attention_forward_cutlass does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (function alertNotDeterministic)
-
diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/english_bert_mock.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/english_bert_mock.py
deleted file mode 100644
index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000
--- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/english_bert_mock.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import torch
-
-
-def get_bert_feature(norm_text, word2ph):
- return torch.zeros(1024, sum(word2ph))
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates.py
deleted file mode 100644
index 63e509806ee905449fdd92b88f384fe3e7418b37..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates.py
+++ /dev/null
@@ -1,574 +0,0 @@
-from __future__ import annotations
-
-from typing import Any, Callable, Literal
-
-import numpy as np
-from PIL.Image import Image
-
-from gradio import components
-
-
-class TextArea(components.Textbox):
- """
- Sets: lines=7
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Callable | None = "",
- *,
- lines: int = 7,
- max_lines: int = 20,
- placeholder: str | None = None,
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- lines=lines,
- max_lines=max_lines,
- placeholder=placeholder,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- **kwargs,
- )
-
-
-class Webcam(components.Image):
- """
- Sets: source="webcam", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: tuple[int, int] | None = None,
- image_mode: Literal["RGB", "L"] = "RGB",
- invert_colors: bool = False,
- source: Literal["webcam"] = "webcam",
- tool: Literal["editor", "select", "sketch", "color-sketch"] | None = None,
- type: Literal["numpy", "pil", "filepath"] = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- brush_radius: float | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- brush_radius=brush_radius,
- **kwargs,
- )
-
-
-class Sketchpad(components.Image):
- """
- Sets: image_mode="L", source="canvas", shape=(28, 28), invert_colors=True, interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: tuple[int, int] = (28, 28),
- image_mode: Literal["L"] = "L",
- invert_colors: bool = True,
- source: Literal["canvas"] = "canvas",
- tool: Literal["editor", "select", "sketch", "color-sketch"] | None = None,
- type: Literal["numpy", "pil", "filepath"] = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- brush_radius: float | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- brush_radius=brush_radius,
- **kwargs,
- )
-
-
-class Paint(components.Image):
- """
- Sets: source="canvas", tool="color-sketch", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: tuple[int, int] | None = None,
- image_mode: Literal["RGB"] = "RGB",
- invert_colors: bool = False,
- source: Literal["canvas"] = "canvas",
- tool: Literal["color-sketch"] = "color-sketch",
- type: Literal["numpy", "pil", "filepath"] = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- brush_radius: float | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- brush_radius=brush_radius,
- **kwargs,
- )
-
-
-class ImageMask(components.Image):
- """
- Sets: source="upload", tool="sketch", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: tuple[int, int] | None = None,
- image_mode: Literal["RGB", "L"] = "RGB",
- invert_colors: bool = False,
- source: Literal["upload"] = "upload",
- tool: Literal["sketch"] = "sketch",
- type: Literal["numpy", "pil", "filepath"] = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- brush_radius: float | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- brush_radius=brush_radius,
- **kwargs,
- )
-
-
-class ImagePaint(components.Image):
- """
- Sets: source="upload", tool="color-sketch", interactive=True
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: tuple[int, int] | None = None,
- image_mode: Literal["RGB", "L"] = "RGB",
- invert_colors: bool = False,
- source: Literal["upload"] = "upload",
- tool: Literal["color-sketch"] = "color-sketch",
- type: Literal["numpy", "pil", "filepath"] = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = True,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- brush_radius: float | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- brush_radius=brush_radius,
- **kwargs,
- )
-
-
-class Pil(components.Image):
- """
- Sets: type="pil"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Image | np.ndarray | None = None,
- *,
- shape: tuple[int, int] | None = None,
- image_mode: Literal["RGB", "L"] = "RGB",
- invert_colors: bool = False,
- source: Literal["upload", "webcam", "canvas"] = "upload",
- tool: Literal["editor", "select", "sketch", "color-sketch"] | None = None,
- type: Literal["pil"] = "pil",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- brush_radius: float | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- shape=shape,
- image_mode=image_mode,
- invert_colors=invert_colors,
- source=source,
- tool=tool,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- brush_radius=brush_radius,
- **kwargs,
- )
-
-
-class PlayableVideo(components.Video):
- """
- Sets: format="mp4"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | Callable | None = None,
- *,
- format: Literal["mp4"] | None = "mp4",
- source: Literal["upload", "webcam"] = "upload",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- mirror_webcam: bool = True,
- include_audio: bool | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- format=format,
- source=source,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- mirror_webcam=mirror_webcam,
- include_audio=include_audio,
- **kwargs,
- )
-
-
-class Microphone(components.Audio):
- """
- Sets: source="microphone"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | tuple[int, np.ndarray] | Callable | None = None,
- *,
- source: Literal["microphone"] = "microphone",
- type: Literal["numpy", "filepath"] = "numpy",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- streaming: bool = False,
- elem_id: str | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- source=source,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- streaming=streaming,
- elem_id=elem_id,
- **kwargs,
- )
-
-
-class Files(components.File):
- """
- Sets: file_count="multiple"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: str | list[str] | Callable | None = None,
- *,
- file_count: Literal["multiple"] = "multiple",
- type: Literal["file", "binary"] = "file",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- **kwargs,
- ):
- super().__init__(
- value=value,
- file_count=file_count,
- type=type,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- **kwargs,
- )
-
-
-class Numpy(components.Dataframe):
- """
- Sets: type="numpy"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: list[list[Any]] | Callable | None = None,
- *,
- headers: list[str] | None = None,
- row_count: int | tuple[int, str] = (1, "dynamic"),
- col_count: int | tuple[int, str] | None = None,
- datatype: str | list[str] = "str",
- type: Literal["numpy"] = "numpy",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- super().__init__(
- value=value,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- type=type,
- max_rows=max_rows,
- max_cols=max_cols,
- overflow_row_behaviour=overflow_row_behaviour,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- wrap=wrap,
- **kwargs,
- )
-
-
-class Matrix(components.Dataframe):
- """
- Sets: type="array"
- """
-
- is_template = True
-
- def __init__(
- self,
- value: list[list[Any]] | Callable | None = None,
- *,
- headers: list[str] | None = None,
- row_count: int | tuple[int, str] = (1, "dynamic"),
- col_count: int | tuple[int, str] | None = None,
- datatype: str | list[str] = "str",
- type: Literal["array"] = "array",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- super().__init__(
- value=value,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- type=type,
- max_rows=max_rows,
- max_cols=max_cols,
- overflow_row_behaviour=overflow_row_behaviour,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- wrap=wrap,
- **kwargs,
- )
-
-
-class List(components.Dataframe):
- """
- Sets: type="array", col_count=1
- """
-
- is_template = True
-
- def __init__(
- self,
- value: list[list[Any]] | Callable | None = None,
- *,
- headers: list[str] | None = None,
- row_count: int | tuple[int, str] = (1, "dynamic"),
- col_count: Literal[1] = 1,
- datatype: str | list[str] = "str",
- type: Literal["array"] = "array",
- max_rows: int | None = 20,
- max_cols: int | None = None,
- overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate",
- label: str | None = None,
- show_label: bool = True,
- interactive: bool | None = None,
- visible: bool = True,
- elem_id: str | None = None,
- wrap: bool = False,
- **kwargs,
- ):
- super().__init__(
- value=value,
- headers=headers,
- row_count=row_count,
- col_count=col_count,
- datatype=datatype,
- type=type,
- max_rows=max_rows,
- max_cols=max_cols,
- overflow_row_behaviour=overflow_row_behaviour,
- label=label,
- show_label=show_label,
- interactive=interactive,
- visible=visible,
- elem_id=elem_id,
- wrap=wrap,
- **kwargs,
- )
-
-
-Mic = Microphone
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-8f1feca1.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-8f1feca1.css
deleted file mode 100644
index 1b457869043e5e2005c2331cb14abed07b7f6a88..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-8f1feca1.css
+++ /dev/null
@@ -1 +0,0 @@
-span.svelte-s1r2yt{font-weight:var(--section-header-text-weight);font-size:var(--section-header-text-size)}.label-wrap.svelte-s1r2yt{display:flex;justify-content:space-between;cursor:pointer;width:var(--size-full)}.label-wrap.open.svelte-s1r2yt{margin-bottom:var(--size-2)}.icon.svelte-s1r2yt{transition:.15s}
diff --git a/spaces/cihyFjudo/fairness-paper-search/ITools 2013 Build 0524 Free Download [HOT].md b/spaces/cihyFjudo/fairness-paper-search/ITools 2013 Build 0524 Free Download [HOT].md
deleted file mode 100644
index 8550f667c7e29f2ae9aea28a948dcab16bc99f6e..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/ITools 2013 Build 0524 Free Download [HOT].md
+++ /dev/null
@@ -1,6 +0,0 @@
-iTools 2013 Build 0524 free download Download ✵✵✵ https://tinurli.com/2uwksH
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Tum Mile the movie download 1080p hd Watch the romantic disaster film online.md b/spaces/cihyFjudo/fairness-paper-search/Tum Mile the movie download 1080p hd Watch the romantic disaster film online.md
deleted file mode 100644
index f9650d85df6ce992582a0362777113f827034579..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Tum Mile the movie download 1080p hd Watch the romantic disaster film online.md
+++ /dev/null
@@ -1,6 +0,0 @@
-TumMilethemoviedownload1080phd Download File >>> https://tinurli.com/2uwhNK
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/codedog-ai/edu-assistant/webui/__init__.py b/spaces/codedog-ai/edu-assistant/webui/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec_internal.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec_internal.h
deleted file mode 100644
index 130a7dc3cd77805f4720cb581196250d249d77e7..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec_internal.h
+++ /dev/null
@@ -1,330 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_CODEC_INTERNAL_H
-#define AVCODEC_CODEC_INTERNAL_H
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-#include "codec.h"
-#include "config.h"
-
-/**
- * The codec is not known to be init-threadsafe (i.e. it might be unsafe
- * to initialize this codec and another codec concurrently, typically because
- * the codec calls external APIs that are not known to be thread-safe).
- * Therefore calling the codec's init function needs to be guarded with a lock.
- */
-#define FF_CODEC_CAP_NOT_INIT_THREADSAFE (1 << 0)
-/**
- * The codec allows calling the close function for deallocation even if
- * the init function returned a failure. Without this capability flag, a
- * codec does such cleanup internally when returning failures from the
- * init function and does not expect the close function to be called at
- * all.
- */
-#define FF_CODEC_CAP_INIT_CLEANUP (1 << 1)
-/**
- * Decoders marked with FF_CODEC_CAP_SETS_PKT_DTS want to set
- * AVFrame.pkt_dts manually. If the flag is set, decode.c won't overwrite
- * this field. If it's unset, decode.c tries to guess the pkt_dts field
- * from the input AVPacket.
- */
-#define FF_CODEC_CAP_SETS_PKT_DTS (1 << 2)
-/**
- * The decoder extracts and fills its parameters even if the frame is
- * skipped due to the skip_frame setting.
- */
-#define FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM (1 << 3)
-/**
- * The decoder sets the cropping fields in the output frames manually.
- * If this cap is set, the generic code will initialize output frame
- * dimensions to coded rather than display values.
- */
-#define FF_CODEC_CAP_EXPORTS_CROPPING (1 << 4)
-/**
- * Codec initializes slice-based threading with a main function
- */
-#define FF_CODEC_CAP_SLICE_THREAD_HAS_MF (1 << 5)
-/*
- * The codec supports frame threading and has inter-frame dependencies, so it
- * uses ff_thread_report/await_progress().
- */
-#define FF_CODEC_CAP_ALLOCATE_PROGRESS (1 << 6)
-/**
- * Codec handles avctx->thread_count == 0 (auto) internally.
- */
-#define FF_CODEC_CAP_AUTO_THREADS (1 << 7)
-/**
- * Codec handles output frame properties internally instead of letting the
- * internal logic derive them from AVCodecInternal.last_pkt_props.
- */
-#define FF_CODEC_CAP_SETS_FRAME_PROPS (1 << 8)
-/**
- * Codec supports embedded ICC profiles (AV_FRAME_DATA_ICC_PROFILE).
- */
-#define FF_CODEC_CAP_ICC_PROFILES (1 << 9)
-/**
- * The encoder has AV_CODEC_CAP_DELAY set, but does not actually have delay - it
- * only wants to be flushed at the end to update some context variables (e.g.
- * 2pass stats) or produce a trailing packet. Besides that it immediately
- * produces exactly one output packet per each input frame, just as no-delay
- * encoders do.
- */
-#define FF_CODEC_CAP_EOF_FLUSH (1 << 10)
-
-/**
- * FFCodec.codec_tags termination value
- */
-#define FF_CODEC_TAGS_END -1
-
-typedef struct FFCodecDefault {
- const char *key;
- const char *value;
-} FFCodecDefault;
-
-struct AVCodecContext;
-struct AVSubtitle;
-struct AVPacket;
-
-enum FFCodecType {
- /* The codec is a decoder using the decode callback;
- * audio and video codecs only. */
- FF_CODEC_CB_TYPE_DECODE,
- /* The codec is a decoder using the decode_sub callback;
- * subtitle codecs only. */
- FF_CODEC_CB_TYPE_DECODE_SUB,
- /* The codec is a decoder using the receive_frame callback;
- * audio and video codecs only. */
- FF_CODEC_CB_TYPE_RECEIVE_FRAME,
- /* The codec is an encoder using the encode callback;
- * audio and video codecs only. */
- FF_CODEC_CB_TYPE_ENCODE,
- /* The codec is an encoder using the encode_sub callback;
- * subtitle codecs only. */
- FF_CODEC_CB_TYPE_ENCODE_SUB,
- /* The codec is an encoder using the receive_packet callback;
- * audio and video codecs only. */
- FF_CODEC_CB_TYPE_RECEIVE_PACKET,
-};
-
-typedef struct FFCodec {
- /**
- * The public AVCodec. See codec.h for it.
- */
- AVCodec p;
-
- /**
- * Internal codec capabilities FF_CODEC_CAP_*.
- */
- unsigned caps_internal:29;
-
- /**
- * This field determines the type of the codec (decoder/encoder)
- * and also the exact callback cb implemented by the codec.
- * cb_type uses enum FFCodecType values.
- */
- unsigned cb_type:3;
-
- int priv_data_size;
- /**
- * @name Frame-level threading support functions
- * @{
- */
- /**
- * Copy necessary context variables from a previous thread context to the current one.
- * If not defined, the next thread will start automatically; otherwise, the codec
- * must call ff_thread_finish_setup().
- *
- * dst and src will (rarely) point to the same context, in which case memcpy should be skipped.
- */
- int (*update_thread_context)(struct AVCodecContext *dst, const struct AVCodecContext *src);
-
- /**
- * Copy variables back to the user-facing context
- */
- int (*update_thread_context_for_user)(struct AVCodecContext *dst, const struct AVCodecContext *src);
- /** @} */
-
- /**
- * Private codec-specific defaults.
- */
- const FFCodecDefault *defaults;
-
- /**
- * Initialize codec static data, called from av_codec_iterate().
- *
- * This is not intended for time consuming operations as it is
- * run for every codec regardless of that codec being used.
- */
- void (*init_static_data)(struct FFCodec *codec);
-
- int (*init)(struct AVCodecContext *);
-
- union {
- /**
- * Decode to an AVFrame.
- * cb is in this state if cb_type is FF_CODEC_CB_TYPE_DECODE.
- *
- * @param avctx codec context
- * @param[out] frame AVFrame for output
- * @param[out] got_frame_ptr decoder sets to 0 or 1 to indicate that
- * a non-empty frame was returned in frame.
- * @param[in] avpkt AVPacket containing the data to be decoded
- * @return amount of bytes read from the packet on success,
- * negative error code on failure
- */
- int (*decode)(struct AVCodecContext *avctx, struct AVFrame *frame,
- int *got_frame_ptr, struct AVPacket *avpkt);
- /**
- * Decode subtitle data to an AVSubtitle.
- * cb is in this state if cb_type is FF_CODEC_CB_TYPE_DECODE_SUB.
- *
- * Apart from that this is like the decode callback.
- */
- int (*decode_sub)(struct AVCodecContext *avctx, struct AVSubtitle *sub,
- int *got_frame_ptr, const struct AVPacket *avpkt);
- /**
- * Decode API with decoupled packet/frame dataflow.
- * cb is in this state if cb_type is FF_CODEC_CB_TYPE_RECEIVE_FRAME.
- *
- * This function is called to get one output frame. It should call
- * ff_decode_get_packet() to obtain input data.
- */
- int (*receive_frame)(struct AVCodecContext *avctx, struct AVFrame *frame);
- /**
- * Encode data to an AVPacket.
- * cb is in this state if cb_type is FF_CODEC_CB_TYPE_ENCODE
- *
- * @param avctx codec context
- * @param[out] avpkt output AVPacket
- * @param[in] frame AVFrame containing the input to be encoded
- * @param[out] got_packet_ptr encoder sets to 0 or 1 to indicate that a
- * non-empty packet was returned in avpkt.
- * @return 0 on success, negative error code on failure
- */
- int (*encode)(struct AVCodecContext *avctx, struct AVPacket *avpkt,
- const struct AVFrame *frame, int *got_packet_ptr);
- /**
- * Encode subtitles to a raw buffer.
- * cb is in this state if cb_type is FF_CODEC_CB_TYPE_ENCODE_SUB.
- */
- int (*encode_sub)(struct AVCodecContext *avctx, uint8_t *buf,
- int buf_size, const struct AVSubtitle *sub);
- /**
- * Encode API with decoupled frame/packet dataflow.
- * cb is in this state if cb_type is FF_CODEC_CB_TYPE_RECEIVE_PACKET.
- *
- * This function is called to get one output packet.
- * It should call ff_encode_get_frame() to obtain input data.
- */
- int (*receive_packet)(struct AVCodecContext *avctx, struct AVPacket *avpkt);
- } cb;
-
- int (*close)(struct AVCodecContext *);
-
- /**
- * Flush buffers.
- * Will be called when seeking
- */
- void (*flush)(struct AVCodecContext *);
-
- /**
- * Decoding only, a comma-separated list of bitstream filters to apply to
- * packets before decoding.
- */
- const char *bsfs;
-
- /**
- * Array of pointers to hardware configurations supported by the codec,
- * or NULL if no hardware supported. The array is terminated by a NULL
- * pointer.
- *
- * The user can only access this field via avcodec_get_hw_config().
- */
- const struct AVCodecHWConfigInternal *const *hw_configs;
-
- /**
- * List of supported codec_tags, terminated by FF_CODEC_TAGS_END.
- */
- const uint32_t *codec_tags;
-} FFCodec;
-
-#if CONFIG_SMALL
-#define CODEC_LONG_NAME(str) .p.long_name = NULL
-#else
-#define CODEC_LONG_NAME(str) .p.long_name = str
-#endif
-
-#if HAVE_THREADS
-#define UPDATE_THREAD_CONTEXT(func) \
- .update_thread_context = (func)
-#define UPDATE_THREAD_CONTEXT_FOR_USER(func) \
- .update_thread_context_for_user = (func)
-#else
-#define UPDATE_THREAD_CONTEXT(func) \
- .update_thread_context = NULL
-#define UPDATE_THREAD_CONTEXT_FOR_USER(func) \
- .update_thread_context_for_user = NULL
-#endif
-
-#if FF_API_OLD_CHANNEL_LAYOUT
-#define CODEC_OLD_CHANNEL_LAYOUTS(...) CODEC_OLD_CHANNEL_LAYOUTS_ARRAY(((const uint64_t[]) { __VA_ARGS__, 0 }))
-#if defined(__clang__)
-#define CODEC_OLD_CHANNEL_LAYOUTS_ARRAY(array) \
- FF_DISABLE_DEPRECATION_WARNINGS \
- .p.channel_layouts = (array), \
- FF_ENABLE_DEPRECATION_WARNINGS
-#else
-#define CODEC_OLD_CHANNEL_LAYOUTS_ARRAY(array) .p.channel_layouts = (array),
-#endif
-#else
-/* This is only provided to allow to test disabling FF_API_OLD_CHANNEL_LAYOUT
- * without removing all the FF_API_OLD_CHANNEL_LAYOUT codeblocks.
- * It is of course still expected to be removed when FF_API_OLD_CHANNEL_LAYOUT
- * will be finally removed (along with all usages of these macros). */
-#define CODEC_OLD_CHANNEL_LAYOUTS(...)
-#define CODEC_OLD_CHANNEL_LAYOUTS_ARRAY(array)
-#endif
-
-#define FF_CODEC_DECODE_CB(func) \
- .cb_type = FF_CODEC_CB_TYPE_DECODE, \
- .cb.decode = (func)
-#define FF_CODEC_DECODE_SUB_CB(func) \
- .cb_type = FF_CODEC_CB_TYPE_DECODE_SUB, \
- .cb.decode_sub = (func)
-#define FF_CODEC_RECEIVE_FRAME_CB(func) \
- .cb_type = FF_CODEC_CB_TYPE_RECEIVE_FRAME, \
- .cb.receive_frame = (func)
-#define FF_CODEC_ENCODE_CB(func) \
- .cb_type = FF_CODEC_CB_TYPE_ENCODE, \
- .cb.encode = (func)
-#define FF_CODEC_ENCODE_SUB_CB(func) \
- .cb_type = FF_CODEC_CB_TYPE_ENCODE_SUB, \
- .cb.encode_sub = (func)
-#define FF_CODEC_RECEIVE_PACKET_CB(func) \
- .cb_type = FF_CODEC_CB_TYPE_RECEIVE_PACKET, \
- .cb.receive_packet = (func)
-
-static av_always_inline const FFCodec *ffcodec(const AVCodec *codec)
-{
- return (const FFCodec*)codec;
-}
-
-#endif /* AVCODEC_CODEC_INTERNAL_H */
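The deleted codec_internal.h above documents the FFCodec wrapper struct and the FF_CODEC_*_CB helper macros but never shows them in use. The following minimal sketch illustrates how a decoder definition composes them; every "foo"-named identifier (FooContext, foo_init, foo_close, foo_decode, ff_foo_decoder) is a hypothetical placeholder introduced here for illustration, not an actual FFmpeg codec.

/* Illustrative only: a minimal decoder registration built from the FFCodec API shown above. */
#include "avcodec.h"
#include "codec_internal.h"

typedef struct FooContext {
    int frame_count;                      /* hypothetical private decoder state */
} FooContext;

static int foo_init(AVCodecContext *avctx)
{
    return 0;                             /* allocate/validate private state here */
}

static int foo_close(AVCodecContext *avctx)
{
    return 0;                             /* free private state here */
}

static int foo_decode(AVCodecContext *avctx, AVFrame *frame,
                      int *got_frame_ptr, AVPacket *avpkt)
{
    *got_frame_ptr = 0;                   /* a real decoder would fill `frame` and set this to 1 */
    return avpkt->size;                   /* report the whole packet as consumed */
}

const FFCodec ff_foo_decoder = {
    .p.name         = "foo",              /* hypothetical codec name */
    CODEC_LONG_NAME("Hypothetical Foo decoder"),
    .p.type         = AVMEDIA_TYPE_VIDEO,
    .p.id           = AV_CODEC_ID_NONE,   /* placeholder; a real codec uses its own codec ID */
    .priv_data_size = sizeof(FooContext),
    .init           = foo_init,
    .close          = foo_close,
    FF_CODEC_DECODE_CB(foo_decode),       /* sets cb_type and cb.decode together */
    .caps_internal  = FF_CODEC_CAP_INIT_CLEANUP,
};

In FFmpeg itself such FFCodec structs are collected in libavcodec/allcodecs.c; the sketch above only shows how the macros and struct fields fit together under the header's declared signatures.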
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cos_tablegen.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cos_tablegen.c
deleted file mode 100644
index 7206aad5dd86538b04446ea3303d1fc7b559e33c..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cos_tablegen.c
+++ /dev/null
@@ -1,80 +0,0 @@
-/*
- * Generate a header file for hardcoded ff_cos_* tables
- *
- * Copyright (c) 2009 Reimar Döffinger
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdio.h>
-#include <string.h>
-#include <math.h>
-
-#include "libavutil/mathematics.h"
-
-#define BITS 17
-#define FLOATFMT "%.18e"
-#define FIXEDFMT "%6d"
-
-static int clip_f15(int v)
-{
- return v < -32767 ? -32767 :
- v > 32767 ? 32767 :
- v;
-}
-
-static void printval(double val, int fixed)
-{
- if (fixed) {
- /* lrint() isn't always available, so round and cast manually. */
- double new_val = val * (double) (1 << 15);
-
- new_val = new_val >= 0 ? floor(new_val + 0.5) : ceil(new_val - 0.5);
-
- printf(" "FIXEDFMT",", clip_f15((long int) new_val));
- } else {
- printf(" "FLOATFMT",", val);
- }
-}
-
-int main(int argc, char *argv[])
-{
- int i, j;
- int do_sin = argc > 1 && !strcmp(argv[1], "sin");
- int fixed = argc > 1 && strstr(argv[1], "fixed");
- double (*func)(double) = do_sin ? sin : cos;
-
- printf("/* This file was automatically generated. */\n");
- printf("#define FFT_FLOAT %d\n", !fixed);
- printf("#include \"libavcodec/%s\"\n", do_sin ? "rdft.h" : "fft.h");
- for (i = 4; i <= BITS; i++) {
- int m = 1 << i;
- double freq = 2*M_PI/m;
- printf("%s(%i) = {\n ", do_sin ? "SINTABLE" : "COSTABLE", m);
- for (j = 0; j < m/2 - 1; j++) {
- int idx = j > m/4 ? m/2 - j : j;
- if (do_sin && j >= m/4)
- idx = m/4 - j;
- printval(func(idx*freq), fixed);
- if ((j & 3) == 3)
- printf("\n ");
- }
- printval(func(do_sin ? -(m/4 - 1)*freq : freq), fixed);
- printf("\n};\n");
- }
- return 0;
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hca_data.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hca_data.h
deleted file mode 100644
index 7093221c2acff090564835d977a5c3418ac7cf10..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hca_data.h
+++ /dev/null
@@ -1,176 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-
-#ifndef AVCODEC_HCA_DATA_H
-#define AVCODEC_HCA_DATA_H
-
-#include <stdint.h>
-
-static const uint8_t max_bits_table[] = {
- 0, 2, 3, 3, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12,
-};
-
-static const uint8_t quant_spectrum_bits[] =
-{
- 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
- 1,1,2,2,0,0,0,0,0,0,0,0,0,0,0,0,
- 2,2,2,2,2,2,3,3,0,0,0,0,0,0,0,0,
- 2,2,3,3,3,3,3,3,0,0,0,0,0,0,0,0,
- 3,3,3,3,3,3,3,3,3,3,3,3,3,3,4,4,
- 3,3,3,3,3,3,3,3,3,3,4,4,4,4,4,4,
- 3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,
- 3,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,
-};
-
-static const int8_t quant_spectrum_value[] =
-{
- +0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,
- +0,+0,+1,-1,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,+0,
- +0,+0,+1,+1,-1,-1,+2,-2,+0,+0,+0,+0,+0,+0,+0,+0,
- +0,+0,+1,-1,+2,-2,+3,-3,+0,+0,+0,+0,+0,+0,+0,+0,
- +0,+0,+1,+1,-1,-1,+2,+2,-2,-2,+3,+3,-3,-3,+4,-4,
- +0,+0,+1,+1,-1,-1,+2,+2,-2,-2,+3,-3,+4,-4,+5,-5,
- +0,+0,+1,+1,-1,-1,+2,-2,+3,-3,+4,-4,+5,-5,+6,-6,
- +0,+0,+1,-1,+2,-2,+3,-3,+4,-4,+5,-5,+6,-6,+7,-7,
-};
-
-static const uint8_t scale_table[] =
-{
- 15, 14, 14, 14, 14, 14, 14, 13, 13,
- 13, 13, 13, 13, 12, 12, 12, 12,
- 12, 12, 11, 11, 11, 11, 11, 11,
- 10, 10, 10, 10, 10, 10, 10, 9,
- 9, 9, 9, 9, 9, 8, 8, 8,
- 8, 8, 8, 7, 6, 6, 5, 4,
- 4, 4, 3, 3, 3, 2, 2, 2,
- 2, 1,
-};
-
-static const float window[128] =
-{
- 0.000690534, 0.00197623, 0.00367386, 0.00572424, 0.0080967, 0.0107732, 0.0137425, 0.0169979,
- 0.0205353, 0.0243529, 0.0284505, 0.0328291, 0.0374906, 0.0424379, 0.0476744, 0.0532043,
- 0.0590321, 0.0651629, 0.071602, 0.0783552, 0.0854285, 0.092828, 0.10056, 0.108631,
- 0.117048, 0.125817, 0.134944, 0.144437, 0.1543, 0.164539, 0.175161, 0.186169,
- 0.197569, 0.209363, 0.221555, 0.234145, 0.247136, 0.260526, 0.274313, 0.288493,
- 0.303062, 0.318012, 0.333333, 0.349015, 0.365044, 0.381403, 0.398073, 0.415034,
- 0.43226, 0.449725, 0.4674, 0.485251, 0.503245, 0.521344, 0.539509, 0.557698,
- 0.575869, 0.593978, 0.611981, 0.629831, 0.647486, 0.6649, 0.682031, 0.698838,
- 0.71528, 0.731323, 0.746932, 0.762077, 0.776732, 0.790873, 0.804481, 0.817542,
- 0.830044, 0.84198, 0.853347, 0.864144, 0.874375, 0.884046, 0.893167, 0.901749,
- 0.909806, 0.917354, 0.924409, 0.93099, 0.937117, 0.942809, 0.948087, 0.952971,
- 0.957482, 0.961641, 0.965467, 0.968981, 0.972202, 0.975148, 0.977838, 0.980289,
- 0.982518, 0.98454, 0.986371, 0.988024, 0.989514, 0.990853, 0.992053, 0.993126,
- 0.994082, 0.994931, 0.995682, 0.996344, 0.996926, 0.997433, 0.997875, 0.998256,
- 0.998584, 0.998863, 0.999099, 0.999297, 0.999461, 0.999595, 0.999703, 0.999789,
- 0.999856, 0.999906, 0.999942, 0.999967, 0.999984, 0.999993, 0.999998, 1.0,
-};
-
-static const float intensity_ratio_table[] =
-{
- 2.0, 1.85714, 1.71429, 1.57143, 1.42857, 1.28571, 1.14286, 1.0,
- 0.857143, 0.714286, 0.571429, 0.428571, 0.285714, 0.142857, 0.0, 0.0,
-};
-
-static const float scale_conversion_table[] =
-{
- 0, 0, 1.87066e-08, 2.49253e-08, 3.32113e-08, 4.42518e-08, 5.89626e-08, 7.85637e-08, 1.04681e-07,
- 1.3948e-07, 1.85848e-07, 2.4763e-07, 3.2995e-07, 4.39636e-07, 5.85785e-07, 7.80519e-07, 1.03999e-06,
- 1.38572e-06, 1.84637e-06, 2.46017e-06, 3.27801e-06, 4.36772e-06, 5.8197e-06, 7.75435e-06, 1.03321e-05,
- 1.37669e-05, 1.83435e-05, 2.44414e-05, 3.25665e-05, 4.33927e-05, 5.78179e-05, 7.70384e-05, 0.000102648,
- 0.000136772, 0.00018224, 0.000242822, 0.000323544, 0.000431101, 0.000574413, 0.000765366, 0.0010198,
- 0.00135881, 0.00181053, 0.0024124, 0.00321437, 0.00428293, 0.00570671, 0.00760381, 0.0101316,
- 0.0134996, 0.0179873, 0.0239669, 0.0319343, 0.0425503, 0.0566954, 0.0755428, 0.100656,
- 0.134117, 0.178702, 0.238108, 0.317263, 0.422731, 0.563261, 0.750507,
- 1.0, 1.33243, 1.77538, 2.36557, 3.15196, 4.19978, 5.59592, 7.45618,
- 9.93486, 13.2375, 17.6381, 23.5016, 31.3143, 41.7242, 55.5947, 74.0762,
- 98.7015, 131.513, 175.232, 233.485, 311.103, 414.524, 552.326, 735.937,
- 980.586, 1306.56, 1740.91, 2319.64, 3090.77, 4118.24, 5487.28, 7311.43,
- 9741.98, 12980.5, 17295.7, 23045.3, 30706.4, 40914.2, 54515.4, 72638,
- 96785.3, 128960, 171830, 228952, 305064, 406477, 541603, 721649,
- 961548, 1.2812e+06, 1.70711e+06, 2.27461e+06, 3.03076e+06, 4.03829e+06, 5.38075e+06, 7.16948e+06,
- 9.55285e+06, 1.27285e+07, 1.69599e+07, 2.25979e+07, 3.01102e+07, 4.01198e+07, 5.3457e+07, 0,
-};
-
-static const int scale_conv_bias = 64;
-
-static const float dequantizer_scaling_table[] =
-{
- 1.58838e-07, 2.11641e-07, 2.81998e-07, 3.75743e-07, 5.00652e-07, 6.67085e-07, 8.88846e-07, 1.18433e-06,
- 1.57804e-06, 2.10263e-06, 2.80161e-06, 3.73296e-06, 4.97391e-06, 6.6274e-06, 8.83057e-06, 1.17661e-05,
- 1.56776e-05, 2.08893e-05, 2.78336e-05, 3.70864e-05, 4.94151e-05, 6.58423e-05, 8.77305e-05, 0.000116895,
- 0.000155755, 0.000207533, 0.000276523, 0.000368448, 0.000490933, 0.000654135, 0.00087159, 0.00116134,
- 0.0015474, 0.00206181, 0.00274722, 0.00366048, 0.00487735, 0.00649874, 0.00865913, 0.0115377,
- 0.0153732, 0.0204838, 0.0272932, 0.0363664, 0.0484558, 0.0645641, 0.0860272, 0.114626,
- 0.152731, 0.203503, 0.271155, 0.361295, 0.481401, 0.641435, 0.854669, 1.13879,
- 1.51736, 2.02178, 2.69388, 3.58942, 4.78266, 6.37257, 8.49102, 11.3137,
-};
-
-static const float quant_step_size[] =
-{
- 0.0, 0.666667, 0.4, 0.285714, 0.222222, 0.181818, 0.153846, 0.133333, 0.0645161,
- 0.031746, 0.015748, 0.00784314, 0.00391389, 0.00195503, 0.00097704, 0.000488401,
-};
-
-static const uint8_t ath_base_curve[656] =
-{
- 0x78,0x5F,0x56,0x51,0x4E,0x4C,0x4B,0x49,0x48,0x48,0x47,0x46,0x46,0x45,0x45,0x45,
- 0x44,0x44,0x44,0x44,0x43,0x43,0x43,0x43,0x43,0x43,0x42,0x42,0x42,0x42,0x42,0x42,
- 0x42,0x42,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x40,0x40,0x40,0x40,
- 0x40,0x40,0x40,0x40,0x40,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,
- 0x3F,0x3F,0x3F,0x3E,0x3E,0x3E,0x3E,0x3E,0x3E,0x3D,0x3D,0x3D,0x3D,0x3D,0x3D,0x3D,
- 0x3C,0x3C,0x3C,0x3C,0x3C,0x3C,0x3C,0x3C,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,
- 0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,
- 0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3B,0x3C,0x3C,0x3C,0x3C,0x3C,0x3C,0x3C,0x3C,
- 0x3D,0x3D,0x3D,0x3D,0x3D,0x3D,0x3D,0x3D,0x3E,0x3E,0x3E,0x3E,0x3E,0x3E,0x3E,0x3F,
- 0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,0x3F,
- 0x3F,0x3F,0x3F,0x3F,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,
- 0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x40,0x41,0x41,0x41,0x41,0x41,0x41,0x41,
- 0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x41,
- 0x41,0x41,0x41,0x41,0x41,0x41,0x41,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,
- 0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x42,0x43,0x43,0x43,
- 0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x43,0x44,0x44,
- 0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x45,0x45,0x45,0x45,
- 0x45,0x45,0x45,0x45,0x45,0x45,0x45,0x45,0x46,0x46,0x46,0x46,0x46,0x46,0x46,0x46,
- 0x46,0x46,0x47,0x47,0x47,0x47,0x47,0x47,0x47,0x47,0x47,0x47,0x48,0x48,0x48,0x48,
- 0x48,0x48,0x48,0x48,0x49,0x49,0x49,0x49,0x49,0x49,0x49,0x49,0x4A,0x4A,0x4A,0x4A,
- 0x4A,0x4A,0x4A,0x4A,0x4B,0x4B,0x4B,0x4B,0x4B,0x4B,0x4B,0x4C,0x4C,0x4C,0x4C,0x4C,
- 0x4C,0x4D,0x4D,0x4D,0x4D,0x4D,0x4D,0x4E,0x4E,0x4E,0x4E,0x4E,0x4E,0x4F,0x4F,0x4F,
- 0x4F,0x4F,0x4F,0x50,0x50,0x50,0x50,0x50,0x51,0x51,0x51,0x51,0x51,0x52,0x52,0x52,
- 0x52,0x52,0x53,0x53,0x53,0x53,0x54,0x54,0x54,0x54,0x54,0x55,0x55,0x55,0x55,0x56,
- 0x56,0x56,0x56,0x57,0x57,0x57,0x57,0x57,0x58,0x58,0x58,0x59,0x59,0x59,0x59,0x5A,
- 0x5A,0x5A,0x5A,0x5B,0x5B,0x5B,0x5B,0x5C,0x5C,0x5C,0x5D,0x5D,0x5D,0x5D,0x5E,0x5E,
- 0x5E,0x5F,0x5F,0x5F,0x60,0x60,0x60,0x61,0x61,0x61,0x61,0x62,0x62,0x62,0x63,0x63,
- 0x63,0x64,0x64,0x64,0x65,0x65,0x66,0x66,0x66,0x67,0x67,0x67,0x68,0x68,0x68,0x69,
- 0x69,0x6A,0x6A,0x6A,0x6B,0x6B,0x6B,0x6C,0x6C,0x6D,0x6D,0x6D,0x6E,0x6E,0x6F,0x6F,
- 0x70,0x70,0x70,0x71,0x71,0x72,0x72,0x73,0x73,0x73,0x74,0x74,0x75,0x75,0x76,0x76,
- 0x77,0x77,0x78,0x78,0x78,0x79,0x79,0x7A,0x7A,0x7B,0x7B,0x7C,0x7C,0x7D,0x7D,0x7E,
- 0x7E,0x7F,0x7F,0x80,0x80,0x81,0x81,0x82,0x83,0x83,0x84,0x84,0x85,0x85,0x86,0x86,
- 0x87,0x88,0x88,0x89,0x89,0x8A,0x8A,0x8B,0x8C,0x8C,0x8D,0x8D,0x8E,0x8F,0x8F,0x90,
- 0x90,0x91,0x92,0x92,0x93,0x94,0x94,0x95,0x95,0x96,0x97,0x97,0x98,0x99,0x99,0x9A,
- 0x9B,0x9B,0x9C,0x9D,0x9D,0x9E,0x9F,0xA0,0xA0,0xA1,0xA2,0xA2,0xA3,0xA4,0xA5,0xA5,
- 0xA6,0xA7,0xA7,0xA8,0xA9,0xAA,0xAA,0xAB,0xAC,0xAD,0xAE,0xAE,0xAF,0xB0,0xB1,0xB1,
- 0xB2,0xB3,0xB4,0xB5,0xB6,0xB6,0xB7,0xB8,0xB9,0xBA,0xBA,0xBB,0xBC,0xBD,0xBE,0xBF,
- 0xC0,0xC1,0xC1,0xC2,0xC3,0xC4,0xC5,0xC6,0xC7,0xC8,0xC9,0xC9,0xCA,0xCB,0xCC,0xCD,
- 0xCE,0xCF,0xD0,0xD1,0xD2,0xD3,0xD4,0xD5,0xD6,0xD7,0xD8,0xD9,0xDA,0xDB,0xDC,0xDD,
- 0xDE,0xDF,0xE0,0xE1,0xE2,0xE3,0xE4,0xE5,0xE6,0xE7,0xE8,0xE9,0xEA,0xEB,0xED,0xEE,
- 0xEF,0xF0,0xF1,0xF2,0xF3,0xF4,0xF5,0xF7,0xF8,0xF9,0xFA,0xFB,0xFC,0xFD,0xFF,0xFF,
-};
-
-#endif /* AVCODEC_HCA_DATA_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download 8 Ball Pool on Android with Extended Stick Guideline Mod for Free A Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Download 8 Ball Pool on Android with Extended Stick Guideline Mod for Free A Guide.md
deleted file mode 100644
index 2beb70f71eb9dfc3e0ac32120e7b71ad42534707..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download 8 Ball Pool on Android with Extended Stick Guideline Mod for Free A Guide.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-How to Download 8 Ball Pool (Mod Extended Stick Guideline) Free on Android
-If you love playing pool games on your Android device, you might have heard of 8 Ball Pool , one of the most popular and addictive pool games available on Google Play. But did you know that you can also download a modded version of this game that gives you an extended stick guideline that helps you aim and shoot better? In this article, we will show you how to download and install this mod for free on your Android device, and how to play 8 Ball Pool with more fun and excitement.
- What is 8 Ball Pool?
-8 Ball Pool is an online pool game where you can play against other players from around the world in 1-vs-1 matches or tournaments. You can also challenge your friends online or offline, customize your cue and table, and win coins and exclusive items. The game features realistic physics, stunning graphics, and easy controls. You can refine your skills in the practice arena, or join millions of players in the competitive leagues. Whether you are a beginner or a pro, there is always a challenge waiting for you in 8 Ball Pool.
-download 8 ball pool (mod extended stick guideline) free on android Download File ★★★★★ https://urlca.com/2uOfgc
- What is Mod Extended Stick Guideline?
-Mod Extended Stick Guideline is a modification of 8 Ball Pool that gives you an extra-long stick guideline that shows you where your cue ball will go after hitting another ball. This makes it easier for you to aim and shoot with more accuracy and precision. You can also adjust the length and color of the guideline according to your preference. With this mod, you can improve your skills, win more matches, and earn more coins and rewards.
- How to Download and Install the Modded APK File
-To download and install the modded APK file of 8 Ball Pool, you need to follow these steps:
- Step 1: Allow Unknown Apps on Android
-Before you can install an APK file from outside the Google Play Store, you need to enable the option to install unknown apps on your Android device. To do this, go to your device settings, then tap on security or privacy, then look for the option to allow unknown apps or sources, and turn it on. You may also need to grant permission to the browser or file manager app that you will use to download the APK file.
- Step 2: Download the Modded APK File
-Next, you need to find and download the modded APK file of 8 Ball Pool from a reputable website. There are many websites that offer modded APK files, but some of them may contain malware or viruses that can harm your device. To avoid this, you should only download from trusted sources that have positive reviews and ratings from other users. One of the websites that we recommend is [APKPure], which is a safe and reliable platform for downloading APK files. To download the modded APK file of 8 Ball Pool from APKPure, follow these steps:
-
-Go to [APKPure.com] on your browser and search for 8 Ball Pool.
-Tap on the game icon and scroll down to find the modded version with the extended stick guideline feature.
-Tap on the download button and wait for the file to be downloaded to your device.
-
- Step 3: Install the Modded APK File
-After you have downloaded the modded APK file, you need to install it on your device using a file manager app. A file manager app is an app that lets you access and manage the files and folders on your device. You can use any file manager app that you have on your device, or you can download one from Google Play, such as [ES File Explorer] or [File Manager]. To install the modded APK file using a file manager app, follow these steps:
-
-Open the file manager app and locate the modded APK file that you downloaded. It should be in the downloads folder or in the folder where you saved it.
-Tap on the modded APK file and select install. You may need to confirm some permissions or warnings before proceeding.
-Wait for the installation to finish and then tap on open to launch the game.
-
- How to Play 8 Ball Pool with Mod Extended Stick Guideline
-Now that you have installed the modded version of 8 Ball Pool, you can start playing the game with more fun and excitement. Here are some tips on how to use the mod and enjoy the game:
- How to Access the Mod Features
-To access the mod features, you need to go to the game settings and turn them on or off as you wish. To do this, tap on the gear icon on the top right corner of the screen, then tap on mod features. You will see a list of options that you can toggle on or off, such as extended stick guideline, unlimited coins, unlimited cash, anti-ban, etc. You can also change the length and color of the guideline by tapping on them. Remember to save your changes before exiting the settings.
-How to get 8 ball pool mod apk with long lines for free
-8 ball pool hack apk download latest version with extended stick
-Download 8 ball pool modded apk for android and enjoy unlimited coins
-Best site to download 8 ball pool mod apk with long cue guideline
-8 ball pool mod apk free download no root required
-8 ball pool cheat apk download for android with extended stick feature
-Download 8 ball pool mod apk unlimited money and long lines
-8 ball pool mod apk download for android with anti ban protection
-8 ball pool hack mod apk download with long stick guideline and auto win
-Download 8 ball pool mod apk with extended cue and all cues unlocked
-8 ball pool mod apk download link for android with long lines hack
-8 ball pool mod apk free download for android with extended stick and level up fast
-Download 8 ball pool mod apk with long lines and low ban risk
-8 ball pool modded apk download for android with extended stick and VIP features
-Download 8 ball pool hack apk with long cue guideline and unlimited cash
-8 ball pool mod apk download for android with long lines and no ads
-Download 8 ball pool mod apk with extended stick and mega mod menu
-8 ball pool hack mod apk free download for android with long lines and easy win
-Download 8 ball pool mod apk with long cue and all tables unlocked
-8 ball pool modded apk free download for android with extended stick and online mode
-Download 8 ball pool hack apk with long lines and unlimited spins
-8 ball pool mod apk download for android with extended stick and all achievements unlocked
-Download 8 ball pool mod apk with long lines and high win rate
-8 ball pool hacked apk download for android with extended stick and custom cues
-Download 8 ball pool mod apk with extended cue and all game modes unlocked
-8 ball pool modded apk download link for android with long lines and no verification
-Download 8 ball pool hack apk with extended stick and unlimited gems
-8 ball pool mod apk download for android with long lines and all rewards unlocked
-Download 8 ball pool mod apk with extended cue and all leagues unlocked
-8 ball pool hacked apk free download for android with long lines and no survey
- How to Use the Extended Stick Guideline
-To use the extended stick guideline, you need to activate it in the game settings first. Then, when you are in a match, you will see a long line that shows you where your cue ball will go after hitting another ball. You can use this line to aim and shoot with more accuracy and precision. You can also adjust the angle and power of your shot by dragging your finger on the screen. The longer you drag, the more power you will apply. To shoot, simply release your finger.
- How to Challenge Your Friends Online
-To challenge your friends online, you need to connect your game account with Facebook or Google Play Games. To do this, tap on the profile icon on the bottom left corner of the screen, then tap on connect. You will be asked to sign in with your Facebook or Google Play Games account. Once you have connected your account, you can invite and play with your friends in multiplayer mode. To do this, tap on the play with friends icon on the main menu, then choose a game mode and a table. You can also chat with your friends using the chat feature in the game.
- Benefits of Playing 8 Ball Pool with Mod Extended Stick Guideline
-Playing 8 Ball Pool with mod extended stick guideline has many benefits, such as:
-
-You can improve your skills and learn new tricks and strategies by using the extended stick guideline.
-You can have more fun and excitement by playing with more confidence and ease.
-You can win more matches and earn more coins and rewards by using the mod features.
-You can impress your friends and other players by showing off your skills and achievements.
-
- Risks of Playing 8 Ball Pool with Mod Extended Stick Guideline
-However, playing 8 Ball Pool with mod extended stick guideline also has some risks, such as:
-
-You may face security issues or compatibility problems by downloading and installing an APK file from an unknown source.
-You may lose the original features and updates of the game by installing a modded version.
-You may get banned or reported by other players or the game developers for using a modded version.
-
-Therefore, you should be careful and responsible when playing 8 Ball Pool with mod extended stick guideline. You should only download and install the modded APK file from a trusted website, and you should not use the mod features to cheat or abuse other players. You should also respect the rules and terms of service of the game, and enjoy it in a fair and ethical way.
- Conclusion
-In conclusion, 8 Ball Pool is a great pool game that you can play on your Android device. You can also download and install a modded version of this game that gives you an extended stick guideline that helps you aim and shoot better. In this article, we have shown you how to download and install this mod for free on your Android device, and how to play 8 Ball Pool with more fun and excitement. We have also discussed the benefits and risks of playing with this mod. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-Here are some frequently asked questions and answers about 8 Ball Pool and Mod Extended Stick Guideline:
-
-Is 8 Ball Pool free to play? Yes, 8 Ball Pool is free to play on Google Play. However, you can also purchase in-game items and currency with real money if you want to enhance your gameplay.
-Is Mod Extended Stick Guideline safe to use? Mod Extended Stick Guideline is safe to use if you download it from a reputable website that does not contain malware or viruses. However, you should always be careful when downloading and installing APK files from unknown sources, as they may harm your device or compromise your data.
-Can I play 8 Ball Pool offline? No, 8 Ball Pool requires an internet connection to play online with other players or challenge your friends. However, you can play in the practice arena offline to improve your skills.
-How can I update 8 Ball Pool? You can update 8 Ball Pool by going to Google Play and tapping on the update button. However, if you have installed a modded version of the game, you may not be able to update it from Google Play. You may need to uninstall the modded version and install the original version again, or find another modded version that is compatible with the latest update.
-How can I contact 8 Ball Pool support? You can contact 8 Ball Pool support by going to the game settings and tapping on help & support. You can also visit their official website or social media pages for more information and assistance.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Driving Zone Russia - .md b/spaces/congsaPfin/Manga-OCR/logs/Driving Zone Russia - .md
deleted file mode 100644
index aa7768c28536e2a8413b0cc093e2d818c5cf5ef1..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Driving Zone Russia - .md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-Driving Zone: Russia - A Review of the Popular Racing Game
- If you are a fan of racing games, you might have heard of Driving Zone: Russia, a simulator of street racing on the cars produced in Russia. This game has been downloaded over 10 million times on Google Play Store and has received positive reviews from many players. But what makes this game so appealing and addictive? In this article, we will take a closer look at the features, gameplay, and tips of Driving Zone: Russia, and see why it is one of the best racing games available for Android devices.
-driving zone russia взлом Download ✶✶✶ https://urlca.com/2uOfmO
- Introduction
- What is Driving Zone: Russia?
- Driving Zone: Russia is a racing game developed by AveCreation, a studio that specializes in creating realistic car simulators. The game was released in 2016 and has been updated regularly with new cars, tracks, and features. The game allows you to choose from classic cars produced in Russia, such as Lada, Volga, Moskvich, and ZAZ, as well as modern models from other countries, such as BMW, Mercedes-Benz, Audi, and Porsche. Each car has its own character and a real engine sound, and you can customize its color, suspension, and rims to suit your preference.
- Why is it popular among racing game fans?
- One of the reasons why Driving Zone: Russia is popular among racing game fans is that it offers a realistic and immersive driving experience. The game has realistic car physics that simulate the behavior of real vehicles on different road surfaces and weather conditions. The game also has modern and beautiful graphics that create a stunning visual effect. You can see the details of the car body and interior, the reflections of the sun and the headlights on the windshield, the shadows of the trees and buildings on the road, and the smoke and dust from the tires. The game also has a first-person view or interior camera that lets you feel like you are behind the wheel of a real car.
- Features of Driving Zone: Russia
- Realistic car physics and graphics
- As mentioned above, Driving Zone: Russia has realistic car physics that make the game challenging and fun. You have to control the speed, steering, braking, and acceleration of your car according to the road conditions and traffic rules. You also have to deal with the inertia, traction, skidding, drifting, and collision effects that affect your car's performance. The game also has realistic graphics that create a vivid and lifelike environment. You can see the reflections, shadows, smoke, dust, raindrops, snowflakes, and other effects that enhance the realism of the game.
- Variety of Russian cars and tracks
- Another feature that makes Driving Zone: Russia stand out from other racing games is that it offers a variety of Russian cars and tracks to choose from. You can select from over 20 different cars produced in Russia, ranging from classic models to modern ones. Each car has its own specifications, such as power, torque, weight, top speed, acceleration, handling, braking, fuel consumption, and durability. You can also choose from four unique tracks with different weather conditions, road width, and number of lanes. You can race on a street circuit in a busy city, a suburban road with picturesque hills and forests around, a desert highway with hot and dry climate, or a winter track with slippery ice and snow.
- Dynamic time of day and weather conditions
- A feature that adds more variety and challenge to Driving Zone: Russia is that it has a dynamic time of day and weather conditions system. You can choose from any time of day to start your race, from dawn to dusk, and see how the lighting and shadows change accordingly. You can also experience different weather conditions, such as sunny, cloudy, rainy, foggy, or snowy, and see how they affect your visibility and driving. The game has a realistic day-night cycle and weather system that make each race unique and unpredictable.
- Online and offline game modes
- Driving Zone: Russia also offers two game modes to suit your preference and internet connection: online and offline. In the online mode, you can compete with other players from around the world in real time. You can join a random race or create your own room with your friends. You can also chat with other players and see their rankings and statistics. In the offline mode, you can enjoy the game without any internet connection. You can race against the AI-controlled cars or just drive freely without any rules or limits. You can also adjust the difficulty level and traffic density to match your skill and mood.
- Tips and tricks for Driving Zone: Russia
- How to earn more points and unlock new cars
- One of the goals of Driving Zone: Russia is to earn as many points as possible and unlock new cars. You can earn points by driving fast, overtaking other cars, drifting, performing stunts, and avoiding crashes and traffic violations. The more points you earn, the higher your rank will be, and the more cars you will be able to access. You can also earn coins by watching ads or making in-app purchases. You can use coins to buy new cars or upgrade your existing ones. You can improve the engine, transmission, brakes, tires, and nitro of your car to increase its performance and durability.
- How to avoid crashes and traffic violations
- Another goal of Driving Zone: Russia is to avoid crashes and traffic violations that can damage your car and reduce your points. You have to drive carefully and follow the traffic rules, such as speed limits, traffic lights, signs, signals, and lane markings. You also have to watch out for other cars, pedestrians, animals, obstacles, and police on the road. If you crash into something or break a traffic rule, you will lose points and money, and your car will be damaged. You can repair your car by using coins or watching ads, but it is better to prevent crashes and violations in the first place.
- How to record and share your gameplay videos
- A feature that makes Driving Zone: Russia more fun and social is that it allows you to record and share your gameplay videos with other players. You can use the built-in video recorder to capture your best moments on the road, such as high-speed chases, drifts, stunts, crashes, or funny situations. You can also edit your videos by adding filters, stickers, music, text, or voice-over. You can then share your videos on social media platforms, such as YouTube, Facebook, Instagram, or TikTok, or send them to your friends via WhatsApp or Telegram. You can also watch other players' videos and rate them with likes or dislikes.
- Conclusion
- Summary of the main points
- In conclusion, Driving Zone: Russia is a racing game that simulates street racing on the cars produced in Russia. The game has realistic car physics and graphics that create a realistic and immersive driving experience. The game also has a variety of Russian cars and tracks to choose from, as well as dynamic time of day and weather conditions that add more variety and challenge to the game. The game also has online and offline game modes that let you compete with other players or enjoy the game without any internet connection.
- Recommendation and rating
- If you are looking for a racing game that is realistic, fun, challenging, and social, you should definitely try Driving Zone: Russia. It is one of the best racing games available for Android devices, and it will keep you entertained and engaged for hours. You can download Driving Zone: Russia for free from Google Play Store and enjoy the thrill of street racing on the cars produced in Russia. I would rate this game 4.5 out of 5 stars, and I highly recommend it to anyone who loves racing games.
- FAQs
- Here are some frequently asked questions about Driving Zone: Russia:
- How to hack Driving Zone: Russia?
- Some players might be tempted to hack Driving Zone: Russia to get unlimited coins, points, or cars. However, this is not advisable, as it can ruin the fun and challenge of the game, as well as expose your device to malware or viruses. Moreover, hacking the game can result in a ban from the online mode or a deletion of your progress. Therefore, it is better to play the game legitimately and earn your rewards fair and square.
- How to play Driving Zone: Russia on PC?
- If you want to play Driving Zone: Russia on your PC, you will need to use an Android emulator, such as BlueStacks, NoxPlayer, or LDPlayer. These are software that allow you to run Android apps on your PC. You can download and install any of these emulators from their official websites, and then follow the instructions to set up your Google account and access the Google Play Store. From there, you can search for Driving Zone: Russia and install it on your PC. You can then use your keyboard and mouse to control the game, or connect a gamepad or a steering wheel for a more realistic experience.
- How to update Driving Zone: Russia?
- If you want to update Driving Zone: Russia to get the latest features, cars, and tracks, you will need to check for updates on the Google Play Store. You can do this by opening the Google Play Store app on your device, tapping on the menu icon on the top left corner, and selecting My apps & games. You will see a list of apps that have updates available, including Driving Zone: Russia. You can then tap on the Update button next to the game and wait for the download and installation to complete. Alternatively, you can enable auto-update for Driving Zone: Russia by tapping on the game icon on the Google Play Store app, tapping on the three-dot menu on the top right corner, and selecting Enable auto-update.
- How to change language in Driving Zone: Russia?
- If you want to change the language in Driving Zone: Russia, you will need to go to the settings menu of the game. You can do this by tapping on the gear icon on the top right corner of the main screen of the game. You will see a list of options, including Language. You can then tap on Language and select from one of the available languages, such as English, Russian, German, Spanish, French, or Portuguese. The game will then change its language accordingly.
- How to contact Driving Zone: Russia support?
- If you have any questions, issues, or feedback about Driving Zone: Russia, you can contact the support team of AveCreation by sending an email to support@avecreation.com. You can also visit their website at https://avecreation.com/ or follow them on Facebook at https://www.facebook.com/avecreation/ for more information and updates about their games.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Spaceflight Simulator The Ultimate Space Fight Game Download for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Spaceflight Simulator The Ultimate Space Fight Game Download for Android.md
deleted file mode 100644
index a9284941ec804253cc351e201ffd22589721e120..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Spaceflight Simulator The Ultimate Space Fight Game Download for Android.md
+++ /dev/null
@@ -1,189 +0,0 @@
-
-Space Fight Game Download: How to Find and Play the Best Games in the Genre
-If you are a fan of sci-fi, action, and adventure, you might be interested in space fight games. These are games that let you pilot a spaceship, engage in combat with other players or enemies, and explore the vastness of the universe. In this article, we will tell you everything you need to know about space fight games, how to download them for PC, and how to play them like a pro.
- What are space fight games?
-Space fight games are a subgenre of video games that involve flying and fighting in outer space. They can be classified into different types, such as:
-space fight game download DOWNLOAD ··· https://urlca.com/2uOfuD
-
-Space simulators : These are games that aim to realistically simulate the physics and dynamics of space flight, such as gravity, inertia, thrust, and orbital mechanics. Examples include Kerbal Space Program, Elite Dangerous, and Spaceflight Simulator.
-Space shooters : These are games that focus on fast-paced action and combat, where the player has to shoot down enemies, dodge projectiles, and collect power-ups. Examples include Star Wars: Squadrons, Everspace 2, and No Man's Sky.
-Space strategy : These are games that involve managing resources, building bases, and commanding fleets in a strategic manner. Examples include Stellaris, Homeworld Remastered Collection, and Sins of a Solar Empire.
-Space adventure : These are games that feature a story-driven narrative, exploration, and puzzles in a sci-fi setting. Examples include Mass Effect, The Outer Worlds, and Star Citizen.
-
-Space fight games have a long history that dates back to the early days of video gaming. Some of the pioneers of the genre include Asteroids (1979), Star Raiders (1979), Elite (1984), Wing Commander (1990), and X-Wing (1993). Since then, the genre has evolved and expanded with new technologies, graphics, and gameplay features.
- The main features and elements of space fight games
-Space fight games have some common features and elements that make them appealing and enjoyable for players. Some of these are:
-
-Spaceships : The most essential element of any space fight game is the spaceship that the player controls. Spaceships can vary in size, shape, design, performance, and customization options. Some games allow the player to build their own spaceship from parts, while others provide pre-made models or classes.
-Weapons : Another important element of space fight games is the weapons that the player can use to attack or defend themselves. Weapons can range from lasers, missiles, torpedoes, cannons, mines, bombs, and more. Some weapons have different effects, such as heat-seeking, EMP, or ion.
-Environments : Space fight games also feature diverse and immersive environments that the player can explore or interact with. Environments can include planets, asteroids, space stations, nebulae, wormholes, and more. Some environments have hazards, such as radiation, debris, or enemy patrols.
-Missions : Space fight games also offer various missions or objectives that the player can complete or pursue. Missions can involve combat, exploration, trading, mining, diplomacy, stealth, and more. Some missions are part of the main storyline, while others are optional or side quests.
-Multiplayer : Space fight games also often feature multiplayer modes or options that allow the player to cooperate or compete with other players online. Multiplayer can include co-op, PvP, team-based, or MMO modes. Some games also have social features, such as chat, voice, or clans.
-
- The benefits and challenges of playing space fight games
-Playing space fight games can have many benefits and challenges for players. Some of the benefits are:
-
-Fun and excitement : Space fight games can provide a lot of fun and excitement for players who enjoy sci-fi, action, and adventure. They can experience the thrill of flying and fighting in space, exploring new worlds, and discovering new secrets.
-Creativity and imagination : Space fight games can also stimulate the creativity and imagination of players who like to create their own spaceships, customize their weapons, or design their own missions. They can express their personality and style through their choices and actions.
-Learning and skill development : Space fight games can also help players learn and develop various skills, such as spatial awareness, problem-solving, strategic thinking, coordination, and teamwork. They can also learn about science, history, and culture through the game's lore and setting.
-
-Some of the challenges are:
-
-Complexity and difficulty : Space fight games can also be complex and difficult for some players who are not familiar with the genre or the game's mechanics. They may have to deal with a steep learning curve, a lot of information, or a high level of challenge.
-Cost and time : Space fight games can also be costly and time-consuming for some players who want to enjoy the full experience of the game. They may have to invest in a powerful PC, a large storage space, a fast internet connection, or a premium subscription. They may also have to spend a lot of time downloading, installing, updating, or playing the game.
-Addiction and isolation : Space fight games can also be addictive and isolating for some players who get too immersed in the game's world. They may neglect their real-life responsibilities, relationships, or health. They may also lose touch with reality or socialize less with others.
-
- How to download space fight games for PC
-If you want to play space fight games on your PC, you will need to download them from a reliable and secure source. There are many platforms and sources that offer space fight games for PC, but some of the best ones are:
-
-Steam : Steam is one of the most popular and widely used platforms for downloading and playing PC games. It has a large library of space fight games in various genres and categories. It also has features such as cloud saving, achievements, reviews, forums, and workshops. You can download Steam for free from its official website. To download a space fight game from Steam, you will need to create an account, log in, browse the store page, find the game you want to buy, add it to your cart, check out, and wait for the download to finish. You can then launch the game from your library or desktop shortcut.
-GOG : GOG is another popular and widely used platform for downloading and playing PC games. It specializes in DRM-free games, meaning that you can play them without any online activation or restriction. It also has a large library of space fight games, especially classic and retro ones. It also has features such as cloud saving, achievements, reviews, forums, and mod support. You can download GOG for free from its official website. To download a space fight game from GOG, you will need to create an account, log in, browse the store page, find the game you want to buy, add it to your cart, check out, and wait for the download to finish. You can then launch the game from your library or desktop shortcut.
-Epic Games Store : Epic Games Store is another popular and widely used platform for downloading and playing PC games. It is owned by Epic Games, the developer of Fortnite and Unreal Engine . It has a smaller but curated library of space fight games, some of which are exclusive or free. It also has features such as cloud saving, achievements, reviews, forums, and creator support. You can download Epic Games Store for free from its official website. To download a space fight game from Epic Games Store, you will need to create an account, log in, browse the store page, find the game you want to buy or claim, add it to your library, and wait for the download to finish. You can then launch the game from your library or desktop shortcut.
-
-There are also other platforms and sources that offer space fight games for PC, such as Origin, Uplay, Humble Bundle, itch.io, and more. However, you should always be careful and cautious when downloading games from unknown or untrusted sources, as they may contain viruses, malware, or scams.
- The system requirements and compatibility issues for space fight games
-Before you download and play a space fight game on your PC, you should also check the system requirements and compatibility issues for the game. System requirements are the minimum or recommended specifications that your PC needs to run the game smoothly and without problems. Compatibility issues are the potential problems that may arise when running the game on your PC due to hardware or software differences or conflicts.
-You can usually find the system requirements and compatibility issues for a space fight game on its store page or official website. You should compare them with your PC's specifications and see if they match or exceed them. You can also use tools such as Can You Run It or Game Debate to automatically scan your PC and see if it can run the game.
-Some of the common system requirements and compatibility issues for space fight games are:
-
-Operating system : Most space fight games require a Windows operating system to run on PC. However, some games may also support other operating systems such as Mac OS or Linux. You should check the version and edition of your operating system and see if it is compatible with the game.
-Processor : The processor is the part of your PC that executes instructions and performs calculations. Most space fight games require a minimum or recommended processor speed and type to run on PC. For example, a game may require an Intel Core i5-4460 or AMD FX-8350 processor or better.
-Memory : The memory is the part of your PC that stores data temporarily while running programs. Most space fight games require a minimum or recommended memory size to run on PC. For example, a game may require 8 GB of RAM or more.
-Graphics card : The graphics card is the part of your PC that renders images and videos on your screen. Most space fight games require a minimum or recommended graphics card model and performance to run on PC. For example , a game may require an NVIDIA GeForce GTX 1060 or AMD Radeon RX 580 graphics card or better.
-Storage : The storage is the part of your PC that stores data permanently on your hard drive or SSD. Most space fight games require a minimum or recommended storage space to run on PC. For example, a game may require 50 GB of free disk space or more.
-Sound card : The sound card is the part of your PC that produces audio and sound effects. Most space fight games require a minimum or recommended sound card quality and compatibility to run on PC. For example, a game may require a DirectX compatible sound card or better.
-Internet connection : The internet connection is the part of your PC that connects you to the online world. Most space fight games require a minimum or recommended internet speed and stability to run on PC, especially if they have multiplayer modes or features. For example, a game may require a broadband internet connection or better.
-
-If your PC does not meet the system requirements or has compatibility issues for a space fight game, you may experience problems such as lag, stuttering, crashing, freezing, or errors. You may also not be able to run the game at all. To fix these problems, you may need to upgrade your PC's hardware, update your PC's software, adjust your game's settings, or contact the game's support team.
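The check described above is just a field-by-field comparison of your machine against the game's stated minimums. As a rough, purely illustrative sketch (the requirement names and numbers below are made up for the example, not taken from any real store page), it could look like this in Python:

```python
# Hypothetical minimum requirements for an imaginary space fight game.
MIN_REQUIREMENTS = {"ram_gb": 8, "free_disk_gb": 50, "cpu_cores": 4}

# Specs of the machine being checked (also made up for the example).
my_pc = {"ram_gb": 16, "free_disk_gb": 120, "cpu_cores": 6}

def meets_requirements(specs: dict, minimum: dict) -> bool:
    """Return True only if every spec meets or exceeds the stated minimum."""
    return all(specs.get(key, 0) >= value for key, value in minimum.items())

# Report each requirement individually, then the overall verdict.
for key, value in MIN_REQUIREMENTS.items():
    status = "OK" if my_pc.get(key, 0) >= value else "below minimum"
    print(f"{key}: need {value}, have {my_pc.get(key, 0)} -> {status}")

print("Meets minimum requirements:", meets_requirements(my_pc, MIN_REQUIREMENTS))
```

Tools such as Can You Run It, mentioned above, automate exactly this kind of comparison against a much larger database of games and hardware.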
- The installation and setup process for space fight games
-After you download a space fight game for PC, you will need to install and set up the game on your PC. The installation and setup process may vary depending on the platform and source you downloaded the game from, but generally it involves the following steps:
-
-Run the installer : Locate the installer file that you downloaded and double-click on it to run it. You may need to grant permission or enter your password to proceed.
-Follow the instructions : Follow the instructions that appear on the screen to install the game on your PC. You may need to choose a destination folder, accept the terms and conditions, or select additional options.
-Wait for the installation : Wait for the installation process to complete. It may take some time depending on the size and complexity of the game.
-Launch the game : Launch the game from your library, desktop shortcut, or start menu. You may need to activate or register the game online before playing it.
-Adjust the settings : Adjust the settings of the game according to your preferences and PC's specifications. You may need to change the resolution, graphics quality, sound volume, control scheme, or language of the game.
-Enjoy the game : Enjoy playing the space fight game on your PC. You may need to create a profile, choose a mode, select a spaceship, or start a mission before playing.
-
- How to play space fight games
-Now that you have downloaded, installed, and set up a space fight game on your PC, you are ready to play it. However, playing a space fight game can be challenging and confusing for beginners who are not used to the genre or the game's mechanics. Therefore, we will give you some basic tips and tricks on how to play space fight games like a pro.
- The basic controls and mechanics of space fight games
-The basic controls and mechanics of space fight games are the keys and buttons that you use to control your spaceship and interact with the game's world. They may vary depending on the type and style of the game, but generally they include:
-
-Movement : The movement controls are the keys and buttons that you use to move your spaceship in different directions and speeds. They usually include the arrow keys, the WASD keys, or the joystick. They also include the throttle, which controls the acceleration and deceleration of your spaceship.
-Rotation : The rotation controls are the keys and buttons that you use to rotate your spaceship in different angles and orientations. They usually include the Q and E keys, the mouse, or the joystick. They also include the pitch, yaw, and roll, which control the tilting and turning of your spaceship.
-Shooting : The shooting controls are the keys and buttons that you use to fire your weapons at your targets. They usually include the left mouse button, the spacebar, or the trigger. They also include the aiming, which controls the direction and accuracy of your shots.
-Switching : The switching controls are the keys and buttons that you use to switch between different weapons, modes, or views. They usually include the number keys, the scroll wheel, or the D-pad. They also include the HUD, which displays information such as your health, shield, ammo, radar, and objectives.
-Interacting : The interacting controls are the keys and buttons that you use to interact with other objects or characters in the game. They usually include the right mouse button, the F key, or the X button. They also include the dialogue, which allows you to communicate with other players or NPCs.
-
-To play a space fight game effectively, you need to master these basic controls and mechanics. You need to know how to maneuver your spaceship in different situations, how to aim and shoot at your enemies, how to switch between different options, and how to interact with your environment. You also need to be aware of your surroundings, your status, and your goals.
- The tips and tricks for mastering space fight games
-Besides knowing the basic controls and mechanics of space fight games, you also need some tips and tricks to master them. These are some of the best practices and strategies that can help you improve your skills and performance in space fight games:
-
-Practice : The best way to master any game is to practice it regularly and frequently. You can practice by playing different modes, levels, or missions in the game. You can also practice by playing against bots, friends, or other players online. Practice will help you familiarize yourself with the game's features, mechanics, and challenges.
-Learn : Another way to master any game is to learn from it constantly and consistently. You can learn by reading guides, tutorials, or reviews about the game. You can also learn by watching videos, streams, or replays of other players who are better than you. Learning will help you gain knowledge, insights, and tips about the game.
-Experiment : Another way to master any game is to experiment with it creatively and innovatively. You can experiment by trying different spaceships, weapons, or settings in the game. You can also experiment by testing different tactics, strategies, or styles in the game. Experimenting will help you discover new possibilities, solutions, and outcomes in the game.
-Enjoy : The most important way to master any game is to enjoy it fully and genuinely. You can enjoy by having fun, being curious, and being passionate about the game. You can also enjoy by appreciating the game's design, graphics, sound, and story. Enjoying will help you stay motivated, engaged, and satisfied with the game.
-
- The best space fight games to play in 2023
-There are many space fight games that you can play on your PC in 2023, but some of the best ones are:
-
-
-| Name | Description | Platform | Release Date |
-| --- | --- | --- | --- |
-| Starfield | A single-player RPG set in a vast and epic sci-fi universe, where you can create your own character, explore planets, and engage in space combat. | Steam, Epic Games Store | November 11, 2023 |
-| EVE Online: Odyssey | A massive multiplayer online game that lets you live your own adventure in a persistent and evolving galaxy, where you can trade, mine, fight, or explore. | Steam | June 8, 2023 |
-| Kerbal Space Program 2 | A space simulator that lets you build and fly your own rockets, satellites, space stations, and colonies, with realistic physics and graphics. | Steam, Epic Games Store | Late 2023 |
-| Star Wars: Rogue Squadron | A space shooter that lets you join the legendary Rogue Squadron and pilot iconic starfighters in various missions and battles across the Star Wars galaxy. | Steam, Epic Games Store | TBA 2023 |
-| Rebel Galaxy Outlaw 2 | A space adventure that lets you play as a smuggler, bounty hunter, or pirate in a gritty and colorful galaxy, where you can customize your ship, fight enemies, and explore systems. | Steam, Epic Games Store | Early 2023 |
-These are just some of the space fight games that you can look forward to playing in 2023. There are many more that you can discover and enjoy on your PC.
- Conclusion
-Space fight games are a subgenre of video games that involve flying and fighting in outer space. They can be classified into different types, such as space simulators, space shooters, space strategy, and space adventure. They have some common features and elements, such as spaceships, weapons, environments, and missions. They also have some benefits and challenges, such as fun, creativity, learning, complexity, cost, and addiction.
-If you want to play space fight games on your PC, you will need to download them from a reliable and secure source, such as Steam, GOG, or Epic Games Store. You will also need to check the system requirements and compatibility issues for the game, and install and set up the game on your PC. You will also need to master the basic controls and mechanics of the game, and follow some tips and tricks to improve your skills and performance. You will also need to choose the best space fight games to play in 2023, such as Starfield, EVE Online: Odyssey, Kerbal Space Program 2, Star Wars: Rogue Squadron, and Rebel Galaxy Outlaw 2.
-We hope that this article has helped you learn more about space fight games, how to download them for PC, and how to play them like a pro. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy gaming!
- FAQs
-Here are some of the frequently asked questions about space fight games:
-
-Q: What is the difference between space fight games and space flight games?
-A: Space fight games are games that focus on combat and action in outer space, while space flight games are games that focus on simulation and realism in outer space. Space fight games are more arcade-like and casual, while space flight games are more realistic and complex.
-Q: What are some of the best space fight games of all time?
-A: Some of the best space fight games of all time are Elite, Wing Commander, X-Wing, Freespace 2, EVE Online, Mass Effect, No Man's Sky, Star Wars: Squadrons, and Everspace 2.
-Q: What are some of the upcoming space fight games in 2023?
-A: Some of the upcoming space fight games in 2023 are Starfield, EVE Online: Odyssey, Kerbal Space Program 2, Star Wars: Rogue Squadron, and Rebel Galaxy Outlaw 2.
-Q: How can I play space fight games with a controller or a VR headset?
-A: Some space fight games support controller or VR headset input devices, while others do not. You can check the game's store page or official website to see if it supports these devices. You may also need to adjust the game's settings or use third-party software to enable these devices.
-Q: How can I play space fight games with friends or other players online?
-A: Some space fight games have multiplayer modes or features that allow you to play with friends or other players online. You can check the game's store page or official website to see if it has these modes or features. You may also need to create an account, join a server, invite or join a party, or use voice chat to play with others.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/UltraViewer Remote Desktop Software with Multi-Language Support.md b/spaces/congsaPfin/Manga-OCR/logs/UltraViewer Remote Desktop Software with Multi-Language Support.md
deleted file mode 100644
index 51ab4edf3d1b34dce242956e2f7b3e59e3e8db88..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/UltraViewer Remote Desktop Software with Multi-Language Support.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
- How to Download UltraViewer: A Free Remote Desktop Software
-If you are looking for a free, easy, and secure way to remotely control another computer, you should try UltraViewer. UltraViewer is a software that allows you to access and support your partner's computer from anywhere in the world. You can use it for online education, customer service, technical support, or personal use. In this article, we will show you how to download and use UltraViewer in simple steps.
- What is UltraViewer and why do you need it?
-UltraViewer is a product of DucFabulous Co., Ltd, a software company based in Vietnam. It was launched in 2015 and has since gained more than 40 million downloads worldwide. UltraViewer is remote desktop software that lets you control your partner's computer as if you were sitting in front of their screen. You can see what they see, do what they do, and chat with them in real time. You can also send and receive files, control multiple computers, or share your screen with others.
-download ultraviewer Download Zip ✒ https://urlca.com/2uOf6z
-UltraViewer is useful for many purposes, such as:
-
-Remote support: You can help your clients or partners solve their problems quickly and professionally.
-Online education: You can teach or learn from your students or teachers remotely.
-Personal use: You can access your own computer from anywhere or help your friends or family with their computer issues.
-
-UltraViewer is free for personal and non-commercial use. It supports all Windows versions from XP to 11. It is very easy to use, lightweight, and secure. It uses AES encryption to protect your data and privacy. It also works behind firewalls and routers without any configuration.
- How to download and install UltraViewer on your computer
-Download UltraViewer from the official website
-To download UltraViewer, you need to visit the official website at [ultraviewer.net]. There, you will see two options to download the software:
-
-Install (EXE): This option will download an executable file that will install UltraViewer with full features on your computer. It supports multiple languages and takes only 1.5 MB of space.
-Portable (ZIP): This option will download a zip file that contains a portable version of UltraViewer that you can run without installing. However, it has fewer features than the EXE version and is not supported by the developer.
-
-We recommend choosing the Install (EXE) option for better performance and compatibility. Click on the Download button under this option and save the file on your computer.
- Run the installer and follow the instructions
-After downloading the file, locate it on your computer and double-click on it to run the installer. You will see a welcome screen that asks you to choose the language for the installation. Select your preferred language from the drop-down menu and click OK.
-The next screen will show you the license agreement for using UltraViewer. Read it carefully and click I Agree if you accept the terms.
-The next screen will ask you to choose the destination folder for installing UltraViewer. You can use the default folder or browse to another location on your computer. Click Next to continue.
-The next screen will ask you to choose the start menu folder for UltraViewer. You can use the default folder or create a new one. Click Next to continue.
-The next screen will ask you to select additional tasks for the installation, such as creating a desktop icon, a quick launch icon, or a firewall exception. Check the boxes according to your preference and click Next to continue.
-The final screen will show you a summary of the installation settings. Click Install to start the installation process. It will take a few seconds to complete.
-When the installation is finished, you will see a screen that says UltraViewer has been installed on your computer. Click Finish to exit the installer and launch UltraViewer.
- Launch UltraViewer and get your ID and password
-When you launch UltraViewer for the first time, you will see a main window that shows your ID and password. These are unique numbers that identify your computer and allow you to connect with other computers using UltraViewer.
-Your ID and password are randomly generated and change every time you start UltraViewer. You can also change them manually by clicking on the refresh button next to them.
-You need to share your ID and password with your partner if you want them to access your computer remotely. Likewise, you need to ask for their ID and password if you want to access their computer remotely.
-Do not share your ID and password with anyone you do not trust, as they can control your computer without your permission. You can also set a personal password for UltraViewer in the settings menu to prevent unauthorized access.
- How to use UltraViewer to remotely control another computer
-Ask your partner to send you their ID and password
-To connect with another computer using UltraViewer, you need to know their ID and password. You can ask them to send it to you via email, chat, phone, or any other method.
-Make sure that your partner has also installed and launched UltraViewer on their computer. They should see their ID and password on the main window of UltraViewer.
- Enter their ID and password and click Connect
-Once you have received your partner's ID and password, enter them in the corresponding fields on the main window of UltraViewer. Then, click on the Connect button or press Enter.
-UltraViewer will try to establish a connection with your partner's computer. If successful, you will see a message that says "Connected" on the bottom left corner of the window. You will also see your partner's screen on a new window.
- Control their computer as if you were sitting in front of it
-Now that you are connected, you can control your partner's computer as if you were sitting in front of it. You can use your mouse and keyboard to perform any actions on their screen. You can also resize, minimize, maximize, or close the window as you wish.
-You can see the status of the connection on the top right corner of the window. It shows the quality of the connection, the speed of data transfer, and the duration of the session.
-To end the connection, you can click on the Disconnect button on the top left corner of the window or press Esc. You can also ask your partner to disconnect from their side by clicking on the X button on their main window of UltraViewer.
- How to customize UltraViewer settings and features
-Change the language, theme, and hotkeys
-You can customize UltraViewer settings and features according to your preference. To access the settings menu, click on the Options button. There you can change the language, theme, and hotkeys of the app.
- Control multiple computers or share your screen
-To use the multi-control or screen-sharing features, click on the Multi Control button or the Share Screen button on the top left corner of the window.
- The Share Screen feature lets you share your screen with up to 16 people at the same time. You can invite more people by clicking on the Invite button and sending them your ID and password. You can see who is watching your screen by clicking on the Viewer List button. You can also pause, resume, or stop sharing your screen by clicking on the buttons on the top right corner of the window.
- How to buy UltraViewer for more benefits and support
-Compare the pricing plans and features
-UltraViewer is free for personal and non-commercial use. However, if you want to use it for commercial purposes, you need to buy a license from the official website. Buying a license will give you more benefits and support, such as:
-
-No ads or pop-ups
-No limit on the number of computers or sessions
-Priority technical support and updates
-Customized logo and branding
-Access to advanced features and settings
-
-UltraViewer offers three pricing plans for different needs and budgets:
-
-Plan Price Features
-Personal $29.99/year For personal use only Up to 3 computers Basic features and settings
-Professional $69.99/year For commercial use Up to 10 computers All features and settings Priority support and updates Customized logo and branding
-Enterprise $199.99/year For large-scale use Unlimited computers All features and settings Priority support and updates Customized logo and branding Dedicated server option
-
- Choose the payment method and complete the transaction
-To buy a license, you need to visit the official website at [ultraviewer.net] and click on the Buy Now button under your preferred plan. You will be redirected to a secure payment page where you can choose your payment method and enter your billing information.
-You can pay with PayPal, credit card, bank transfer, or other methods depending on your country. You will receive an email confirmation with your invoice and license key after completing the transaction.
- Activate your license and enjoy the premium service
-To activate your license, you need to launch UltraViewer on your computer and click on the Activate button on the main window. You will see a new window that asks you to enter your license key. Copy and paste your license key from your email confirmation and click OK.
-You will see a message that says "Your license has been activated successfully". You can now enjoy the premium service of UltraViewer without any limitations or interruptions.
- Conclusion
-In this article, we have shown you how to download and use UltraViewer, a free remote desktop software that allows you to access and support another computer from anywhere in the world. We have also shown you how to customize UltraViewer settings and features, and how to buy UltraViewer for more benefits and support.
-We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading!
- FAQs
-Is UltraViewer safe to use?
-Yes, UltraViewer is safe to use. It uses AES encryption to protect your data and privacy. It also works behind firewalls and routers without any configuration. However, you should only share your ID and password with people you trust, as they can control your computer without your permission.
- Can I use UltraViewer on Mac or Linux?
-No, UltraViewer only supports Windows operating systems from XP to 11. If you want to use UltraViewer on Mac or Linux, you need to install a Windows emulator or virtual machine on your device.
- Can I use UltraViewer offline?
-No, UltraViewer requires an internet connection to work. You need to have a stable and fast internet connection on both your computer and your partner's computer to ensure a smooth and secure remote session.
- How can I get help or support for UltraViewer?
-If you need help or support for UltraViewer, you can visit the official website at [ultraviewer.net] and click on the Support button. There, you will find a user manual, a video tutorial, a FAQ section, and a contact form. You can also send an email to support@ultraviewer.net or call +84 28 7300 9168.
- How can I update UltraViewer to the latest version?
-If you want to update UltraViewer to the latest version, you can visit the official website at [ultraviewer.net] and click on the Download button. There, you will see the latest version of UltraViewer available for download. You can download and install it over your existing version without losing your settings or data.
- How can I uninstall UltraViewer from my computer?
-If you want to uninstall UltraViewer from your computer, you can follow these steps:
-
-Close UltraViewer if it is running.
-Go to the Start menu and click on Control Panel.
-Click on Programs and Features or Uninstall a Program.
-Find UltraViewer in the list of programs and click on it.
-Click on Uninstall or Change/Remove.
-Follow the instructions to complete the uninstallation process.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/BIM 360 Plan IPad App Lt 2019 64 Bit FREE Crack Torrent Download.md b/spaces/contluForse/HuggingGPT/assets/BIM 360 Plan IPad App Lt 2019 64 Bit FREE Crack Torrent Download.md
deleted file mode 100644
index 7999c11cdd8eb47b23265d49f006f981c26a82b5..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/BIM 360 Plan IPad App Lt 2019 64 Bit FREE Crack Torrent Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-BIM 360 Plan IPad App Lt 2019 64 Bit Crack Torrent Download DOWNLOAD ✫ https://ssurll.com/2uzvu9
-
-BIM 360 Field IPad App 2006 Herunterladen Crack 64 Bits DE. ... AutoCAD ... Download Xforce Keygen AutoCAD Mobile App 2019 32 Bit Patch ... The zip . ... Autodesk 3ds Max 9 32 bit Keygen (X Force).rar torrent download.. ... Key rar ... BIM 360 Plan iPad app 2014 keygen only xforce 3 rar. ... Alias ... 1fdad05405
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Company Of Heroes 2 Reloaded Offline Skirmish Crackl ((FULL)).md b/spaces/contluForse/HuggingGPT/assets/Company Of Heroes 2 Reloaded Offline Skirmish Crackl ((FULL)).md
deleted file mode 100644
index 699e833d6ee108aa9e60326c4c19570fd092d851..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Company Of Heroes 2 Reloaded Offline Skirmish Crackl ((FULL)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-Company Of Heroes 2 Reloaded Offline Skirmish Crackl Download >>>>> https://ssurll.com/2uzxD8
-
-Company of Heroes 2: Master Collection is an RTS game that delivers an insane level of intensity like no other.
-
-
-
diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/classifier.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/classifier.py
deleted file mode 100644
index 2b74541341ad24bfb97f7ea90ac6470b83a73aa3..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/classifier.py
+++ /dev/null
@@ -1,56 +0,0 @@
-""" Classifier head and layer factory
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-from torch import nn as nn
-from torch.nn import functional as F
-
-from .adaptive_avgmax_pool import SelectAdaptivePool2d
-from .linear import Linear
-
-
-def _create_pool(num_features, num_classes, pool_type='avg', use_conv=False):
- flatten_in_pool = not use_conv # flatten when we use a Linear layer after pooling
- if not pool_type:
- assert num_classes == 0 or use_conv,\
- 'Pooling can only be disabled if classifier is also removed or conv classifier is used'
- flatten_in_pool = False # disable flattening if pooling is pass-through (no pooling)
- global_pool = SelectAdaptivePool2d(pool_type=pool_type, flatten=flatten_in_pool)
- num_pooled_features = num_features * global_pool.feat_mult()
- return global_pool, num_pooled_features
-
-
-def _create_fc(num_features, num_classes, use_conv=False):
- if num_classes <= 0:
- fc = nn.Identity() # pass-through (no classifier)
- elif use_conv:
- fc = nn.Conv2d(num_features, num_classes, 1, bias=True)
- else:
- # NOTE: using my Linear wrapper that fixes AMP + torchscript casting issue
- fc = Linear(num_features, num_classes, bias=True)
- return fc
-
-
-def create_classifier(num_features, num_classes, pool_type='avg', use_conv=False):
- global_pool, num_pooled_features = _create_pool(num_features, num_classes, pool_type, use_conv=use_conv)
- fc = _create_fc(num_pooled_features, num_classes, use_conv=use_conv)
- return global_pool, fc
-
-
-class ClassifierHead(nn.Module):
- """Classifier head w/ configurable global pooling and dropout."""
-
- def __init__(self, in_chs, num_classes, pool_type='avg', drop_rate=0., use_conv=False):
- super(ClassifierHead, self).__init__()
- self.drop_rate = drop_rate
- self.global_pool, num_pooled_features = _create_pool(in_chs, num_classes, pool_type, use_conv=use_conv)
- self.fc = _create_fc(num_pooled_features, num_classes, use_conv=use_conv)
- self.flatten = nn.Flatten(1) if use_conv and pool_type else nn.Identity()
-
- def forward(self, x):
- x = self.global_pool(x)
- if self.drop_rate:
- x = F.dropout(x, p=float(self.drop_rate), training=self.training)
- x = self.fc(x)
- x = self.flatten(x)
- return x
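-
-
-# Editor's note: a minimal usage sketch, not part of the original timm module. The
-# feature width (2048), spatial size (7x7) and class count (10) below are illustrative
-# assumptions, not values taken from any particular model.
-if __name__ == '__main__':
-    import torch
-    head = ClassifierHead(in_chs=2048, num_classes=10, pool_type='avg', drop_rate=0.2)
-    features = torch.randn(2, 2048, 7, 7)   # NCHW feature map from a backbone
-    logits = head(features)                 # global pool -> dropout -> fc -> (2, 10)
-    print(logits.shape)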
diff --git a/spaces/cooelf/Multimodal-CoT/timm/scheduler/step_lr.py b/spaces/cooelf/Multimodal-CoT/timm/scheduler/step_lr.py
deleted file mode 100644
index f797e1a8cf35999531dd5f1ccbbe09a9d0cf30a9..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/scheduler/step_lr.py
+++ /dev/null
@@ -1,63 +0,0 @@
-""" Step Scheduler
-
-Basic step LR schedule with warmup, noise.
-
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-import math
-import torch
-
-from .scheduler import Scheduler
-
-
-class StepLRScheduler(Scheduler):
- """
- """
-
- def __init__(self,
- optimizer: torch.optim.Optimizer,
- decay_t: float,
- decay_rate: float = 1.,
- warmup_t=0,
- warmup_lr_init=0,
- t_in_epochs=True,
- noise_range_t=None,
- noise_pct=0.67,
- noise_std=1.0,
- noise_seed=42,
- initialize=True,
- ) -> None:
- super().__init__(
- optimizer, param_group_field="lr",
- noise_range_t=noise_range_t, noise_pct=noise_pct, noise_std=noise_std, noise_seed=noise_seed,
- initialize=initialize)
-
- self.decay_t = decay_t
- self.decay_rate = decay_rate
- self.warmup_t = warmup_t
- self.warmup_lr_init = warmup_lr_init
- self.t_in_epochs = t_in_epochs
- if self.warmup_t:
- self.warmup_steps = [(v - warmup_lr_init) / self.warmup_t for v in self.base_values]
- super().update_groups(self.warmup_lr_init)
- else:
- self.warmup_steps = [1 for _ in self.base_values]
-
- def _get_lr(self, t):
- if t < self.warmup_t:
- lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps]
- else:
- lrs = [v * (self.decay_rate ** (t // self.decay_t)) for v in self.base_values]
- return lrs
-
- def get_epoch_values(self, epoch: int):
- if self.t_in_epochs:
- return self._get_lr(epoch)
- else:
- return None
-
- def get_update_values(self, num_updates: int):
- if not self.t_in_epochs:
- return self._get_lr(num_updates)
- else:
- return None
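-
-
-# Editor's note: an illustrative sketch, not part of the original timm file. The
-# optimizer, base lr (0.1) and schedule constants below are assumptions chosen to show
-# the shape of the schedule: a linear warmup over `warmup_t` epochs, then
-# lr * decay_rate ** (t // decay_t).
-if __name__ == '__main__':
-    model = torch.nn.Linear(4, 2)
-    opt = torch.optim.SGD(model.parameters(), lr=0.1)
-    sched = StepLRScheduler(opt, decay_t=30, decay_rate=0.1,
-                            warmup_t=5, warmup_lr_init=0.01)
-    # warmup ramps 0.01 -> 0.1 over the first 5 epochs, then lr drops 10x every 30 epochs
-    print([sched.get_epoch_values(t) for t in (0, 3, 5, 35, 65)])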
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/hooks/logger/text.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/hooks/logger/text.py
deleted file mode 100644
index 87b1a3eca9595a130121526f8b4c29915387ab35..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/hooks/logger/text.py
+++ /dev/null
@@ -1,256 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import datetime
-import os
-import os.path as osp
-from collections import OrderedDict
-
-import torch
-import torch.distributed as dist
-
-import annotator.uniformer.mmcv as mmcv
-from annotator.uniformer.mmcv.fileio.file_client import FileClient
-from annotator.uniformer.mmcv.utils import is_tuple_of, scandir
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class TextLoggerHook(LoggerHook):
- """Logger hook in text.
-
- In this logger hook, the information will be printed on terminal and
- saved in json file.
-
- Args:
- by_epoch (bool, optional): Whether EpochBasedRunner is used.
- Default: True.
- interval (int, optional): Logging interval (every k iterations).
- Default: 10.
- ignore_last (bool, optional): Ignore the log of last iterations in each
- epoch if less than :attr:`interval`. Default: True.
- reset_flag (bool, optional): Whether to clear the output buffer after
- logging. Default: False.
- interval_exp_name (int, optional): Logging interval for experiment
- name. This feature is to help users conveniently get the experiment
- information from screen or log file. Default: 1000.
- out_dir (str, optional): Logs are saved in ``runner.work_dir`` default.
- If ``out_dir`` is specified, logs will be copied to a new directory
- which is the concatenation of ``out_dir`` and the last level
- directory of ``runner.work_dir``. Default: None.
- `New in version 1.3.16.`
- out_suffix (str or tuple[str], optional): Those filenames ending with
- ``out_suffix`` will be copied to ``out_dir``.
- Default: ('.log.json', '.log', '.py').
- `New in version 1.3.16.`
- keep_local (bool, optional): Whether to keep local log when
- :attr:`out_dir` is specified. If False, the local log will be
- removed. Default: True.
- `New in version 1.3.16.`
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
- `New in version 1.3.16.`
- """
-
- def __init__(self,
- by_epoch=True,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- interval_exp_name=1000,
- out_dir=None,
- out_suffix=('.log.json', '.log', '.py'),
- keep_local=True,
- file_client_args=None):
- super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag,
- by_epoch)
- self.by_epoch = by_epoch
- self.time_sec_tot = 0
- self.interval_exp_name = interval_exp_name
-
- if out_dir is None and file_client_args is not None:
- raise ValueError(
-                'file_client_args should be "None" when `out_dir` is not '
- 'specified.')
- self.out_dir = out_dir
-
- if not (out_dir is None or isinstance(out_dir, str)
- or is_tuple_of(out_dir, str)):
- raise TypeError('out_dir should be "None" or string or tuple of '
-                            f'string, but got {out_dir}')
- self.out_suffix = out_suffix
-
- self.keep_local = keep_local
- self.file_client_args = file_client_args
- if self.out_dir is not None:
- self.file_client = FileClient.infer_client(file_client_args,
- self.out_dir)
-
- def before_run(self, runner):
- super(TextLoggerHook, self).before_run(runner)
-
- if self.out_dir is not None:
- self.file_client = FileClient.infer_client(self.file_client_args,
- self.out_dir)
- # The final `self.out_dir` is the concatenation of `self.out_dir`
- # and the last level directory of `runner.work_dir`
- basename = osp.basename(runner.work_dir.rstrip(osp.sep))
- self.out_dir = self.file_client.join_path(self.out_dir, basename)
- runner.logger.info(
- (f'Text logs will be saved to {self.out_dir} by '
- f'{self.file_client.name} after the training process.'))
-
- self.start_iter = runner.iter
- self.json_log_path = osp.join(runner.work_dir,
- f'{runner.timestamp}.log.json')
- if runner.meta is not None:
- self._dump_log(runner.meta, runner)
-
- def _get_max_memory(self, runner):
- device = getattr(runner.model, 'output_device', None)
- mem = torch.cuda.max_memory_allocated(device=device)
- mem_mb = torch.tensor([mem / (1024 * 1024)],
- dtype=torch.int,
- device=device)
- if runner.world_size > 1:
- dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX)
- return mem_mb.item()
-
- def _log_info(self, log_dict, runner):
- # print exp name for users to distinguish experiments
- # at every ``interval_exp_name`` iterations and the end of each epoch
- if runner.meta is not None and 'exp_name' in runner.meta:
- if (self.every_n_iters(runner, self.interval_exp_name)) or (
- self.by_epoch and self.end_of_epoch(runner)):
- exp_info = f'Exp name: {runner.meta["exp_name"]}'
- runner.logger.info(exp_info)
-
- if log_dict['mode'] == 'train':
- if isinstance(log_dict['lr'], dict):
- lr_str = []
- for k, val in log_dict['lr'].items():
- lr_str.append(f'lr_{k}: {val:.3e}')
- lr_str = ' '.join(lr_str)
- else:
- lr_str = f'lr: {log_dict["lr"]:.3e}'
-
- # by epoch: Epoch [4][100/1000]
- # by iter: Iter [100/100000]
- if self.by_epoch:
- log_str = f'Epoch [{log_dict["epoch"]}]' \
- f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t'
- else:
- log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t'
- log_str += f'{lr_str}, '
-
- if 'time' in log_dict.keys():
- self.time_sec_tot += (log_dict['time'] * self.interval)
- time_sec_avg = self.time_sec_tot / (
- runner.iter - self.start_iter + 1)
- eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1)
- eta_str = str(datetime.timedelta(seconds=int(eta_sec)))
- log_str += f'eta: {eta_str}, '
- log_str += f'time: {log_dict["time"]:.3f}, ' \
- f'data_time: {log_dict["data_time"]:.3f}, '
- # statistic memory
- if torch.cuda.is_available():
- log_str += f'memory: {log_dict["memory"]}, '
- else:
- # val/test time
- # here 1000 is the length of the val dataloader
- # by epoch: Epoch[val] [4][1000]
- # by iter: Iter[val] [1000]
- if self.by_epoch:
- log_str = f'Epoch({log_dict["mode"]}) ' \
- f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t'
- else:
- log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t'
-
- log_items = []
- for name, val in log_dict.items():
- # TODO: resolve this hack
- # these items have been in log_str
- if name in [
- 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time',
- 'memory', 'epoch'
- ]:
- continue
- if isinstance(val, float):
- val = f'{val:.4f}'
- log_items.append(f'{name}: {val}')
- log_str += ', '.join(log_items)
-
- runner.logger.info(log_str)
-
- def _dump_log(self, log_dict, runner):
- # dump log in json format
- json_log = OrderedDict()
- for k, v in log_dict.items():
- json_log[k] = self._round_float(v)
- # only append log at last line
- if runner.rank == 0:
- with open(self.json_log_path, 'a+') as f:
- mmcv.dump(json_log, f, file_format='json')
- f.write('\n')
-
- def _round_float(self, items):
- if isinstance(items, list):
- return [self._round_float(item) for item in items]
- elif isinstance(items, float):
- return round(items, 5)
- else:
- return items
-
- def log(self, runner):
- if 'eval_iter_num' in runner.log_buffer.output:
- # this doesn't modify runner.iter and is regardless of by_epoch
- cur_iter = runner.log_buffer.output.pop('eval_iter_num')
- else:
- cur_iter = self.get_iter(runner, inner_iter=True)
-
- log_dict = OrderedDict(
- mode=self.get_mode(runner),
- epoch=self.get_epoch(runner),
- iter=cur_iter)
-
- # only record lr of the first param group
- cur_lr = runner.current_lr()
- if isinstance(cur_lr, list):
- log_dict['lr'] = cur_lr[0]
- else:
- assert isinstance(cur_lr, dict)
- log_dict['lr'] = {}
- for k, lr_ in cur_lr.items():
- assert isinstance(lr_, list)
- log_dict['lr'].update({k: lr_[0]})
-
- if 'time' in runner.log_buffer.output:
- # statistic memory
- if torch.cuda.is_available():
- log_dict['memory'] = self._get_max_memory(runner)
-
- log_dict = dict(log_dict, **runner.log_buffer.output)
-
- self._log_info(log_dict, runner)
- self._dump_log(log_dict, runner)
- return log_dict
-
- def after_run(self, runner):
- # copy or upload logs to self.out_dir
- if self.out_dir is not None:
- for filename in scandir(runner.work_dir, self.out_suffix, True):
- local_filepath = osp.join(runner.work_dir, filename)
- out_filepath = self.file_client.join_path(
- self.out_dir, filename)
- with open(local_filepath, 'r') as f:
- self.file_client.put_text(f.read(), out_filepath)
-
- runner.logger.info(
- (f'The file {local_filepath} has been uploaded to '
- f'{out_filepath}.'))
-
- if not self.keep_local:
- os.remove(local_filepath)
- runner.logger.info(
- (f'{local_filepath} was removed due to the '
- '`self.keep_local=False`'))
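-
-
-# Editor's note: an illustrative sketch, not part of the original mmcv file. In
-# mmcv-style training configs this hook is normally registered through a `log_config`
-# dict such as the one below; the interval value is an assumption, not a default.
-if __name__ == '__main__':
-    log_config = dict(
-        interval=50,
-        hooks=[dict(type='TextLoggerHook', by_epoch=True)])
-    print(log_config)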
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/losses/dice_loss.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/losses/dice_loss.py
deleted file mode 100644
index 27a77b962d7d8b3079c7d6cd9db52280c6fb4970..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/losses/dice_loss.py
+++ /dev/null
@@ -1,119 +0,0 @@
-"""Modified from https://github.com/LikeLy-Journey/SegmenTron/blob/master/
-segmentron/solver/loss.py (Apache-2.0 License)"""
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import get_class_weight, weighted_loss
-
-
-@weighted_loss
-def dice_loss(pred,
- target,
- valid_mask,
- smooth=1,
- exponent=2,
- class_weight=None,
- ignore_index=255):
- assert pred.shape[0] == target.shape[0]
- total_loss = 0
- num_classes = pred.shape[1]
- for i in range(num_classes):
- if i != ignore_index:
- dice_loss = binary_dice_loss(
- pred[:, i],
- target[..., i],
- valid_mask=valid_mask,
- smooth=smooth,
- exponent=exponent)
- if class_weight is not None:
- dice_loss *= class_weight[i]
- total_loss += dice_loss
- return total_loss / num_classes
-
-
-@weighted_loss
-def binary_dice_loss(pred, target, valid_mask, smooth=1, exponent=2, **kwards):
- assert pred.shape[0] == target.shape[0]
- pred = pred.reshape(pred.shape[0], -1)
- target = target.reshape(target.shape[0], -1)
- valid_mask = valid_mask.reshape(valid_mask.shape[0], -1)
-
- num = torch.sum(torch.mul(pred, target) * valid_mask, dim=1) * 2 + smooth
- den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth
-
- return 1 - num / den
-
-
-@LOSSES.register_module()
-class DiceLoss(nn.Module):
- """DiceLoss.
-
- This loss is proposed in `V-Net: Fully Convolutional Neural Networks for
-    Volumetric Medical Image Segmentation <https://arxiv.org/abs/1606.04797>`_.
-
- Args:
- loss_type (str, optional): Binary or multi-class loss.
- Default: 'multi_class'. Options are "binary" and "multi_class".
- smooth (float): A float number to smooth loss, and avoid NaN error.
- Default: 1
-        exponent (float): A float number used to calculate the denominator
- value: \\sum{x^exponent} + \\sum{y^exponent}. Default: 2.
- reduction (str, optional): The method used to reduce the loss. Options
- are "none", "mean" and "sum". This parameter only works when
- per_image is True. Default: 'mean'.
- class_weight (list[float] | str, optional): Weight of each class. If in
- str format, read them from a file. Defaults to None.
- loss_weight (float, optional): Weight of the loss. Default to 1.0.
- ignore_index (int | None): The label index to be ignored. Default: 255.
- """
-
- def __init__(self,
- smooth=1,
- exponent=2,
- reduction='mean',
- class_weight=None,
- loss_weight=1.0,
- ignore_index=255,
- **kwards):
- super(DiceLoss, self).__init__()
- self.smooth = smooth
- self.exponent = exponent
- self.reduction = reduction
- self.class_weight = get_class_weight(class_weight)
- self.loss_weight = loss_weight
- self.ignore_index = ignore_index
-
- def forward(self,
- pred,
- target,
- avg_factor=None,
- reduction_override=None,
- **kwards):
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.class_weight is not None:
- class_weight = pred.new_tensor(self.class_weight)
- else:
- class_weight = None
-
- pred = F.softmax(pred, dim=1)
- num_classes = pred.shape[1]
- one_hot_target = F.one_hot(
- torch.clamp(target.long(), 0, num_classes - 1),
- num_classes=num_classes)
- valid_mask = (target != self.ignore_index).long()
-
- loss = self.loss_weight * dice_loss(
- pred,
- one_hot_target,
- valid_mask=valid_mask,
- reduction=reduction,
- avg_factor=avg_factor,
- smooth=self.smooth,
- exponent=self.exponent,
- class_weight=class_weight,
- ignore_index=self.ignore_index)
- return loss
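-
-
-# Editor's note: a minimal usage sketch, not part of the original mmseg file. The
-# batch size, class count and spatial size are illustrative assumptions.
-if __name__ == '__main__':
-    logits = torch.randn(2, 4, 16, 16)         # (N, num_classes, H, W) raw scores
-    labels = torch.randint(0, 4, (2, 16, 16))  # integer class map per pixel
-    criterion = DiceLoss(smooth=1, exponent=2, loss_weight=1.0, ignore_index=255)
-    print(criterion(logits, labels))           # scalar multi-class Dice loss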
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/swin.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/swin.py
deleted file mode 100644
index f8c71367e3e78b087f80b2ab3e2f495a9c372f1a..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/swin.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import timm
-
-from .swin_common import _make_swin_backbone
-
-
-def _make_pretrained_swinl12_384(pretrained, hooks=None):
- model = timm.create_model("swin_large_patch4_window12_384", pretrained=pretrained)
-
-    hooks = [1, 1, 17, 1] if hooks is None else hooks
- return _make_swin_backbone(
- model,
- hooks=hooks
- )
diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/utils/aws/resume.py b/spaces/crashedice/signify/SOURCE/yolo_files/utils/aws/resume.py
deleted file mode 100644
index 4b0d4246b594acddbecf065956fc8729bb96ec36..0000000000000000000000000000000000000000
--- a/spaces/crashedice/signify/SOURCE/yolo_files/utils/aws/resume.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Resume all interrupted trainings in yolov5/ dir including DDP trainings
-# Usage: $ python utils/aws/resume.py
-
-import os
-import sys
-from pathlib import Path
-
-import torch
-import yaml
-
-sys.path.append('./') # to run '$ python *.py' files in subdirectories
-
-port = 0 # --master_port
-path = Path('').resolve()
-for last in path.rglob('*/**/last.pt'):
- ckpt = torch.load(last)
- if ckpt['optimizer'] is None:
- continue
-
- # Load opt.yaml
- with open(last.parent.parent / 'opt.yaml') as f:
- opt = yaml.safe_load(f)
-
- # Get device count
- d = opt['device'].split(',') # devices
- nd = len(d) # number of devices
- ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel
-
- if ddp: # multi-GPU
- port += 1
- cmd = f'python -m torch.distributed.launch --nproc_per_node {nd} --master_port {port} train.py --resume {last}'
- else: # single-GPU
- cmd = f'python train.py --resume {last}'
-
-    cmd += ' > /dev/null 2>&1 &'  # redirect output to /dev/null and run in the background
- print(cmd)
- os.system(cmd)
diff --git a/spaces/crylake/img2poem/query2labels/lib/models/swin_transformer.py b/spaces/crylake/img2poem/query2labels/lib/models/swin_transformer.py
deleted file mode 100644
index f8ec82c6de2ad483267d2149bd0eccff0f0c1984..0000000000000000000000000000000000000000
--- a/spaces/crylake/img2poem/query2labels/lib/models/swin_transformer.py
+++ /dev/null
@@ -1,623 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu
-# --------------------------------------------------------
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-__all__ = ['build_swin_transformer']
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
-
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
-
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
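-
-
-# Editor's note: a small round-trip check added for illustration only, not part of the
-# original file. It shows that window_partition and window_reverse are exact inverses;
-# the 56x56 feature map with 7x7 windows is an assumed example shape.
-def _window_roundtrip_demo():
-    x = torch.randn(2, 56, 56, 96)                # (B, H, W, C)
-    windows = window_partition(x, window_size=7)  # (2 * 8 * 8, 7, 7, 96)
-    x_back = window_reverse(windows, 7, 56, 56)   # (B, H, W, C) again
-    assert torch.equal(x, x_back)
-    return windows.shape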
-
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
-
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
- def extra_repr(self) -> str:
- return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
-
- def flops(self, N):
- # calculate flops for 1 window with token length of N
- flops = 0
- # qkv = self.qkv(x)
- flops += N * self.dim * 3 * self.dim
- # attn = (q @ k.transpose(-2, -1))
- flops += self.num_heads * N * (self.dim // self.num_heads) * N
- # x = (attn @ v)
- flops += self.num_heads * N * N * (self.dim // self.num_heads)
- # x = self.proj(x)
- flops += N * self.dim * self.dim
- return flops
-
-
-class SwinTransformerBlock(nn.Module):
- r""" Swin Transformer Block.
-
- Args:
- dim (int): Number of input channels.
-        input_resolution (tuple[int]): Input resolution.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- if min(self.input_resolution) <= self.window_size:
- # if window size is larger than input resolution, we don't partition windows
- self.shift_size = 0
- self.window_size = min(self.input_resolution)
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if self.shift_size > 0:
- # calculate attention mask for SW-MSA
- H, W = self.input_resolution
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
- else:
- attn_mask = None
-
- self.register_buffer("attn_mask", attn_mask)
-
- def forward(self, x):
- H, W = self.input_resolution
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
- f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
-
- def flops(self):
- flops = 0
- H, W = self.input_resolution
- # norm1
- flops += self.dim * H * W
- # W-MSA/SW-MSA
- nW = H * W / self.window_size / self.window_size
- flops += nW * self.attn.flops(self.window_size * self.window_size)
- # mlp
- flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
- # norm2
- flops += self.dim * H * W
- return flops
-
-
-class PatchMerging(nn.Module):
- r""" Patch Merging Layer.
-
- Args:
- input_resolution (tuple[int]): Resolution of input feature.
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.input_resolution = input_resolution
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x):
- """
- x: B, H*W, C
- """
- H, W = self.input_resolution
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
-
- x = x.view(B, H, W, C)
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
- def extra_repr(self) -> str:
- return f"input_resolution={self.input_resolution}, dim={self.dim}"
-
- def flops(self):
- H, W = self.input_resolution
- flops = H * W * self.dim
- flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
- return flops
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
-
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
- num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x):
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x)
- else:
- x = blk(x)
- if self.downsample is not None:
- x = self.downsample(x)
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
-
- def flops(self):
- flops = 0
- for blk in self.blocks:
- flops += blk.flops()
- if self.downsample is not None:
- flops += self.downsample.flops()
- return flops
-
-
-class PatchEmbed(nn.Module):
- r""" Image to Patch Embedding
-
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- B, C, H, W = x.shape
- # FIXME look at relaxing size constraints
- assert H == self.img_size[0] and W == self.img_size[1], \
- f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
- x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C
- if self.norm is not None:
- x = self.norm(x)
- return x
-
- def flops(self):
- Ho, Wo = self.patches_resolution
- flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1])
- if self.norm is not None:
- flops += Ho * Wo * self.embed_dim
- return flops
-
-
-class SwinTransformer(nn.Module):
- r""" Swin Transformer
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
-
- Args:
- img_size (int | tuple(int)): Input image size. Default 224
- patch_size (int | tuple(int)): Patch size. Default: 4
- in_chans (int): Number of input image channels. Default: 3
- num_classes (int): Number of classes for classification head. Default: 1000
- embed_dim (int): Patch embedding dimension. Default: 96
- depths (tuple(int)): Depth of each Swin Transformer layer.
- num_heads (tuple(int)): Number of attention heads in different layers.
- window_size (int): Window size. Default: 7
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
- drop_rate (float): Dropout rate. Default: 0
- attn_drop_rate (float): Attention dropout rate. Default: 0
- drop_path_rate (float): Stochastic depth rate. Default: 0.1
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
- patch_norm (bool): If True, add normalization after patch embedding. Default: True
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000,
- embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
- window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
- norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
- use_checkpoint=False, **kwargs):
- super().__init__()
-
- self.num_classes = num_classes
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
- self.mlp_ratio = mlp_ratio
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
- num_patches = self.patch_embed.num_patches
- patches_resolution = self.patch_embed.patches_resolution
- self.patches_resolution = patches_resolution
-
- # absolute position embedding
- if self.ape:
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer),
- input_resolution=(patches_resolution[0] // (2 ** i_layer),
- patches_resolution[1] // (2 ** i_layer)),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint)
- self.layers.append(layer)
-
- self.norm = norm_layer(self.num_features)
- self.avgpool = nn.AdaptiveAvgPool2d(1)
- self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
-
- self.apply(self._init_weights)
-
- # for reshape
- self.img_size = img_size
- self.down_sample_ratio = 32
- self.h = img_size // 32
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table'}
-
- def forward_features(self, x):
- x = self.patch_embed(x)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
-
- for layer in self.layers:
- x = layer(x)
-
- x = self.norm(x) # B L C
- B,L,C = x.shape
- x = x.transpose(1,2).reshape(B, C, self.h , self.h)
- return x
-
- def forward(self, x):
- x = self.forward_features(x)
-        x = self.avgpool(x).squeeze(-1).squeeze(-1)  # B C
- # import ipdb; ipdb.set_trace()
- x = self.head(x)
- return x
-
- def flops(self):
- flops = 0
- flops += self.patch_embed.flops()
- for i, layer in enumerate(self.layers):
- flops += layer.flops()
- flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers)
- flops += self.num_features * self.num_classes
- return flops
-
-def build_swin_transformer(modelname, img_size, num_classes=1000):
- assert modelname in ['swin_B_224_22k', 'swin_B_384_22k', 'swin_L_224_22k', 'swin_L_384_22k']
- model_para_dict = {
- 'swin_B_224_22k': dict(
- embed_dim=128,
- depths=[ 2, 2, 18, 2 ],
- num_heads=[ 4, 8, 16, 32 ],
- window_size=7
- ),
- 'swin_B_384_22k': dict(
- embed_dim=128,
- depths=[ 2, 2, 18, 2 ],
- num_heads=[ 4, 8, 16, 32 ],
- window_size=12
- ),
- 'swin_L_224_22k': dict(
- embed_dim=192,
- depths=[ 2, 2, 18, 2 ],
- num_heads=[ 6, 12, 24, 48 ],
- window_size=7
- ),
- 'swin_L_384_22k': dict(
- embed_dim=192,
- depths=[ 2, 2, 18, 2 ],
- num_heads=[ 6, 12, 24, 48 ],
- window_size=12
- ),
- }
- model = SwinTransformer(img_size=img_size, num_classes=num_classes, **model_para_dict[modelname])
- return model
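-
-
-# Editor's note: a minimal usage sketch, not part of the original file. The model
-# variant, image size and class count are illustrative assumptions; no pretrained
-# weights are loaded here.
-def _build_demo():
-    model = build_swin_transformer('swin_B_224_22k', img_size=224, num_classes=81)
-    x = torch.randn(1, 3, 224, 224)
-    logits = model(x)        # forward_features -> (1, 1024, 7, 7) -> avgpool -> head
-    return logits.shape      # torch.Size([1, 81])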
\ No newline at end of file
diff --git a/spaces/crytion/DeepNude/gan.py b/spaces/crytion/DeepNude/gan.py
deleted file mode 100644
index ccec90eb2d061431f0bebc9e4ade0a0823ca065a..0000000000000000000000000000000000000000
--- a/spaces/crytion/DeepNude/gan.py
+++ /dev/null
@@ -1,240 +0,0 @@
-from PIL import Image
-import numpy as np
-import cv2
-import torchvision.transforms as transforms
-import torch
-import io
-import os
-import functools
-
-class DataLoader():
-
- def __init__(self, opt, cv_img):
- super(DataLoader, self).__init__()
-
- self.dataset = Dataset()
- self.dataset.initialize(opt, cv_img)
-
- self.dataloader = torch.utils.data.DataLoader(
- self.dataset,
- batch_size=opt.batchSize,
- shuffle=not opt.serial_batches,
- num_workers=int(opt.nThreads))
-
- def load_data(self):
- return self.dataloader
-
- def __len__(self):
- return 1
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self):
- super(Dataset, self).__init__()
-
- def initialize(self, opt, cv_img):
- self.opt = opt
- self.root = opt.dataroot
-
- self.A = Image.fromarray(cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB))
- self.dataset_size = 1
-
- def __getitem__(self, index):
-
- transform_A = get_transform(self.opt)
- A_tensor = transform_A(self.A.convert('RGB'))
-
- B_tensor = inst_tensor = feat_tensor = 0
-
- input_dict = {'label': A_tensor, 'inst': inst_tensor, 'image': B_tensor,
- 'feat': feat_tensor, 'path': ""}
-
- return input_dict
-
- def __len__(self):
- return 1
-
-class DeepModel(torch.nn.Module):
-
- def initialize(self, opt):
-
- torch.cuda.empty_cache()
-
- self.opt = opt
-
- self.gpu_ids = [] #FIX CPU
-
- self.netG = self.__define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG,
- opt.n_downsample_global, opt.n_blocks_global, opt.n_local_enhancers,
- opt.n_blocks_local, opt.norm, self.gpu_ids)
-
- # load networks
- self.__load_network(self.netG)
-
-
-
- def inference(self, label, inst):
-
- # Encode Inputs
- input_label, inst_map, _, _ = self.__encode_input(label, inst, infer=True)
-
- # Fake Generation
- input_concat = input_label
-
- with torch.no_grad():
- fake_image = self.netG.forward(input_concat)
-
- return fake_image
-
- # helper loading function that can be used by subclasses
- def __load_network(self, network):
-
- save_path = os.path.join(self.opt.checkpoints_dir)
-
- network.load_state_dict(torch.load(save_path))
-
- def __encode_input(self, label_map, inst_map=None, real_image=None, feat_map=None, infer=False):
- if (len(self.gpu_ids) > 0):
- input_label = label_map.data.cuda() #GPU
- else:
- input_label = label_map.data #CPU
-
- return input_label, inst_map, real_image, feat_map
-
- def __weights_init(self, m):
- classname = m.__class__.__name__
- if classname.find('Conv') != -1:
- m.weight.data.normal_(0.0, 0.02)
- elif classname.find('BatchNorm2d') != -1:
- m.weight.data.normal_(1.0, 0.02)
- m.bias.data.fill_(0)
-
- def __define_G(self, input_nc, output_nc, ngf, netG, n_downsample_global=3, n_blocks_global=9, n_local_enhancers=1,
- n_blocks_local=3, norm='instance', gpu_ids=[]):
- norm_layer = self.__get_norm_layer(norm_type=norm)
- netG = GlobalGenerator(input_nc, output_nc, ngf, n_downsample_global, n_blocks_global, norm_layer)
-
- if len(gpu_ids) > 0:
- netG.cuda(gpu_ids[0])
- netG.apply(self.__weights_init)
- return netG
-
- def __get_norm_layer(self, norm_type='instance'):
- norm_layer = functools.partial(torch.nn.InstanceNorm2d, affine=False)
- return norm_layer
-
-##############################################################################
-# Generator
-##############################################################################
-class GlobalGenerator(torch.nn.Module):
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=torch.nn.BatchNorm2d,
- padding_type='reflect'):
- assert(n_blocks >= 0)
- super(GlobalGenerator, self).__init__()
- activation = torch.nn.ReLU(True)
-
- model = [torch.nn.ReflectionPad2d(3), torch.nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation]
- ### downsample
- for i in range(n_downsampling):
- mult = 2**i
- model += [torch.nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1),
- norm_layer(ngf * mult * 2), activation]
-
- ### resnet blocks
- mult = 2**n_downsampling
- for i in range(n_blocks):
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer)]
-
- ### upsample
- for i in range(n_downsampling):
- mult = 2**(n_downsampling - i)
- model += [torch.nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, output_padding=1),
- norm_layer(int(ngf * mult / 2)), activation]
- model += [torch.nn.ReflectionPad2d(3), torch.nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), torch.nn.Tanh()]
- self.model = torch.nn.Sequential(*model)
-
- def forward(self, input):
- return self.model(input)
-
-# Define a resnet block
-class ResnetBlock(torch.nn.Module):
- def __init__(self, dim, padding_type, norm_layer, activation=torch.nn.ReLU(True), use_dropout=False):
- super(ResnetBlock, self).__init__()
- self.conv_block = self.__build_conv_block(dim, padding_type, norm_layer, activation, use_dropout)
-
- def __build_conv_block(self, dim, padding_type, norm_layer, activation, use_dropout):
- conv_block = []
- p = 0
- if padding_type == 'reflect':
- conv_block += [torch.nn.ReflectionPad2d(1)]
- elif padding_type == 'replicate':
- conv_block += [torch.nn.ReplicationPad2d(1)]
- elif padding_type == 'zero':
- p = 1
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
-
- conv_block += [torch.nn.Conv2d(dim, dim, kernel_size=3, padding=p),
- norm_layer(dim),
- activation]
- if use_dropout:
- conv_block += [torch.nn.Dropout(0.5)]
-
- p = 0
- if padding_type == 'reflect':
- conv_block += [torch.nn.ReflectionPad2d(1)]
- elif padding_type == 'replicate':
- conv_block += [torch.nn.ReplicationPad2d(1)]
- elif padding_type == 'zero':
- p = 1
- else:
- raise NotImplementedError('padding [%s] is not implemented' % padding_type)
- conv_block += [torch.nn.Conv2d(dim, dim, kernel_size=3, padding=p),
- norm_layer(dim)]
-
- return torch.nn.Sequential(*conv_block)
-
- def forward(self, x):
- out = x + self.conv_block(x)
- return out
-
-# Data utils:
-def get_transform(opt, method=Image.BICUBIC, normalize=True):
- transform_list = []
-
- base = float(2 ** opt.n_downsample_global)
- if opt.netG == 'local':
- base *= (2 ** opt.n_local_enhancers)
- transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base, method)))
-
- transform_list += [transforms.ToTensor()]
-
- if normalize:
- transform_list += [transforms.Normalize((0.5, 0.5, 0.5),
- (0.5, 0.5, 0.5))]
- return transforms.Compose(transform_list)
-
-def __make_power_2(img, base, method=Image.BICUBIC):
- ow, oh = img.size
- h = int(round(oh / base) * base)
- w = int(round(ow / base) * base)
- if (h == oh) and (w == ow):
- return img
- return img.resize((w, h), method)
-
-# Converts a Tensor into a Numpy array
-# |imtype|: the desired type of the converted numpy array
-def tensor2im(image_tensor, imtype=np.uint8, normalize=True):
- if isinstance(image_tensor, list):
- image_numpy = []
- for i in range(len(image_tensor)):
- image_numpy.append(tensor2im(image_tensor[i], imtype, normalize))
- return image_numpy
- image_numpy = image_tensor.cpu().float().numpy()
- if normalize:
- image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0
- else:
- image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0
- image_numpy = np.clip(image_numpy, 0, 255)
- if image_numpy.shape[2] == 1 or image_numpy.shape[2] > 3:
- image_numpy = image_numpy[:,:,0]
- return image_numpy.astype(imtype)
\ No newline at end of file
diff --git a/spaces/crytion/DeepNude/opencv_transform/nude_to_watermark.py b/spaces/crytion/DeepNude/opencv_transform/nude_to_watermark.py
deleted file mode 100644
index 1b39a3cc1ad1c9f41b255dc40ff3814c5c7fb305..0000000000000000000000000000000000000000
--- a/spaces/crytion/DeepNude/opencv_transform/nude_to_watermark.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import cv2
-import numpy as np
-import os
-
-# create_watermark ===============================================================
-# return:
-# the input image (the watermark blending below is currently commented out)
-def create_watermark(nude):
-
- # Add alpha channel if missing
- # if nude.shape[2] < 4:
- # nude = np.dstack([nude, np.ones((512, 512), dtype="uint8") * 255])
-
- # watermark = cv2.imread("fake.png", cv2.IMREAD_UNCHANGED)
-
- # f1 = np.asarray([0, 0, 0, 250]) # red color filter
- # f2 = np.asarray([255, 255, 255, 255])
- # mask = cv2.bitwise_not(cv2.inRange(watermark, f1, f2))
- # mask_inv = cv2.bitwise_not(mask)
-
- # res1 = cv2.bitwise_and(nude, nude, mask = mask)
- # # res2 = cv2.bitwise_and(nude, nude, mask = mask)
- # # res2 = cv2.bitwise_and(watermark, watermark, mask = mask_inv)
- # res = res1
-
- # alpha = 0.6
- # return cv2.addWeighted(res, alpha, nude, 1 - alpha, 0)
- return nude
\ No newline at end of file
diff --git a/spaces/csuhan/LLaMA-Adapter/llama/__init__.py b/spaces/csuhan/LLaMA-Adapter/llama/__init__.py
deleted file mode 100644
index e0344feb795d6da56506388334598513a9f4fa35..0000000000000000000000000000000000000000
--- a/spaces/csuhan/LLaMA-Adapter/llama/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# This software may be used and distributed according to the terms of the GNU General Public License version 3.
-
-from .generation import LLaMA
-from .model import ModelArgs, Transformer, VisionModel
-from .tokenizer import Tokenizer
diff --git a/spaces/cvlab/zero123-live/ldm/models/diffusion/ddim.py b/spaces/cvlab/zero123-live/ldm/models/diffusion/ddim.py
deleted file mode 100644
index 39ad3728a0b29f373e384ce15263e0ab04bec78d..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123-live/ldm/models/diffusion/ddim.py
+++ /dev/null
@@ -1,324 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from functools import partial
-from einops import rearrange
-
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor
-from ldm.models.diffusion.sampling_util import renorm_thresholding, norm_thresholding, spatial_norm_thresholding
-
-
-class DDIMSampler(object):
- def __init__(self, model, schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
-
- def to(self, device):
- """Same as to in torch module
- Don't really underestand why this isn't a module in the first place"""
- for k, v in self.__dict__.items():
- if isinstance(v, torch.Tensor):
- new_v = getattr(self, k).to(device)
- setattr(self, k, new_v)
-
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device("cuda"):
- attr = attr.to(torch.device("cuda"))
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- dynamic_threshold=None,
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- ctmp = conditioning[list(conditioning.keys())[0]]
- while isinstance(ctmp, list): ctmp = ctmp[0]
- cbs = ctmp.shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
-
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- print(f'Data shape for DDIM sampling is {size}, eta {eta}')
-
- samples, intermediates = self.ddim_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- dynamic_threshold=dynamic_threshold,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None,
- t_start=-1):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- timesteps = timesteps[:t_start]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- dynamic_threshold=dynamic_threshold)
- img, pred_x0 = outs
- if callback:
- img = callback(i, img, pred_x0)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None,
- dynamic_threshold=None):
- b, *_, device = *x.shape, x.device
-
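-        # Classifier-free guidance: evaluate the model on the unconditional and the
-        # conditional input in a single batch and blend the two noise predictions,
-        #   e_t = e_uncond + guidance_scale * (e_cond - e_uncond)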
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- if isinstance(c, dict):
- assert isinstance(unconditional_conditioning, dict)
- c_in = dict()
- for k in c:
- if isinstance(c[k], list):
- c_in[k] = [torch.cat([
- unconditional_conditioning[k][i],
- c[k][i]]) for i in range(len(c[k]))]
- else:
- c_in[k] = torch.cat([
- unconditional_conditioning[k],
- c[k]])
- else:
- c_in = torch.cat([unconditional_conditioning, c])
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
-
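-        # DDIM update rule (Song et al., 2021):
-        #   x_prev = sqrt(a_prev) * pred_x0 + sqrt(1 - a_prev - sigma_t^2) * e_t + sigma_t * noise
-        # with pred_x0 = (x - sqrt(1 - a_t) * e_t) / sqrt(a_t)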
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
-
- if dynamic_threshold is not None:
- pred_x0 = norm_thresholding(pred_x0, dynamic_threshold)
-
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- @torch.no_grad()
- def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None,
- unconditional_guidance_scale=1.0, unconditional_conditioning=None):
- num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0]
-
- assert t_enc <= num_reference_steps
- num_steps = t_enc
-
- if use_original_steps:
- alphas_next = self.alphas_cumprod[:num_steps]
- alphas = self.alphas_cumprod_prev[:num_steps]
- else:
- alphas_next = self.ddim_alphas[:num_steps]
- alphas = torch.tensor(self.ddim_alphas_prev[:num_steps])
-
- x_next = x0
- intermediates = []
- inter_steps = []
- for i in tqdm(range(num_steps), desc='Encoding Image'):
- t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long)
- if unconditional_guidance_scale == 1.:
- noise_pred = self.model.apply_model(x_next, t, c)
- else:
- assert unconditional_conditioning is not None
- e_t_uncond, noise_pred = torch.chunk(
- self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)),
- torch.cat((unconditional_conditioning, c))), 2)
- noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond)
-
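-            # Deterministic DDIM inversion step from noise level alphas[i] to alphas_next[i]:
-            #   x_next = sqrt(a_next / a) * x + sqrt(a_next) * (sqrt(1/a_next - 1) - sqrt(1/a - 1)) * e_t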
- xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next
- weighted_noise_pred = alphas_next[i].sqrt() * (
- (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred
- x_next = xt_weighted + weighted_noise_pred
- if return_intermediates and i % (
- num_steps // return_intermediates) == 0 and i < num_steps - 1:
- intermediates.append(x_next)
- inter_steps.append(i)
- elif return_intermediates and i >= num_steps - 2:
- intermediates.append(x_next)
- inter_steps.append(i)
-
- out = {'x_encoded': x_next, 'intermediate_steps': inter_steps}
- if return_intermediates:
- out.update({'intermediates': intermediates})
- return x_next, out
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
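-        # q(x_t | x_0): x_t = sqrt(alphas_cumprod[t]) * x0 + sqrt(1 - alphas_cumprod[t]) * noise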
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
-
- @torch.no_grad()
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- use_original_steps=False):
-
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
- x_dec = x_latent
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- return x_dec
\ No newline at end of file
diff --git a/spaces/daayros/anything-v3.0/README.md b/spaces/daayros/anything-v3.0/README.md
deleted file mode 100644
index 15176bed26d36b4f9566c7102a5655e310f76036..0000000000000000000000000000000000000000
--- a/spaces/daayros/anything-v3.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Anything V3.0
-emoji: 🏃
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-duplicated_from: akhaliq/anything-v3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/danieldux/isco-gpt/utils.py b/spaces/danieldux/isco-gpt/utils.py
deleted file mode 100644
index 8d5a715335313d061c3ead3aec8ec6d10191cf64..0000000000000000000000000000000000000000
--- a/spaces/danieldux/isco-gpt/utils.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.vectorstores.faiss import FAISS
-from langchain import OpenAI, Cohere
-from langchain.chains.qa_with_sources import load_qa_with_sources_chain
-from embeddings import OpenAIEmbeddings
-from langchain.llms import OpenAI
-from langchain.docstore.document import Document
-from langchain.vectorstores import FAISS, VectorStore
-import docx2txt
-from typing import List, Dict, Any
-import re
-import numpy as np
-from io import StringIO
-from io import BytesIO
-import streamlit as st
-from prompts import STUFF_PROMPT
-from pypdf import PdfReader
-from openai.error import AuthenticationError
-import pptx
-
-@st.experimental_memo()
-def parse_docx(file: BytesIO) -> str:
- text = docx2txt.process(file)
- # Remove multiple newlines
- text = re.sub(r"\n\s*\n", "\n\n", text)
- return text
-
-
-@st.experimental_memo()
-def parse_pdf(file: BytesIO) -> List[str]:
- pdf = PdfReader(file)
- output = []
- for page in pdf.pages:
- text = page.extract_text()
- # Merge hyphenated words
- text = re.sub(r"(\w+)-\n(\w+)", r"\1\2", text)
- # Fix newlines in the middle of sentences
- text = re.sub(r"(? str:
- text = file.read().decode("utf-8")
- # Remove multiple newlines
- text = re.sub(r"\n\s*\n", "\n\n", text)
- return text
-
-@st.experimental_memo()
-def parse_pptx(file: BytesIO) -> str:
-
- ppt_file = pptx.Presentation(file)
-
- string_data = ""
-
- for slide in ppt_file.slides:
- for shape in slide.shapes:
- if shape.has_text_frame:
- string_data += shape.text_frame.text + '\n'
- return string_data
-
-@st.experimental_memo()
-def parse_csv(uploaded_file):
- # To read file as bytes:
- #bytes_data = uploaded_file.getvalue()
- #st.write(bytes_data)
-
- # To convert to a string based IO:
- stringio = StringIO(uploaded_file.getvalue().decode("utf-8"))
- #st.write(stringio)
-
- # To read file as string:
- string_data = stringio.read()
- #st.write(string_data)
-
- # Can be used wherever a "file-like" object is accepted:
- # dataframe = pd.read_csv(uploaded_file)
- return string_data
-
-@st.experimental_memo()
-def parse_any(uploaded_file):
- stringio = StringIO(uploaded_file.getvalue().decode("utf-8"))
- string_data = stringio.read()
- return string_data
-
-@st.cache(allow_output_mutation=True)
-def text_to_docs(text: str) -> List[Document]:
- """Converts a string or list of strings to a list of Documents
- with metadata."""
- if isinstance(text, str):
- # Take a single string as one page
- text = [text]
- page_docs = [Document(page_content=page) for page in text]
-
- # Add page numbers as metadata
- for i, doc in enumerate(page_docs):
- doc.metadata["page"] = i + 1
-
- # Split pages into chunks
- doc_chunks = []
-
- for doc in page_docs:
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=800,
- separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
- chunk_overlap=0,
- )
- chunks = text_splitter.split_text(doc.page_content)
- for i, chunk in enumerate(chunks):
- doc = Document(
- page_content=chunk, metadata={"page": doc.metadata["page"], "chunk": i}
- )
-            # Add sources as metadata
- doc.metadata["source"] = f"{doc.metadata['page']}-{doc.metadata['chunk']}"
- doc_chunks.append(doc)
- return doc_chunks
-
-
-@st.cache(allow_output_mutation=True, show_spinner=False)
-def embed_docs(docs: List[Document]) -> VectorStore:
- """Embeds a list of Documents and returns a FAISS index"""
-
- if not st.session_state.get("OPENAI_API_KEY"):
- raise AuthenticationError(
- "Enter your OpenAI API key in the sidebar. You can get a key at https://platform.openai.com/account/api-keys."
- )
- else:
- # Embed the chunks
- embeddings = OpenAIEmbeddings(openai_api_key=st.session_state.get("OPENAI_API_KEY")) # type: ignore
- index = FAISS.from_documents(docs, embeddings)
-
- return index
-
-
-@st.cache(allow_output_mutation=True)
-def search_docs(index: VectorStore, query: str) -> List[Document]:
- """Searches a FAISS index for similar chunks to the query
- and returns a list of Documents."""
-
- # Search for similar chunks
- docs = index.similarity_search(query, k=5)
- return docs
-
-
-@st.cache(allow_output_mutation=True)
-def get_answer(docs: List[Document], query: str) -> Dict[str, Any]:
- """Gets an answer to a question from a list of Documents."""
-
- # Get the answer
- chain = load_qa_with_sources_chain(OpenAI(temperature=0, openai_api_key=st.session_state.get("OPENAI_API_KEY")), chain_type="stuff", prompt=STUFF_PROMPT) # type: ignore
-
- answer = chain(
- {"input_documents": docs, "question": query}, return_only_outputs=True
- )
- return answer
-
-
-@st.cache(allow_output_mutation=True)
-def get_sources(answer: Dict[str, Any], docs: List[Document]) -> List[Document]:
- """Gets the source documents for an answer."""
-
- # Get sources for the answer
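-    # (assumes the answer ends with a "SOURCES: <page>-<chunk>, ..." line, matching
-    #  the source ids attached in text_to_docs)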
- source_keys = [s for s in answer["output_text"].split("SOURCES: ")[-1].split(", ")]
-
- source_docs = []
- for doc in docs:
- if doc.metadata["source"] in source_keys:
- source_docs.append(doc)
-
- return source_docs
-
-
-def wrap_text_in_html(text: str) -> str:
- """Wraps each text block separated by newlines in tags"""
- if isinstance(text, list):
- # Add horizontal rules between pages
- text = "\n
\n".join(text)
- return "".join([f"{line}
" for line in text.split("\n")])
\ No newline at end of file
diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py
deleted file mode 100644
index 7d1f4784898dbfeb69affefb6f624711adc8cb42..0000000000000000000000000000000000000000
--- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/training/infer_demo.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import sys
-
-import os
-import torch
-import librosa
-from open_clip import create_model
-from training.data import get_audio_features
-from training.data import int16_to_float32, float32_to_int16
-from transformers import RobertaTokenizer
-
-tokenize = RobertaTokenizer.from_pretrained("roberta-base")
-
-
-def tokenizer(text):
- result = tokenize(
- text,
- padding="max_length",
- truncation=True,
- max_length=77,
- return_tensors="pt",
- )
- return {k: v.squeeze(0) for k, v in result.items()}
-
-
-PRETRAINED_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/checkpoints/epoch_top_0_audioset_no_fusion.pt"
-WAVE_48k_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/audio/machine.wav"
-
-
-def infer_text():
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- precision = "fp32"
- amodel = "HTSAT-tiny" # or 'PANN-14'
- tmodel = "roberta" # the best text encoder in our training
- enable_fusion = False # False if you do not want to use the fusion model
- fusion_type = "aff_2d"
- pretrained = PRETRAINED_PATH
-
- model, model_cfg = create_model(
- amodel,
- tmodel,
- pretrained,
- precision=precision,
- device=device,
- enable_fusion=enable_fusion,
- fusion_type=fusion_type,
- )
- # load the text, can be a list (i.e. batch size)
- text_data = ["I love the contrastive learning", "I love the pretrain model"]
-    # tokenize for roberta; if you want to tokenize for another text encoder, please refer to data.py#L43-90
- text_data = tokenizer(text_data)
-
- text_embed = model.get_text_embedding(text_data)
- print(text_embed.size())
-
-
-def infer_audio():
-
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- precision = "fp32"
- amodel = "HTSAT-tiny" # or 'PANN-14'
- tmodel = "roberta" # the best text encoder in our training
- enable_fusion = False # False if you do not want to use the fusion model
- fusion_type = "aff_2d"
- pretrained = PRETRAINED_PATH
-
- model, model_cfg = create_model(
- amodel,
- tmodel,
- pretrained,
- precision=precision,
- device=device,
- enable_fusion=enable_fusion,
- fusion_type=fusion_type,
- )
-
-    # load the waveform of shape (T,); it should be resampled to 48000 Hz
- audio_waveform, sr = librosa.load(WAVE_48k_PATH, sr=48000)
- # quantize
- audio_waveform = int16_to_float32(float32_to_int16(audio_waveform))
- audio_waveform = torch.from_numpy(audio_waveform).float()
- audio_dict = {}
-
-    # the 'fusion' truncate mode can be changed to 'rand_trunc' when running without the fusion model
- import ipdb
-
- ipdb.set_trace()
- audio_dict = get_audio_features(
- audio_dict,
- audio_waveform,
- 480000,
- data_truncating="fusion",
- data_filling="repeatpad",
- audio_cfg=model_cfg["audio_cfg"],
- )
-    # you can send a list to the model to process many audio tracks at one time (i.e. as a batch)
- audio_embed = model.get_audio_embedding([audio_dict])
- print(audio_embed.size())
- import ipdb
-
- ipdb.set_trace()
-
-
-if __name__ == "__main__":
- infer_text()
- infer_audio()
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/mpl_renderer.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/mpl_renderer.py
deleted file mode 100644
index dbcb5ca19a01e3ae000986673d66def23f9c2eac..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/contourpy/util/mpl_renderer.py
+++ /dev/null
@@ -1,613 +0,0 @@
-from __future__ import annotations
-
-import io
-from typing import TYPE_CHECKING, Any, cast
-
-import matplotlib.collections as mcollections
-import matplotlib.pyplot as plt
-import numpy as np
-
-from contourpy import FillType, LineType
-from contourpy.util.mpl_util import filled_to_mpl_paths, lines_to_mpl_paths, mpl_codes_to_offsets
-from contourpy.util.renderer import Renderer
-
-if TYPE_CHECKING:
- from matplotlib.axes import Axes
- from matplotlib.figure import Figure
- from numpy.typing import ArrayLike
-
- import contourpy._contourpy as cpy
-
-
-class MplRenderer(Renderer):
- _axes: Axes
- _fig: Figure
- _want_tight: bool
-
- """Utility renderer using Matplotlib to render a grid of plots over the same (x, y) range.
-
- Args:
- nrows (int, optional): Number of rows of plots, default ``1``.
- ncols (int, optional): Number of columns of plots, default ``1``.
- figsize (tuple(float, float), optional): Figure size in inches, default ``(9, 9)``.
- show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``.
- backend (str, optional): Matplotlib backend to use or ``None`` for default backend.
- Default ``None``.
- gridspec_kw (dict, optional): Gridspec keyword arguments to pass to ``plt.subplots``,
- default None.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- show_frame: bool = True,
- backend: str | None = None,
- gridspec_kw: dict[str, Any] | None = None,
- ) -> None:
- if backend is not None:
- import matplotlib
- matplotlib.use(backend)
-
- kwargs = dict(figsize=figsize, squeeze=False, sharex=True, sharey=True)
- if gridspec_kw is not None:
- kwargs["gridspec_kw"] = gridspec_kw
- else:
- kwargs["subplot_kw"] = dict(aspect="equal")
-
- self._fig, axes = plt.subplots(nrows, ncols, **kwargs)
- self._axes = axes.flatten()
- if not show_frame:
- for ax in self._axes:
- ax.axis("off")
-
- self._want_tight = True
-
- def __del__(self) -> None:
- if hasattr(self, "_fig"):
- plt.close(self._fig)
-
- def _autoscale(self) -> None:
-        # Use the axes._need_autoscale attribute to decide whether to autoscale before
-        # rendering after adding lines/filled. Only autoscale once per axes regardless of
-        # how many lines/filled have been added.
- for ax in self._axes:
- if getattr(ax, "_need_autoscale", False):
- ax.autoscale_view(tight=True)
- ax._need_autoscale = False
- if self._want_tight and len(self._axes) > 1:
- self._fig.tight_layout()
-
- def _get_ax(self, ax: Axes | int) -> Axes:
- if isinstance(ax, int):
- ax = self._axes[ax]
- return ax
-
- def filled(
- self,
- filled: cpy.FillReturn,
- fill_type: FillType,
- ax: Axes | int = 0,
- color: str = "C0",
- alpha: float = 0.7,
- ) -> None:
- """Plot filled contours on a single Axes.
-
- Args:
- filled (sequence of arrays): Filled contour data as returned by
- :func:`~contourpy.ContourGenerator.filled`.
- fill_type (FillType): Type of ``filled`` data, as returned by
- :attr:`~contourpy.ContourGenerator.fill_type`.
-            ax (int or Matplotlib Axes, optional): Which axes to plot on, default ``0``.
- color (str, optional): Color to plot with. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default ``"C0"``.
- alpha (float, optional): Opacity to plot with, default ``0.7``.
- """
- ax = self._get_ax(ax)
- paths = filled_to_mpl_paths(filled, fill_type)
- collection = mcollections.PathCollection(
- paths, facecolors=color, edgecolors="none", lw=0, alpha=alpha)
- ax.add_collection(collection)
- ax._need_autoscale = True
-
- def grid(
- self,
- x: ArrayLike,
- y: ArrayLike,
- ax: Axes | int = 0,
- color: str = "black",
- alpha: float = 0.1,
- point_color: str | None = None,
- quad_as_tri_alpha: float = 0,
- ) -> None:
- """Plot quad grid lines on a single Axes.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Color to plot grid lines, default ``"black"``.
- alpha (float, optional): Opacity to plot lines with, default ``0.1``.
- point_color (str, optional): Color to plot grid points or ``None`` if grid points
- should not be plotted, default ``None``.
- quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default 0.
-
- Colors may be a string color or the letter ``"C"`` followed by an integer in the range
- ``"C0"`` to ``"C9"`` to use a color from the ``tab10`` colormap.
-
- Warning:
- ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked.
- """
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- kwargs = dict(color=color, alpha=alpha)
- ax.plot(x, y, x.T, y.T, **kwargs)
- if quad_as_tri_alpha > 0:
- # Assumes no quad mask.
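-            # Draw both quad diagonals through the quad centre (xmid, ymid); quad_as_tri
-            # splits each quad into four triangles at this centre point.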
- xmid = 0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])
- ymid = 0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])
- kwargs["alpha"] = quad_as_tri_alpha
- ax.plot(
- np.stack((x[:-1, :-1], xmid, x[1:, 1:])).reshape((3, -1)),
- np.stack((y[:-1, :-1], ymid, y[1:, 1:])).reshape((3, -1)),
- np.stack((x[1:, :-1], xmid, x[:-1, 1:])).reshape((3, -1)),
- np.stack((y[1:, :-1], ymid, y[:-1, 1:])).reshape((3, -1)),
- **kwargs)
- if point_color is not None:
- ax.plot(x, y, color=point_color, alpha=alpha, marker="o", lw=0)
- ax._need_autoscale = True
-
- def lines(
- self,
- lines: cpy.LineReturn,
- line_type: LineType,
- ax: Axes | int = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- ) -> None:
- """Plot contour lines on a single Axes.
-
- Args:
- lines (sequence of arrays): Contour line data as returned by
- :func:`~contourpy.ContourGenerator.lines`.
- line_type (LineType): Type of ``lines`` data, as returned by
- :attr:`~contourpy.ContourGenerator.line_type`.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Color to plot lines. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default ``"C0"``.
- alpha (float, optional): Opacity to plot lines with, default ``1.0``.
- linewidth (float, optional): Width of lines, default ``1``.
- """
- ax = self._get_ax(ax)
- paths = lines_to_mpl_paths(lines, line_type)
- collection = mcollections.PathCollection(
- paths, facecolors="none", edgecolors=color, lw=linewidth, alpha=alpha)
- ax.add_collection(collection)
- ax._need_autoscale = True
-
- def mask(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike | np.ma.MaskedArray[Any, Any],
- ax: Axes | int = 0,
- color: str = "black",
- ) -> None:
- """Plot masked out grid points as circles on a single Axes.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
-            z (masked array of shape (ny, nx)): z-values.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Circle color, default ``"black"``.
- """
- mask = np.ma.getmask(z) # type: ignore[no-untyped-call]
- if mask is np.ma.nomask:
- return
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- ax.plot(x[mask], y[mask], "o", c=color)
-
- def save(self, filename: str, transparent: bool = False) -> None:
- """Save plots to SVG or PNG file.
-
- Args:
- filename (str): Filename to save to.
- transparent (bool, optional): Whether background should be transparent, default
- ``False``.
- """
- self._autoscale()
- self._fig.savefig(filename, transparent=transparent)
-
- def save_to_buffer(self) -> io.BytesIO:
- """Save plots to an ``io.BytesIO`` buffer.
-
- Return:
- BytesIO: PNG image buffer.
- """
- self._autoscale()
- buf = io.BytesIO()
- self._fig.savefig(buf, format="png")
- buf.seek(0)
- return buf
-
- def show(self) -> None:
- """Show plots in an interactive window, in the usual Matplotlib manner.
- """
- self._autoscale()
- plt.show()
-
- def title(self, title: str, ax: Axes | int = 0, color: str | None = None) -> None:
- """Set the title of a single Axes.
-
- Args:
- title (str): Title text.
- ax (int or Matplotlib Axes, optional): Which Axes to set the title of, default ``0``.
- color (str, optional): Color to set title. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default is ``None`` which uses Matplotlib's default title color
- that depends on the stylesheet in use.
- """
- if color:
- self._get_ax(ax).set_title(title, color=color)
- else:
- self._get_ax(ax).set_title(title)
-
- def z_values(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Axes | int = 0,
- color: str = "green",
- fmt: str = ".1f",
- quad_as_tri: bool = False,
- ) -> None:
- """Show ``z`` values on a single Axes.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
-            z (array-like of shape (ny, nx)): z-values.
- ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.
- color (str, optional): Color of added text. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``tab10`` colormap. Default ``"green"``.
- fmt (str, optional): Format to display z-values, default ``".1f"``.
- quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centers
- of quads.
-
- Warning:
- ``quad_as_tri=True`` shows z-values for all quads, even if masked.
- """
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(ny):
- for i in range(nx):
- ax.text(x[j, i], y[j, i], f"{z[j, i]:{fmt}}", ha="center", va="center",
- color=color, clip_on=True)
- if quad_as_tri:
- for j in range(ny-1):
- for i in range(nx-1):
- xx = np.mean(x[j:j+2, i:i+2])
- yy = np.mean(y[j:j+2, i:i+2])
- zz = np.mean(z[j:j+2, i:i+2])
- ax.text(xx, yy, f"{zz:{fmt}}", ha="center", va="center", color=color,
- clip_on=True)
-
-
-class MplTestRenderer(MplRenderer):
- """Test renderer implemented using Matplotlib.
-
- No whitespace around plots and no spines/ticks displayed.
- Uses Agg backend, so can only save to file/buffer, cannot call ``show()``.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- ) -> None:
- gridspec = {
- "left": 0.01,
- "right": 0.99,
- "top": 0.99,
- "bottom": 0.01,
- "wspace": 0.01,
- "hspace": 0.01,
- }
- super().__init__(
- nrows, ncols, figsize, show_frame=True, backend="Agg", gridspec_kw=gridspec,
- )
-
- for ax in self._axes:
- ax.set_xmargin(0.0)
- ax.set_ymargin(0.0)
- ax.set_xticks([])
- ax.set_yticks([])
-
- self._want_tight = False
-
-
-class MplDebugRenderer(MplRenderer):
- """Debug renderer implemented using Matplotlib.
-
- Extends ``MplRenderer`` to add extra information to help in debugging such as markers, arrows,
- text, etc.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- show_frame: bool = True,
- ) -> None:
- super().__init__(nrows, ncols, figsize, show_frame)
-
- def _arrow(
- self,
- ax: Axes,
- line_start: cpy.CoordinateArray,
- line_end: cpy.CoordinateArray,
- color: str,
- alpha: float,
- arrow_size: float,
- ) -> None:
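-        # Draw a small V-shaped arrow head near the segment midpoint, pointing from
-        # line_start towards line_end ('right' is the unit normal to the segment).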
- mid = 0.5*(line_start + line_end)
- along = line_end - line_start
- along /= np.sqrt(np.dot(along, along)) # Unit vector.
- right = np.asarray((along[1], -along[0]))
- arrow = np.stack((
- mid - (along*0.5 - right)*arrow_size,
- mid + along*0.5*arrow_size,
- mid - (along*0.5 + right)*arrow_size,
- ))
- ax.plot(arrow[:, 0], arrow[:, 1], "-", c=color, alpha=alpha)
-
- def _filled_to_lists_of_points_and_offsets(
- self,
- filled: cpy.FillReturn,
- fill_type: FillType,
- ) -> tuple[list[cpy.PointArray], list[cpy.OffsetArray]]:
- if fill_type == FillType.OuterCode:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_OuterCode, filled)
- all_points = filled[0]
- all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1]]
- elif fill_type == FillType.ChunkCombinedCode:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedCode, filled)
- all_points = [points for points in filled[0] if points is not None]
- all_offsets = [mpl_codes_to_offsets(codes) for codes in filled[1] if codes is not None]
- elif fill_type == FillType.OuterOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_OuterOffset, filled)
- all_points = filled[0]
- all_offsets = filled[1]
- elif fill_type == FillType.ChunkCombinedOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled)
- all_points = [points for points in filled[0] if points is not None]
- all_offsets = [offsets for offsets in filled[1] if offsets is not None]
- elif fill_type == FillType.ChunkCombinedCodeOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled)
- all_points = []
- all_offsets = []
- for points, codes, outer_offsets in zip(*filled):
- if points is None:
- continue
- if TYPE_CHECKING:
- assert codes is not None and outer_offsets is not None
- all_points += np.split(points, outer_offsets[1:-1])
- all_codes = np.split(codes, outer_offsets[1:-1])
- all_offsets += [mpl_codes_to_offsets(codes) for codes in all_codes]
- elif fill_type == FillType.ChunkCombinedOffsetOffset:
- if TYPE_CHECKING:
- filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled)
- all_points = []
- all_offsets = []
- for points, offsets, outer_offsets in zip(*filled):
- if points is None:
- continue
- if TYPE_CHECKING:
- assert offsets is not None and outer_offsets is not None
- for i in range(len(outer_offsets)-1):
- offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1]
- all_points.append(points[offs[0]:offs[-1]])
- all_offsets.append(offs - offs[0])
- else:
- raise RuntimeError(f"Rendering FillType {fill_type} not implemented")
-
- return all_points, all_offsets
-
- def _lines_to_list_of_points(
- self, lines: cpy.LineReturn, line_type: LineType,
- ) -> list[cpy.PointArray]:
- if line_type == LineType.Separate:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_Separate, lines)
- all_lines = lines
- elif line_type == LineType.SeparateCode:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_SeparateCode, lines)
- all_lines = lines[0]
- elif line_type == LineType.ChunkCombinedCode:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_ChunkCombinedCode, lines)
- all_lines = []
- for points, codes in zip(*lines):
- if points is not None:
- if TYPE_CHECKING:
- assert codes is not None
- offsets = mpl_codes_to_offsets(codes)
- for i in range(len(offsets)-1):
- all_lines.append(points[offsets[i]:offsets[i+1]])
- elif line_type == LineType.ChunkCombinedOffset:
- if TYPE_CHECKING:
- lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines)
- all_lines = []
- for points, all_offsets in zip(*lines):
- if points is not None:
- if TYPE_CHECKING:
- assert all_offsets is not None
- for i in range(len(all_offsets)-1):
- all_lines.append(points[all_offsets[i]:all_offsets[i+1]])
- else:
- raise RuntimeError(f"Rendering LineType {line_type} not implemented")
-
- return all_lines
-
- def filled(
- self,
- filled: cpy.FillReturn,
- fill_type: FillType,
- ax: Axes | int = 0,
- color: str = "C1",
- alpha: float = 0.7,
- line_color: str = "C0",
- line_alpha: float = 0.7,
- point_color: str = "C0",
- start_point_color: str = "red",
- arrow_size: float = 0.1,
- ) -> None:
- super().filled(filled, fill_type, ax, color, alpha)
-
- if line_color is None and point_color is None:
- return
-
- ax = self._get_ax(ax)
- all_points, all_offsets = self._filled_to_lists_of_points_and_offsets(filled, fill_type)
-
- # Lines.
- if line_color is not None:
- for points, offsets in zip(all_points, all_offsets):
- for start, end in zip(offsets[:-1], offsets[1:]):
- xys = points[start:end]
- ax.plot(xys[:, 0], xys[:, 1], c=line_color, alpha=line_alpha)
-
- if arrow_size > 0.0:
- n = len(xys)
- for i in range(n-1):
- self._arrow(ax, xys[i], xys[i+1], line_color, line_alpha, arrow_size)
-
- # Points.
- if point_color is not None:
- for points, offsets in zip(all_points, all_offsets):
- mask = np.ones(offsets[-1], dtype=bool)
- mask[offsets[1:]-1] = False # Exclude end points.
- if start_point_color is not None:
- start_indices = offsets[:-1]
- mask[start_indices] = False # Exclude start points.
- ax.plot(
- points[:, 0][mask], points[:, 1][mask], "o", c=point_color, alpha=line_alpha)
-
- if start_point_color is not None:
- ax.plot(points[:, 0][start_indices], points[:, 1][start_indices], "o",
- c=start_point_color, alpha=line_alpha)
-
- def lines(
- self,
- lines: cpy.LineReturn,
- line_type: LineType,
- ax: Axes | int = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- point_color: str = "C0",
- start_point_color: str = "red",
- arrow_size: float = 0.1,
- ) -> None:
- super().lines(lines, line_type, ax, color, alpha, linewidth)
-
- if arrow_size == 0.0 and point_color is None:
- return
-
- ax = self._get_ax(ax)
- all_lines = self._lines_to_list_of_points(lines, line_type)
-
- if arrow_size > 0.0:
- for line in all_lines:
- for i in range(len(line)-1):
- self._arrow(ax, line[i], line[i+1], color, alpha, arrow_size)
-
- if point_color is not None:
- for line in all_lines:
- start_index = 0
- end_index = len(line)
- if start_point_color is not None:
- ax.plot(line[0, 0], line[0, 1], "o", c=start_point_color, alpha=alpha)
- start_index = 1
- if line[0][0] == line[-1][0] and line[0][1] == line[-1][1]:
- end_index -= 1
- ax.plot(line[start_index:end_index, 0], line[start_index:end_index, 1], "o",
- c=color, alpha=alpha)
-
- def point_numbers(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Axes | int = 0,
- color: str = "red",
- ) -> None:
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(ny):
- for i in range(nx):
- quad = i + j*nx
- ax.text(x[j, i], y[j, i], str(quad), ha="right", va="top", color=color,
- clip_on=True)
-
- def quad_numbers(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: Axes | int = 0,
- color: str = "blue",
- ) -> None:
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(1, ny):
- for i in range(1, nx):
- quad = i + j*nx
- xmid = x[j-1:j+1, i-1:i+1].mean()
- ymid = y[j-1:j+1, i-1:i+1].mean()
- ax.text(xmid, ymid, str(quad), ha="center", va="center", color=color, clip_on=True)
-
- def z_levels(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- lower_level: float,
- upper_level: float | None = None,
- ax: Axes | int = 0,
- color: str = "green",
- ) -> None:
- ax = self._get_ax(ax)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- for j in range(ny):
- for i in range(nx):
- zz = z[j, i]
- if upper_level is not None and zz > upper_level:
- z_level = 2
- elif zz > lower_level:
- z_level = 1
- else:
- z_level = 0
- ax.text(x[j, i], y[j, i], z_level, ha="left", va="bottom", color=color,
- clip_on=True)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/py23.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/py23.py
deleted file mode 100644
index 29f634d624b7df125722c3bae594c1d39a835aec..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/py23.py
+++ /dev/null
@@ -1,96 +0,0 @@
-"""Python 2/3 compat layer leftovers."""
-
-import decimal as _decimal
-import math as _math
-import warnings
-from contextlib import redirect_stderr, redirect_stdout
-from io import BytesIO
-from io import StringIO as UnicodeIO
-from types import SimpleNamespace
-
-from .textTools import Tag, bytechr, byteord, bytesjoin, strjoin, tobytes, tostr
-
-warnings.warn(
- "The py23 module has been deprecated and will be removed in a future release. "
- "Please update your code.",
- DeprecationWarning,
-)
-
-__all__ = [
- "basestring",
- "bytechr",
- "byteord",
- "BytesIO",
- "bytesjoin",
- "open",
- "Py23Error",
- "range",
- "RecursionError",
- "round",
- "SimpleNamespace",
- "StringIO",
- "strjoin",
- "Tag",
- "tobytes",
- "tostr",
- "tounicode",
- "unichr",
- "unicode",
- "UnicodeIO",
- "xrange",
- "zip",
-]
-
-
-class Py23Error(NotImplementedError):
- pass
-
-
-RecursionError = RecursionError
-StringIO = UnicodeIO
-
-basestring = str
-isclose = _math.isclose
-isfinite = _math.isfinite
-open = open
-range = range
-round = round3 = round
-unichr = chr
-unicode = str
-zip = zip
-
-tounicode = tostr
-
-
-def xrange(*args, **kwargs):
- raise Py23Error("'xrange' is not defined. Use 'range' instead.")
-
-
-def round2(number, ndigits=None):
- """
- Implementation of Python 2 built-in round() function.
- Rounds a number to a given precision in decimal digits (default
- 0 digits). The result is a floating point number. Values are rounded
- to the closest multiple of 10 to the power minus ndigits; if two
- multiples are equally close, rounding is done away from 0.
- ndigits may be negative.
- See Python 2 documentation:
- https://docs.python.org/2/library/functions.html?highlight=round#round
- """
- if ndigits is None:
- ndigits = 0
-
- if ndigits < 0:
- exponent = 10 ** (-ndigits)
- quotient, remainder = divmod(number, exponent)
- if remainder >= exponent // 2 and number >= 0:
- quotient += 1
- return float(quotient * exponent)
- else:
- exponent = _decimal.Decimal("10") ** (-ndigits)
-
- d = _decimal.Decimal.from_float(number).quantize(
- exponent, rounding=_decimal.ROUND_HALF_UP
- )
-
- return float(d)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/analytics.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/analytics.py
deleted file mode 100644
index fccb0da911dbbe5140af0b9303472106f524c873..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/analytics.py
+++ /dev/null
@@ -1,249 +0,0 @@
-""" Functions related to analytics and telemetry. """
-from __future__ import annotations
-
-import asyncio
-import json
-import os
-import pkgutil
-import threading
-import urllib.parse
-import warnings
-from distutils.version import StrictVersion
-from typing import Any
-
-import requests
-
-import gradio
-from gradio import wasm_utils
-from gradio.context import Context
-from gradio.utils import GRADIO_VERSION
-
-# For testability, we import the pyfetch function into this module scope and define a fallback coroutine object to be patched in tests.
-try:
- from pyodide.http import pyfetch as pyodide_pyfetch # type: ignore
-except ImportError:
-
- async def pyodide_pyfetch(*args, **kwargs):
- raise NotImplementedError(
- "pyodide.http.pyfetch is not available in this environment."
- )
-
-
-ANALYTICS_URL = "https://api.gradio.app/"
-PKG_VERSION_URL = "https://api.gradio.app/pkg-version"
-
-
-def analytics_enabled() -> bool:
- """
- Returns: True if analytics are enabled, False otherwise.
- """
- return os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True"
-
-
-def _do_analytics_request(url: str, data: dict[str, Any]) -> None:
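-    # Fire-and-forget: in Wasm we schedule an async pyfetch call, otherwise we post
-    # from a background thread so analytics never block the calling code.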
- if wasm_utils.IS_WASM:
- asyncio.ensure_future(
- _do_wasm_analytics_request(
- url=url,
- data=data,
- )
- )
- else:
- threading.Thread(
- target=_do_normal_analytics_request,
- kwargs={
- "url": url,
- "data": data,
- },
- ).start()
-
-
-def _do_normal_analytics_request(url: str, data: dict[str, Any]) -> None:
- data["ip_address"] = get_local_ip_address()
- try:
- requests.post(url, data=data, timeout=5)
- except (requests.ConnectionError, requests.exceptions.ReadTimeout):
- pass # do not push analytics if no network
-
-
-async def _do_wasm_analytics_request(url: str, data: dict[str, Any]) -> None:
- data["ip_address"] = await get_local_ip_address_wasm()
-
- # We use urllib.parse.urlencode to encode the data as a form.
- # Ref: https://docs.python.org/3/library/urllib.request.html#urllib-examples
- body = urllib.parse.urlencode(data).encode("ascii")
- headers = {
- "Content-Type": "application/x-www-form-urlencoded",
- }
-
- try:
- await asyncio.wait_for(
- pyodide_pyfetch(url, method="POST", headers=headers, body=body),
- timeout=5,
- )
- except asyncio.TimeoutError:
- pass # do not push analytics if no network
-
-
-def version_check():
- try:
- version_data = pkgutil.get_data(__name__, "version.txt")
- if not version_data:
- raise FileNotFoundError
- current_pkg_version = version_data.decode("ascii").strip()
- latest_pkg_version = requests.get(url=PKG_VERSION_URL, timeout=3).json()[
- "version"
- ]
- if StrictVersion(latest_pkg_version) > StrictVersion(current_pkg_version):
- print(
- f"IMPORTANT: You are using gradio version {current_pkg_version}, "
- f"however version {latest_pkg_version} is available, please upgrade."
- )
- print("--------")
- except json.decoder.JSONDecodeError:
- warnings.warn("unable to parse version details from package URL.")
- except KeyError:
- warnings.warn("package URL does not contain version info.")
- except Exception:
- pass
-
-
-def get_local_ip_address() -> str:
- """
- Gets the public IP address or returns the string "No internet connection" if unable
- to obtain it or the string "Analytics disabled" if a user has disabled analytics.
- Does not make a new request if the IP address has already been obtained in the
- same Python session.
- """
- if not analytics_enabled():
- return "Analytics disabled"
-
- if Context.ip_address is None:
- try:
- ip_address = requests.get(
- "https://checkip.amazonaws.com/", timeout=3
- ).text.strip()
- except (requests.ConnectionError, requests.exceptions.ReadTimeout):
- ip_address = "No internet connection"
- Context.ip_address = ip_address
- else:
- ip_address = Context.ip_address
- return ip_address
-
-
-async def get_local_ip_address_wasm() -> str:
- """The Wasm-compatible version of get_local_ip_address()."""
- if not analytics_enabled():
- return "Analytics disabled"
-
- if Context.ip_address is None:
- try:
- response = await asyncio.wait_for(
- pyodide_pyfetch(
-                    # The API used by the normal version (`get_local_ip_address()`), `https://checkip.amazonaws.com/`, blocks CORS requests, so here we use a different API.
- "https://api.ipify.org"
- ),
- timeout=5,
- )
- response_text: str = await response.string() # type: ignore
- ip_address = response_text.strip()
- except (asyncio.TimeoutError, OSError):
- ip_address = "No internet connection"
- Context.ip_address = ip_address
- else:
- ip_address = Context.ip_address
- return ip_address
-
-
-def initiated_analytics(data: dict[str, Any]) -> None:
- if not analytics_enabled():
- return
-
- _do_analytics_request(
- url=f"{ANALYTICS_URL}gradio-initiated-analytics/",
- data=data,
- )
-
-
-def launched_analytics(blocks: gradio.Blocks, data: dict[str, Any]) -> None:
- if not analytics_enabled():
- return
-
- blocks_telemetry, inputs_telemetry, outputs_telemetry, targets_telemetry = (
- [],
- [],
- [],
- [],
- )
-
- from gradio.blocks import BlockContext
-
- for x in list(blocks.blocks.values()):
- blocks_telemetry.append(x.get_block_name()) if isinstance(
- x, BlockContext
- ) else blocks_telemetry.append(str(x))
-
- for x in blocks.dependencies:
- targets_telemetry = targets_telemetry + [
-            # Sometimes the target can be the Blocks object itself, so we need to check if it's in blocks.blocks
- str(blocks.blocks[y])
- for y in x["targets"]
- if y in blocks.blocks
- ]
- inputs_telemetry = inputs_telemetry + [
- str(blocks.blocks[y]) for y in x["inputs"] if y in blocks.blocks
- ]
- outputs_telemetry = outputs_telemetry + [
- str(blocks.blocks[y]) for y in x["outputs"] if y in blocks.blocks
- ]
- additional_data = {
- "version": GRADIO_VERSION,
- "is_kaggle": blocks.is_kaggle,
- "is_sagemaker": blocks.is_sagemaker,
- "using_auth": blocks.auth is not None,
- "dev_mode": blocks.dev_mode,
- "show_api": blocks.show_api,
- "show_error": blocks.show_error,
- "title": blocks.title,
- "inputs": blocks.input_components
- if blocks.mode == "interface"
- else inputs_telemetry,
- "outputs": blocks.output_components
- if blocks.mode == "interface"
- else outputs_telemetry,
- "targets": targets_telemetry,
- "blocks": blocks_telemetry,
- "events": [str(x["trigger"]) for x in blocks.dependencies],
- "is_wasm": wasm_utils.IS_WASM,
- }
-
- data.update(additional_data)
-
- _do_analytics_request(url=f"{ANALYTICS_URL}gradio-launched-telemetry/", data=data)
-
-
-def integration_analytics(data: dict[str, Any]) -> None:
- if not analytics_enabled():
- return
-
- _do_analytics_request(
- url=f"{ANALYTICS_URL}gradio-integration-analytics/",
- data=data,
- )
-
-
-def error_analytics(message: str) -> None:
- """
- Send error analytics if there is network
- Parameters:
- message: Details about error
- """
- if not analytics_enabled():
- return
-
- data = {"error": message}
-
- _do_analytics_request(
- url=f"{ANALYTICS_URL}gradio-error-analytics/",
- data=data,
- )
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css
deleted file mode 100644
index 77ebe6c1fea2e3557f76088bb9f5c30e2cfdb72a..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css
+++ /dev/null
@@ -1 +0,0 @@
-.spacer.svelte-1kspdo{display:inline-block;width:0;height:0}.json-node.svelte-1kspdo{display:inline;color:var(--body-text-color);line-height:var(--line-sm);font-family:var(--font-mono)}.expand-array.svelte-1kspdo{border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);background:var(--background-fill-secondary);padding:0 var(--size-1);color:var(--body-text-color)}.expand-array.svelte-1kspdo:hover{background:var(--background-fill-primary)}.children.svelte-1kspdo{padding-left:var(--size-4)}.json-item.svelte-1kspdo{display:inline}.null.svelte-1kspdo{color:var(--body-text-color-subdued)}.string.svelte-1kspdo{color:var(--color-green-500)}.number.svelte-1kspdo{color:var(--color-blue-500)}.bool.svelte-1kspdo{color:var(--color-red-500)}.json-holder.svelte-1trjy9a{padding:var(--size-2)}button.svelte-1trjy9a{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);padding:5px;width:22px;height:22px;overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)}
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/test_headers.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/test_headers.py
deleted file mode 100644
index ba53d088f6f8d22dede47873a03b70a7103da7ea..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/test_headers.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import pytest
-
-from .._events import Request
-from .._headers import (
- get_comma_header,
- has_expect_100_continue,
- Headers,
- normalize_and_validate,
- set_comma_header,
-)
-from .._util import LocalProtocolError
-
-
-def test_normalize_and_validate() -> None:
- assert normalize_and_validate([("foo", "bar")]) == [(b"foo", b"bar")]
- assert normalize_and_validate([(b"foo", b"bar")]) == [(b"foo", b"bar")]
-
- # no leading/trailing whitespace in names
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([(b"foo ", "bar")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([(b" foo", "bar")])
-
- # no weird characters in names
- with pytest.raises(LocalProtocolError) as excinfo:
- normalize_and_validate([(b"foo bar", b"baz")])
- assert "foo bar" in str(excinfo.value)
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([(b"foo\x00bar", b"baz")])
- # Not even 8-bit characters:
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([(b"foo\xffbar", b"baz")])
- # And not even the control characters we allow in values:
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([(b"foo\x01bar", b"baz")])
-
- # no return or NUL characters in values
- with pytest.raises(LocalProtocolError) as excinfo:
- normalize_and_validate([("foo", "bar\rbaz")])
- assert "bar\\rbaz" in str(excinfo.value)
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("foo", "bar\nbaz")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("foo", "bar\x00baz")])
- # no leading/trailing whitespace
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("foo", "barbaz ")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("foo", " barbaz")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("foo", "barbaz\t")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("foo", "\tbarbaz")])
-
- # content-length
- assert normalize_and_validate([("Content-Length", "1")]) == [
- (b"content-length", b"1")
- ]
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("Content-Length", "asdf")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("Content-Length", "1x")])
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("Content-Length", "1"), ("Content-Length", "2")])
- assert normalize_and_validate(
- [("Content-Length", "0"), ("Content-Length", "0")]
- ) == [(b"content-length", b"0")]
- assert normalize_and_validate([("Content-Length", "0 , 0")]) == [
- (b"content-length", b"0")
- ]
- with pytest.raises(LocalProtocolError):
- normalize_and_validate(
- [("Content-Length", "1"), ("Content-Length", "1"), ("Content-Length", "2")]
- )
- with pytest.raises(LocalProtocolError):
- normalize_and_validate([("Content-Length", "1 , 1,2")])
-
- # transfer-encoding
- assert normalize_and_validate([("Transfer-Encoding", "chunked")]) == [
- (b"transfer-encoding", b"chunked")
- ]
- assert normalize_and_validate([("Transfer-Encoding", "cHuNkEd")]) == [
- (b"transfer-encoding", b"chunked")
- ]
- with pytest.raises(LocalProtocolError) as excinfo:
- normalize_and_validate([("Transfer-Encoding", "gzip")])
- assert excinfo.value.error_status_hint == 501 # Not Implemented
- with pytest.raises(LocalProtocolError) as excinfo:
- normalize_and_validate(
- [("Transfer-Encoding", "chunked"), ("Transfer-Encoding", "gzip")]
- )
- assert excinfo.value.error_status_hint == 501 # Not Implemented
-
-
-def test_get_set_comma_header() -> None:
- headers = normalize_and_validate(
- [
- ("Connection", "close"),
- ("whatever", "something"),
- ("connectiON", "fOo,, , BAR"),
- ]
- )
-
- assert get_comma_header(headers, b"connection") == [b"close", b"foo", b"bar"]
-
- headers = set_comma_header(headers, b"newthing", ["a", "b"]) # type: ignore
-
- with pytest.raises(LocalProtocolError):
- set_comma_header(headers, b"newthing", [" a", "b"]) # type: ignore
-
- assert headers == [
- (b"connection", b"close"),
- (b"whatever", b"something"),
- (b"connection", b"fOo,, , BAR"),
- (b"newthing", b"a"),
- (b"newthing", b"b"),
- ]
-
- headers = set_comma_header(headers, b"whatever", ["different thing"]) # type: ignore
-
- assert headers == [
- (b"connection", b"close"),
- (b"connection", b"fOo,, , BAR"),
- (b"newthing", b"a"),
- (b"newthing", b"b"),
- (b"whatever", b"different thing"),
- ]
-
-
-def test_has_100_continue() -> None:
- assert has_expect_100_continue(
- Request(
- method="GET",
- target="/",
- headers=[("Host", "example.com"), ("Expect", "100-continue")],
- )
- )
- assert not has_expect_100_continue(
- Request(method="GET", target="/", headers=[("Host", "example.com")])
- )
- # Case insensitive
- assert has_expect_100_continue(
- Request(
- method="GET",
- target="/",
- headers=[("Host", "example.com"), ("Expect", "100-Continue")],
- )
- )
- # Doesn't work in HTTP/1.0
- assert not has_expect_100_continue(
- Request(
- method="GET",
- target="/",
- headers=[("Host", "example.com"), ("Expect", "100-continue")],
- http_version="1.0",
- )
- )
diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/train.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/train.py
deleted file mode 100644
index f5759c4679d2ee9c0748444adf66b8453cf09728..0000000000000000000000000000000000000000
--- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/train.py
+++ /dev/null
@@ -1,838 +0,0 @@
-import json
-import logging
-import math
-import os
-import time
-from contextlib import suppress
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-from open_clip import ClipLoss, gather_features
-from .distributed import is_master
-from .zero_shot import zero_shot_eval
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self):
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
-
-def unwrap_model(model):
- if hasattr(model, "module"):
- return model.module
- else:
- return model
-
-
-def train_one_epoch(
- model, data, epoch, optimizer, scaler, scheduler, args, tb_writer=None
-):
- device = torch.device(args.device)
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
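-    # "amp" runs the forward pass under torch.cuda.amp.autocast; any other precision
-    # setting falls back to contextlib.suppress(), i.e. a no-op context manager.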
- model.train()
- loss = ClipLoss(
- local_loss=args.local_loss,
- gather_with_grad=args.gather_with_grad,
- cache_labels=True,
- rank=args.rank,
- world_size=args.world_size,
- use_horovod=args.horovod,
- mlp_loss=args.clap_mlploss,
- weight_loss_kappa=args.kappa,
- )
-
- dataloader, sampler = data["train"].dataloader, data["train"].sampler
- if args.distributed and sampler is not None:
- sampler.set_epoch(epoch)
- num_batches_per_epoch = dataloader.num_batches
- sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10))
-
- # for toy dataset
- if args.dataset_type == "toy":
- dataloader.dataset.generate_queue()
-
- loss_m = AverageMeter()
- batch_time_m = AverageMeter()
- data_time_m = AverageMeter()
- end = time.time()
-
- for i, batch in enumerate(dataloader):
- # logging.info(f"batch {i} of {num_batches_per_epoch}")
- step = num_batches_per_epoch * epoch + i
- if isinstance(scheduler, dict):
- for s in scheduler.values():
- s(step)
- else:
- scheduler(step)
- audios = batch # contains mel_spec, wavform, and longer list
- texts = batch["text"]
- # audios = audios.to(device=device, non_blocking=True)
- # texts = texts.to(device=device, non_blocking=True)
-
- data_time_m.update(time.time() - end)
- if isinstance(optimizer, dict):
- for o_ in optimizer.values():
- o_.zero_grad()
- else:
- optimizer.zero_grad()
-
- with autocast():
- (
- audio_features,
- text_features,
- audio_features_mlp,
- text_features_mlp,
- logit_scale_a,
- logit_scale_t,
- ) = model(audios, texts, device)
-
- if args.clap_mlploss:
- total_loss = loss(
- audio_features=audio_features,
- text_features=text_features,
- logit_scale_a=logit_scale_a,
- logit_scale_t=logit_scale_t,
- audio_features_mlp=audio_features_mlp,
- text_features_mlp=text_features_mlp,
- )
- else:
- total_loss = loss(
- audio_features=audio_features,
- text_features=text_features,
- logit_scale_a=logit_scale_a,
- )
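- # `optimizer` (and `scaler`) may be a single object or a dict of several; the
- # two branches below run the same backward/step sequence, and under Horovod
- # gradients are synchronized before unscaling.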
- if isinstance(optimizer, dict):
- if scaler is not None:
- scaler.scale(total_loss).backward()
- for o_ in optimizer.values():
- if args.horovod:
- o_.synchronize()
- scaler.unscale_(o_)
- with o_.skip_synchronize():
- scaler.step(o_)
- else:
- scaler.step(o_)
- scaler.update()
- else:
- total_loss.backward()
- for o_ in optimizer.values():
- o_.step()
- else:
- if scaler is not None:
- scaler.scale(total_loss).backward()
- if args.horovod:
- optimizer.synchronize()
- scaler.unscale_(optimizer)
- with optimizer.skip_synchronize():
- scaler.step(optimizer)
- else:
- scaler.step(optimizer)
- scaler.update()
- else:
- total_loss.backward()
- optimizer.step()
-
- # Note: we clamp to 4.6052 = ln(100), as in the original paper.
- with torch.no_grad():
- unwrap_model(model).logit_scale_a.clamp_(0, math.log(100))
- if args.clap_mlploss:
- unwrap_model(model).logit_scale_t.clamp_(0, math.log(100))
-
- batch_time_m.update(time.time() - end)
- end = time.time()
- batch_count = i + 1
- if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch):
- if isinstance(audios, dict):
- batch_size = len(audios["waveform"])
- else:
- batch_size = len(audios)
- num_samples = batch_count * batch_size * args.world_size
- samples_per_epoch = dataloader.num_samples
- percent_complete = 100.0 * batch_count / num_batches_per_epoch
-
- # NOTE: the loss is coarsely sampled, only on the master node and once per log update
- loss_m.update(total_loss.item(), batch_size)
- logit_scale_scalar_a = logit_scale_a.item()
- logit_scale_scalar_t = logit_scale_t.item()
- if isinstance(optimizer, dict):
- if args.clap_mlploss:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- f"Logit Scale Text: {logit_scale_scalar_t:.3f}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "scale_text": logit_scale_scalar_t,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- )
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()],
- }
-
- else:
- if args.clap_mlploss:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- f"Logit Scale Text: {logit_scale_scalar_t:.3f}"
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "scale_text": logit_scale_scalar_t,
- "lr": optimizer.param_groups[0]["lr"],
- }
- else:
- logging.info(
- f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
- f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
- f"Data (t): {data_time_m.avg:.3f} "
- f"Batch (t): {batch_time_m.avg:.3f} "
- f"LR: {optimizer.param_groups[0]['lr']:5f} "
- f"Logit Scale Audio: {logit_scale_scalar_a:.3f}"
- )
-
- # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
- log_data = {
- "loss": loss_m.val,
- "data_time": data_time_m.val,
- "batch_time": batch_time_m.val,
- "scale_audio": logit_scale_scalar_a,
- "lr": optimizer.param_groups[0]["lr"],
- }
- for name, val in log_data.items():
- name = "train/" + name
- if tb_writer is not None:
- tb_writer.add_scalar(name, val, step)
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- wandb.log({name: val, "step": step})
-
- # resetting batch / data time meters per log window
- batch_time_m.reset()
- data_time_m.reset()
- # end for
-
-
-def evaluate(model, data, epoch, args, tb_writer=None):
- metrics = {}
- if not args.parallel_eval:
- if not is_master(args):
- return metrics
- device = torch.device(args.device)
- model.eval()
-
- # CHANGE
- # zero_shot_metrics = zero_shot_eval(model, data, epoch, args)
- # metrics.update(zero_shot_metrics)
- if is_master(args):
- print("Evaluating...")
- autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress
- if args.val_dataset_names == ["Clotho", "audiocaps"]:
- # If only Clotho and AudioCaps are evaluated, a different evaluation function is used,
- # because their valid and test sets pair 5 texts with each audio clip.
- if args.parallel_eval:
- # (yusong): just a hack here. Don't use parallel eval when evaluating only clotho and audiocaps.
- raise NotImplementedError(
- "Parallel evaluation not supported for eval only Clotho and audiocaps."
- )
- val_metrics_per_dataset = evaluate_clotho_audiocaps(
- model, data, epoch, args, autocast, device, tb_writer
- )
- for m in val_metrics_per_dataset.values():
- metrics.update(m)
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
- metrics = select_top_metric_clotho_audiocaps(
- metrics, val_metrics_per_dataset, args
- )
- elif "val" in data and (
- args.val_frequency
- and ((epoch % args.val_frequency) == 0 or epoch == args.epochs)
- ):
- dataloader = data["val"].dataloader
- num_samples = 0
- samples_per_val = dataloader.num_samples
-
- # FIXME this does not scale past small eval datasets
- # all_audio_features @ all_text_features will blow up memory and compute very quickly
- eval_info = {}
- if args.clap_mlploss:
- eval_info["all"] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- "all_audio_features_mlp": [],
- "all_text_features_mlp": [],
- } # cumulative_loss = 0.0
- else:
- eval_info["all"] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- } # cumulative_loss = 0.0
- # all_audio_features, all_text_features, all_audio_features_mlp, all_text_features_mlp = [], [], [], []
- with torch.no_grad():
- for i, batch in enumerate(dataloader):
- audios = batch # contains mel_spec, waveform, and the "longer" flag list
- texts = batch["text"]
- # audios = audios.to(device=device, non_blocking=True)
-
- all_names = list(
- set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]])
- )
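- # Samples are grouped per dataset; the name is recovered from the last two
- # directory components of each shard URL in batch["__url__"].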
- for name in all_names:
- if name not in eval_info.keys():
- if args.clap_mlploss:
- eval_info[name] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- "all_audio_features_mlp": [],
- "all_text_features_mlp": [],
- }
- else:
- eval_info[name] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- }
- with autocast():
- (
- audio_features,
- text_features,
- audio_features_mlp,
- text_features_mlp,
- logit_scale_a,
- logit_scale_t,
- ) = model(audios, texts, device)
-
- if args.parallel_eval:
- # multi-GPU eval
- if args.clap_mlploss:
- (
- audio_features,
- text_features,
- audio_features_mlp,
- text_features_mlp,
- ) = gather_features(
- audio_features=audio_features,
- text_features=text_features,
- audio_features_mlp=audio_features_mlp,
- text_features_mlp=text_features_mlp,
- local_loss=False,
- gather_with_grad=False,
- rank=args.rank,
- world_size=args.world_size,
- use_horovod=args.horovod,
- mlp_loss=args.clap_mlploss,
- )
- else:
- (audio_features, text_features,) = gather_features(
- audio_features=audio_features,
- text_features=text_features,
- local_loss=False,
- gather_with_grad=False,
- rank=args.rank,
- world_size=args.world_size,
- use_horovod=args.horovod,
- mlp_loss=args.clap_mlploss,
- )
-
- if is_master(args):
- num_samples += audio_features.shape[0]
- for n in [*all_names, "all"]:
- if n == "all":
- eval_info[n]["all_audio_features"].append(
- audio_features.cpu()
- )
- eval_info[n]["all_text_features"].append(
- text_features.cpu()
- )
- if args.clap_mlploss:
- eval_info[n]["all_audio_features_mlp"].append(
- audio_features_mlp.cpu()
- )
- eval_info[n]["all_text_features_mlp"].append(
- text_features_mlp.cpu()
- )
- else:
- idx = np.where(
- np.array(
- [
- "-".join(b.split("/")[-3:-1])
- for b in batch["__url__"]
- ]
- )
- == n
- )[0]
- eval_info[n]["all_audio_features"].append(
- audio_features.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- eval_info[n]["all_text_features"].append(
- text_features.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- if args.clap_mlploss:
- eval_info[n]["all_audio_features_mlp"].append(
- audio_features_mlp.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- eval_info[n]["all_text_features_mlp"].append(
- text_features_mlp.cpu().index_select(
- 0, torch.tensor(idx).long()
- )
- )
- # print(f'eval step {i}') # (yusong): for debug
-
- # cumulative_loss += total_loss * batch_size
- # num_samples += batch_size
- if is_master(args) and (i % 100) == 0: # and i != 0:
- logging.info(
- f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]"
- )
- if is_master(args):
- val_metrics_per_dataset = {}
- for n in eval_info.keys():
- if args.clap_mlploss:
- metrics_single_dataset = get_metrics(
- audio_features=torch.cat(
- eval_info[n]["all_audio_features"]
- ),
- text_features=torch.cat(eval_info[n]["all_text_features"]),
- logit_scale_a=logit_scale_a.cpu(),
- audio_features_mlp=torch.cat(
- eval_info[n]["all_audio_features_mlp"]
- ),
- text_features_mlp=torch.cat(
- eval_info[n]["all_text_features_mlp"]
- ),
- logit_scale_t=logit_scale_t.cpu(),
- mlp_loss=args.clap_mlploss,
- )
- else:
- metrics_single_dataset = get_metrics(
- audio_features=torch.cat(
- eval_info[n]["all_audio_features"]
- ),
- text_features=torch.cat(eval_info[n]["all_text_features"]),
- logit_scale_a=logit_scale_a.cpu(),
- mlp_loss=args.clap_mlploss,
- )
- val_metrics_per_dataset[n] = {
- n + "/" + k: v for k, v in metrics_single_dataset.items()
- }
- metrics.update(val_metrics_per_dataset[n])
- if "epoch" not in metrics.keys():
- metrics.update({"epoch": epoch})
- if is_master(args):
- if not metrics:
- return metrics
-
- logging.info(
- f"Eval Epoch: {epoch} "
- + "\n".join(
- [
- "\t".join([f"{k}: {round(v, 4):.4f}" for k, v in m.items()])
- for m in val_metrics_per_dataset.values()
- ]
- )
- )
-
- if args.save_logs:
- for name, val in metrics.items():
- if tb_writer is not None:
- tb_writer.add_scalar(f"val/{name}", val, epoch)
-
- with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f:
- f.write(json.dumps(metrics))
- f.write("\n")
-
- if args.wandb:
- assert wandb is not None, "Please install wandb."
- for name, val in metrics.items():
- wandb.log({f"val/{name}": val, "epoch": epoch})
-
- return metrics
- else:
- return metrics
-
-
-def get_metrics(
- audio_features,
- text_features,
- logit_scale_a,
- audio_features_mlp=None,
- text_features_mlp=None,
- logit_scale_t=None,
- mlp_loss=False,
-):
- metrics = {}
- if mlp_loss:
- # Set up the audio-to-text and text-to-audio similarity matrices
- a_logits_per_audio = (
- (logit_scale_a * audio_features @ text_features_mlp.t()).detach().cpu()
- )
- a_logits_per_text = a_logits_per_audio.t().detach().cpu()
- t_logits_per_audio = (
- (logit_scale_t * audio_features_mlp @ text_features.t()).detach().cpu()
- )
- t_logits_per_text = t_logits_per_audio.t().detach().cpu()
-
- labels = torch.arange(audio_features.shape[0]).long()
- # Change the loss from two terms into four terms with 2x2 combined CE loss
- total_loss = (
- F.cross_entropy(a_logits_per_audio, labels)
- + F.cross_entropy(a_logits_per_text, labels)
- + F.cross_entropy(t_logits_per_audio, labels)
- + F.cross_entropy(t_logits_per_text, labels)
- ) / 4
-
- metrics[f"cumulative_loss"] = total_loss.item()
- metrics[f"num_samples"] = audio_features.shape[0]
-
- logits = {
- "audio_to_text": (a_logits_per_audio + t_logits_per_audio) / 2,
- "text_to_audio": (a_logits_per_text + t_logits_per_text) / 2,
- }
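- # For ranking, the two cross-branch similarity matrices (plain vs. MLP
- # features) are averaged per retrieval direction.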
- ground_truth = torch.arange(len(text_features)).view(-1, 1)
-
- else:
- # print("text_features", text_features)
- # print("text_features.shape", text_features.shape)
- logits_per_audio = (
- (logit_scale_a * audio_features @ text_features.t()).detach().cpu()
- )
- logits_per_text = logits_per_audio.t().detach().cpu()
-
- labels = torch.arange(audio_features.shape[0]).long()
- # Symmetric CE loss over the two retrieval directions (audio->text and text->audio)
- total_loss = (
- F.cross_entropy(logits_per_audio, labels)
- + F.cross_entropy(logits_per_text, labels)
- ) / 2
-
- metrics[f"cumulative_loss"] = total_loss.item()
- metrics[f"num_samples"] = audio_features.shape[0]
-
- logits = {"audio_to_text": logits_per_audio, "text_to_audio": logits_per_text}
-
- ground_truth = torch.arange(len(text_features)).view(-1, 1)
-
- for name, logit in logits.items():
- ranking = torch.argsort(logit, descending=True)
- preds = torch.where(ranking == ground_truth)[
- 1
- ] # (yusong) this line is slow because it uses single thread
- preds = preds.detach().cpu().numpy()
- metrics[f"{name}_mean_rank"] = preds.mean() + 1
- metrics[f"{name}_median_rank"] = np.floor(np.median(preds)) + 1
- for k in [1, 5, 10]:
- metrics[f"{name}_R@{k}"] = np.mean(preds < k)
- # map@10
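- # With a single ground-truth item per query, AP@10 reduces to the reciprocal
- # rank 1 / (rank + 1) when the item lands in the top 10, and 0 otherwise.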
- metrics[f"{name}_mAP@10"] = np.mean(np.where(preds < 10, 1 / (preds + 1), 0.0))
-
- return metrics
-
-
-def evaluate_clotho_audiocaps(
- model, data, epoch, args, autocast, device, tb_writer=None
-):
- """
- Adapted from https://github.com/XinhaoMei/audio-text_retrieval/blob/main/tools/utils.py.
- 1. for text-to-audio retrieval, do 5 times and average the results
- 2. for R@1, R@5, R@10 in audio-to-text retrieval, take the best rank among 5 text
- 3. for map@10 in audio-to-text retrieval:
- 3.1: sort the rank of 5 text
- 3.2: exclude the rank >=10 (0-index)
- 3.3: compute the AP over the remaining 0-indexed ranks: np.sum(np.arange(1, len(ranks)+1) / (ranks + 1)) / 5.
- (3.3) That is, keep the ranks of the 5 texts that are < 10 and pair them with ascending ideal positions.
- (3.3) E.g. the best-ranked of the 5 texts is paired with position 1, the second best with position 2, etc.; texts ranked >= 10 contribute 0.
- """
- # TODO: (yusong) only support single GPU evaluation and only support non-mlp case for now.
- dataloader = data["val"].dataloader
- with torch.no_grad():
- eval_info = {}
- for i, batch in enumerate(dataloader):
- audios = batch # contains mel_spec, waveform, and the "longer" flag list
-
- # each item in the list has 5 texts
- if args.tmodel == "transformer":
- from open_clip import tokenize
-
- texts = [tokenize(t) for t in batch["full_text"]]
- texts = torch.cat(texts)
- else:
- from .data import tokenizer
-
- texts = [
- tokenizer(t) for t in batch["full_text"]
- ] # 5 texts for each audio
- texts = {
- k: torch.cat([t[k] for t in texts]) for k in texts[0].keys()
- } # 5 x batch
-
- # audios = audios.to(device=device, non_blocking=True)
-
- all_names = list(
- set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]])
- )
- for name in all_names:
- if name not in eval_info.keys():
- # we will not use mlp outputs even if args.clap_mlploss=True
- eval_info[name] = {
- "cumulative_loss": 0.0,
- "num_samples": 0,
- "all_audio_features": [],
- "all_text_features": [],
- }
- with autocast():
- audio_features = model(audios, None, device)
- text_features = model(None, texts, device)
- audio_features = F.normalize(audio_features, dim=-1)
- text_features = F.normalize(text_features, dim=-1)
-
- all_names = list(
- set(["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]])
- )
- for n in all_names:
- idx = np.where(
- np.array(
- ["-".join(b.split("/")[-3:-1]) for b in batch["__url__"]]
- )
- == n
- )[0]
- eval_info[n]["all_audio_features"].append(
- audio_features.cpu().index_select(0, torch.tensor(idx).long())
- )
- # (yusong) please double-check. This selects the 5 text features of each chosen audio at once:
- # idx holds audio indices within the batch, while text_features has 5 rows per audio
- # (shape [5 * batch, dim]), so we reshape to [batch, 5, dim], index_select the audios in idx,
- # and flatten back to [len(idx) * 5, dim].
- eval_info[n]["all_text_features"].append(
- text_features.cpu()
- .reshape([-1, 5, text_features.shape[1]])
- .index_select(0, torch.tensor(idx).long())
- .reshape([-1, text_features.shape[1]])
- )
-
- val_metrics_all = {}
-
- for n in eval_info.keys():
- logit_scale_a, logit_scale_t = model(None, None, device)
- logit_scale_a = logit_scale_a.cpu()
-
- audio_features = torch.cat(eval_info[n]["all_audio_features"], dim=0)
- text_features = torch.cat(eval_info[n]["all_text_features"], dim=0)
-
- logits_per_audio = (
- (logit_scale_a * audio_features @ text_features.t()).detach().cpu()
- )
- logits_per_text = logits_per_audio.t().detach().cpu()
-
- # logits_per_audio shape: [num_samples, num_samples*5]
- # logits_per_text shape: [num_samples*5, num_samples]
-
- logging.info(
- f"dataset {n}, logits_per_audio shape: {logits_per_audio.shape}, "
- f"logits_per_text shape: {logits_per_text.shape}"
- )
-
- metrics = {}
- num_samples = audio_features.shape[0]
- metrics[f"num_samples"] = num_samples
-
- # (yusong) the following code is very important, please double-check:
- # logits_per_audio.reshape(num_samples, num_samples, 5)[:, :, d]
- # logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :]
- # Those two are retrieving one of the 5 text for each audio.
- labels = torch.arange(audio_features.shape[0]).long()
- audio_to_text_loss = [
- F.cross_entropy(
- logits_per_audio.reshape(num_samples, num_samples, 5)[:, :, d],
- labels,
- )
- for d in range(5)
- ]
- text_to_audio_loss = [
- F.cross_entropy(
- logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :],
- labels,
- )
- for d in range(5)
- ]
- total_loss = (np.mean(audio_to_text_loss) + np.mean(text_to_audio_loss)) / 2
-
- metrics[f"cumulative_loss"] = total_loss.item()
-
- # text to audio: do 5 times
- pred_text = []
- for d in range(5):
- logit = logits_per_text.reshape(num_samples, 5, num_samples)[:, d, :]
- ground_truth = torch.arange(len(logit)).view(-1, 1)
- ranking = torch.argsort(
- logit, descending=True
- ) # [num_samples, num_samples]
- preds = torch.where(ranking == ground_truth)[1]
- pred_text.append(preds.detach().cpu().numpy())
- pred_text_concat = np.concatenate(pred_text, axis=0) # [5*num_samples]
- metrics[f"text_to_audio_mean_rank"] = pred_text_concat.mean() + 1
- metrics[f"text_to_audio_median_rank"] = (
- np.floor(np.median(pred_text_concat)) + 1
- )
- for k in [1, 5, 10]:
- metrics[f"text_to_audio_R@{k}"] = np.mean(pred_text_concat < k)
- # map@10
- metrics[f"text_to_audio_mAP@10"] = np.mean(
- np.where(pred_text_concat < 10, 1 / (pred_text_concat + 1), 0.0)
- )
-
- # audio to text: take the best result
- # for audio to text map 10, sort and assign descending ground truth.
- # see https://github.com/XinhaoMei/audio-text_retrieval/blob/main/tools/utils.py#L103
- # map@10
- map_all = []
- pred_audio_all = []
- for d in range(num_samples):
- # logits_per_audio: [num_samples, num_samples*5]
- logit_single = logits_per_audio[d, :] # [5*num_samples]
- # Ground-truth index: [d*5, d*5+1, d*5+2, d*5+3, d*5+4]
- ranking = torch.argsort(
- logit_single, descending=True
- ) # [5*num_samples]
- # ranking: the index of first match, second match, ...
- ground_truth = torch.arange(d * 5, d * 5 + 5)[None]
- all_pred = torch.where(
- torch.stack([ranking] * 5) == ground_truth.view(-1, 1)
- )[1]
- min_pred = torch.min(all_pred)
- pred_audio_all.append(min_pred.detach().cpu().numpy())
- all_pred_filter = all_pred[all_pred < 10].detach().cpu().numpy()
- # Divide by 5 because there are 5 texts per audio; texts ranked >= 10 contribute 0.
- map_single = (
- np.sum(
- (np.arange(1, len(all_pred_filter) + 1) / (all_pred_filter + 1))
- )
- / 5
- )
- map_all.append(map_single)
- metrics[f"audio_to_text_mAP@10"] = np.mean(map_all)
- for k in [1, 5, 10]:
- metrics[f"audio_to_text_R@{k}"] = np.mean(np.array(pred_audio_all) < k)
-
- val_metrics_all[n] = {n + "/" + k: v for k, v in metrics.items()}
- return val_metrics_all
-
-
-def calculate_selection_performance_clotho_audiocaps(val_metrics_per_dataset):
- """
- Calculate performance for Clotho+AudioCaps for model selection.
- """
- selection_performance_all = []
- for n in val_metrics_per_dataset.keys():
- selection_performance = (
- val_metrics_per_dataset[n][f"{n}/audio_to_text_mAP@10"]
- + val_metrics_per_dataset[n][f"{n}/text_to_audio_mAP@10"]
- ) / 2
- selection_performance_all.append(selection_performance)
- return np.mean(selection_performance_all)
-
-
-def select_top_metric_clotho_audiocaps(metrics, val_metrics_per_dataset, args):
- # val_metrics_per_dataset: dict, key: dataset name, value: dict, key: metric name, value: metric value
- # metrics: dict, key: metric name, value: metric value
- # Hack: use args to save the top performance
- if not hasattr(args, "top_selection_performance"):
- selection_performance = calculate_selection_performance_clotho_audiocaps(
- val_metrics_per_dataset
- )
- # TODO: write the if and else together
- metric_update = {}
- for n in val_metrics_per_dataset.keys():
- for k in val_metrics_per_dataset[n].keys():
- metric_update[
- k.split("/")[0] + "-top" + "/" + k.split("/")[1]
- ] = val_metrics_per_dataset[n][k]
- metric_update["top_selection_performance"] = selection_performance
- metric_update["top-selection-epoch"] = metrics["epoch"]
- metrics.update(metric_update)
- args.top_metric = metric_update
- args.top_selection_performance = selection_performance
- else:
- selection_performance_new = calculate_selection_performance_clotho_audiocaps(
- val_metrics_per_dataset
- )
- selection_performance_old = args.top_selection_performance
- if selection_performance_new > selection_performance_old:
- metric_update = {}
- for n in val_metrics_per_dataset.keys():
- for k in val_metrics_per_dataset[n].keys():
- metric_update[
- k.split("/")[0] + "-top" + "/" + k.split("/")[1]
- ] = val_metrics_per_dataset[n][k]
- metric_update["top_selection_performance"] = selection_performance_new
- metric_update["top-selection-epoch"] = metrics["epoch"]
- metrics.update(metric_update)
- args.top_metric = metric_update
- args.top_selection_performance = selection_performance_new
- else:
- metrics.update(args.top_metric)
- return metrics
diff --git a/spaces/dev114/sentiment-analysis/app.py b/spaces/dev114/sentiment-analysis/app.py
deleted file mode 100644
index 1f9f3f26997ee9543e0d222e270f82a0e3134efe..0000000000000000000000000000000000000000
--- a/spaces/dev114/sentiment-analysis/app.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import gradio as gr
-
-from transformers import pipeline
-
-sentiment = pipeline("sentiment-analysis")
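-# No checkpoint is given, so transformers falls back to its default English
-# sentiment-analysis model and (in recent versions) logs a warning recommending an explicit model name.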
-
-def get_sentiment(input_text):
- return sentiment(input_text)
-
-iface = gr.Interface(fn=get_sentiment, inputs="text", outputs=["text"], title="Sentiment Analysis", description="Get Sentiment Negative/Positive for the given input")
-iface.launch()
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/127 Hours Movie In Hindi 720p Torrent.md b/spaces/diacanFperku/AutoGPT/127 Hours Movie In Hindi 720p Torrent.md
deleted file mode 100644
index bb08e014dc12623974c14d2905e17ee72095041c..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/127 Hours Movie In Hindi 720p Torrent.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-according to multiple reddit posts, 1337x is better than most other sites to torrent bollywood movies from. not only does it have thousands of old and new hindi movie torrents, but many of the rips are available in hd quality. you can also find a decent collection of recent hollywood films dubbed over with hindi.
-127 Hours movie in hindi 720p torrent Download Zip ✅ https://gohhs.com/2uFUR6
-the library includes loads of new bollywood movie torrents as well as plenty of classics. this includes movies like the power and kaagaz both released in 2022. you can also torrent bollywood movies dating all the way back to the 1940s, such as mahal.
-kaabil 2017 download full movie hindi subtitles. kaabil 2017 full. kaabil 2017 hindi subtitles; kaabil 2017 hd mp4 720k;. watch kaabil : watch kaabil 2017 full movie hd 1080p & 720p with english.kaabil hd hindi dubbed english kaabil hd 2017 720p 480p movie online kaabil 2017 movie. kaabil 2017 download full. . mena khelna bataya hi hajwala kyun nahi rahe ho man.film kaabil kiya hai.toh gaye na downloads:. watch. welcome to a.k.a.subtitles.com. aka.com a subtitle translation website devoted to share hindi subtitles, dubbed version, dubbed movies, dubbed movies, dubbed movies on any media platforms. kaabil (2017) desi scr rip xvid [1cd] m-subs team ictv. kaabil 2017 is a bollywood, hindi, drama film.
-with more than eight million torrents in total, torlock gives you tons of options for torrenting bollywood movies. it has a higher number of new releases than most other sites with a vast selection of movies released as recently as 2022. you can also find well over 1,000 films that are dubbed into hindi from a variety of languages, including recent hollywood releases.
-
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Chak De India Tamil Movie Download !NEW!.md b/spaces/diacanFperku/AutoGPT/Chak De India Tamil Movie Download !NEW!.md
deleted file mode 100644
index 46239f833d87c61daf95790cbe1e1773fe02c815..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Chak De India Tamil Movie Download !NEW!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Chak De India Tamil Movie Download DOWNLOAD ✑ https://gohhs.com/2uFUhb
-
-Chak De India Hindi Full Movie Watch Online Free ... 4d29de3e1b
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Knoll Unmult After Effects Cc 19.md b/spaces/diacanFperku/AutoGPT/Knoll Unmult After Effects Cc 19.md
deleted file mode 100644
index a4f3cde18879b559944f020b926c57c80f1e71e1..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Knoll Unmult After Effects Cc 19.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Knoll Unmult After Effects Cc 19 Download Zip ———>>> https://gohhs.com/2uFV9k
-
-Adobe After Effects, Premiere Pro, and Photoshop CC and CS6.knoll light factory for ... 19 Obscuration Layer 20 Obscuration Type 21 Source Size 22 Unmult 22. 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Rain Man Full Movie Download In Hindi 720p.md b/spaces/diacanFperku/AutoGPT/Rain Man Full Movie Download In Hindi 720p.md
deleted file mode 100644
index e261ee89e5ca3e0695c0731f40cf7384d0e6685c..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Rain Man Full Movie Download In Hindi 720p.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-Rain Man Full Movie Download In Hindi 720p: A Review
-Rain Man is a 1988 American drama film directed by Barry Levinson and starring Dustin Hoffman and Tom Cruise. The film tells the story of Charlie Babbitt (Cruise), a selfish and greedy car dealer who learns that his estranged father has left his fortune to his autistic savant brother Raymond (Hoffman). Charlie decides to kidnap Raymond and take him on a cross-country road trip, hoping to get a share of the inheritance. Along the way, Charlie and Raymond develop a bond and learn from each other.
-Rain Man was a critical and commercial success, winning four Academy Awards, including Best Picture, Best Director, Best Actor for Hoffman, and Best Original Screenplay. The film also received positive reviews from critics and audiences, who praised the performances of Hoffman and Cruise, the direction of Levinson, and the screenplay by Ronald Bass and Barry Morrow. Rain Man is considered one of the best films of the 1980s and one of the most influential films about autism.
-Rain Man Full Movie Download In Hindi 720p DOWNLOAD === https://gohhs.com/2uFUG8
-However, if you are looking for Rain Man full movie download in Hindi 720p, you might be disappointed. This is because Rain Man is not officially available in Hindi dubbed version, and the only way to watch it in Hindi is to find a pirated copy online. But is it worth it? In this article, we will review Rain Man full movie download in Hindi 720p, and tell you why you should avoid it.
-What is Rain Man Full Movie Download In Hindi 720p?
-Rain Man full movie download in Hindi 720p is a file that claims to contain the full movie of Rain Man in Hindi dubbed version with 720p resolution. This file is supposed to be downloaded from various sources on the internet, such as torrent sites, crack sites, file sharing sites, or streaming sites. Rain Man full movie download in Hindi 720p promises to give you the opportunity to watch Rain Man in your preferred language and quality without paying anything.
-Is Rain Man Full Movie Download In Hindi 720p Safe and Reliable?
-While Rain Man full movie download in Hindi 720p might sound tempting, we advise you to be careful before downloading and watching it on your device. There are several risks and drawbacks associated with using Rain Man full movie download in Hindi 720p, such as:
-
-Legal issues: Using Rain Man full movie download in Hindi 720p is illegal, as it violates the copyright laws of the United States and India. You might face legal consequences if you are caught using a pirated copy of Rain Man.
-Viruses and malware: Downloading Rain Man full movie download in Hindi 720p from unknown sources might expose your device to viruses and malware, which can damage your system or steal your personal information.
-Poor quality: Using Rain Man full movie download in Hindi 720p might result in poor quality of the movie, as it might not be compatible with the original version or have low resolution or audio issues.
-Lack of subtitles: Using Rain Man full movie download in Hindi 720p might not have subtitles for the English dialogues or scenes, which can make it hard to understand or enjoy the movie.
-
-What is the Alternative to Rain Man Full Movie Download In Hindi 720p?
-If you want to watch Rain Man without risking any legal or technical problems, we recommend that you watch it in its original English version with subtitles. You can find the official version of Rain Man on various platforms, such as:
-
-DVD or Blu-ray: You can buy or rent the DVD or Blu-ray of Rain Man from online or offline stores, such as Amazon or Walmart. The DVD or Blu-ray will have high quality and subtitles for different languages.
-Streaming services: You can stream or download Rain Man from legal streaming services, such as Netflix or Amazon Prime Video. The streaming services will have high quality and subtitles for different languages.
-Theaters: You can watch Rain Man on the big screen at your local theater, if it is available. The theater will have high quality and subtitles for different languages.
-
-Conclusion
-Rain Man is a great movie that deserves to be watched in its original version with subtitles. It is a powerful and emotional story of two brothers who learn to love and respect each other. It is also a realistic and sensitive portrayal of autism and its challenges. However, using Rain Man full movie download in Hindi 720p is not a good idea, as it can cause legal and technical problems for you. You might face legal issues, expose your device to viruses and malware, experience poor quality or lack of subtitles. Therefore, we recommend that you watch Rain Man in its official version from legal platforms, and enjoy it without any risks or limitations.
-
-We hope that this article has helped you understand what Rain Man full movie download in Hindi 720p is, how it works, and why you should avoid it. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
-How to Watch Rain Man in English with Subtitles?
-If you want to watch Rain Man in its original English version with subtitles, you might need to follow some steps to enable or adjust the subtitles on your device. Here are some of the steps that you might need to follow:
-
-If you are watching Rain Man on DVD or Blu-ray, you can use the remote control or the menu options to select the subtitles language and turn them on or off.
-If you are streaming or downloading Rain Man from a streaming service, you can use the settings or the options icon to select the subtitles language and turn them on or off.
-If you are watching Rain Man on a theater, you can ask the staff or check the website to see if they have subtitles available for different languages.
-
-What are Some Facts About Rain Man?
-Rain Man is a movie that has many interesting facts and trivia behind it. Here are some of the facts that you might not know about Rain Man:
-
-Rain Man was inspired by the real-life savant Kim Peek, who had extraordinary memory and reading abilities. Peek met with Dustin Hoffman and helped him prepare for his role as Raymond.
-Rain Man was originally written as a comedy, with Bill Murray and Martin Short as the lead actors. However, after several rewrites and changes, the movie became a drama, with Dustin Hoffman and Tom Cruise as the lead actors.
-Rain Man was the highest-grossing film of 1988, earning over $350 million worldwide. It also won four Academy Awards, including Best Picture, Best Director, Best Actor for Hoffman, and Best Original Screenplay.
-Rain Man was one of the first films to use digital audio technology, such as Dolby Stereo SR (Spectral Recording) and Dolby SR-D (Spectral Recording Digital). The film also featured an original score by Hans Zimmer, who later became one of the most famous film composers in Hollywood.
-Rain Man was credited with raising awareness and understanding of autism and its spectrum. The film also influenced many people to donate money or volunteer for autism-related causes and organizations.
-
-What are Some Reviews of Rain Man?
-Rain Man is a movie that has received many positive reviews from critics and audiences alike. Here are some of the reviews that praise Rain Man:
-
-"Rain Man is a moving story about two brothers, one a selfish yuppie named Charlie Babbitt who cares only about the importance of money and the other named Raymond who is autistic and doesn't know anything about money. It is a story of a man's voyage of discovery and growth. It is also a poignant reminder that all human beings have worth." - Roger Ebert, Chicago Sun-Times
-
-
-"Rain Man works so well within Raymond's limitations because it is a movie about limitations, particularly Charlie's own limited ability to love those in his life, or to see things from their point of view. As the film opens, he is as selfish a character as I've seen in a long time." - Gene Siskel, Chicago Tribune
-
-
-"Rain Man is a fascinating and touching story of two brothers who have never met before and embark on an unforgettable journey together. Dustin Hoffman gives an amazing performance as Raymond, the autistic savant who can memorize anything but can't express his emotions. Tom Cruise is equally impressive as Charlie, the arrogant and greedy hustler who learns to care for his brother and himself. The film is a masterpiece of direction, writing, acting, and music." - James Berardinelli, ReelViews
-
-What are Some Lessons from Rain Man?
-Rain Man is a movie that has many lessons and messages for its viewers. Here are some of the lessons that you can learn from Rain Man:
-
-Don't judge a book by its cover: Rain Man shows that appearances can be deceiving, and that people who seem different or strange might have hidden talents or qualities that make them special.
-Family is important: Rain Man shows that family is not only defined by blood, but by love and respect. Charlie and Raymond might not have grown up together, but they eventually become brothers who care for each other.
-Money can't buy happiness: Rain Man shows that money is not the most important thing in life, and that it can't buy happiness or love. Charlie might have been rich and successful, but he was also unhappy and lonely. Raymond might have been poor and dependent, but he was also happy and content.
-Be yourself: Rain Man shows that you should be yourself and not try to change or conform to others' expectations. Raymond was happy being himself, even if he was different from others. Charlie learned to accept Raymond for who he was, and not try to change him or exploit him.
-Learn from others: Rain Man shows that you can learn from others, especially those who are different from you. Charlie learned many things from Raymond, such as patience, compassion, honesty, and loyalty. Raymond also learned some things from Charlie, such as humor, adventure, and friendship.
-
-Conclusion
-Rain Man is a great movie that deserves to be watched in its original version with subtitles. It is a powerful and emotional story of two brothers who learn to love and respect each other. It is also a realistic and sensitive portrayal of autism and its spectrum. However, using Rain Man full movie download in Hindi 720p is not a good idea, as it can cause legal and technical problems for you. You might face legal issues, expose your device to viruses and malware, experience poor quality or lack of subtitles. Therefore, we recommend that you watch Rain Man in its official version from legal platforms, and enjoy it without any risks or limitations.
-We hope that this article has helped you understand what Rain Man full movie download in Hindi 720p is, how it works, and why you should avoid it. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/preprocess_text.py b/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/preprocess_text.py
deleted file mode 100644
index 44c35fecd9b7f21016e80e9597d6055254cba3f7..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Bufeiyan-c-Bert-VITS2/preprocess_text.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
-import shutil
-stage = [1,2,3]
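-# stage 1: clean the raw transcription list (normalized text, phones, tones, word2ph)
-# stage 2: split the cleaned list into per-speaker train/val lists
-# stage 3: write n_speakers and the speaker-id map back into the config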
-
-transcription_path = 'filelists/short_character_anno.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- #language = "ZH"
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
- except Exception:
- print("err!", line.strip())
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list += utts[:val_per_spk]
- train_list += utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list += val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
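- # Note: the copy below overwrites the train list just written with the full
- # cleaned transcript, so every utterance (val included) ends up in train.list.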
- file_path = transcription_path+'.cleaned'
- shutil.copy(file_path,'./filelists/train.list')
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path))
- config["data"]["n_speakers"] = current_sid
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
diff --git a/spaces/dineshreddy/WALT/mmdet/core/mask/structures.py b/spaces/dineshreddy/WALT/mmdet/core/mask/structures.py
deleted file mode 100644
index f7e7ab8620b9f21710fc8a61bdaaec20d96e5c20..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/core/mask/structures.py
+++ /dev/null
@@ -1,1042 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import cv2
-import mmcv
-import numpy as np
-import pycocotools.mask as maskUtils
-import torch
-from mmcv.ops.roi_align import roi_align
-
-
-class BaseInstanceMasks(metaclass=ABCMeta):
- """Base class for instance masks."""
-
- @abstractmethod
- def rescale(self, scale, interpolation='nearest'):
- """Rescale masks as large as possible while keeping the aspect ratio.
- For details can refer to `mmcv.imrescale`.
-
- Args:
- scale (tuple[int]): The maximum size (h, w) of rescaled mask.
- interpolation (str): Same as :func:`mmcv.imrescale`.
-
- Returns:
- BaseInstanceMasks: The rescaled masks.
- """
-
- @abstractmethod
- def resize(self, out_shape, interpolation='nearest'):
- """Resize masks to the given out_shape.
-
- Args:
- out_shape: Target (h, w) of resized mask.
- interpolation (str): See :func:`mmcv.imresize`.
-
- Returns:
- BaseInstanceMasks: The resized masks.
- """
-
- @abstractmethod
- def flip(self, flip_direction='horizontal'):
- """Flip masks alone the given direction.
-
- Args:
- flip_direction (str): Either 'horizontal' or 'vertical'.
-
- Returns:
- BaseInstanceMasks: The flipped masks.
- """
-
- @abstractmethod
- def pad(self, out_shape, pad_val):
- """Pad masks to the given size of (h, w).
-
- Args:
- out_shape (tuple[int]): Target (h, w) of padded mask.
- pad_val (int): The padded value.
-
- Returns:
- BaseInstanceMasks: The padded masks.
- """
-
- @abstractmethod
- def crop(self, bbox):
- """Crop each mask by the given bbox.
-
- Args:
- bbox (ndarray): Bbox in format [x1, y1, x2, y2], shape (4, ).
-
- Return:
- BaseInstanceMasks: The cropped masks.
- """
-
- @abstractmethod
- def crop_and_resize(self,
- bboxes,
- out_shape,
- inds,
- device,
- interpolation='bilinear'):
- """Crop and resize masks by the given bboxes.
-
- This function is mainly used in mask targets computation.
- It firstly align mask to bboxes by assigned_inds, then crop mask by the
- assigned bbox and resize to the size of (mask_h, mask_w)
-
- Args:
- bboxes (Tensor): Bboxes in format [x1, y1, x2, y2], shape (N, 4)
- out_shape (tuple[int]): Target (h, w) of resized mask
- inds (ndarray): Indexes to assign masks to each bbox,
- shape (N,) and values should be between [0, num_masks - 1].
- device (str): Device of bboxes
- interpolation (str): See `mmcv.imresize`
-
- Return:
- BaseInstanceMasks: the cropped and resized masks.
- """
-
- @abstractmethod
- def expand(self, expanded_h, expanded_w, top, left):
- """see :class:`Expand`."""
-
- @property
- @abstractmethod
- def areas(self):
- """ndarray: areas of each instance."""
-
- @abstractmethod
- def to_ndarray(self):
- """Convert masks to the format of ndarray.
-
- Return:
- ndarray: Converted masks in the format of ndarray.
- """
-
- @abstractmethod
- def to_tensor(self, dtype, device):
- """Convert masks to the format of Tensor.
-
- Args:
- dtype (str): Dtype of converted mask.
- device (torch.device): Device of converted masks.
-
- Returns:
- Tensor: Converted masks in the format of Tensor.
- """
-
- @abstractmethod
- def translate(self,
- out_shape,
- offset,
- direction='horizontal',
- fill_val=0,
- interpolation='bilinear'):
- """Translate the masks.
-
- Args:
- out_shape (tuple[int]): Shape for output mask, format (h, w).
- offset (int | float): The offset for translate.
- direction (str): The translate direction, either "horizontal"
- or "vertical".
- fill_val (int | float): Border value. Default 0.
- interpolation (str): Same as :func:`mmcv.imtranslate`.
-
- Returns:
- Translated masks.
- """
-
- def shear(self,
- out_shape,
- magnitude,
- direction='horizontal',
- border_value=0,
- interpolation='bilinear'):
- """Shear the masks.
-
- Args:
- out_shape (tuple[int]): Shape for output mask, format (h, w).
- magnitude (int | float): The magnitude used for shear.
- direction (str): The shear direction, either "horizontal"
- or "vertical".
- border_value (int | tuple[int]): Value used in case of a
- constant border. Default 0.
- interpolation (str): Same as in :func:`mmcv.imshear`.
-
- Returns:
- ndarray: Sheared masks.
- """
-
- @abstractmethod
- def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0):
- """Rotate the masks.
-
- Args:
- out_shape (tuple[int]): Shape for output mask, format (h, w).
- angle (int | float): Rotation angle in degrees. Positive values
- mean counter-clockwise rotation.
- center (tuple[float], optional): Center point (w, h) of the
- rotation in source image. If not specified, the center of
- the image will be used.
- scale (int | float): Isotropic scale factor.
- fill_val (int | float): Border value. Default 0 for masks.
-
- Returns:
- Rotated masks.
- """
-
-
-class BitmapMasks(BaseInstanceMasks):
- """This class represents masks in the form of bitmaps.
-
- Args:
- masks (ndarray): ndarray of masks in shape (N, H, W), where N is
- the number of objects.
- height (int): height of masks
- width (int): width of masks
-
- Example:
- >>> from mmdet.core.mask.structures import * # NOQA
- >>> num_masks, H, W = 3, 32, 32
- >>> rng = np.random.RandomState(0)
- >>> masks = (rng.rand(num_masks, H, W) > 0.1).astype(np.int)
- >>> self = BitmapMasks(masks, height=H, width=W)
-
- >>> # demo crop_and_resize
- >>> num_boxes = 5
- >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes)
- >>> out_shape = (14, 14)
- >>> inds = torch.randint(0, len(self), size=(num_boxes,))
- >>> device = 'cpu'
- >>> interpolation = 'bilinear'
- >>> new = self.crop_and_resize(
- ... bboxes, out_shape, inds, device, interpolation)
- >>> assert len(new) == num_boxes
- >>> assert new.height, new.width == out_shape
- """
-
- def __init__(self, masks, height, width):
- self.height = height
- self.width = width
- if len(masks) == 0:
- self.masks = np.empty((0, self.height, self.width), dtype=np.uint8)
- else:
- assert isinstance(masks, (list, np.ndarray))
- if isinstance(masks, list):
- assert isinstance(masks[0], np.ndarray)
- assert masks[0].ndim == 2 # (H, W)
- else:
- assert masks.ndim == 3 or masks.ndim == 4 # (N, H, W) or a 4-D stack (flattened below)
-
- self.masks = np.stack(masks).reshape(-1, height, width)
- assert self.masks.shape[1] == self.height
- assert self.masks.shape[2] == self.width
-
- def __getitem__(self, index):
- """Index the BitmapMask.
-
- Args:
- index (int | ndarray): Indices in the format of integer or ndarray.
-
- Returns:
- :obj:`BitmapMasks`: Indexed bitmap masks.
- """
- masks = self.masks[index].reshape(-1, self.height, self.width)
-
- return BitmapMasks(masks, self.height, self.width)
-
- def __iter__(self):
- return iter(self.masks)
-
- def __repr__(self):
- s = self.__class__.__name__ + '('
- s += f'num_masks={len(self.masks)}, '
- s += f'height={self.height}, '
- s += f'width={self.width})'
- return s
-
- def __len__(self):
- """Number of masks."""
- return len(self.masks)
-
- def rescale(self, scale, interpolation='nearest'):
- """See :func:`BaseInstanceMasks.rescale`."""
- if len(self.masks) == 0:
- new_w, new_h = mmcv.rescale_size((self.width, self.height), scale)
- rescaled_masks = np.empty((0, new_h, new_w), dtype=np.uint8)
- else:
- rescaled_masks = np.stack([
- mmcv.imrescale(mask, scale, interpolation=interpolation)
- for mask in self.masks
- ])
- height, width = rescaled_masks.shape[1:]
- return BitmapMasks(rescaled_masks, height, width)
-
- def resize(self, out_shape, interpolation='nearest'):
- """See :func:`BaseInstanceMasks.resize`."""
- if len(self.masks) == 0:
- resized_masks = np.empty((0, *out_shape), dtype=np.uint8)
- else:
- resized_masks = np.stack([
- mmcv.imresize(
- mask, out_shape[::-1], interpolation=interpolation)
- for mask in self.masks
- ])
- return BitmapMasks(resized_masks, *out_shape)
-
- def flip(self, flip_direction='horizontal'):
- """See :func:`BaseInstanceMasks.flip`."""
- assert flip_direction in ('horizontal', 'vertical', 'diagonal')
-
- if len(self.masks) == 0:
- flipped_masks = self.masks
- else:
- flipped_masks = np.stack([
- mmcv.imflip(mask, direction=flip_direction)
- for mask in self.masks
- ])
- return BitmapMasks(flipped_masks, self.height, self.width)
-
- def pad(self, out_shape, pad_val=0):
- """See :func:`BaseInstanceMasks.pad`."""
- if len(self.masks) == 0:
- padded_masks = np.empty((0, *out_shape), dtype=np.uint8)
- else:
- padded_masks = np.stack([
- mmcv.impad(mask, shape=out_shape, pad_val=pad_val)
- for mask in self.masks
- ])
- return BitmapMasks(padded_masks, *out_shape)
-
- def crop(self, bbox):
- """See :func:`BaseInstanceMasks.crop`."""
- assert isinstance(bbox, np.ndarray)
- assert bbox.ndim == 1
-
- # clip the boundary
- bbox = bbox.copy()
- bbox[0::2] = np.clip(bbox[0::2], 0, self.width)
- bbox[1::2] = np.clip(bbox[1::2], 0, self.height)
- x1, y1, x2, y2 = bbox
- w = np.maximum(x2 - x1, 1)
- h = np.maximum(y2 - y1, 1)
-
- if len(self.masks) == 0:
- cropped_masks = np.empty((0, h, w), dtype=np.uint8)
- else:
- cropped_masks = self.masks[:, y1:y1 + h, x1:x1 + w]
- return BitmapMasks(cropped_masks, h, w)
-
- def crop_and_resize(self,
- bboxes,
- out_shape,
- inds,
- device='cpu',
- interpolation='bilinear'):
- """See :func:`BaseInstanceMasks.crop_and_resize`."""
- if len(self.masks) == 0:
- empty_masks = np.empty((0, *out_shape), dtype=np.uint8)
- return BitmapMasks(empty_masks, *out_shape)
-
- # convert bboxes to tensor
- if isinstance(bboxes, np.ndarray):
- bboxes = torch.from_numpy(bboxes).to(device=device)
- if isinstance(inds, np.ndarray):
- inds = torch.from_numpy(inds).to(device=device)
-
- num_bbox = bboxes.shape[0]
- fake_inds = torch.arange(
- num_bbox, device=device).to(dtype=bboxes.dtype)[:, None]
- rois = torch.cat([fake_inds, bboxes], dim=1) # Nx5
- rois = rois.to(device=device)
- if num_bbox > 0:
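- # Two binary targets are derived from the integer label map: `self.masks > 0`
- # (any labelled pixel) and `self.masks == 2`; each is ROI-aligned to
- # `out_shape` and the pair is stacked along a new channel axis below.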
- #masks_vis = (self.masks == 1)
- masks_vis = (self.masks > 0)
- gt_masks_th = torch.from_numpy(masks_vis).to(device).index_select(
- 0, inds).to(dtype=rois.dtype)
- targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape,
- 1.0, 0, 'avg', True).squeeze(1)
- targets = targets.cpu().numpy().astype(int)
- resized_masks_vis = (targets > 0.5)
-
- #masks_full = (self.masks > 0)
- masks_full = (self.masks == 2)
- #masks_occ = (self.masks == 2)
- gt_masks_th = torch.from_numpy(masks_full).to(device).index_select(
- 0, inds).to(dtype=rois.dtype)
- targets = roi_align(gt_masks_th[:, None, :, :], rois, out_shape,
- 1.0, 0, 'avg', True).squeeze(1)
- targets = targets.cpu().numpy().astype(int)
- resized_masks_full = (targets > 0.5)
- resized_masks = np.stack([resized_masks_vis,resized_masks_full],axis=1)
- else:
- resized_masks = []
- return BitmapMasks(resized_masks, *out_shape)
-
- def expand(self, expanded_h, expanded_w, top, left):
- """See :func:`BaseInstanceMasks.expand`."""
- if len(self.masks) == 0:
- expanded_mask = np.empty((0, expanded_h, expanded_w),
- dtype=np.uint8)
- else:
- expanded_mask = np.zeros((len(self), expanded_h, expanded_w),
- dtype=np.uint8)
- expanded_mask[:, top:top + self.height,
- left:left + self.width] = self.masks
- return BitmapMasks(expanded_mask, expanded_h, expanded_w)
-
- def translate(self,
- out_shape,
- offset,
- direction='horizontal',
- fill_val=0,
- interpolation='bilinear'):
- """Translate the BitmapMasks.
-
- Args:
- out_shape (tuple[int]): Shape for output mask, format (h, w).
- offset (int | float): The offset for translate.
- direction (str): The translate direction, either "horizontal"
- or "vertical".
- fill_val (int | float): Border value. Default 0 for masks.
- interpolation (str): Same as :func:`mmcv.imtranslate`.
-
- Returns:
- BitmapMasks: Translated BitmapMasks.
-
- Example:
- >>> from mmdet.core.mask.structures import BitmapMasks
- >>> self = BitmapMasks.random(dtype=np.uint8)
- >>> out_shape = (32, 32)
- >>> offset = 4
- >>> direction = 'horizontal'
- >>> fill_val = 0
- >>> interpolation = 'bilinear'
- >>> # Note, There seem to be issues when:
- >>> # * out_shape is different than self's shape
- >>> # * the mask dtype is not supported by cv2.AffineWarp
- >>> new = self.translate(out_shape, offset, direction, fill_val,
- >>> interpolation)
- >>> assert len(new) == len(self)
- >>> assert new.height, new.width == out_shape
- """
- if len(self.masks) == 0:
- translated_masks = np.empty((0, *out_shape), dtype=np.uint8)
- else:
- translated_masks = mmcv.imtranslate(
- self.masks.transpose((1, 2, 0)),
- offset,
- direction,
- border_value=fill_val,
- interpolation=interpolation)
- if translated_masks.ndim == 2:
- translated_masks = translated_masks[:, :, None]
- translated_masks = translated_masks.transpose(
- (2, 0, 1)).astype(self.masks.dtype)
- return BitmapMasks(translated_masks, *out_shape)
-
- def shear(self,
- out_shape,
- magnitude,
- direction='horizontal',
- border_value=0,
- interpolation='bilinear'):
- """Shear the BitmapMasks.
-
- Args:
- out_shape (tuple[int]): Shape for output mask, format (h, w).
- magnitude (int | float): The magnitude used for shear.
- direction (str): The shear direction, either "horizontal"
- or "vertical".
- border_value (int | tuple[int]): Value used in case of a
- constant border.
- interpolation (str): Same as in :func:`mmcv.imshear`.
-
- Returns:
- BitmapMasks: The sheared masks.
- """
- if len(self.masks) == 0:
- sheared_masks = np.empty((0, *out_shape), dtype=np.uint8)
- else:
- sheared_masks = mmcv.imshear(
- self.masks.transpose((1, 2, 0)),
- magnitude,
- direction,
- border_value=border_value,
- interpolation=interpolation)
- if sheared_masks.ndim == 2:
- sheared_masks = sheared_masks[:, :, None]
- sheared_masks = sheared_masks.transpose(
- (2, 0, 1)).astype(self.masks.dtype)
- return BitmapMasks(sheared_masks, *out_shape)
-
- def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0):
- """Rotate the BitmapMasks.
-
- Args:
- out_shape (tuple[int]): Shape for output mask, format (h, w).
- angle (int | float): Rotation angle in degrees. Positive values
- mean counter-clockwise rotation.
- center (tuple[float], optional): Center point (w, h) of the
- rotation in source image. If not specified, the center of
- the image will be used.
- scale (int | float): Isotropic scale factor.
- fill_val (int | float): Border value. Default 0 for masks.
-
- Returns:
- BitmapMasks: Rotated BitmapMasks.
- """
- if len(self.masks) == 0:
- rotated_masks = np.empty((0, *out_shape), dtype=self.masks.dtype)
- else:
- rotated_masks = mmcv.imrotate(
- self.masks.transpose((1, 2, 0)),
- angle,
- center=center,
- scale=scale,
- border_value=fill_val)
- if rotated_masks.ndim == 2:
- # case when only one mask, (h, w)
- rotated_masks = rotated_masks[:, :, None] # (h, w, 1)
- rotated_masks = rotated_masks.transpose(
- (2, 0, 1)).astype(self.masks.dtype)
- return BitmapMasks(rotated_masks, *out_shape)
-
- @property
- def areas(self):
- """See :py:attr:`BaseInstanceMasks.areas`."""
- return self.masks.sum((1, 2))
-
- def to_ndarray(self):
- """See :func:`BaseInstanceMasks.to_ndarray`."""
- return self.masks
-
- def to_tensor(self, dtype, device):
- """See :func:`BaseInstanceMasks.to_tensor`."""
- return torch.tensor(self.masks, dtype=dtype, device=device)
-
- @classmethod
- def random(cls,
- num_masks=3,
- height=32,
- width=32,
- dtype=np.uint8,
- rng=None):
- """Generate random bitmap masks for demo / testing purposes.
-
- Example:
- >>> from mmdet.core.mask.structures import BitmapMasks
- >>> self = BitmapMasks.random()
- >>> print('self = {}'.format(self))
- self = BitmapMasks(num_masks=3, height=32, width=32)
- """
- from mmdet.utils.util_random import ensure_rng
- rng = ensure_rng(rng)
- masks = (rng.rand(num_masks, height, width) > 0.1).astype(dtype)
- self = cls(masks, height=height, width=width)
- return self
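-
-# Editorial usage sketch (not part of the original module; it only exercises the
-# BitmapMasks API defined above and assumes mmdet/mmcv are installed):
-#
-#     >>> self = BitmapMasks.random(num_masks=2, height=32, width=32)
-#     >>> new = self.translate((32, 32), offset=4, direction='horizontal')
-#     >>> assert len(new) == len(self) == 2
-#     >>> assert (new.height, new.width) == (32, 32)
-#     >>> assert new.areas.shape == (2,)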
-
-
-class PolygonMasks(BaseInstanceMasks):
- """This class represents masks in the form of polygons.
-
- The polygon representation is a list with three levels: the first level
- corresponds to objects, the second to the polygons that compose each
- object, and the third to the polygon coordinates.
-
- Args:
- masks (list[list[ndarray]]): The first level of the list
- corresponds to objects, the second level to the polys that
- compose the object, the third level to the poly coordinates
- height (int): height of masks
- width (int): width of masks
-
- Example:
- >>> from mmdet.core.mask.structures import * # NOQA
- >>> masks = [
- >>> [ np.array([0, 0, 10, 0, 10, 10., 0, 10, 0, 0]) ]
- >>> ]
- >>> height, width = 16, 16
- >>> self = PolygonMasks(masks, height, width)
-
- >>> # demo translate
- >>> new = self.translate((16, 16), 4., direction='horizontal')
- >>> assert np.all(new.masks[0][0][1::2] == masks[0][0][1::2])
- >>> assert np.all(new.masks[0][0][0::2] == masks[0][0][0::2] + 4)
-
- >>> # demo crop_and_resize
- >>> num_boxes = 3
- >>> bboxes = np.array([[0, 0, 30, 10.0]] * num_boxes)
- >>> out_shape = (16, 16)
- >>> inds = torch.randint(0, len(self), size=(num_boxes,))
- >>> device = 'cpu'
- >>> interpolation = 'bilinear'
- >>> new = self.crop_and_resize(
- ... bboxes, out_shape, inds, device, interpolation)
- >>> assert len(new) == num_boxes
- >>> assert (new.height, new.width) == out_shape
- """
-
- def __init__(self, masks, height, width):
- assert isinstance(masks, list)
- if len(masks) > 0:
- assert isinstance(masks[0], list)
- assert isinstance(masks[0][0], np.ndarray)
-
- self.height = height
- self.width = width
- self.masks = masks
-
- def __getitem__(self, index):
- """Index the polygon masks.
-
- Args:
- index (ndarray | List): The indices.
-
- Returns:
- :obj:`PolygonMasks`: The indexed polygon masks.
- """
- if isinstance(index, np.ndarray):
- index = index.tolist()
- if isinstance(index, list):
- masks = [self.masks[i] for i in index]
- else:
- try:
- masks = self.masks[index]
- except Exception:
- raise ValueError(
- f'Unsupported input of type {type(index)} for indexing!')
- if len(masks) and isinstance(masks[0], np.ndarray):
- masks = [masks] # ensure a list of three levels
- return PolygonMasks(masks, self.height, self.width)
-
- def __iter__(self):
- return iter(self.masks)
-
- def __repr__(self):
- s = self.__class__.__name__ + '('
- s += f'num_masks={len(self.masks)}, '
- s += f'height={self.height}, '
- s += f'width={self.width})'
- return s
-
- def __len__(self):
- """Number of masks."""
- return len(self.masks)
-
- def rescale(self, scale, interpolation=None):
- """see :func:`BaseInstanceMasks.rescale`"""
- new_w, new_h = mmcv.rescale_size((self.width, self.height), scale)
- if len(self.masks) == 0:
- rescaled_masks = PolygonMasks([], new_h, new_w)
- else:
- rescaled_masks = self.resize((new_h, new_w))
- return rescaled_masks
-
- def resize(self, out_shape, interpolation=None):
- """see :func:`BaseInstanceMasks.resize`"""
- if len(self.masks) == 0:
- resized_masks = PolygonMasks([], *out_shape)
- else:
- h_scale = out_shape[0] / self.height
- w_scale = out_shape[1] / self.width
- resized_masks = []
- for poly_per_obj in self.masks:
- resized_poly = []
- for p in poly_per_obj:
- p = p.copy()
- p[0::2] *= w_scale
- p[1::2] *= h_scale
- resized_poly.append(p)
- resized_masks.append(resized_poly)
- resized_masks = PolygonMasks(resized_masks, *out_shape)
- return resized_masks
-
- def flip(self, flip_direction='horizontal'):
- """see :func:`BaseInstanceMasks.flip`"""
- assert flip_direction in ('horizontal', 'vertical', 'diagonal')
- if len(self.masks) == 0:
- flipped_masks = PolygonMasks([], self.height, self.width)
- else:
- flipped_masks = []
- for poly_per_obj in self.masks:
- flipped_poly_per_obj = []
- for p in poly_per_obj:
- p = p.copy()
- if flip_direction == 'horizontal':
- p[0::2] = self.width - p[0::2]
- elif flip_direction == 'vertical':
- p[1::2] = self.height - p[1::2]
- else:
- p[0::2] = self.width - p[0::2]
- p[1::2] = self.height - p[1::2]
- flipped_poly_per_obj.append(p)
- flipped_masks.append(flipped_poly_per_obj)
- flipped_masks = PolygonMasks(flipped_masks, self.height,
- self.width)
- return flipped_masks
-
- def crop(self, bbox):
- """see :func:`BaseInstanceMasks.crop`"""
- assert isinstance(bbox, np.ndarray)
- assert bbox.ndim == 1
-
- # clip the boundary
- bbox = bbox.copy()
- bbox[0::2] = np.clip(bbox[0::2], 0, self.width)
- bbox[1::2] = np.clip(bbox[1::2], 0, self.height)
- x1, y1, x2, y2 = bbox
- w = np.maximum(x2 - x1, 1)
- h = np.maximum(y2 - y1, 1)
-
- if len(self.masks) == 0:
- cropped_masks = PolygonMasks([], h, w)
- else:
- cropped_masks = []
- for poly_per_obj in self.masks:
- cropped_poly_per_obj = []
- for p in poly_per_obj:
- # pycocotools will clip the boundary
- p = p.copy()
- p[0::2] -= bbox[0]
- p[1::2] -= bbox[1]
- cropped_poly_per_obj.append(p)
- cropped_masks.append(cropped_poly_per_obj)
- cropped_masks = PolygonMasks(cropped_masks, h, w)
- return cropped_masks
-
- def pad(self, out_shape, pad_val=0):
- """Padding has no effect on polygons."""
- return PolygonMasks(self.masks, *out_shape)
-
- def expand(self, *args, **kwargs):
- """TODO: Add expand for polygon"""
- raise NotImplementedError
-
- def crop_and_resize(self,
- bboxes,
- out_shape,
- inds,
- device='cpu',
- interpolation='bilinear'):
- """see :func:`BaseInstanceMasks.crop_and_resize`"""
- out_h, out_w = out_shape
- if len(self.masks) == 0:
- return PolygonMasks([], out_h, out_w)
-
- resized_masks = []
- for i in range(len(bboxes)):
- mask = self.masks[inds[i]]
- bbox = bboxes[i, :]
- x1, y1, x2, y2 = bbox
- w = np.maximum(x2 - x1, 1)
- h = np.maximum(y2 - y1, 1)
- h_scale = out_h / max(h, 0.1) # avoid too large scale
- w_scale = out_w / max(w, 0.1)
-
- resized_mask = []
- for p in mask:
- p = p.copy()
- # crop
- # pycocotools will clip the boundary
- p[0::2] -= bbox[0]
- p[1::2] -= bbox[1]
-
- # resize
- p[0::2] *= w_scale
- p[1::2] *= h_scale
- resized_mask.append(p)
- resized_masks.append(resized_mask)
- return PolygonMasks(resized_masks, *out_shape)
-
- def translate(self,
- out_shape,
- offset,
- direction='horizontal',
- fill_val=None,
- interpolation=None):
- """Translate the PolygonMasks.
-
- Example:
- >>> self = PolygonMasks.random(dtype=np.int32)
- >>> out_shape = (self.height, self.width)
- >>> new = self.translate(out_shape, 4., direction='horizontal')
- >>> assert np.all(new.masks[0][0][1::2] == self.masks[0][0][1::2])
- >>> assert np.all(new.masks[0][0][0::2] == self.masks[0][0][0::2] + 4) # noqa: E501
- """
- assert fill_val is None or fill_val == 0, 'fill_val is not used here ' \
- f'and should be None or 0 by default, got {fill_val}.'
- if len(self.masks) == 0:
- translated_masks = PolygonMasks([], *out_shape)
- else:
- translated_masks = []
- for poly_per_obj in self.masks:
- translated_poly_per_obj = []
- for p in poly_per_obj:
- p = p.copy()
- if direction == 'horizontal':
- p[0::2] = np.clip(p[0::2] + offset, 0, out_shape[1])
- elif direction == 'vertical':
- p[1::2] = np.clip(p[1::2] + offset, 0, out_shape[0])
- translated_poly_per_obj.append(p)
- translated_masks.append(translated_poly_per_obj)
- translated_masks = PolygonMasks(translated_masks, *out_shape)
- return translated_masks
-
- def shear(self,
- out_shape,
- magnitude,
- direction='horizontal',
- border_value=0,
- interpolation='bilinear'):
- """See :func:`BaseInstanceMasks.shear`."""
- if len(self.masks) == 0:
- sheared_masks = PolygonMasks([], *out_shape)
- else:
- sheared_masks = []
- if direction == 'horizontal':
- shear_matrix = np.stack([[1, magnitude],
- [0, 1]]).astype(np.float32)
- elif direction == 'vertical':
- shear_matrix = np.stack([[1, 0], [magnitude,
- 1]]).astype(np.float32)
- for poly_per_obj in self.masks:
- sheared_poly = []
- for p in poly_per_obj:
- p = np.stack([p[0::2], p[1::2]], axis=0) # [2, n]
- new_coords = np.matmul(shear_matrix, p) # [2, n]
- new_coords[0, :] = np.clip(new_coords[0, :], 0,
- out_shape[1])
- new_coords[1, :] = np.clip(new_coords[1, :], 0,
- out_shape[0])
- sheared_poly.append(
- new_coords.transpose((1, 0)).reshape(-1))
- sheared_masks.append(sheared_poly)
- sheared_masks = PolygonMasks(sheared_masks, *out_shape)
- return sheared_masks
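-
-# Editorial worked example for the shear above: with these matrices, a horizontal
-# shear of magnitude m maps a vertex (x, y) to (x + m * y, y) and a vertical shear
-# maps it to (x, y + m * x); the result is then clipped to out_shape. For instance,
-# PolygonMasks([[np.array([0., 0., 4., 0., 4., 4.])]], 8, 8) sheared horizontally
-# with magnitude 0.5 moves the vertex (4, 4) to (6, 4).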
-
- def rotate(self, out_shape, angle, center=None, scale=1.0, fill_val=0):
- """See :func:`BaseInstanceMasks.rotate`."""
- if len(self.masks) == 0:
- rotated_masks = PolygonMasks([], *out_shape)
- else:
- rotated_masks = []
- rotate_matrix = cv2.getRotationMatrix2D(center, -angle, scale)
- for poly_per_obj in self.masks:
- rotated_poly = []
- for p in poly_per_obj:
- p = p.copy()
- coords = np.stack([p[0::2], p[1::2]], axis=1) # [n, 2]
- # pad 1 to convert from format [x, y] to homogeneous
- # coordinates format [x, y, 1]
- coords = np.concatenate(
- (coords, np.ones((coords.shape[0], 1), coords.dtype)),
- axis=1) # [n, 3]
- rotated_coords = np.matmul(
- rotate_matrix[None, :, :],
- coords[:, :, None])[..., 0] # [n, 2, 1] -> [n, 2]
- rotated_coords[:, 0] = np.clip(rotated_coords[:, 0], 0,
- out_shape[1])
- rotated_coords[:, 1] = np.clip(rotated_coords[:, 1], 0,
- out_shape[0])
- rotated_poly.append(rotated_coords.reshape(-1))
- rotated_masks.append(rotated_poly)
- rotated_masks = PolygonMasks(rotated_masks, *out_shape)
- return rotated_masks
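-
-# Editorial note on the matrix used above: cv2.getRotationMatrix2D(center, angle,
-# scale) returns the 2x3 affine matrix
-#     [[ a, b, (1 - a) * cx - b * cy],
-#      [-b, a, b * cx + (1 - a) * cy]]
-# with a = scale * cos(angle) and b = scale * sin(angle), so the matmul rotates each
-# homogeneous vertex [x, y, 1] about `center` and applies the scale in one step. The
-# sign flip on `angle` mirrors the convention of mmcv.imrotate, which
-# BitmapMasks.rotate uses for the bitmap counterpart.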
-
- def to_bitmap(self):
- """convert polygon masks to bitmap masks."""
- bitmap_masks = self.to_ndarray()
- return BitmapMasks(bitmap_masks, self.height, self.width)
-
- @property
- def areas(self):
- """Compute areas of masks.
-
- This function is adapted from detectron2. It only works with polygons
- and computes each area with the shoelace formula.
-
- Return:
- ndarray: areas of each instance
- """ # noqa: W501
- area = []
- for polygons_per_obj in self.masks:
- area_per_obj = 0
- for p in polygons_per_obj:
- area_per_obj += self._polygon_area(p[0::2], p[1::2])
- area.append(area_per_obj)
- return np.asarray(area)
-
- def _polygon_area(self, x, y):
- """Compute the area of a component of a polygon.
-
- Using the shoelace formula:
- https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
-
- Args:
- x (ndarray): x coordinates of the component
- y (ndarray): y coordinates of the component
-
- Return:
- float: the area of the component
- """ # noqa: 501
- return 0.5 * np.abs(
- np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
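-
-# Editorial worked example: for the unit square x = [0, 1, 1, 0], y = [0, 0, 1, 1],
-# np.roll(y, 1) = [1, 0, 0, 1] and np.roll(x, 1) = [0, 0, 1, 1], so the area is
-# 0.5 * |x . roll(y, 1) - y . roll(x, 1)| = 0.5 * |0 - 2| = 1.0, as expected.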
-
- def to_ndarray(self):
- """Convert masks to the format of ndarray."""
- if len(self.masks) == 0:
- return np.empty((0, self.height, self.width), dtype=np.uint8)
- bitmap_masks = []
- for poly_per_obj in self.masks:
- bitmap_masks.append(
- polygon_to_bitmap(poly_per_obj, self.height, self.width))
- return np.stack(bitmap_masks)
-
- def to_tensor(self, dtype, device):
- """See :func:`BaseInstanceMasks.to_tensor`."""
- if len(self.masks) == 0:
- return torch.empty((0, self.height, self.width),
- dtype=dtype,
- device=device)
- ndarray_masks = self.to_ndarray()
- return torch.tensor(ndarray_masks, dtype=dtype, device=device)
-
- @classmethod
- def random(cls,
- num_masks=3,
- height=32,
- width=32,
- n_verts=5,
- dtype=np.float32,
- rng=None):
- """Generate random polygon masks for demo / testing purposes.
-
- Adapted from [1]_
-
- References:
- .. [1] https://gitlab.kitware.com/computer-vision/kwimage/-/blob/928cae35ca8/kwimage/structs/polygon.py#L379 # noqa: E501
-
- Example:
- >>> from mmdet.core.mask.structures import PolygonMasks
- >>> self = PolygonMasks.random()
- >>> print('self = {}'.format(self))
- """
- from mmdet.utils.util_random import ensure_rng
- rng = ensure_rng(rng)
-
- def _gen_polygon(n, irregularity, spikeyness):
- """Creates the polygon by sampling points on a circle around the
- centre. Random noise is added by varying the angular spacing
- between sequential points, and by varying the radial distance of
- each point from the centre.
-
- Based on original code by Mike Ounsworth
-
- Args:
- n (int): number of vertices
- irregularity (float): [0,1] indicating how much variance there
- is in the angular spacing of vertices. [0,1] will map to
- [0, 2pi/numberOfVerts]
- spikeyness (float): [0,1] indicating how much variance there is
- in each vertex from the circle of radius aveRadius. [0,1]
- will map to [0, aveRadius]
-
- Returns:
- a list of vertices, in CCW order.
- """
- from scipy.stats import truncnorm
- # Generate around the unit circle
- cx, cy = (0.0, 0.0)
- radius = 1
-
- tau = np.pi * 2
-
- irregularity = np.clip(irregularity, 0, 1) * 2 * np.pi / n
- spikeyness = np.clip(spikeyness, 1e-9, 1)
-
- # generate n angle steps
- lower = (tau / n) - irregularity
- upper = (tau / n) + irregularity
- angle_steps = rng.uniform(lower, upper, n)
-
- # normalize the steps so that point 0 and point n+1 are the same
- k = angle_steps.sum() / (2 * np.pi)
- angles = (angle_steps / k).cumsum() + rng.uniform(0, tau)
-
- # Convert high and low values to be wrt the standard normal range
- # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html
- low = 0
- high = 2 * radius
- mean = radius
- std = spikeyness
- a = (low - mean) / std
- b = (high - mean) / std
- tnorm = truncnorm(a=a, b=b, loc=mean, scale=std)
-
- # now generate the points
- radii = tnorm.rvs(n, random_state=rng)
- x_pts = cx + radii * np.cos(angles)
- y_pts = cy + radii * np.sin(angles)
-
- points = np.hstack([x_pts[:, None], y_pts[:, None]])
-
- # Scale to 0-1 space
- points = points - points.min(axis=0)
- points = points / points.max(axis=0)
-
- # Randomly place within 0-1 space
- points = points * (rng.rand() * .8 + .2)
- min_pt = points.min(axis=0)
- max_pt = points.max(axis=0)
-
- high = (1 - max_pt)
- low = (0 - min_pt)
- offset = (rng.rand(2) * (high - low)) + low
- points = points + offset
- return points
-
- def _order_vertices(verts):
- """
- References:
- https://stackoverflow.com/questions/1709283/how-can-i-sort-a-coordinate-list-for-a-rectangle-counterclockwise
- """
- mlat = verts.T[0].sum() / len(verts)
- mlng = verts.T[1].sum() / len(verts)
-
- tau = np.pi * 2
- angle = (np.arctan2(mlat - verts.T[0], verts.T[1] - mlng) +
- tau) % tau
- sortx = angle.argsort()
- verts = verts.take(sortx, axis=0)
- return verts
-
- # Generate a random exterior for each requested mask
- masks = []
- for _ in range(num_masks):
- exterior = _order_vertices(_gen_polygon(n_verts, 0.9, 0.9))
- exterior = (exterior * [(width, height)]).astype(dtype)
- masks.append([exterior.ravel()])
-
- self = cls(masks, height, width)
- return self
-
-
-def polygon_to_bitmap(polygons, height, width):
- """Convert masks from the form of polygons to bitmaps.
-
- Args:
- polygons (list[ndarray]): masks in polygon representation
- height (int): mask height
- width (int): mask width
-
- Return:
- ndarray: the converted masks in bitmap representation
- """
- rles = maskUtils.frPyObjects(polygons, height, width)
- rle = maskUtils.merge(rles)
- bitmap_mask = maskUtils.decode(rle).astype(bool)
- return bitmap_mask
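-
-
-if __name__ == '__main__':
-    # Editorial smoke-test sketch (not part of the original module); it assumes
-    # pycocotools is installed, which polygon_to_bitmap already requires.
-    triangle = [np.array([0., 0., 8., 0., 0., 8.])]  # one polygon covering half the canvas
-    bitmap = polygon_to_bitmap(triangle, height=8, width=8)
-    assert bitmap.shape == (8, 8)
-    assert bitmap.dtype == bool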
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/ner/bert_softmax/README.md b/spaces/dinhminh20521597/OCR_DEMO/configs/ner/bert_softmax/README.md
deleted file mode 100644
index 650d18c4d56406e5f064085229f49875f5b4aea5..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/ner/bert_softmax/README.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Bert
-
-> [Bert: Pre-training of deep bidirectional transformers for language understanding](https://arxiv.org/abs/1810.04805)
-
-
-
-## Abstract
-
-We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
-BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
-
-
-
-
-
-
-
-## Dataset
-
-### Train Dataset
-
-| trainset | text_num | entity_num |
-| :---------: | :------: | :--------: |
-| CLUENER2020 | 10748 | 23338 |
-
-### Test Dataset
-
-| testset | text_num | entity_num |
-| :---------: | :------: | :--------: |
-| CLUENER2020 | 1343 | 2982 |
-
-## Results and models
-
-| Method | Pretrain | Precision | Recall | F1-Score | Download |
-| :-------------------------------------------------------: | :----------------------------------------------------------: | :-------: | :----: | :------: | :----------------------------------------------------------: |
-| [bert_softmax](/configs/ner/bert_softmax/bert_softmax_cluener_18e.py) | [pretrain](https://download.openmmlab.com/mmocr/ner/bert_softmax/bert_pretrain.pth) | 0.7885 | 0.7998 | 0.7941 | [model](https://download.openmmlab.com/mmocr/ner/bert_softmax/bert_softmax_cluener-eea70ea2.pth) \| [log](https://download.openmmlab.com/mmocr/ner/bert_softmax/20210514_172645.log.json) |
-
-## Citation
-
-```bibtex
-@article{devlin2018bert,
- title={Bert: Pre-training of deep bidirectional transformers for language understanding},
- author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
- journal={arXiv preprint arXiv:1810.04805},
- year={2018}
-}
-```
diff --git a/spaces/dishanttembhurne/myGenAiChatbot/README.md b/spaces/dishanttembhurne/myGenAiChatbot/README.md
deleted file mode 100644
index e7d1ce0ad26fc5e48185c8cbcbf179b7159cfb78..0000000000000000000000000000000000000000
--- a/spaces/dishanttembhurne/myGenAiChatbot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MyGenAiChatbot
-emoji: 👀
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dorkai/singpt-2.0/modules/models.py b/spaces/dorkai/singpt-2.0/modules/models.py
deleted file mode 100644
index f4bb11fd3f7292657b008ab644b5be121d9980e5..0000000000000000000000000000000000000000
--- a/spaces/dorkai/singpt-2.0/modules/models.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import json
-import os
-import time
-import zipfile
-from pathlib import Path
-
-import numpy as np
-import torch
-import transformers
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-import modules.shared as shared
-
-transformers.logging.set_verbosity_error()
-
-local_rank = None
-
-if shared.args.flexgen:
- from flexgen.flex_opt import (CompressionConfig, ExecutionEnv, OptLM,
- Policy, str2bool)
-
-if shared.args.deepspeed:
- import deepspeed
- from transformers.deepspeed import (HfDeepSpeedConfig,
- is_deepspeed_zero3_enabled)
-
- from modules.deepspeed_parameters import generate_ds_config
-
- # Distributed setup
- local_rank = shared.args.local_rank if shared.args.local_rank is not None else int(os.getenv("LOCAL_RANK", "0"))
- world_size = int(os.getenv("WORLD_SIZE", "1"))
- torch.cuda.set_device(local_rank)
- deepspeed.init_distributed()
- ds_config = generate_ds_config(shared.args.bf16, 1 * world_size, shared.args.nvme_offload_dir)
- dschf = HfDeepSpeedConfig(ds_config) # Keep this object alive for the Transformers integration
-
-
-def load_model(model_name):
- print(f"Loading {model_name}...")
- t0 = time.time()
-
- shared.is_RWKV = model_name.lower().startswith('rwkv-')
-
- # Default settings
- if not any([shared.args.cpu, shared.args.load_in_8bit, shared.args.gptq_bits, shared.args.auto_devices, shared.args.disk, shared.args.gpu_memory is not None, shared.args.cpu_memory is not None, shared.args.deepspeed, shared.args.flexgen, shared.is_RWKV]):
- if any(size in shared.model_name.lower() for size in ('13b', '20b', '30b')):
- model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), device_map='auto', load_in_8bit=True)
- else:
- model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16).cuda()
-
- # FlexGen
- elif shared.args.flexgen:
- # Initialize environment
- env = ExecutionEnv.create(shared.args.disk_cache_dir)
-
- # Offloading policy
- policy = Policy(1, 1,
- shared.args.percent[0], shared.args.percent[1],
- shared.args.percent[2], shared.args.percent[3],
- shared.args.percent[4], shared.args.percent[5],
- overlap=True, sep_layer=True, pin_weight=shared.args.pin_weight,
- cpu_cache_compute=False, attn_sparsity=1.0,
- compress_weight=shared.args.compress_weight,
- comp_weight_config=CompressionConfig(
- num_bits=4, group_size=64,
- group_dim=0, symmetric=False),
- compress_cache=False,
- comp_cache_config=CompressionConfig(
- num_bits=4, group_size=64,
- group_dim=2, symmetric=False))
-
- model = OptLM(f"facebook/{shared.model_name}", env, "models", policy)
-
- # DeepSpeed ZeRO-3
- elif shared.args.deepspeed:
- model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16)
- model = deepspeed.initialize(model=model, config_params=ds_config, model_parameters=None, optimizer=None, lr_scheduler=None)[0]
- model.module.eval() # Inference
- print(f"DeepSpeed ZeRO-3 is enabled: {is_deepspeed_zero3_enabled()}")
-
- # RWKV model (not on HuggingFace)
- elif shared.is_RWKV:
- from modules.RWKV import RWKVModel, RWKVTokenizer
-
- model = RWKVModel.from_pretrained(Path(f'models/{model_name}'), dtype="fp32" if shared.args.cpu else "bf16" if shared.args.bf16 else "fp16", device="cpu" if shared.args.cpu else "cuda")
- tokenizer = RWKVTokenizer.from_pretrained(Path('models'))
-
- return model, tokenizer
-
- # Quantized model
- elif shared.args.gptq_bits > 0:
- from modules.GPTQ_loader import load_quantized
-
- model = load_quantized(model_name)
-
- # Custom
- else:
- command = "AutoModelForCausalLM.from_pretrained"
- params = ["low_cpu_mem_usage=True"]
- if not shared.args.cpu and not torch.cuda.is_available():
- print("Warning: no GPU has been detected.\nFalling back to CPU mode.\n")
- shared.args.cpu = True
-
- if shared.args.cpu:
- params.append("low_cpu_mem_usage=True")
- params.append("torch_dtype=torch.float32")
- else:
- params.append("device_map='auto'")
- params.append("load_in_8bit=True" if shared.args.load_in_8bit else "torch_dtype=torch.bfloat16" if shared.args.bf16 else "torch_dtype=torch.float16")
-
- if shared.args.gpu_memory:
- memory_map = shared.args.gpu_memory
- max_memory = f"max_memory={{0: '{memory_map[0]}GiB'"
- for i in range(1, len(memory_map)):
- max_memory += (f", {i}: '{memory_map[i]}GiB'")
- max_memory += (f", 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}")
- params.append(max_memory)
- elif not shared.args.load_in_8bit:
- total_mem = (torch.cuda.get_device_properties(0).total_memory/(1024*1024))
- suggestion = round((total_mem-1000)/1000)*1000
- if total_mem-suggestion < 800:
- suggestion -= 1000
- suggestion = int(round(suggestion/1000))
- print(f"\033[1;32;1mAuto-assigning --gpu-memory {suggestion} for your GPU to try to prevent out-of-memory errors.\nYou can manually set other values.\033[0;37;0m")
- params.append(f"max_memory={{0: '{suggestion}GiB', 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}")
- if shared.args.disk:
- params.append(f"offload_folder='{shared.args.disk_cache_dir}'")
-
- command = f"{command}(Path(f'models/{shared.model_name}'), {', '.join(set(params))})"
- model = eval(command)
-
- # Loading the tokenizer
- if shared.model_name.lower().startswith(('gpt4chan', 'gpt-4chan', '4chan')) and Path("models/gpt-j-6B/").exists():
- tokenizer = AutoTokenizer.from_pretrained(Path("models/gpt-j-6B/"))
- else:
- tokenizer = AutoTokenizer.from_pretrained(Path(f"models/{shared.model_name}/"))
- tokenizer.truncation_side = 'left'
-
- print(f"Loaded the model in {(time.time()-t0):.2f} seconds.")
- return model, tokenizer
-
-def load_soft_prompt(name):
- if name == 'None':
- shared.soft_prompt = False
- shared.soft_prompt_tensor = None
- else:
- with zipfile.ZipFile(Path(f'softprompts/{name}.zip')) as zf:
- zf.extract('tensor.npy')
- zf.extract('meta.json')
- j = json.loads(open('meta.json', 'r').read())
- print(f"\nLoading the softprompt \"{name}\".")
- for field in j:
- if field != 'name':
- if type(j[field]) is list:
- print(f"{field}: {', '.join(j[field])}")
- else:
- print(f"{field}: {j[field]}")
- print()
- tensor = np.load('tensor.npy')
- Path('tensor.npy').unlink()
- Path('meta.json').unlink()
- tensor = torch.Tensor(tensor).to(device=shared.model.device, dtype=shared.model.dtype)
- tensor = torch.reshape(tensor, (1, tensor.shape[0], tensor.shape[1]))
-
- shared.soft_prompt = True
- shared.soft_prompt_tensor = tensor
-
- return name
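-
-
-# Editorial sketch (an assumption inferred from load_soft_prompt above, not part of
-# the original module): the function expects softprompts/<name>.zip to contain
-# 'tensor.npy' (the soft-prompt embeddings, shape (n_tokens, hidden_size)) and
-# 'meta.json' with at least a 'name' field. A compatible archive could be built
-# like this:
-#
-#     import json, zipfile
-#     import numpy as np
-#
-#     np.save('tensor.npy', np.zeros((10, 4096), dtype=np.float32))  # hidden size is model-dependent
-#     with open('meta.json', 'w') as f:
-#         json.dump({'name': 'demo', 'description': 'example soft prompt'}, f)
-#     with zipfile.ZipFile('softprompts/demo.zip', 'w') as zf:
-#         zf.write('tensor.npy')
-#         zf.write('meta.json')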
diff --git a/spaces/ds520/bingo/src/pages/api/healthz.ts b/spaces/ds520/bingo/src/pages/api/healthz.ts
deleted file mode 100644
index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/pages/api/healthz.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- res.status(200).end('ok')
-}
diff --git a/spaces/ds520/bingo/src/state/index.ts b/spaces/ds520/bingo/src/state/index.ts
deleted file mode 100644
index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/state/index.ts
+++ /dev/null
@@ -1,118 +0,0 @@
-import { BingWebBot } from '@/lib/bots/bing'
-import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { nanoid } from '@/lib/utils'
-import { atom } from 'jotai'
-import { atomWithImmer } from 'jotai-immer'
-import { atomWithStorage } from 'jotai/utils'
-import { atomFamily } from 'jotai/utils'
-import { atomWithHash, atomWithLocation } from 'jotai-location'
-
-const initialMessages: ChatMessageModel[] = [
- { author: 'system', text: 'conversation between user and robot', id: '1' },
- { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' },
- {
- author: 'bot', text: `
-您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点:
-
-- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。
- - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原;
- - 缺点:价格较高,噪音较大,需要定期清洁滤网。
-- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。
- - 优点:清洁性能强劲,操作方便,适用多种场景;
- - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。
-- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。
- - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换;
- - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。
-
-希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊
- `, id: '3' },
- { author: 'user', text: '今天的新闻', id: '4' },
- {
- author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息:
-
- # 中国新闻
- - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^]
- - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^]
- - 中央气象台7月16日18时发布台风橙色预警[^1^]
- - 贵州石阡:暑期旅游带动乡村振兴[^1^]
- - 激活大科学装置的“人才红利”[^1^]
- - 聚才用才留才 让希望的田野成为智慧的田野[^1^]
- - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^]
- - 成都以赛为媒提升城市美誉度[^1^]
- - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^]
- - 浙江建德:新安江上享清凉[^1^]
- - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^]
- - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^]
- - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^]
- - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^]
- - 大运来了丨成都迎大运 全民健身动起来[^1^]
- - 藏在高校里的宝藏博物馆[^1^]
- - 中国汽车工业用70年赢得三个“全球第一”[^1^]
- - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^]
-
- # 国际新闻
- - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^]
- - 国际航运业加快绿色转型[^2^]
- - 美企反对收紧对华芯片出口限制[^2^]
- - 欧洲加大气候科技领域投资[^2^]
- - 中企助力丹麦发展清洁能源[^2^]
- - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^]
- - 中国和阿尔及利亚共同构建新型国际关系典范[^2^]
-
- 以上信息仅供参考,具体详情请点击以下链接查看:
-
- [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/)
- [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' },
- { author: 'user', text: '写一个快排', id: '6' },
- { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' },
- {
- author: 'bot', text: "好的,我会尝试画一只猫。\n > ",
- id: '8'
- }
-]
-
-export const GreetMessages = [
- '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?',
- '重新开始总是很棒。问我任何问题!',
- '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?',
- '当然,我已准备好进行新的挑战。我现在可以为你做什么?',
- '很好,让我们来更改主题。你在想什么?',
- '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?',
- '好的,我准备好了!感谢重置。我们应该了解哪些内容?',
- '感谢刷新!你有新的话题吗?',
- '明白了,让我们重新开始。接下来应该讨论什么?',
- '下一步!我可以为你做什么?',
- '好的,我已准备好新话题。我们应该一起了解哪些内容?'
-]
-
-export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true })
-export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true })
-
-type Param = { botId: BotId; page: string }
-
-const createBotInstance = () => {
- return new BingWebBot({
- cookie: ' ',
- ua: ' ',
- })
-}
-
-export const chatFamily = atomFamily(
- (param: Param) => {
- return atomWithImmer({
- botId: param.botId,
- bot: createBotInstance(),
- messages: [] as ChatMessageModel[],
- generatingMessageId: '',
- abortController: undefined as AbortController | undefined,
- conversationId: nanoid(),
- })
- },
- (a, b) => a.botId === b.botId && a.page === b.page,
-)
-
-export const hashAtom = atomWithHash('dialog', '')
-
-export const locationAtom = atomWithLocation()
-
-export const voiceListenAtom = atom(false)
diff --git a/spaces/dyguay/object-detection-api/README.md b/spaces/dyguay/object-detection-api/README.md
deleted file mode 100644
index faa2354219753b9aa55fa2d549a248d50701c55b..0000000000000000000000000000000000000000
--- a/spaces/dyguay/object-detection-api/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Object Detection Api
-emoji: 🌖
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/elplaguister/Yuuka_TTS/src/monotonic_align/core.py b/spaces/elplaguister/Yuuka_TTS/src/monotonic_align/core.py
deleted file mode 100644
index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000
--- a/spaces/elplaguister/Yuuka_TTS/src/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
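-
-
-if __name__ == '__main__':
-    # Editorial usage sketch (an assumption about the intended call pattern, not part
-    # of the original module): maximum_path_jit fills `paths` in place with a 0/1
-    # monotonic alignment for each item in the batch, selecting exactly one column
-    # per row of the score map `values`.
-    import numpy as np
-
-    b, t_y, t_x = 1, 6, 4
-    values = np.random.randn(b, t_y, t_x).astype(np.float32)
-    paths = np.zeros((b, t_y, t_x), dtype=np.int32)
-    maximum_path_jit(paths, values,
-                     np.array([t_y], dtype=np.int32),
-                     np.array([t_x], dtype=np.int32))
-    assert paths.sum() == t_y  # one selected column per row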
diff --git a/spaces/emre/emre-llama-2-13b-mini/README.md b/spaces/emre/emre-llama-2-13b-mini/README.md
deleted file mode 100644
index e6894e8a6b0b6f823fc3493d73d31fe9c5d4bec9..0000000000000000000000000000000000000000
--- a/spaces/emre/emre-llama-2-13b-mini/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Emre Llama 2 13b Mini
-emoji: 📊
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: cc-by-nc-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/enadewan/ASK_FREDDY_BY_CL/app.py b/spaces/enadewan/ASK_FREDDY_BY_CL/app.py
deleted file mode 100644
index ed703d26bcfb7bfb823cd35e0ea0a55d775cf58f..0000000000000000000000000000000000000000
--- a/spaces/enadewan/ASK_FREDDY_BY_CL/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-from langchain.memory import ConversationBufferMemory
-from langchain.vectorstores.faiss import FAISS
-import os
-from langchain.chains import ConversationalRetrievalChain
-import pandas as pd
-import numpy as np
-from langchain.embeddings import HuggingFaceEmbeddings
-from langchain import HuggingFaceHub
-from typing import Any, Dict, List
-from InstructorEmbedding import INSTRUCTOR
-from langchain.embeddings import HuggingFaceInstructEmbeddings
-instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
-HUGGINGFACEHUB_API_TOKEN='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'
-new_vectorstore = FAISS.load_local("./faiss_docs_xl_index", instructor_embeddings )
-llm = HuggingFaceHub(repo_id="declare-lab/flan-alpaca-large", model_kwargs={"temperature": 0.1, "max_length": 512}, huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN)
-
-
-# Front end web app
-import gradio as gr
-with gr.Blocks() as demo:
-
- chatbot = gr.Chatbot()
- msg = gr.Textbox("Ask Freddy")
- clear = gr.Button("Clear")
- chat_history = []
-
- def user(user_message, history):
- # Get response from QA chain
- memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
- qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=new_vectorstore.as_retriever(search_kwargs={"k": 1, "include_metadata": True}), chain_type="refine", memory=memory, return_source_documents=True)
- result = qa({"question": user_message,"chat_history": chat_history})
- myresponse=result['answer']
- # Append user message and response to chat history
- chat_history.append((user_message, myresponse))
- return gr.update(value=""), chat_history
- msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False)
- clear.click(lambda: None, None, chatbot, queue=False)
-
-if __name__ == "__main__":
- demo.launch(debug=True,share=False)
\ No newline at end of file
diff --git a/spaces/etri-vilab/Ko-LLaVA/static/css/bulma-slider.min.css b/spaces/etri-vilab/Ko-LLaVA/static/css/bulma-slider.min.css
deleted file mode 100644
index 09b4aeb2fb19d7d883a0b01cb1982cb382992f95..0000000000000000000000000000000000000000
--- a/spaces/etri-vilab/Ko-LLaVA/static/css/bulma-slider.min.css
+++ /dev/null
@@ -1 +0,0 @@
-@-webkit-keyframes spinAround{from{-webkit-transform:rotate(0);transform:rotate(0)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes spinAround{from{-webkit-transform:rotate(0);transform:rotate(0)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}input[type=range].slider{-webkit-appearance:none;-moz-appearance:none;appearance:none;margin:1rem 0;background:0 0;touch-action:none}input[type=range].slider.is-fullwidth{display:block;width:100%}input[type=range].slider:focus{outline:0}input[type=range].slider:not([orient=vertical])::-webkit-slider-runnable-track{width:100%}input[type=range].slider:not([orient=vertical])::-moz-range-track{width:100%}input[type=range].slider:not([orient=vertical])::-ms-track{width:100%}input[type=range].slider:not([orient=vertical]).has-output+output,input[type=range].slider:not([orient=vertical]).has-output-tooltip+output{width:3rem;background:#4a4a4a;border-radius:4px;padding:.4rem .8rem;font-size:.75rem;line-height:.75rem;text-align:center;text-overflow:ellipsis;white-space:nowrap;color:#fff;overflow:hidden;pointer-events:none;z-index:200}input[type=range].slider:not([orient=vertical]).has-output-tooltip:disabled+output,input[type=range].slider:not([orient=vertical]).has-output:disabled+output{opacity:.5}input[type=range].slider:not([orient=vertical]).has-output{display:inline-block;vertical-align:middle;width:calc(100% - (4.2rem))}input[type=range].slider:not([orient=vertical]).has-output+output{display:inline-block;margin-left:.75rem;vertical-align:middle}input[type=range].slider:not([orient=vertical]).has-output-tooltip{display:block}input[type=range].slider:not([orient=vertical]).has-output-tooltip+output{position:absolute;left:0;top:-.1rem}input[type=range].slider[orient=vertical]{-webkit-appearance:slider-vertical;-moz-appearance:slider-vertical;appearance:slider-vertical;-webkit-writing-mode:bt-lr;-ms-writing-mode:bt-lr;writing-mode:bt-lr}input[type=range].slider[orient=vertical]::-webkit-slider-runnable-track{height:100%}input[type=range].slider[orient=vertical]::-moz-range-track{height:100%}input[type=range].slider[orient=vertical]::-ms-track{height:100%}input[type=range].slider::-webkit-slider-runnable-track{cursor:pointer;animate:.2s;box-shadow:0 0 0 #7a7a7a;background:#dbdbdb;border-radius:4px;border:0 solid #7a7a7a}input[type=range].slider::-moz-range-track{cursor:pointer;animate:.2s;box-shadow:0 0 0 #7a7a7a;background:#dbdbdb;border-radius:4px;border:0 solid #7a7a7a}input[type=range].slider::-ms-track{cursor:pointer;animate:.2s;box-shadow:0 0 0 #7a7a7a;background:#dbdbdb;border-radius:4px;border:0 solid #7a7a7a}input[type=range].slider::-ms-fill-lower{background:#dbdbdb;border-radius:4px}input[type=range].slider::-ms-fill-upper{background:#dbdbdb;border-radius:4px}input[type=range].slider::-webkit-slider-thumb{box-shadow:none;border:1px solid #b5b5b5;border-radius:4px;background:#fff;cursor:pointer}input[type=range].slider::-moz-range-thumb{box-shadow:none;border:1px solid #b5b5b5;border-radius:4px;background:#fff;cursor:pointer}input[type=range].slider::-ms-thumb{box-shadow:none;border:1px solid 
#b5b5b5;border-radius:4px;background:#fff;cursor:pointer}input[type=range].slider::-webkit-slider-thumb{-webkit-appearance:none;appearance:none}input[type=range].slider.is-circle::-webkit-slider-thumb{border-radius:290486px}input[type=range].slider.is-circle::-moz-range-thumb{border-radius:290486px}input[type=range].slider.is-circle::-ms-thumb{border-radius:290486px}input[type=range].slider:active::-webkit-slider-thumb{-webkit-transform:scale(1.25);transform:scale(1.25)}input[type=range].slider:active::-moz-range-thumb{transform:scale(1.25)}input[type=range].slider:active::-ms-thumb{transform:scale(1.25)}input[type=range].slider:disabled{opacity:.5;cursor:not-allowed}input[type=range].slider:disabled::-webkit-slider-thumb{cursor:not-allowed;-webkit-transform:scale(1);transform:scale(1)}input[type=range].slider:disabled::-moz-range-thumb{cursor:not-allowed;transform:scale(1)}input[type=range].slider:disabled::-ms-thumb{cursor:not-allowed;transform:scale(1)}input[type=range].slider:not([orient=vertical]){min-height:calc((1rem + 2px) * 1.25)}input[type=range].slider:not([orient=vertical])::-webkit-slider-runnable-track{height:.5rem}input[type=range].slider:not([orient=vertical])::-moz-range-track{height:.5rem}input[type=range].slider:not([orient=vertical])::-ms-track{height:.5rem}input[type=range].slider[orient=vertical]::-webkit-slider-runnable-track{width:.5rem}input[type=range].slider[orient=vertical]::-moz-range-track{width:.5rem}input[type=range].slider[orient=vertical]::-ms-track{width:.5rem}input[type=range].slider::-webkit-slider-thumb{height:1rem;width:1rem}input[type=range].slider::-moz-range-thumb{height:1rem;width:1rem}input[type=range].slider::-ms-thumb{height:1rem;width:1rem}input[type=range].slider::-ms-thumb{margin-top:0}input[type=range].slider::-webkit-slider-thumb{margin-top:-.25rem}input[type=range].slider[orient=vertical]::-webkit-slider-thumb{margin-top:auto;margin-left:-.25rem}input[type=range].slider.is-small:not([orient=vertical]){min-height:calc((.75rem + 2px) * 1.25)}input[type=range].slider.is-small:not([orient=vertical])::-webkit-slider-runnable-track{height:.375rem}input[type=range].slider.is-small:not([orient=vertical])::-moz-range-track{height:.375rem}input[type=range].slider.is-small:not([orient=vertical])::-ms-track{height:.375rem}input[type=range].slider.is-small[orient=vertical]::-webkit-slider-runnable-track{width:.375rem}input[type=range].slider.is-small[orient=vertical]::-moz-range-track{width:.375rem}input[type=range].slider.is-small[orient=vertical]::-ms-track{width:.375rem}input[type=range].slider.is-small::-webkit-slider-thumb{height:.75rem;width:.75rem}input[type=range].slider.is-small::-moz-range-thumb{height:.75rem;width:.75rem}input[type=range].slider.is-small::-ms-thumb{height:.75rem;width:.75rem}input[type=range].slider.is-small::-ms-thumb{margin-top:0}input[type=range].slider.is-small::-webkit-slider-thumb{margin-top:-.1875rem}input[type=range].slider.is-small[orient=vertical]::-webkit-slider-thumb{margin-top:auto;margin-left:-.1875rem}input[type=range].slider.is-medium:not([orient=vertical]){min-height:calc((1.25rem + 2px) * 
1.25)}input[type=range].slider.is-medium:not([orient=vertical])::-webkit-slider-runnable-track{height:.625rem}input[type=range].slider.is-medium:not([orient=vertical])::-moz-range-track{height:.625rem}input[type=range].slider.is-medium:not([orient=vertical])::-ms-track{height:.625rem}input[type=range].slider.is-medium[orient=vertical]::-webkit-slider-runnable-track{width:.625rem}input[type=range].slider.is-medium[orient=vertical]::-moz-range-track{width:.625rem}input[type=range].slider.is-medium[orient=vertical]::-ms-track{width:.625rem}input[type=range].slider.is-medium::-webkit-slider-thumb{height:1.25rem;width:1.25rem}input[type=range].slider.is-medium::-moz-range-thumb{height:1.25rem;width:1.25rem}input[type=range].slider.is-medium::-ms-thumb{height:1.25rem;width:1.25rem}input[type=range].slider.is-medium::-ms-thumb{margin-top:0}input[type=range].slider.is-medium::-webkit-slider-thumb{margin-top:-.3125rem}input[type=range].slider.is-medium[orient=vertical]::-webkit-slider-thumb{margin-top:auto;margin-left:-.3125rem}input[type=range].slider.is-large:not([orient=vertical]){min-height:calc((1.5rem + 2px) * 1.25)}input[type=range].slider.is-large:not([orient=vertical])::-webkit-slider-runnable-track{height:.75rem}input[type=range].slider.is-large:not([orient=vertical])::-moz-range-track{height:.75rem}input[type=range].slider.is-large:not([orient=vertical])::-ms-track{height:.75rem}input[type=range].slider.is-large[orient=vertical]::-webkit-slider-runnable-track{width:.75rem}input[type=range].slider.is-large[orient=vertical]::-moz-range-track{width:.75rem}input[type=range].slider.is-large[orient=vertical]::-ms-track{width:.75rem}input[type=range].slider.is-large::-webkit-slider-thumb{height:1.5rem;width:1.5rem}input[type=range].slider.is-large::-moz-range-thumb{height:1.5rem;width:1.5rem}input[type=range].slider.is-large::-ms-thumb{height:1.5rem;width:1.5rem}input[type=range].slider.is-large::-ms-thumb{margin-top:0}input[type=range].slider.is-large::-webkit-slider-thumb{margin-top:-.375rem}input[type=range].slider.is-large[orient=vertical]::-webkit-slider-thumb{margin-top:auto;margin-left:-.375rem}input[type=range].slider.is-white::-moz-range-track{background:#fff!important}input[type=range].slider.is-white::-webkit-slider-runnable-track{background:#fff!important}input[type=range].slider.is-white::-ms-track{background:#fff!important}input[type=range].slider.is-white::-ms-fill-lower{background:#fff}input[type=range].slider.is-white::-ms-fill-upper{background:#fff}input[type=range].slider.is-white .has-output-tooltip+output,input[type=range].slider.is-white.has-output+output{background-color:#fff;color:#0a0a0a}input[type=range].slider.is-black::-moz-range-track{background:#0a0a0a!important}input[type=range].slider.is-black::-webkit-slider-runnable-track{background:#0a0a0a!important}input[type=range].slider.is-black::-ms-track{background:#0a0a0a!important}input[type=range].slider.is-black::-ms-fill-lower{background:#0a0a0a}input[type=range].slider.is-black::-ms-fill-upper{background:#0a0a0a}input[type=range].slider.is-black 
.has-output-tooltip+output,input[type=range].slider.is-black.has-output+output{background-color:#0a0a0a;color:#fff}input[type=range].slider.is-light::-moz-range-track{background:#f5f5f5!important}input[type=range].slider.is-light::-webkit-slider-runnable-track{background:#f5f5f5!important}input[type=range].slider.is-light::-ms-track{background:#f5f5f5!important}input[type=range].slider.is-light::-ms-fill-lower{background:#f5f5f5}input[type=range].slider.is-light::-ms-fill-upper{background:#f5f5f5}input[type=range].slider.is-light .has-output-tooltip+output,input[type=range].slider.is-light.has-output+output{background-color:#f5f5f5;color:#363636}input[type=range].slider.is-dark::-moz-range-track{background:#363636!important}input[type=range].slider.is-dark::-webkit-slider-runnable-track{background:#363636!important}input[type=range].slider.is-dark::-ms-track{background:#363636!important}input[type=range].slider.is-dark::-ms-fill-lower{background:#363636}input[type=range].slider.is-dark::-ms-fill-upper{background:#363636}input[type=range].slider.is-dark .has-output-tooltip+output,input[type=range].slider.is-dark.has-output+output{background-color:#363636;color:#f5f5f5}input[type=range].slider.is-primary::-moz-range-track{background:#00d1b2!important}input[type=range].slider.is-primary::-webkit-slider-runnable-track{background:#00d1b2!important}input[type=range].slider.is-primary::-ms-track{background:#00d1b2!important}input[type=range].slider.is-primary::-ms-fill-lower{background:#00d1b2}input[type=range].slider.is-primary::-ms-fill-upper{background:#00d1b2}input[type=range].slider.is-primary .has-output-tooltip+output,input[type=range].slider.is-primary.has-output+output{background-color:#00d1b2;color:#fff}input[type=range].slider.is-link::-moz-range-track{background:#3273dc!important}input[type=range].slider.is-link::-webkit-slider-runnable-track{background:#3273dc!important}input[type=range].slider.is-link::-ms-track{background:#3273dc!important}input[type=range].slider.is-link::-ms-fill-lower{background:#3273dc}input[type=range].slider.is-link::-ms-fill-upper{background:#3273dc}input[type=range].slider.is-link .has-output-tooltip+output,input[type=range].slider.is-link.has-output+output{background-color:#3273dc;color:#fff}input[type=range].slider.is-info::-moz-range-track{background:#209cee!important}input[type=range].slider.is-info::-webkit-slider-runnable-track{background:#209cee!important}input[type=range].slider.is-info::-ms-track{background:#209cee!important}input[type=range].slider.is-info::-ms-fill-lower{background:#209cee}input[type=range].slider.is-info::-ms-fill-upper{background:#209cee}input[type=range].slider.is-info .has-output-tooltip+output,input[type=range].slider.is-info.has-output+output{background-color:#209cee;color:#fff}input[type=range].slider.is-success::-moz-range-track{background:#23d160!important}input[type=range].slider.is-success::-webkit-slider-runnable-track{background:#23d160!important}input[type=range].slider.is-success::-ms-track{background:#23d160!important}input[type=range].slider.is-success::-ms-fill-lower{background:#23d160}input[type=range].slider.is-success::-ms-fill-upper{background:#23d160}input[type=range].slider.is-success 
.has-output-tooltip+output,input[type=range].slider.is-success.has-output+output{background-color:#23d160;color:#fff}input[type=range].slider.is-warning::-moz-range-track{background:#ffdd57!important}input[type=range].slider.is-warning::-webkit-slider-runnable-track{background:#ffdd57!important}input[type=range].slider.is-warning::-ms-track{background:#ffdd57!important}input[type=range].slider.is-warning::-ms-fill-lower{background:#ffdd57}input[type=range].slider.is-warning::-ms-fill-upper{background:#ffdd57}input[type=range].slider.is-warning .has-output-tooltip+output,input[type=range].slider.is-warning.has-output+output{background-color:#ffdd57;color:rgba(0,0,0,.7)}input[type=range].slider.is-danger::-moz-range-track{background:#ff3860!important}input[type=range].slider.is-danger::-webkit-slider-runnable-track{background:#ff3860!important}input[type=range].slider.is-danger::-ms-track{background:#ff3860!important}input[type=range].slider.is-danger::-ms-fill-lower{background:#ff3860}input[type=range].slider.is-danger::-ms-fill-upper{background:#ff3860}input[type=range].slider.is-danger .has-output-tooltip+output,input[type=range].slider.is-danger.has-output+output{background-color:#ff3860;color:#fff}
\ No newline at end of file
diff --git a/spaces/eubinecto/idiomify/idiomify/fetchers.py b/spaces/eubinecto/idiomify/idiomify/fetchers.py
deleted file mode 100644
index 928fc9f964b8925649601ae431a3b74e275b6d3b..0000000000000000000000000000000000000000
--- a/spaces/eubinecto/idiomify/idiomify/fetchers.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import yaml
-import wandb
-from os import path
-import pandas as pd
-from typing import Tuple
-from wandb.sdk.wandb_run import Run
-from idiomify.paths import CONFIG_YAML, idioms_dir, literal2idiomatic, idiomifier_dir, tokenizer_dir
-from idiomify.urls import PIE_URL
-from transformers import AutoModelForSeq2SeqLM, AutoConfig, BartTokenizer
-from idiomify.models import Idiomifier
-from idiomify.pipeline import Pipeline
-
-
-# --- from the web --- #
-def fetch_pie() -> pd.DataFrame:
- # fetch & parse it directly from the web
- return pd.read_csv(PIE_URL)
-
-
-# --- from wandb --- #
-def fetch_idioms(ver: str, run: Run = None) -> pd.DataFrame:
- """
- why do you need this? -> you need this to have access to the idiom embeddings.
- """
- # if run object is given, we track the lineage of the data.
- # if not, we get the dataset via wandb Api.
- if run:
- artifact = run.use_artifact(f"idioms:{ver}", type="dataset")
- else:
- artifact = wandb.Api().artifact(f"eubinecto/idiomify/idioms:{ver}", type="dataset")
- artifact_dir = artifact.download(root=str(idioms_dir(ver)))
- tsv_path = path.join(artifact_dir, "all.tsv")
- return pd.read_csv(tsv_path, sep="\t")
-
-
-def fetch_literal2idiomatic(ver: str, run: Run = None) -> Tuple[pd.DataFrame, pd.DataFrame]:
- # if run object is given, we track the lineage of the data.
- # if not, we get the dataset via wandb Api.
- if run:
- artifact = run.use_artifact(f"literal2idiomatic:{ver}", type="dataset")
- else:
- artifact = wandb.Api().artifact(f"eubinecto/idiomify/literal2idiomatic:{ver}", type="dataset")
- artifact_dir = artifact.download(root=str(literal2idiomatic(ver)))
- train_path = path.join(artifact_dir, "train.tsv")
- test_path = path.join(artifact_dir, "test.tsv")
- train_df = pd.read_csv(train_path, sep="\t")
- test_df = pd.read_csv(test_path, sep="\t")
- return train_df, test_df
-
-
-def fetch_idiomifier(ver: str, run: Run = None) -> Idiomifier:
- """
- you may want to change the name to Idiomifier.
- The current Idiomifier then turns into a pipeline.
- """
- if run:
- artifact = run.use_artifact(f"idiomifier:{ver}", type="model")
- else:
- artifact = wandb.Api().artifact(f"eubinecto/idiomify/idiomifier:{ver}", type="model")
- config = artifact.metadata
- artifact_dir = artifact.download(root=str(idiomifier_dir(ver)))
- ckpt_path = path.join(artifact_dir, "model.ckpt")
- bart = AutoModelForSeq2SeqLM.from_config(AutoConfig.from_pretrained(config['bart']))
- bart.resize_token_embeddings(config['vocab_size'])
- model = Idiomifier.load_from_checkpoint(ckpt_path, bart=bart)
- return model
-
-
-def fetch_tokenizer(ver: str, run: Run = None) -> BartTokenizer:
- if run:
- artifact = run.use_artifact(f"tokenizer:{ver}", type="other")
- else:
- artifact = wandb.Api().artifact(f"eubinecto/idiomify/tokenizer:{ver}", type="other")
- artifact_dir = artifact.download(root=str(tokenizer_dir(ver)))
- tokenizer = BartTokenizer.from_pretrained(artifact_dir)
- return tokenizer
-
-
-def fetch_pipeline() -> Pipeline:
- """
- fetch a pipeline of the version stated in config.yaml
- """
- config = fetch_config()['idiomifier']
- model = fetch_idiomifier(config['ver'])
- tokenizer = fetch_tokenizer(config['tokenizer_ver'])
- idioms = fetch_idioms(config['idioms_ver'])
- model.eval() # this is crucial to obtain consistent results
- pipeline = Pipeline(model, tokenizer, idioms)
- return pipeline
-
-
-# --- from local --- #
-def fetch_config() -> dict:
- with open(str(CONFIG_YAML), 'r', encoding="utf-8") as fh:
- return yaml.safe_load(fh)
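-
-
-# Editorial usage sketch (not part of the original module): the individual fetchers
-# are typically chained the same way fetch_pipeline() does above, e.g.
-#
-#     config = fetch_config()['idiomifier']
-#     tokenizer = fetch_tokenizer(config['tokenizer_ver'])
-#     model = fetch_idiomifier(config['ver'])
-#     idioms = fetch_idioms(config['idioms_ver'])
-#     model.eval()  # crucial for consistent inference results, as noted above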
diff --git a/spaces/fabiogra/moseca/app/style.py b/spaces/fabiogra/moseca/app/style.py
deleted file mode 100644
index 5f66b238a4c93651306aaaa3e7c40905704b5302..0000000000000000000000000000000000000000
--- a/spaces/fabiogra/moseca/app/style.py
+++ /dev/null
@@ -1,155 +0,0 @@
-_font_title = "Monoton"
-_font_subtitle = "Exo"
-
-CSS = (
- """
-
-
-
-
-
-
-"""
-)
-
-
-CSS_TABS = """"""
diff --git a/spaces/falterWliame/Face_Mask_Detection/MPEG.Encoder.Ultimate.v5.1.27.0120. Full Version Extra Quality.md b/spaces/falterWliame/Face_Mask_Detection/MPEG.Encoder.Ultimate.v5.1.27.0120. Full Version Extra Quality.md
deleted file mode 100644
index 45f93b25b1887ce0faefe0140875bc71d376126c..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/MPEG.Encoder.Ultimate.v5.1.27.0120. Full Version Extra Quality.md
+++ /dev/null
@@ -1,14 +0,0 @@
-MPEG.Encoder.Ultimate.v5.1.27.0120. Full Version Download File ⏩ https://urlca.com/2uDc9C
-
-January 30, 2565 (B.E.) - MPEG.Encoder.Ultimate.v5.1.27.0120. Full Version (in preview). Edit. regyzbustvos; 2022.01.30 13:11:27. Megaupload.com - Download without registration.
-On our site you will find only free porn videos that do not require registration.
-Every day we publish only the latest in the porn industry.
-Megaupload.com.
-You will be able to download megaupload.
-Com via torrent.
-Megaupload.com - Download without registration.
-Watch porn online.
-Sex Uncensored / No Condom. 8a78ff9644
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Ong Bak 1 Eng Sub Torrent.md b/spaces/falterWliame/Face_Mask_Detection/Ong Bak 1 Eng Sub Torrent.md
deleted file mode 100644
index ed5268c2addd105a705c54752563b509d016029b..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Ong Bak 1 Eng Sub Torrent.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Ong Bak 1 Eng Sub Torrent Download File ❤ https://urlca.com/2uDcKs
-
-Watch the drama online for free. Tags: Watch Ong Bak Engsub, watch Ong Bak, watch Ong Bak eng sub, Ong Bak online ep 1, ep 2, . Description of the drama Ruthless City: A maniac is operating in the city and terrorizing the inhabitants.
-His victims are young, beautiful girls.
-The main character is a police girl.
-She wants to find and neutralize the maniac by all means.
-Watch the Ruthless City drama online with subtitles or Russian dubbing on our website; you can watch the series online for free in good quality.
-The main character is a female police officer. 8a78ff9644
-
-
-
diff --git a/spaces/fatiXbelha/sd/Download Among Us Unlock All Features and Hacks - Be the Ultimate Imposter.md b/spaces/fatiXbelha/sd/Download Among Us Unlock All Features and Hacks - Be the Ultimate Imposter.md
deleted file mode 100644
index fdcfafdd646703040e9e56cc58b1c80f5a71c4ad..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Among Us Unlock All Features and Hacks - Be the Ultimate Imposter.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-Download Among Us Unlock All: How to Get Free Skins, Pets, and Hats in the Popular Game
- If you are a fan of online multiplayer games, you have probably heard of Among Us , the game that has taken the world by storm. Among Us is a game of deception and teamwork, where you play as either a crewmate or an impostor on a spaceship. As a crewmate, your goal is to complete tasks and find the impostor before they kill everyone. As an impostor, your goal is to sabotage and kill the crewmates without getting caught.
- Among Us is a game that can be played with friends or strangers online, and it offers a lot of fun and excitement. However, if you want to spice up your game experience, you might want to try Among Us unlock all , a mod apk that allows you to get free skins, pets, and hats in the game. These cosmetics can make your character look more unique and stylish, and they can also help you blend in or stand out depending on your strategy.
-download among us unlock all Download Zip ✔ https://urllie.com/2uNvLC
- In this article, we will show you how to download Among Us unlock all mod apk for Android and PC devices, as well as how to unlock all skins, pets, and hats in Among Us without mod apk. We will also discuss the benefits and risks of using Among Us unlock all mod apk, and answer some frequently asked questions about it. So, if you are ready to learn more about this amazing mod apk, read on!
- How to Download Among Us Unlock All Mod Apk for Android Devices
- If you want to download Among Us unlock all mod apk for your Android device, you will need to follow these steps:
-
-Go to your device settings and enable the installation of apps from unknown sources.
-Delete the original Among Us app from your device if you have it installed.
-Go to Google and type the link: https://apkdone.com/among-us-apk-mod/.
-Click on the first link and download the mod apk file.
-Once the download is complete, open the file and install it on your device.
-Launch the game and enjoy your free skins, pets, and hats!
-
- How to Download Among Us Unlock All Mod Apk for PC Devices
- If you want to download Among Us unlock all mod apk for your PC device, you will need to follow these steps:
-
-Download and install an Android emulator on your PC, such as BlueStacks or NoxPlayer.
-Launch the emulator and sign in with your Google account.
-Go to Google Play Store and download the original Among Us app.
-Go to Google and type the link: https://apkdone.com/among-us-apk-mod/.
-Click on the first link and download the mod apk file.
-Drag and drop the file into the emulator window.
-Install the mod apk file on the emulator.
-Launch the game and enjoy your free skins, pets, and hats!
-
- How to Unlock All Skins, Pets, and Hats in Among Us Without Mod Apk
- If you don't want to use a mod apk to unlock all skins, pets, and hats in Among Us, you can still get them by using other methods. Here are some ways you can do that:
-download among us mod apk unlocked all
-download among us hack unlock all skins
-download among us mod menu unlock all
-download among us latest version unlock all
-download among us pc unlock all
-download among us free unlock all
-download among us modded unlock all
-download among us android unlock all
-download among us ios unlock all
-download among us online unlock all
-download among us cheat unlock all
-download among us cracked unlock all
-download among us premium unlock all
-download among us full version unlock all
-download among us unlimited unlock all
-download among us pro unlock all
-download among us mega mod unlock all
-download among us no ads unlock all
-download among us always impostor unlock all
-download among us no ban unlock all
-download among us god mode unlock all
-download among us no kill cooldown unlock all
-download among us see impostor unlock all
-download among us wall hack unlock all
-download among us speed hack unlock all
-download among us ghost mode unlock all
-download among us anti cheat unlock all
-download among us custom skins unlock all
-download among us custom maps unlock all
-download among us custom roles unlock all
-download among us voice chat unlock all
-download among us video chat unlock all
-download among us discord integration unlock all
-download among us cross platform unlock all
-download among us high resolution unlock all
-download among us 3d graphics unlock all
-download among us realistic graphics unlock all
-download among us new update unlock all
-download among us new features unlock all
-download among us new modes unlock all
-
-Purchase them with real money. You can buy skins, pets, and hats from the in-game store using real money. The prices range from $0.99 to $2.99 depending on the item. You can also buy bundles A: No, you cannot use other mods or hacks with Among Us unlock all mod apk. If you try to use other mods or hacks, you will cause conflicts and errors in the game. You will also increase the chances of getting detected and banned by the game.
- I hope this article has helped you learn more about downloading Among Us unlock all mod apk. If you have any questions or feedback, please leave a comment below. Thank you for reading and have a great day!
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Beach Buggy Racing 2 Island Adventure MOD APK and join the online multiplayer races.md b/spaces/fatiXbelha/sd/Download Beach Buggy Racing 2 Island Adventure MOD APK and join the online multiplayer races.md
deleted file mode 100644
index 2f4fe1666491c6d1314289e6765d16973892b0c0..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Beach Buggy Racing 2 Island Adventure MOD APK and join the online multiplayer races.md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-Download Beach Buggy Racing 2 Island Adventure Mod APK
-If you are looking for a fun and exciting kart racing game with split screen competition and game rule customization, then you should check out Beach Buggy Racing 2 Island Adventure . This game is a sequel to the popular Beach Buggy Racing, and it offers more tracks, cars, characters, power-ups, and modes than ever before. You can explore a mysterious island packed with ancient temples, dragon-infested castles, and post apocalyptic ice cream stands, while racing against a rag-tag crew of rivals. You can also craft your own custom game modes and play with up to 8 local players.
-But what if you want to enjoy all the features and content of this game without spending any money or time? Well, there is a solution for that: you can download Beach Buggy Racing 2 Island Adventure Mod APK . This is a modified version of the original game that gives you unlimited coins, gems, cars, and power-ups. You can unlock and upgrade all the vehicles, skins, drivers, and power-ups in the game without any hassle. You can also access all the maps, events, championships, and tournaments in the game without any restrictions.
-download beach buggy racing 2 island adventure mod apk Download Zip ✑ ✑ ✑ https://urllie.com/2uNzJI
-So, how can you download Beach Buggy Racing 2 Island Adventure Mod APK? And what are some tips and tricks for playing this game? And what are the reviews of this game from critics and players? In this article, we will answer all these questions and more. Read on to find out everything you need to know about Beach Buggy Racing 2 Island Adventure Mod APK.
- What is Beach Buggy Racing 2 Island Adventure?
-Beach Buggy Racing 2 Island Adventure is an action-packed, mayhem-filled kart racer with split screen competition and game rule customization. It is developed by Vector Unit, a studio that specializes in racing games. It is available on Steam, Xbox, PlayStation, Switch, Android TV, and iOS devices.
-The game features a story-driven adventure mode where you explore a massive interactive map as you rise through the ranks of the Beach Buggy Racing League. You can also jump in and play various quick events such as race, drift attack, firework fury, and more. You can also customize your own ridiculous game rules and save them for insane battles with up to 8 local players. You can choose from different configurations of power-ups, race rules, lap counts, and more.
-The game also boasts thrill-ride race tracks where you discover Egyptian pyramids, dragon-infested castles, pirate ship wrecks, and experimental alien bio-labs. There are 23 unique worlds, each packed with secrets and shortcuts. You can also collect and customize over 40 unique cars including monster trucks, moon rovers, race cars, and beach buggies. You can also build your team of 14 drivers with their own special abilities and strategies. And you can master over 40 individual power-ups like donut tires, tesla coil, ball'n'chain, and more.
Why download the mod apk?
-Beach Buggy Racing 2 Island Adventure is a free-to-play game, but it also has some in-app purchases and ads that can limit your enjoyment. For example, you need to spend coins and gems to unlock and upgrade cars, skins, drivers, and power-ups. You also need to watch ads to get some rewards or access some events. And you may face some difficulties in competing with other players who have better cars and power-ups than you.
-That's why downloading the mod apk can be a great idea. The mod apk is a modified version of the original game that gives you unlimited coins, gems, cars, and power-ups. You can unlock and upgrade everything in the game without spending any money or time. You can also remove all the ads and enjoy a smooth gaming experience. You can also access all the maps, events, championships, and tournaments in the game without any restrictions.
-With the mod apk, you can have more fun and freedom in playing Beach Buggy Racing 2 Island Adventure. You can experiment with different combinations of cars, drivers, and power-ups. You can also challenge yourself with harder levels and opponents. And you can share your achievements and screenshots with your friends and family.
- How to download and install the mod apk?
-Downloading and installing the mod apk is very easy and safe. You just need to follow these simple steps:
-Beach Buggy Racing 2: Island A APK free download
-How to install Beach Buggy Racing 2: Island Adventure MOD APK
-Beach Buggy Racing 2: Island Adventure premium version for Android TV
-Beach Buggy Racing 2: Island Adventure game crafting and customization
-Beach Buggy Racing 2: Island Adventure split screen multiplayer mode
-Beach Buggy Racing 2: Island Adventure APK MOD INFO and features
-Beach Buggy Racing 2: Island Adventure updated graphics and gameplay
-Beach Buggy Racing 2: Island Adventure weekly tournaments and leaderboards
-Beach Buggy Racing 2: Island Adventure 23 unique worlds and tracks
-Beach Buggy Racing 2: Island Adventure 40+ cars and powerups to collect
-Beach Buggy Racing 2: Island Adventure Bluetooth or USB game controller required
-Beach Buggy Racing 2: Island Adventure Vector Unit developer and publisher
-Beach Buggy Racing 2: Island Adventure adrenaline-pumping races and drift attacks
-Beach Buggy Racing 2: Island Adventure fun for all skill levels and ages
-Beach Buggy Racing 2: Island Adventure no ads or in-app purchases
-Beach Buggy Racing 2: Island Adventure latest version and update date
-Beach Buggy Racing 2: Island Adventure Google Play ID and Installs
-Beach Buggy Racing 2: Island Adventure compatible devices and requirements
-Beach Buggy Racing 2: Island Adventure reviews and ratings from users
-Beach Buggy Racing 2: Island Adventure tips and tricks for beginners
-Beach Buggy Racing 2: Island Adventure cheats and hacks for unlimited coins
-Beach Buggy Racing 2: Island Adventure best cars and powerups to use
-Beach Buggy Racing 2: Island Adventure secrets and shortcuts to discover
-Beach Buggy Racing 2: Island Adventure story-driven adventure mode and championships
-Beach Buggy Racing 2: Island Adventure rag-tag crew of rivals and drivers
-Beach Buggy Racing 2: Island Adventure ancient temples, dragon-infested castles, and more
-Beach Buggy Racing 2: Island Adventure moon buggies, monster trucks, race cars, and more
-Beach Buggy Racing 2: Island Adventure skeletons, robots, aliens, and more
-Beach Buggy Racing 2: Island Adventure donut tires, tesla coil, ball'n'chain, and more
-Beach Buggy Racing 2: Island Adventure firework fury, police chase, dodgeball fury, and more
-
-Go to this website where you can find the latest version of Beach Buggy Racing 2 Island Adventure Mod APK.
-Click on the download button and wait for the file to be downloaded on your device.
-Go to your device settings and enable the installation of apps from unknown sources.
-Locate the downloaded file in your file manager and tap on it to start the installation process.
-Follow the instructions on the screen and wait for the installation to be completed.
-Launch the game and enjoy unlimited coins, gems, cars, and power-ups.
-
-Note: You may need to uninstall the original game before installing the mod apk. Also, make sure you have enough storage space on your device before downloading the mod apk.
- Tips and tricks for playing Beach Buggy Racing 2 Island Adventure
-Beach Buggy Racing 2 Island Adventure is a fun and addictive game, but it can also be challenging at times. Here are some tips and tricks that can help you improve your skills and performance in the game:
-
-Master the drift : Drifting is a key technique in kart racing games, as it allows you to turn corners faster and gain speed boosts. To drift in Beach Buggy Racing 2 Island Adventure, you need to tap on the brake button while steering in the direction you want to go. The longer you drift, the more boost you get. You can also use power-ups like oil slicks or banana peels to make your opponents lose control while drifting.
-Use your driver's ability : Each driver in Beach Buggy Racing 2 Island Adventure has a unique ability that can give you an edge in the race. For example, Rez has a laser beam that can zap other racers, McSkelly has a skeleton army that can block their way, and Beat Bot has a disco ball that can blind them. To use your driver's ability, you need to fill up a meter by collecting coins or hitting other racers. Once the meter is full, you can tap on the driver's icon to activate their ability.
-Avoid traps : The tracks in Beach Buggy Racing 2 Island Adventure are full of traps and hazards that can slow you down or damage your car. For example, there are spikes, lava pools, falling rocks, giant worms, and more. You need to be careful and avoid these traps as much as possible. You can also use power-ups like shields or magnets to protect yourself or attract coins away from traps.
-Customize your own game modes : One of the coolest features of Beach Buggy Racing 2 Island Adventure is that you can customize your own game modes and play with up to 8 local players. You can choose from different configurations of power-ups, race rules, lap counts, and more. You can also save your custom game modes and share them with your friends. This way, you can create your own fun and crazy challenges and enjoy them with your family or friends.
-
- What are the reviews of Beach Buggy Racing 2 Island Adventure?
-Beach Buggy Racing 2 Island Adventure has received mostly positive reviews from critics and players alike. Here are some of the positive reviews of the game from critics and players:
-
-| Source | Rating | Quote |
-| --- | --- | --- |
-| Steam | Very Positive (389) | "This game is awesome! It has everything I want in a kart racer: great graphics, smooth gameplay, tons of customization, and local multiplayer. The tracks are very fun and varied, and the power-ups are hilarious and creative. I love the fact that you can make your own game modes and play with up to 8 players on one screen. This is the best kart racer on Steam, hands down." |
-| GameSpew | 8/10 | "Beach Buggy Racing 2: Island Adventure is a very competent karting game that can compete with the best of them. It’s got a huge amount of content, a great career mode, and plenty of options for local multiplayer. It’s also got a lot of charm and personality, with colourful graphics, catchy music, and quirky characters. If you’re looking for a fun and addictive kart racer that doesn’t take itself too seriously, you can’t go wrong with Beach Buggy Racing 2: Island Adventure." |
-| TheXboxHub | 4/5 | "Beach Buggy Racing 2: Island Adventure on Xbox is a decent kart racer that will appeal to many. It’s got enough content to keep you busy for hours, and enough variety to keep you entertained. It’s not perfect, and it can be frustrating at times, but it’s also a lot of fun. If you’re looking for a new kart racer to play with your friends or family, or just by yourself, Beach Buggy Racing 2: Island Adventure is worth a look." |
-
-
- Conclusion
-Beach Buggy Racing 2 Island Adventure is a kart racer that delivers explosive fun for all skill levels. It has a story-driven adventure mode, a quick event mode, a championship mode, and a custom game mode. It also has over 40 unique cars, 14 drivers, 40 power-ups, and 23 worlds to explore. And it has split screen competition for up to 8 local players.
-If you want to enjoy all the features and content of this game without any limitations, you can download Beach Buggy Racing 2 Island Adventure Mod APK. This is a modified version of the original game that gives you unlimited coins, gems, cars, and power-ups. You can also remove all the ads and access all the maps, events, championships, and tournaments in the game.
-To download Beach Buggy Racing 2 Island Adventure Mod APK, you just need to follow the simple steps we have provided in this article. You also need to make sure you have enough storage space on your device and enable the installation of apps from unknown sources. Once you have installed the mod apk, you can launch the game and enjoy unlimited fun.
-We hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave them in the comments section below. And if you liked this article, please share it with your friends and family who might be interested in Beach Buggy Racing 2 Island Adventure Mod APK.
- FAQs
-Here are some frequently asked questions and answers about Beach Buggy Racing 2 Island Adventure and the mod apk:
- Q: Is Beach Buggy Racing 2 Island Adventure Mod APK safe to download?
-A: Yes, it is safe to download as long as you use a trusted source like the one we have provided in this article. However, you should always be careful when downloading any mod apk from unknown sources as they may contain viruses or malware that can harm your device or steal your data.
- Q: Do I need to root my device to use Beach Buggy Racing 2 Island Adventure Mod APK?
-A: No, you do not need to root your device to use Beach Buggy Racing 2 Island Adventure Mod APK. You just need to enable the installation of apps from unknown sources in your device settings.
- Q: Can I play online multiplayer with Beach Buggy Racing 2 Island Adventure Mod APK?
-A: No, you cannot play online multiplayer with Beach Buggy Racing 2 Island Adventure Mod APK as it may cause compatibility issues or ban your account. You can only play local multiplayer with up to 8 players on one screen.
- Q: Can I update Beach Buggy Racing 2 Island Adventure Mod APK?
- A: Yes, you can update Beach Buggy Racing 2 Island Adventure Mod APK as long as you download the latest version from the same source we have provided in this article. However, you may need to uninstall the previous version before installing the new one. You may also lose your progress and data if you update the mod apk.
- Q: What are the system requirements for Beach Buggy Racing 2 Island Adventure?
-A: The system requirements for Beach Buggy Racing 2 Island Adventure are as follows:
-
-For Android devices: Android 4.4 or higher, 1 GB of RAM, and 200 MB of free storage space.
-For iOS devices: iOS 9.0 or later, and compatible with iPhone, iPad, and iPod touch.
-For Steam: Windows 7 or higher, Intel Core i5 or equivalent CPU, 1 GB of RAM, DirectX 11 compatible graphics card, and 500 MB of available disk space.
-For Xbox: Xbox One or Xbox Series X|S console, and Xbox Live Gold subscription for online multiplayer.
-For PlayStation: PlayStation 4 or PlayStation 5 console, and PlayStation Plus subscription for online multiplayer.
-For Switch: Nintendo Switch console, and Nintendo Switch Online membership for online multiplayer.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download House Flip Master Mod APK and Unlock All the Features.md b/spaces/fatiXbelha/sd/Download House Flip Master Mod APK and Unlock All the Features.md
deleted file mode 100644
index 96892da0a9375d4500838d4d69dc8a64533ddefc..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download House Flip Master Mod APK and Unlock All the Features.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-Download House Flip Master Mod Apk: A Fun and Addictive Game for Android
- If you love flipping houses or performing amazing tricks on a trampoline, then you will love House Flip Master. This is a game where you can do both and more. You can buy, renovate, and sell houses for profit, or you can jump on a trampoline and show off your skills. You can also customize your character and trampoline with different outfits and accessories. In this article, we will tell you how to download House Flip Master mod apk, which will give you unlimited money, coins, and access to all the features of the game.
- What is House Flip Master?
- House Flip Master is a game developed by MotionVolt Games Ltd. It is available for Android devices and has over 10 million downloads on Google Play. The game has two modes: house flipping and trampoline jumping.
-download house flip master mod apk Download ····· https://urllie.com/2uNDAR
- A game where you flip houses for profit
- In this mode, you can buy houses that need some work and fix them up. You can paint the walls, replace the floors, furnish the rooms, and add some decorations. You can also upgrade the exterior of the house, such as the roof, windows, doors, and garden. Once you are done, you can sell the house for a higher price and make a profit. You can use the money to buy more houses or to improve your trampoline.
- A game where you perform amazing stunts on a trampoline
- In this mode, you can jump on a trampoline and do various tricks. You can flip, spin, twist, bounce, and land in different ways. You can also use props like balls, hoops, rockets, and balloons to make your jumps more fun. You can earn coins by completing challenges and achievements. You can use the coins to buy new outfits and accessories for your character and trampoline.
- A game where you can customize your character and trampoline
- In this game, you can choose from different characters, such as a boy, a girl, a ninja, a pirate, a clown, and more. You can also change their hair, skin, eyes, clothes, shoes, hats, glasses, and masks. You can also customize your trampoline with different colors, patterns, stickers, and shapes. You can make your character and trampoline look unique and cool.
- Why download House Flip Master mod apk?
- House Flip Master is a fun and addictive game that will keep you entertained for hours. However, if you want to enjoy the game to the fullest, you might want to download House Flip Master mod apk. This is a modified version of the game that will give you some advantages over the original version.
- To get unlimited money and coins
- With House Flip Master mod apk, you will have unlimited money and coins in the game. This means that you can buy any house that you want without worrying about the budget. You can also renovate the house as much as you want without running out of money. You can also buy any outfit or accessory that you like without saving up coins.
-download house flip master mod apk unlimited money
-download house flip master mod apk latest version
-download house flip master mod apk for android
-download house flip master mod apk free
-download house flip master mod apk hack
-download house flip master mod apk an1
-download house flip master mod apk revdl
-download house flip master mod apk 2.3.0
-download house flip master mod apk offline
-download house flip master mod apk no ads
-download house flipper game mod apk
-download house flipper simulator mod apk
-download house flipper 3d mod apk
-download house flipper android mod apk
-download house flipper beta mod apk
-download house designer fix and flip mod apk
-download home design makeover mod apk
-download home design 3d mod apk
-download home design dreams mod apk
-download home design caribbean life mod apk
-how to install house flip master mod apk
-how to play house flip master mod apk
-how to update house flip master mod apk
-how to get house flip master mod apk
-how to hack house flip master mod apk
-best site to download house flip master mod apk
-best way to download house flip master mod apk
-safe site to download house flip master mod apk
-trusted site to download house flip master mod apk
-legit site to download house flip master mod apk
-where can i download house flip master mod apk
-where to find house flip master mod apk
-where to get house flip master mod apk
-where to buy house flip master mod apk
-where is house flip master mod apk
-what is house flip master mod apk
-what does house flip master mod apk do
-what are the features of house flip master mod apk
-what are the benefits of house flip master mod apk
-what are the drawbacks of house flip master mod apk
-why download house flip master mod apk
-why use house flip master mod apk
-why play house flip master mod apk
-why install house flip master mod apk
-why update house flip master mod apk
- To unlock all the locations and characters
- With House Flip Master mod apk, you will have access to all the locations and characters in the game. This means that you can flip houses in different places, such as New York, Paris, Tokyo, London, and more. You can also jump on trampolines in different environments, such as a beach, a forest, a city, and more. You can also play as different characters, such as a superhero, a zombie, a robot, and more.
- With House Flip Master mod apk, you will not see any ads or pop-ups in the game. This means that you can play the game without any distractions or annoyances. You can also play the game offline, without needing an internet connection.
- How to download House Flip Master mod apk?
- If you are interested in downloading House Flip Master mod apk, you will need to follow these simple steps:
- Step 1: Find a reliable source online
- There are many websites that offer House Flip Master mod apk, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you should be careful and do some research before downloading anything from the internet. You can read reviews, ratings, comments, and feedback from other users to find out which websites are reliable and which ones are not.
- Step 2: Download the apk file to your device
- Once you have found a reliable source online, you can download the apk file to your device. The apk file is the installation file for the game. It is usually a small file that does not take much space or time to download. You can use any browser or downloader app to download the apk file.
- Step 3: Enable unknown sources in your settings
- Before you can install the apk file, you will need to enable unknown sources in your settings. This is because the apk file is not from the official Google Play Store, and your device may block it by default. To enable unknown sources, you will need to go to your settings, security, and toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install the apk file without any problems.
- Step 4: Install the apk file and launch the game
- After you have enabled unknown sources, you can install the apk file by tapping on it and following the instructions on the screen. It will take a few seconds or minutes to install the game on your device. Once it is done, you can launch the game and enjoy it with all the mod features.
- Conclusion
- House Flip Master is a fun and addictive game that lets you flip houses and jump on trampolines. You can download House Flip Master mod apk to get unlimited money, coins, and access to all the locations and characters in the game. You can also play the game without ads and interruptions. To download House Flip Master mod apk, you will need to find a reliable source online, download the apk file to your device, enable unknown sources in your settings, and install the apk file and launch the game. We hope this article was helpful and informative for you. Happy flipping!
- FAQs
- Here are some frequently asked questions about House Flip Master mod apk:
-
-Is House Flip Master mod apk safe?
-Yes, House Flip Master mod apk is safe if you download it from a reliable source online. However, you should always be careful and scan any files that you download from the internet with an antivirus app before installing them on your device.
-Is House Flip Master mod apk free?
-Yes, House Flip Master mod apk is free to download and play. You do not need to pay anything to enjoy the game with all the mod features.
-Does House Flip Master mod apk require root?
-No, House Flip Master mod apk does not require root access on your device. You can install and play the game without rooting your device.
-Can I update House Flip Master mod apk?
-No, House Flip Master mod apk cannot be updated through the Google Play Store. If you want to update the game, you will need to download and install the latest version of House Flip Master mod apk from a reliable source online.
-Can I play House Flip Master mod apk with my friends?
-No, House Flip Master mod apk does not support multiplayer mode. You can only play the game solo on your device.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Epic War 5 The Ultimate Challenge for Strategy Fans - Download and Win.md b/spaces/fatiXbelha/sd/Epic War 5 The Ultimate Challenge for Strategy Fans - Download and Win.md
deleted file mode 100644
index 53aa7db5dfc4d7f8686038973132d0dbf9b330de..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Epic War 5 The Ultimate Challenge for Strategy Fans - Download and Win.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-Download Epic War 5: A Guide to the Ultimate Strategy Game
-If you are a fan of epic battles, fantasy worlds, and strategic gameplay, then you should definitely check out Epic War 5. This game is the fifth installment of the highly acclaimed Epic War series, and it offers more challenges, more features, and more fun than ever before. In this article, we will tell you everything you need to know about Epic War 5, including what it is, how to download it, and why you should play it. Let's get started!
- What is Epic War 5?
-Epic War 5 is a strategy game that combines elements of tower defense, RPG, and action. It was developed by Rudy Sudarto and released in 2011. The game has three heroes with unique stories, 30 units, 70 skills, and 30 accessories. You can battle your way to the Hellsgate that will end the world through 12 main stages, and 8 extra and 5 trial stages for additional challenges.
-download epic war 5 Download Zip » https://urllie.com/2uNxIt
- The story and the heroes
-The game has three different storylines, each with a different hero. You can choose to play as:
-
-Viola : A young princess who wants to prove herself as a warrior. She leads an army of humans and elves against the forces of darkness.
-Garcia : A mercenary who seeks revenge for his fallen comrades. He commands a band of outlaws and pirates against the corrupt empire.
-Havoc : A mysterious warrior who has a connection to the Hellsgate. He fights with a horde of demons and undead against anyone who stands in his way.
-
-Each hero has their own personality, voice, and skills. You can also customize their appearance and equipment with various accessories.
- The gameplay and the features
-The game is divided into stages, each with a different map, enemies, and objectives. You have to deploy your units on the battlefield, use your skills wisely, and destroy the enemy castle. You can also summon powerful creatures like dragons, titans, and angels to aid you in battle.
-The game has many features that make it more interesting and challenging, such as:
-
-Unit upgrades : You can upgrade your units with gold and crystals. Upgrades can improve their stats, abilities, and appearance.
-Hero skills : You can unlock and learn new skills for your hero as you progress through the game. Skills can be passive or active, and they can affect your units, your enemies, or the environment.
-Achievements : You can earn achievements by completing certain tasks or reaching certain milestones in the game. Achievements can give you rewards such as gold, crystals, or accessories.
-Leaderboards : You can compare your scores and rankings with other players around the world. You can also challenge your friends to beat your records.
-
- The tips and the tricks
-If you want to master Epic War 5, you need to know some tips and tricks that can help you win more battles. Here are some of them:
-
-Know your units : Each unit has its own strengths and weaknesses. You should know which units are effective against which enemies, and which units work well together.
-Know your skills : Each skill has its own cooldown, range, and effect. You should know when to use them and how to combine them for maximum impact.
-Know your enemies : Each enemy has its own behavior, attack pattern, and weakness. You should know how to counter them and avoid their attacks.
-Manage your resources : You have a limited amount of gold, crystals, and mana. You should know how to spend them wisely and efficiently.
-Use the terrain : The map has different features such as hills, bridges, and obstacles. You should know how to use them to your advantage or to your enemies' disadvantage.
-
- How to download Epic War 5?
-If you are interested in playing Epic War 5, you have two options: you can play it online on your browser, or you can download it on your PC or mobile device. Here are the steps for each option:
- For PC users
-If you want to play Epic War 5 on your PC, you can download it from Steam. Steam is a digital distribution platform that allows you to buy, download, and play games on your computer. To download Epic War 5 from Steam, you need to:
-download epic war 5 for pc
-download epic war 5 mod apk
-download epic war 5 hacked version
-download epic war 5 full game
-download epic war 5 offline
-download epic war 5 free online
-download epic war 5 latest update
-download epic war 5 cheats and tips
-download epic war 5 strategy guide
-download epic war 5 walkthrough
-download epic war 5 android
-download epic war 5 ios
-download epic war 5 windows
-download epic war 5 mac
-download epic war 5 linux
-download epic war 5 flash game
-download epic war 5 unblocked
-download epic war 5 from kongregate
-download epic war 5 from armor games
-download epic war 5 from newgrounds
-download epic war 5 from steam
-download epic war 5 from google play
-download epic war 5 from app store
-download epic war 5 from microsoft store
-download epic war 5 from amazon
-download epic war 5 review
-download epic war 5 rating
-download epic war 5 trailer
-download epic war 5 gameplay
-download epic war 5 soundtrack
-download epic war 5 characters
-download epic war 5 heroes
-download epic war 5 units
-download epic war 5 spells
-download epic war 5 items
-download epic war 5 maps
-download epic war 5 levels
-download epic war 5 stages
-download epic war 5 bosses
-download epic war 5 secrets
-download epic war 5 achievements
-download epic war 5 medals
-download epic war 5 challenges
-download epic war 5 quests
-download epic war 5 missions
-download epic war 5 events
-download epic war 5 tournaments
-download epic war 5 multiplayer mode
-download epic war 5 co-op mode
-
-Create a Steam account if you don't have one.
-Download and install the Steam client on your PC.
-Launch the Steam client and log in with your account.
-Search for Epic War 5 in the Steam store and click on it.
-Click on the "Add to Cart" button and proceed to checkout.
-After purchasing the game, click on the "Library" tab and find Epic War 5 in your list of games.
-Click on the "Install" button and wait for the game to download and install.
-Once the game is installed, click on the "Play" button and enjoy!
-
- For mobile users
-If you want to play Epic War 5 on your mobile device, you can download it from Google Play Store or Apple App Store. These are online stores that allow you to download and install apps and games on your Android or iOS device. To download Epic War 5 from these stores, you need to:
-
-Open the Google Play Store or Apple App Store app on your device.
-Search for Epic War 5 in the store and tap on it.
-Tap on the "Install" button and wait for the game to download and install.
-Once the game is installed, tap on the "Open" button and enjoy!
-
- Why should you play Epic War 5?
-Epic War 5 is not just another strategy game. It is a game that will challenge your skills, test your strategies, and immerse you in a fantasy world. Here are some reasons why you should play Epic War 5:
- The pros and the cons
-Like any game, Epic War 5 has its pros and cons. Here are some of them:
-
-| Pros | Cons |
-| --- | --- |
-| Stunning graphics and animations | High difficulty level |
-| Engaging storylines and characters | Long loading times |
-| Varied gameplay and features | Repetitive music and sound effects |
-| Online multiplayer mode | Limited customization options |
-| Free to play online | In-app purchases required for some items |
-
- The ratings and the reviews
-Epic War 5 has received positive ratings and reviews from both critics and players. Here are some of them:
-
-"Epic War 5 is a great strategy game that will keep you hooked for hours. The graphics are amazing, the gameplay is addictive, and the story is captivating. If you love epic battles, fantasy worlds, and strategic gameplay, then you should definitely check out Epic War 5." - Gamezebo
-"Epic War 5 is one of the best strategy games I have ever played. The game has everything: action, adventure, humor, drama, romance, and more. The game is challenging but rewarding, and it never gets boring. I highly recommend Epic War 5 to anyone who likes strategy games." - Player review
-"Epic War 5 is a masterpiece of strategy gaming. The game has a lot of depth and variety, and it offers a lot of replay value. The game is not for the faint of heart, though. It is very hard and requires a lot of skill and strategy. If you are looking for a challenge, then Epic War 5 is the game for you." - IGN
-
- The alternatives and the comparisons
-Epic War 5 is not the only strategy game out there. There are many other games that offer similar or different experiences. Here are some of them:
-
-Age of War : This is a classic strategy game that spans across different ages of human history. You have to defend your base and attack your enemy's base using units and weapons from each age. You can also evolve your units and upgrade your base as you progress through the game.
-Kingdom Rush : This is a popular tower defense game that features a medieval fantasy setting. You have to build and upgrade towers along the path to stop the waves of enemies from reaching your kingdom. You can also use heroes, spells, and reinforcements to help you in battle.
-Clash of Clans : This is a massively multiplayer online strategy game that allows you to build your own village, train your troops, and join clans with other players. You can also raid other players' villages, participate in clan wars, and complete quests and events.
-
- How does Epic War 5 compare to these games? Well, it depends on your preference and taste. Some may prefer the simplicity and nostalgia of Age of War, some may enjoy the variety and challenge of Kingdom Rush, and some may like the social and competitive aspect of Clash of Clans. Epic War 5 has its own unique features and advantages that make it stand out from the rest. Ultimately, the best way to find out which game suits you best is to try them all yourself.
- Conclusion
-Epic War 5 is a strategy game that will keep you entertained and engaged for hours. It has a captivating story, stunning graphics, varied gameplay, and online multiplayer mode. It is also free to play online, so you don't have to worry about spending money on it. However, it is also very hard and requires a lot of skill and strategy. It may not be suitable for casual or beginner players. If you are looking for a challenge, then Epic War 5 is the game for you.
- FAQs
-Here are some frequently asked questions about Epic War 5:
-
-How do I unlock new units?
-You can unlock new units by completing stages or achievements. You can also buy some units with crystals or real money.
-How do I get more gold and crystals?
-You can get more gold and crystals by winning battles, completing achievements, or watching ads. You can also buy them with real money.
-How do I save my progress?
-You can save your progress by creating an account on Kongregate or Armor Games. You can also use the cloud save feature on Steam or mobile devices.
-How do I play with my friends?
-You can play with your friends by using the multiplayer mode on Steam or mobile devices. You can also invite your friends to join your clan on Clash of Clans.
-How do I contact the developer?
-You can contact the developer by visiting his website or his Facebook page. You can also leave a comment or a review on the game's page on Kongregate, Steam, Google Play Store, or Apple App Store.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git "a/spaces/fb700/chat3/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/fb700/chat3/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index f8ecec9d5864e06043e8b4ecdb17337522a5d84d..0000000000000000000000000000000000000000
--- "a/spaces/fb700/chat3/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,139 +0,0 @@
-import threading
-from request_llm.bridge_all import predict_no_ui_long_connection
-from toolbox import update_ui
-from toolbox import CatchException, write_results_to_file, report_execption
-from .crazy_utils import breakdown_txt_to_satisfy_token_limit
-
-def extract_code_block_carefully(txt):
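- # if the text contains a fenced code block, return only its contents; otherwise return the text unchanged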
- splitted = txt.split('```')
- n_code_block_seg = len(splitted) - 1
- if n_code_block_seg <= 1: return txt
- # in every other case, strip the opening ``` and the final closing ``` and return what is in between
- txt_out = '```'.join(splitted[1:-1])
- return txt_out
-
-
-
-def break_txt_into_half_at_some_linebreak(txt):
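- # split the text into two roughly equal halves at a line break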
- lines = txt.split('\n')
- n_lines = len(lines)
- pre = lines[:(n_lines//2)]
- post = lines[(n_lines//2):]
- return "\n".join(pre), "\n".join(post)
-
-
-@CatchException
-def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- # Step 1: clear the history to avoid input overflow
- history = []
-
- # Step 2: try to import dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Step 3: collect the files to translate
- import time, glob, os, shutil, re
- os.makedirs('gpt_log/generated_english_version', exist_ok=True)
- os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- # file_manifest = ['./toolbox.py']
- i_say_show_user_buffer = []
-
- # Step 4: display something right away so the UI does not feel stuck
- for index, fp in enumerate(file_manifest):
- # if 'test_project' in fp: continue
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
- i_say_show_user_buffer.append(i_say_show_user)
- chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
- # Step 5: truncate and process the text under the token limit
- MAX_TOKEN = 3000
- import tiktoken
- from toolbox import get_conf
- enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
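- # count tokens with the gpt-3.5-turbo encoding so that each chunk stays under MAX_TOKEN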
- def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
-
-
- # Step 6: the per-file worker function
- mutable_return = [None for _ in file_manifest]
- observe_window = [[""] for _ in file_manifest]
- def thread_worker(fp,index):
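- # stagger the later requests to stay within the free-tier limit of 20 requests per minute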
- if index > 10:
- time.sleep(60)
- print('Openai 限制免费用户每分钟20次请求,降低请求频率中。')
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
- try:
- gpt_say = ""
- # split the code file into chunks that fit the token limit
- file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN)
- for file_content_partial in file_content_breakdown:
- i_say = i_say_template(fp, file_content_partial)
- # # ** gpt request **
- gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
- gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
- gpt_say += gpt_say_partial
- mutable_return[index] = gpt_say
- except ConnectionAbortedError as token_exceed_err:
- print('至少一个线程任务Token溢出而失败', token_exceed_err) # at least one thread failed due to token overflow
- except Exception as e:
- print('至少一个线程任务意外失败', e) # at least one thread failed unexpectedly
-
- # Step 7: start all worker threads at once
- handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
- for h in handles:
- h.daemon = True
- h.start()
- chatbot.append(('开始了吗?', f'多线程操作已经开始'))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Step 8: poll the threads in a loop until they have all finished
- cnt = 0
- while True:
- cnt += 1
- time.sleep(0.2)
- th_alive = [h.is_alive() for h in handles]
- if not any(th_alive): break
- # nicer visual feedback in the UI
- observe_win = []
- for thread_index, alive in enumerate(th_alive):
- observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace(' ','.....').replace('$','.')+"... ]")
- stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
- stat_str = ''.join(stat)
- chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Step 9: write the results to files
- for index, h in enumerate(handles):
- h.join() # join() is not strictly needed here; every thread has already finished
- fp = file_manifest[index]
- gpt_say = mutable_return[index]
- i_say_show_user = i_say_show_user_buffer[index]
-
- where_to_relocate = f'gpt_log/generated_english_version/{fp}'
- if gpt_say is not None:
- with open(where_to_relocate, 'w+', encoding='utf-8') as f:
- f.write(gpt_say)
- else: # the request failed, fall back to copying the original file
- shutil.copyfile(file_manifest[index], where_to_relocate)
- chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- time.sleep(1)
-
- # Step 10: write a report file as a backup
- res = write_results_to_file(history)
- chatbot.append(("生成一份任务执行报告", res))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/load_mats.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/load_mats.py
deleted file mode 100644
index f9a6fcc71de1d7dad8b0f81c67dc1c213764ff0b..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/util/load_mats.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""This script is to load 3D face model for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-from PIL import Image
-from scipy.io import loadmat, savemat
-from array import array
-import os.path as osp
-
-# load expression basis
-def LoadExpBasis(bfm_folder='BFM'):
- n_vertex = 53215
- Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb')
- exp_dim = array('i')
- exp_dim.fromfile(Expbin, 1)
- expMU = array('f')
- expPC = array('f')
- expMU.fromfile(Expbin, 3*n_vertex)
- expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex)
- Expbin.close()
-
- expPC = np.array(expPC)
- expPC = np.reshape(expPC, [exp_dim[0], -1])
- expPC = np.transpose(expPC)
-
- expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt'))
-
- return expPC, expEV
-
-
-# transfer original BFM09 to our face model
-def transferBFM09(bfm_folder='BFM'):
- print('Transfer BFM09 to BFM_model_front......')
- original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat'))
- shapePC = original_BFM['shapePC'] # shape basis
- shapeEV = original_BFM['shapeEV'] # corresponding eigen value
- shapeMU = original_BFM['shapeMU'] # mean face
- texPC = original_BFM['texPC'] # texture basis
- texEV = original_BFM['texEV'] # eigen value
- texMU = original_BFM['texMU'] # mean texture
-
- expPC, expEV = LoadExpBasis(bfm_folder)
-
- # transfer BFM09 to our face model
-
- idBase = shapePC*np.reshape(shapeEV, [-1, 199])
- idBase = idBase/1e5 # unify the scale to decimeter
- idBase = idBase[:, :80] # use only first 80 basis
-
- exBase = expPC*np.reshape(expEV, [-1, 79])
- exBase = exBase/1e5 # unify the scale to decimeter
- exBase = exBase[:, :64] # use only first 64 basis
-
- texBase = texPC*np.reshape(texEV, [-1, 199])
- texBase = texBase[:, :80] # use only first 80 basis
-
- # our face model is cropped along face landmarks and contains only 35709 vertex.
- # original BFM09 contains 53490 vertex, and expression basis provided by Guo et al. contains 53215 vertex.
- # thus we select corresponding vertex to get our face model.
-
- index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat'))
- index_exp = index_exp['idx'].astype(np.int32) - 1 # starts from 0 (to 53215)
-
- index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat'))
- index_shape = index_shape['trimIndex'].astype(
- np.int32) - 1 # starts from 0 (to 53490)
- index_shape = index_shape[index_exp]
-
- idBase = np.reshape(idBase, [-1, 3, 80])
- idBase = idBase[index_shape, :, :]
- idBase = np.reshape(idBase, [-1, 80])
-
- texBase = np.reshape(texBase, [-1, 3, 80])
- texBase = texBase[index_shape, :, :]
- texBase = np.reshape(texBase, [-1, 80])
-
- exBase = np.reshape(exBase, [-1, 3, 64])
- exBase = exBase[index_exp, :, :]
- exBase = np.reshape(exBase, [-1, 64])
-
- meanshape = np.reshape(shapeMU, [-1, 3])/1e5
- meanshape = meanshape[index_shape, :]
- meanshape = np.reshape(meanshape, [1, -1])
-
- meantex = np.reshape(texMU, [-1, 3])
- meantex = meantex[index_shape, :]
- meantex = np.reshape(meantex, [1, -1])
-
- # other info contains triangles, region used for computing photometric loss,
- # region used for skin texture regularization, and 68 landmarks index etc.
- other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat'))
- frontmask2_idx = other_info['frontmask2_idx']
- skinmask = other_info['skinmask']
- keypoints = other_info['keypoints']
- point_buf = other_info['point_buf']
- tri = other_info['tri']
- tri_mask2 = other_info['tri_mask2']
-
- # save our face model
- savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase,
- 'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask})
-
-
-# load landmarks for standard face, which is used for image preprocessing
-def load_lm3d(bfm_folder):
-
- Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat'))
- Lm3D = Lm3D['lm']
-
- # calculate 5 facial landmarks using 68 landmarks
- lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1
- Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean(
- Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0)
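- # reorder to: left eye, right eye, nose tip, left mouth corner, right mouth corner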
- Lm3D = Lm3D[[1, 2, 0, 3, 4], :]
-
- return Lm3D
-
-
-if __name__ == '__main__':
- transferBFM09()
\ No newline at end of file
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/ffhq_dataset/__init__.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/ffhq_dataset/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/fengjianliang/bingo/Dockerfile b/spaces/fengjianliang/bingo/Dockerfile
deleted file mode 100644
index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000
--- a/spaces/fengjianliang/bingo/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM weaigc/bingo:latest
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-CMD npm start
diff --git a/spaces/fffffu/bing/src/components/ui/select.tsx b/spaces/fffffu/bing/src/components/ui/select.tsx
deleted file mode 100644
index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000
--- a/spaces/fffffu/bing/src/components/ui/select.tsx
+++ /dev/null
@@ -1,123 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as SelectPrimitive from '@radix-ui/react-select'
-
-import { cn } from '@/lib/utils'
-import {
- IconArrowDown,
- IconCheck,
- IconChevronUpDown
-} from '@/components/ui/icons'
-
-const Select = SelectPrimitive.Root
-
-const SelectGroup = SelectPrimitive.Group
-
-const SelectValue = SelectPrimitive.Value
-
-const SelectTrigger = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Trigger>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>
->(({ className, children, ...props }, ref) => (
-
- {children}
-
-
-
-
-))
-SelectTrigger.displayName = SelectPrimitive.Trigger.displayName
-
-const SelectContent = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Content>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>
->(({ className, children, position = 'popper', ...props }, ref) => (
-
-
-
- {children}
-
-
-
-))
-SelectContent.displayName = SelectPrimitive.Content.displayName
-
-const SelectLabel = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Label>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>
->(({ className, ...props }, ref) => (
-
-))
-SelectLabel.displayName = SelectPrimitive.Label.displayName
-
-const SelectItem = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Item>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
->(({ className, children, ...props }, ref) => (
-
-
-
-
-
-
- {children}
-
-))
-SelectItem.displayName = SelectPrimitive.Item.displayName
-
-const SelectSeparator = React.forwardRef<
- React.ElementRef<typeof SelectPrimitive.Separator>,
- React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>
->(({ className, ...props }, ref) => (
-
-))
-SelectSeparator.displayName = SelectPrimitive.Separator.displayName
-
-export {
- Select,
- SelectGroup,
- SelectValue,
- SelectTrigger,
- SelectContent,
- SelectLabel,
- SelectItem,
- SelectSeparator
-}
diff --git a/spaces/fffiloni/Video-Matting-Anything/app.py b/spaces/fffiloni/Video-Matting-Anything/app.py
deleted file mode 100644
index e07fe0309bf5b4b16ceaeeb6e80d66c475dcb26c..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Video-Matting-Anything/app.py
+++ /dev/null
@@ -1,428 +0,0 @@
-# ------------------------------------------------------------------------
-# Modified from Grounded-SAM (https://github.com/IDEA-Research/Grounded-Segment-Anything)
-# ------------------------------------------------------------------------
-import os
-import sys
-import random
-import warnings
-
-os.system("export BUILD_WITH_CUDA=True")
-os.system("python -m pip install -e segment-anything")
-os.system("python -m pip install -e GroundingDINO")
-os.system("pip install --upgrade diffusers[torch]")
-#os.system("pip install opencv-python pycocotools matplotlib")
-sys.path.insert(0, './GroundingDINO')
-sys.path.insert(0, './segment-anything')
-warnings.filterwarnings("ignore")
-
-import cv2
-from scipy import ndimage
-
-import gradio as gr
-import argparse
-
-import numpy as np
-from PIL import Image
-from moviepy.editor import *
-import torch
-from torch.nn import functional as F
-import torchvision
-import networks
-import utils
-
-# Grounding DINO
-from groundingdino.util.inference import Model
-
-# SAM
-from segment_anything.utils.transforms import ResizeLongestSide
-
-# SD
-from diffusers import StableDiffusionPipeline
-
-transform = ResizeLongestSide(1024)
-# Green Screen
-PALETTE_back = (51, 255, 146)
-
-GROUNDING_DINO_CONFIG_PATH = "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py"
-GROUNDING_DINO_CHECKPOINT_PATH = "checkpoints/groundingdino_swint_ogc.pth"
-mam_checkpoint="checkpoints/mam_sam_vitb.pth"
-output_dir="outputs"
-device = 'cuda'
-background_list = os.listdir('assets/backgrounds')
-
-#groundingdino_model = None
-#mam_predictor = None
-#generator = None
-
-# initialize MAM
-mam_model = networks.get_generator_m2m(seg='sam', m2m='sam_decoder_deep')
-mam_model.to(device)
-checkpoint = torch.load(mam_checkpoint, map_location=device)
-mam_model.load_state_dict(utils.remove_prefix_state_dict(checkpoint['state_dict']), strict=True)
-mam_model = mam_model.eval()
-
-# initialize GroundingDINO
-grounding_dino_model = Model(model_config_path=GROUNDING_DINO_CONFIG_PATH, model_checkpoint_path=GROUNDING_DINO_CHECKPOINT_PATH, device=device)
-
-# initialize StableDiffusionPipeline
-generator = StableDiffusionPipeline.from_pretrained("checkpoints/stable-diffusion-v1-5", torch_dtype=torch.float16)
-generator.to(device)
-
-def get_frames(video_in):
- frames = []
- #resize the video
- clip = VideoFileClip(video_in)
-
- #check fps
- if clip.fps > 30:
- print("video rate is over 30, resetting to 30")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=30)
- else:
- print("video rate is OK")
- clip_resized = clip.resize(height=512)
- clip_resized.write_videofile("video_resized.mp4", fps=clip.fps)
-
- print("video resized to 512 height")
-
- # Opens the Video file with CV2
- cap= cv2.VideoCapture("video_resized.mp4")
-
- fps = cap.get(cv2.CAP_PROP_FPS)
- print("video fps: " + str(fps))
- i=0
- while(cap.isOpened()):
- ret, frame = cap.read()
- if ret == False:
- break
- cv2.imwrite('kang'+str(i)+'.jpg',frame)
- frames.append('kang'+str(i)+'.jpg')
- i+=1
-
- cap.release()
- cv2.destroyAllWindows()
- print("broke the video into frames")
-
- return frames, fps
-
-
-def create_video(frames, fps, type):
- print("building video result")
- clip = ImageSequenceClip(frames, fps=fps)
- clip.write_videofile(f"video_{type}_result.mp4", fps=fps)
-
- return f"video_{type}_result.mp4"
-
-
-def run_grounded_sam(input_image, text_prompt, task_type, background_prompt, bg_already):
- background_type = "generated_by_text"
- box_threshold = 0.25
- text_threshold = 0.25
- iou_threshold = 0.5
- scribble_mode = "split"
- guidance_mode = "alpha"
-
- #global groundingdino_model, sam_predictor, generator
-
- # make dir
- os.makedirs(output_dir, exist_ok=True)
-
- #if mam_predictor is None:
- # initialize MAM
- # build model
- # mam_model = networks.get_generator_m2m(seg='sam', m2m='sam_decoder_deep')
- # mam_model.to(device)
-
- # load checkpoint
- # checkpoint = torch.load(mam_checkpoint, map_location=device)
- # mam_model.load_state_dict(utils.remove_prefix_state_dict(checkpoint['state_dict']), strict=True)
-
- # inference
- # mam_model = mam_model.eval()
-
- #if groundingdino_model is None:
- # grounding_dino_model = Model(model_config_path=GROUNDING_DINO_CONFIG_PATH, model_checkpoint_path=GROUNDING_DINO_CHECKPOINT_PATH, device=device)
-
- #if generator is None:
- # generator = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
- # generator.to(device)
-
- # load image
- #image_ori = input_image["image"]
- image_ori = input_image
- #scribble = input_image["mask"]
- original_size = image_ori.shape[:2]
-
- if task_type == 'text':
- if text_prompt is None:
- print('Please input non-empty text prompt')
- with torch.no_grad():
- detections, phrases = grounding_dino_model.predict_with_caption(
- image=cv2.cvtColor(image_ori, cv2.COLOR_RGB2BGR),
- caption=text_prompt,
- box_threshold=box_threshold,
- text_threshold=text_threshold
- )
-
- if len(detections.xyxy) > 1:
- nms_idx = torchvision.ops.nms(
- torch.from_numpy(detections.xyxy),
- torch.from_numpy(detections.confidence),
- iou_threshold,
- ).numpy().tolist()
-
- detections.xyxy = detections.xyxy[nms_idx]
- detections.confidence = detections.confidence[nms_idx]
-
- bbox = detections.xyxy[np.argmax(detections.confidence)]
- bbox = transform.apply_boxes(bbox, original_size)
- bbox = torch.as_tensor(bbox, dtype=torch.float).to(device)
-
- image = transform.apply_image(image_ori)
- image = torch.as_tensor(image).to(device)
- image = image.permute(2, 0, 1).contiguous()
-
- pixel_mean = torch.tensor([123.675, 116.28, 103.53]).view(3,1,1).to(device)
- pixel_std = torch.tensor([58.395, 57.12, 57.375]).view(3,1,1).to(device)
-
- image = (image - pixel_mean) / pixel_std
-
- h, w = image.shape[-2:]
- pad_size = image.shape[-2:]
- padh = 1024 - h
- padw = 1024 - w
- image = F.pad(image, (0, padw, 0, padh))
-
- if task_type == 'scribble_point':
- scribble = scribble.transpose(2, 1, 0)[0]
- labeled_array, num_features = ndimage.label(scribble >= 255)
- centers = ndimage.center_of_mass(scribble, labeled_array, range(1, num_features+1))
- centers = np.array(centers)
- ### (x,y)
- centers = transform.apply_coords(centers, original_size)
- point_coords = torch.from_numpy(centers).to(device)
- point_coords = point_coords.unsqueeze(0).to(device)
- point_labels = torch.from_numpy(np.array([1] * len(centers))).unsqueeze(0).to(device)
- if scribble_mode == 'split':
- point_coords = point_coords.permute(1, 0, 2)
- point_labels = point_labels.permute(1, 0)
-
- sample = {'image': image.unsqueeze(0), 'point': point_coords, 'label': point_labels, 'ori_shape': original_size, 'pad_shape': pad_size}
- elif task_type == 'scribble_box':
- scribble = scribble.transpose(2, 1, 0)[0]
- labeled_array, num_features = ndimage.label(scribble >= 255)
- centers = ndimage.center_of_mass(scribble, labeled_array, range(1, num_features+1))
- centers = np.array(centers)
- ### (x1, y1, x2, y2)
- x_min = centers[:, 0].min()
- x_max = centers[:, 0].max()
- y_min = centers[:, 1].min()
- y_max = centers[:, 1].max()
- bbox = np.array([x_min, y_min, x_max, y_max])
- bbox = transform.apply_boxes(bbox, original_size)
- bbox = torch.as_tensor(bbox, dtype=torch.float).to(device)
-
- sample = {'image': image.unsqueeze(0), 'bbox': bbox.unsqueeze(0), 'ori_shape': original_size, 'pad_shape': pad_size}
- elif task_type == 'text':
- sample = {'image': image.unsqueeze(0), 'bbox': bbox.unsqueeze(0), 'ori_shape': original_size, 'pad_shape': pad_size}
- else:
- print("task_type:{} error!".format(task_type))
-
- with torch.no_grad():
- feas, pred, post_mask = mam_model.forward_inference(sample)
-
- alpha_pred_os1, alpha_pred_os4, alpha_pred_os8 = pred['alpha_os1'], pred['alpha_os4'], pred['alpha_os8']
- alpha_pred_os8 = alpha_pred_os8[..., : sample['pad_shape'][0], : sample['pad_shape'][1]]
- alpha_pred_os4 = alpha_pred_os4[..., : sample['pad_shape'][0], : sample['pad_shape'][1]]
- alpha_pred_os1 = alpha_pred_os1[..., : sample['pad_shape'][0], : sample['pad_shape'][1]]
-
- alpha_pred_os8 = F.interpolate(alpha_pred_os8, sample['ori_shape'], mode="bilinear", align_corners=False)
- alpha_pred_os4 = F.interpolate(alpha_pred_os4, sample['ori_shape'], mode="bilinear", align_corners=False)
- alpha_pred_os1 = F.interpolate(alpha_pred_os1, sample['ori_shape'], mode="bilinear", align_corners=False)
-
- if guidance_mode == 'mask':
- weight_os8 = utils.get_unknown_tensor_from_mask_oneside(post_mask, rand_width=10, train_mode=False)
- post_mask[weight_os8>0] = alpha_pred_os8[weight_os8>0]
- alpha_pred = post_mask.clone().detach()
- else:
- weight_os8 = utils.get_unknown_box_from_mask(post_mask)
- alpha_pred_os8[weight_os8>0] = post_mask[weight_os8>0]
- alpha_pred = alpha_pred_os8.clone().detach()
-
-
- weight_os4 = utils.get_unknown_tensor_from_pred_oneside(alpha_pred, rand_width=20, train_mode=False)
- alpha_pred[weight_os4>0] = alpha_pred_os4[weight_os4>0]
-
- weight_os1 = utils.get_unknown_tensor_from_pred_oneside(alpha_pred, rand_width=10, train_mode=False)
- alpha_pred[weight_os1>0] = alpha_pred_os1[weight_os1>0]
-
- alpha_pred = alpha_pred[0][0].cpu().numpy()
-
- #### draw
- ### alpha matte
- alpha_rgb = cv2.cvtColor(np.uint8(alpha_pred*255), cv2.COLOR_GRAY2RGB)
- ### com img with background
- global background_img
- if background_type == 'real_world_sample':
- background_img_file = os.path.join('assets/backgrounds', random.choice(background_list))
- background_img = cv2.imread(background_img_file)
- background_img = cv2.cvtColor(background_img, cv2.COLOR_BGR2RGB)
- background_img = cv2.resize(background_img, (image_ori.shape[1], image_ori.shape[0]))
- com_img = alpha_pred[..., None] * image_ori + (1 - alpha_pred[..., None]) * np.uint8(background_img)
- com_img = np.uint8(com_img)
- else:
- if background_prompt is None:
- print('Please input non-empty background prompt')
- else:
- if bg_already is False:
- background_img = generator(background_prompt).images[0]
-
- background_img = np.array(background_img)
- background_img = cv2.resize(background_img, (image_ori.shape[1], image_ori.shape[0]))
- com_img = alpha_pred[..., None] * image_ori + (1 - alpha_pred[..., None]) * np.uint8(background_img)
- com_img = np.uint8(com_img)
- ### com img with green screen
- green_img = alpha_pred[..., None] * image_ori + (1 - alpha_pred[..., None]) * np.array([PALETTE_back], dtype='uint8')
- green_img = np.uint8(green_img)
- #return [(com_img, 'composite with background'), (green_img, 'green screen'), (alpha_rgb, 'alpha matte')]
- return com_img, green_img, alpha_rgb
-
-def infer(video_in, trim_value, prompt, background_prompt):
- print(prompt)
- break_vid = get_frames(video_in)
-
- frames_list= break_vid[0]
- fps = break_vid[1]
- n_frame = int(trim_value*fps)
-
- if n_frame >= len(frames_list):
- print("video is shorter than the cut value")
- n_frame = len(frames_list)
-
- with_bg_result_frames = []
- with_green_result_frames = []
- with_matte_result_frames = []
-
- print("set stop frames to: " + str(n_frame))
- bg_already = False
- for i in frames_list[0:int(n_frame)]:
- to_numpy_i = Image.open(i).convert("RGB")
- #need to convert to numpy
- # Convert the image to a NumPy array
- image_array = np.array(to_numpy_i)
-
- results = run_grounded_sam(image_array, prompt, "text", background_prompt, bg_already)
- bg_already = True
- bg_img = Image.fromarray(results[0])
- green_img = Image.fromarray(results[1])
- matte_img = Image.fromarray(results[2])
-
-
- # exporting the images
- bg_img.save(f"bg_result_img-{i}.jpg")
- with_bg_result_frames.append(f"bg_result_img-{i}.jpg")
- green_img.save(f"green_result_img-{i}.jpg")
- with_green_result_frames.append(f"green_result_img-{i}.jpg")
- matte_img.save(f"matte_result_img-{i}.jpg")
- with_matte_result_frames.append(f"matte_result_img-{i}.jpg")
- print("frame " + i + "/" + str(n_frame) + ": done;")
-
- vid_bg = create_video(with_bg_result_frames, fps, "bg")
- vid_green = create_video(with_green_result_frames, fps, "greenscreen")
- vid_matte = create_video(with_matte_result_frames, fps, "matte")
-
- bg_already = False
- print("finished !")
-
- return vid_bg, vid_green, vid_matte
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser("MAM demo", add_help=True)
- parser.add_argument("--debug", action="store_true", help="using debug mode")
- parser.add_argument("--share", action="store_true", help="share the app")
- parser.add_argument('--port', type=int, default=7589, help='port to run the server')
- parser.add_argument('--no-gradio-queue', action="store_true", help='path to the SAM checkpoint')
- args = parser.parse_args()
-
- print(args)
-
- block = gr.Blocks()
- if not args.no_gradio_queue:
- block = block.queue()
-
- with block:
- gr.Markdown(
- """
- # Matting Anything in Video Demo
- Welcome to the Matting Anything in Video demo by @fffiloni. Upload your video to get started.
- You may open usage details below to understand how to use this demo.
- ## Usage
-
- You may upload a video to start; for the moment, we only support one prompt type to get the alpha matte of the target:
- **text**: Send text prompt to identify the target instance in the `Text prompt` box.
-
- We currently support one background type for image composition with the alpha matte output:
- **generated_by_text**: Send background text prompt to create a background image with stable diffusion model in the `Background prompt` box.
-
-
-
-
- Duplicate this Space for longer sequences, more control and no queue.
- """)
-
- with gr.Row():
- with gr.Column():
- video_in = gr.Video()
- trim_in = gr.Slider(label="Cut video at (s)", minimum=1, maximum=10, step=1, value=1)
- #task_type = gr.Dropdown(["scribble_point", "scribble_box", "text"], value="text", label="Prompt type")
- #task_type = "text"
- text_prompt = gr.Textbox(label="Text prompt", placeholder="the girl in the middle", info="Describe the subject visible in your video that you want to matte")
- #background_type = gr.Dropdown(["generated_by_text", "real_world_sample"], value="generated_by_text", label="Background type")
- background_prompt = gr.Textbox(label="Background prompt", placeholder="downtown area in New York")
-
- run_button = gr.Button("Run")
- #with gr.Accordion("Advanced options", open=False):
- # box_threshold = gr.Slider(
- # label="Box Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.05
- # )
- # text_threshold = gr.Slider(
- # label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.05
- # )
- # iou_threshold = gr.Slider(
- # label="IOU Threshold", minimum=0.0, maximum=1.0, value=0.5, step=0.05
- # )
- # scribble_mode = gr.Dropdown(
- # ["merge", "split"], value="split", label="scribble_mode"
- # )
- # guidance_mode = gr.Dropdown(
- # ["mask", "alpha"], value="alpha", label="guidance_mode", info="mask guidance is for complex scenes with multiple instances, alpha guidance is for simple scene with single instance"
- # )
-
- with gr.Column():
- #gallery = gr.Gallery(
- # label="Generated images", show_label=True, elem_id="gallery"
- #).style(preview=True, grid=3, object_fit="scale-down")
- vid_bg_out = gr.Video(label="Video with background")
- with gr.Row():
- vid_green_out = gr.Video(label="Video green screen")
- vid_matte_out = gr.Video(label="Video matte")
-
- gr.Examples(
- fn=infer,
- examples=[
- [
- "./examples/example_men_bottle.mp4",
- 10,
- "the man holding a bottle",
- "the Sahara desert"
- ]
- ],
- inputs=[video_in, trim_in, text_prompt, background_prompt],
- outputs=[vid_bg_out, vid_green_out, vid_matte_out]
- )
- run_button.click(fn=infer, inputs=[
- video_in, trim_in, text_prompt, background_prompt], outputs=[vid_bg_out, vid_green_out, vid_matte_out], api_name="go_matte")
-
- block.queue(max_size=24).launch(debug=args.debug, share=args.share, show_error=True)
- #block.queue(concurrency_count=100)
- #block.launch(server_name='0.0.0.0', server_port=args.port, debug=args.debug, share=args.share)
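
The compositing step in `run_grounded_sam` above blends the matted subject over the chosen background with a per-pixel alpha weight. As a minimal, self-contained sketch of that blend (array shapes and values here are illustrative, not taken from the app):

```python
import numpy as np

# Toy foreground, background and alpha matte (H=2, W=2, RGB).
fg = np.full((2, 2, 3), 200, dtype=np.uint8)       # matted subject
bg = np.full((2, 2, 3), 30, dtype=np.uint8)        # generated background
alpha = np.array([[1.0, 0.5],
                  [0.25, 0.0]], dtype=np.float32)  # 1 = subject, 0 = background

# Same blend the demo uses: alpha * foreground + (1 - alpha) * background.
com = alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg
com = com.astype(np.uint8)
print(com[:, :, 0])  # red channel shows the blend: 200, 115, 72, 30
```

The same formula is applied once more with a solid color in place of the background image to produce the green-screen variant.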
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/path.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/path.d.ts
deleted file mode 100644
index 1d33f79269f7d4f55dca5513dd8ba3e33dc7732b..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/path.d.ts
+++ /dev/null
@@ -1,191 +0,0 @@
-declare module 'path/posix' {
- import path = require('path');
- export = path;
-}
-declare module 'path/win32' {
- import path = require('path');
- export = path;
-}
-/**
- * The `path` module provides utilities for working with file and directory paths.
- * It can be accessed using:
- *
- * ```js
- * const path = require('path');
- * ```
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/path.js)
- */
-declare module 'path' {
- namespace path {
- /**
- * A parsed path object generated by path.parse() or consumed by path.format().
- */
- interface ParsedPath {
- /**
- * The root of the path such as '/' or 'c:\'
- */
- root: string;
- /**
- * The full directory path such as '/home/user/dir' or 'c:\path\dir'
- */
- dir: string;
- /**
- * The file name including extension (if any) such as 'index.html'
- */
- base: string;
- /**
- * The file extension (if any) such as '.html'
- */
- ext: string;
- /**
- * The file name without extension (if any) such as 'index'
- */
- name: string;
- }
- interface FormatInputPathObject {
- /**
- * The root of the path such as '/' or 'c:\'
- */
- root?: string | undefined;
- /**
- * The full directory path such as '/home/user/dir' or 'c:\path\dir'
- */
- dir?: string | undefined;
- /**
- * The file name including extension (if any) such as 'index.html'
- */
- base?: string | undefined;
- /**
- * The file extension (if any) such as '.html'
- */
- ext?: string | undefined;
- /**
- * The file name without extension (if any) such as 'index'
- */
- name?: string | undefined;
- }
- interface PlatformPath {
- /**
- * Normalize a string path, reducing '..' and '.' parts.
- * When multiple slashes are found, they're replaced by a single one; when the path contains a trailing slash, it is preserved. On Windows backslashes are used.
- *
- * @param path string path to normalize.
- * @throws {TypeError} if `path` is not a string.
- */
- normalize(path: string): string;
- /**
- * Join all arguments together and normalize the resulting path.
- *
- * @param paths paths to join.
- * @throws {TypeError} if any of the path segments is not a string.
- */
- join(...paths: string[]): string;
- /**
- * The right-most parameter is considered {to}. Other parameters are considered an array of {from}.
- *
- * Starting from leftmost {from} parameter, resolves {to} to an absolute path.
- *
- * If {to} isn't already absolute, {from} arguments are prepended in right to left order,
- * until an absolute path is found. If after using all {from} paths still no absolute path is found,
- * the current working directory is used as well. The resulting path is normalized,
- * and trailing slashes are removed unless the path gets resolved to the root directory.
- *
- * @param paths A sequence of paths or path segments.
- * @throws {TypeError} if any of the arguments is not a string.
- */
- resolve(...paths: string[]): string;
- /**
- * Determines whether {path} is an absolute path. An absolute path will always resolve to the same location, regardless of the working directory.
- *
- * If the given {path} is a zero-length string, `false` will be returned.
- *
- * @param path path to test.
- * @throws {TypeError} if `path` is not a string.
- */
- isAbsolute(path: string): boolean;
- /**
- * Solve the relative path from {from} to {to} based on the current working directory.
- * At times we have two absolute paths, and we need to derive the relative path from one to the other. This is actually the reverse transform of path.resolve.
- *
- * @throws {TypeError} if either `from` or `to` is not a string.
- */
- relative(from: string, to: string): string;
- /**
- * Return the directory name of a path. Similar to the Unix dirname command.
- *
- * @param path the path to evaluate.
- * @throws {TypeError} if `path` is not a string.
- */
- dirname(path: string): string;
- /**
- * Return the last portion of a path. Similar to the Unix basename command.
- * Often used to extract the file name from a fully qualified path.
- *
- * @param path the path to evaluate.
- * @param suffix optionally, an extension to remove from the result.
- * @throws {TypeError} if `path` is not a string or if `ext` is given and is not a string.
- */
- basename(path: string, suffix?: string): string;
- /**
- * Return the extension of the path, from the last '.' to end of string in the last portion of the path.
- * If there is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string.
- *
- * @param path the path to evaluate.
- * @throws {TypeError} if `path` is not a string.
- */
- extname(path: string): string;
- /**
- * The platform-specific file separator. '\\' or '/'.
- */
- readonly sep: '\\' | '/';
- /**
- * The platform-specific file delimiter. ';' or ':'.
- */
- readonly delimiter: ';' | ':';
- /**
- * Returns an object from a path string - the opposite of format().
- *
- * @param path path to evaluate.
- * @throws {TypeError} if `path` is not a string.
- */
- parse(path: string): ParsedPath;
- /**
- * Returns a path string from an object - the opposite of parse().
- *
- * @param pathObject path to evaluate.
- */
- format(pathObject: FormatInputPathObject): string;
- /**
- * On Windows systems only, returns an equivalent namespace-prefixed path for the given path.
- * If path is not a string, path will be returned without modifications.
- * This method is meaningful only on Windows system.
- * On POSIX systems, the method is non-operational and always returns path without modifications.
- */
- toNamespacedPath(path: string): string;
- /**
- * Posix specific pathing.
- * Same as parent object on posix.
- */
- readonly posix: PlatformPath;
- /**
- * Windows specific pathing.
- * Same as parent object on windows
- */
- readonly win32: PlatformPath;
- }
- }
- const path: path.PlatformPath;
- export = path;
-}
-declare module 'node:path' {
- import path = require('path');
- export = path;
-}
-declare module 'node:path/posix' {
- import path = require('path/posix');
- export = path;
-}
-declare module 'node:path/win32' {
- import path = require('path/win32');
- export = path;
-}
diff --git a/spaces/fffiloni/instant-TTS-Bark-cloning/app.py b/spaces/fffiloni/instant-TTS-Bark-cloning/app.py
deleted file mode 100644
index e21f9fe22a20152aa30c4d31fa9fdb752f18fa76..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/instant-TTS-Bark-cloning/app.py
+++ /dev/null
@@ -1,591 +0,0 @@
-import gradio as gr
-from share_btn import community_icon_html, loading_icon_html, share_js
-import os
-import shutil
-import re
-
-#from huggingface_hub import snapshot_download
-import numpy as np
-from scipy.io import wavfile
-from scipy.io.wavfile import write, read
-from pydub import AudioSegment
-
-file_upload_available = os.environ.get("ALLOW_FILE_UPLOAD")
-MAX_NUMBER_SENTENCES = 10
-
-import json
-with open("characters.json", "r") as file:
- data = json.load(file)
- characters = [
- {
- "image": item["image"],
- "title": item["title"],
- "speaker": item["speaker"]
- }
- for item in data
- ]
-
-from TTS.api import TTS
-tts = TTS("tts_models/multilingual/multi-dataset/bark", gpu=True)
-
-def cut_wav(input_path, max_duration):
- # Load the WAV file
- audio = AudioSegment.from_wav(input_path)
-
- # Calculate the duration of the audio
- audio_duration = len(audio) / 1000 # Convert milliseconds to seconds
-
- # Determine the duration to cut (maximum of max_duration and actual audio duration)
- cut_duration = min(max_duration, audio_duration)
-
- # Cut the audio
- cut_audio = audio[:int(cut_duration * 1000)] # Convert seconds to milliseconds
-
- # Get the input file name without extension
- file_name = os.path.splitext(os.path.basename(input_path))[0]
-
- # Construct the output file path with the original file name and "_cut" suffix
- output_path = f"{file_name}_cut.wav"
-
- # Save the cut audio as a new WAV file
- cut_audio.export(output_path, format="wav")
-
- return output_path
-
-def load_hidden(audio_in):
- return audio_in
-
-def load_hidden_mic(audio_in):
- print("USER RECORDED A NEW SAMPLE")
-
- library_path = 'bark_voices'
- folder_name = 'audio-0-100'
- second_folder_name = 'audio-0-100_cleaned'
-
- folder_path = os.path.join(library_path, folder_name)
- second_folder_path = os.path.join(library_path, second_folder_name)
-
- print("We need to clean previous util files, if needed:")
- if os.path.exists(folder_path):
- try:
- shutil.rmtree(folder_path)
- print(f"Successfully deleted the folder previously created from last raw recorded sample: {folder_path}")
- except OSError as e:
- print(f"Error: {folder_path} - {e.strerror}")
- else:
- print(f"OK, the folder for a raw recorded sample does not exist: {folder_path}")
-
- if os.path.exists(second_folder_path):
- try:
- shutil.rmtree(second_folder_path)
- print(f"Successfully deleted the folder previously created from last cleaned recorded sample: {second_folder_path}")
- except OSError as e:
- print(f"Error: {second_folder_path} - {e.strerror}")
- else:
- print(f"Ok, the folder for a cleaned recorded sample does not exist: {second_folder_path}")
-
- return audio_in
-
-def clear_clean_ckeck():
- return False
-
-def wipe_npz_file(folder_path):
- print("YO • a user is manipulating audio inputs")
-
-def split_process(audio, chosen_out_track):
- gr.Info("Cleaning your audio sample...")
- os.makedirs("out", exist_ok=True)
- write('test.wav', audio[0], audio[1])
- os.system("python3 -m demucs.separate -n mdx_extra_q -j 4 test.wav -o out")
- #return "./out/mdx_extra_q/test/vocals.wav","./out/mdx_extra_q/test/bass.wav","./out/mdx_extra_q/test/drums.wav","./out/mdx_extra_q/test/other.wav"
- if chosen_out_track == "vocals":
- print("Audio sample cleaned")
- return "./out/mdx_extra_q/test/vocals.wav"
- elif chosen_out_track == "bass":
- return "./out/mdx_extra_q/test/bass.wav"
- elif chosen_out_track == "drums":
- return "./out/mdx_extra_q/test/drums.wav"
- elif chosen_out_track == "other":
- return "./out/mdx_extra_q/test/other.wav"
- elif chosen_out_track == "all-in":
- return "test.wav"
-
-def update_selection(selected_state: gr.SelectData):
- c_image = characters[selected_state.index]["image"]
- c_title = characters[selected_state.index]["title"]
- c_speaker = characters[selected_state.index]["speaker"]
-
- return c_title, selected_state
-
-
-def infer(prompt, input_wav_file, clean_audio, hidden_numpy_audio):
- print("""
-—————
-NEW INFERENCE:
-———————
- """)
- if prompt == "":
- gr.Warning("Do not forget to provide a TTS prompt!")
-
- if clean_audio is True :
- print("We want to clean audio sample")
- # Extract the file name without the extension
- new_name = os.path.splitext(os.path.basename(input_wav_file))[0]
- print(f"FILE BASENAME is: {new_name}")
- if os.path.exists(os.path.join("bark_voices", f"{new_name}_cleaned")):
- print("This file has already been cleaned")
- check_name = os.path.join("bark_voices", f"{new_name}_cleaned")
- source_path = os.path.join(check_name, f"{new_name}_cleaned.wav")
- else:
- print("This file is new, we need to clean and store it")
- source_path = split_process(hidden_numpy_audio, "vocals")
-
- # Rename the file
- new_path = os.path.join(os.path.dirname(source_path), f"{new_name}_cleaned.wav")
- os.rename(source_path, new_path)
- source_path = new_path
- else :
- print("We do NOT want to clean audio sample")
- # Path to your WAV file
- source_path = input_wav_file
-
- # Destination directory
- destination_directory = "bark_voices"
-
- # Extract the file name without the extension
- file_name = os.path.splitext(os.path.basename(source_path))[0]
-
- # Construct the full destination directory path
- destination_path = os.path.join(destination_directory, file_name)
-
- # Create the new directory
- os.makedirs(destination_path, exist_ok=True)
-
- # Move the WAV file to the new directory
- shutil.move(source_path, os.path.join(destination_path, f"{file_name}.wav"))
-
- # —————
-
- # Split the text into sentences based on common punctuation marks
- sentences = re.split(r'(?<=[.!?])\s+', prompt)
-
- if len(sentences) > MAX_NUMBER_SENTENCES:
- gr.Info("Your text is too long. To keep this demo enjoyable for everyone, we only kept the first 10 sentences :) Duplicate this space and set MAX_NUMBER_SENTENCES for longer texts ;)")
- # Keep only the first MAX_NUMBER_SENTENCES sentences
- first_nb_sentences = sentences[:MAX_NUMBER_SENTENCES]
-
- # Join the selected sentences back into a single string
- limited_prompt = ' '.join(first_nb_sentences)
- prompt = limited_prompt
-
- else:
- prompt = prompt
-
- gr.Info("Generating audio from prompt")
- tts.tts_to_file(text=prompt,
- file_path="output.wav",
- voice_dir="bark_voices/",
- speaker=f"{file_name}")
-
- # List all the files and subdirectories in the given directory
- contents = os.listdir(f"bark_voices/{file_name}")
-
- # Print the contents
- for item in contents:
- print(item)
- print("Preparing final waveform video ...")
- tts_video = gr.make_waveform(audio="output.wav")
- print(tts_video)
- print("FINISHED")
- return "output.wav", tts_video, gr.update(value=f"bark_voices/{file_name}/{contents[1]}", visible=True), gr.Group.update(visible=True), destination_path
-
-def infer_from_c(prompt, c_name):
- print("""
-—————
-NEW INFERENCE:
-———————
- """)
- if prompt == "":
- gr.Warning("Do not forget to provide a TTS prompt!")
- print("Warning about prompt sent to user")
-
- print(f"USING VOICE LIBRARY: {c_name}")
- # Split the text into sentences based on common punctuation marks
- sentences = re.split(r'(?<=[.!?])\s+', prompt)
-
- if len(sentences) > MAX_NUMBER_SENTENCES:
- gr.Info("Your text is too long. To keep this demo enjoyable for everyone, we only kept the first 10 sentences :) Duplicate this space and set MAX_NUMBER_SENTENCES for longer texts ;)")
- # Keep only the first MAX_NUMBER_SENTENCES sentences
- first_nb_sentences = sentences[:MAX_NUMBER_SENTENCES]
-
- # Join the selected sentences back into a single string
- limited_prompt = ' '.join(first_nb_sentences)
- prompt = limited_prompt
-
- else:
- prompt = prompt
-
-
- if c_name == "":
- gr.Warning("Voice character is not properly selected. Please ensure that the name of the chosen voice is specified in the Character Name input.")
- print("Warning about Voice Name sent to user")
- else:
- print(f"Generating audio from prompt with {c_name} ;)")
-
- tts.tts_to_file(text=prompt,
- file_path="output.wav",
- voice_dir="examples/library/",
- speaker=f"{c_name}")
-
- print("Preparing final waveform video ...")
- tts_video = gr.make_waveform(audio="output.wav")
- print(tts_video)
- print("FINISHED")
- return "output.wav", tts_video, gr.update(value=f"examples/library/{c_name}/{c_name}.npz", visible=True), gr.Group.update(visible=True)
-
-
-css = """
-#col-container {max-width: 780px; margin-left: auto; margin-right: auto;}
-a {text-decoration-line: underline; font-weight: 600;}
-.mic-wrap > button {
- width: 100%;
- height: 60px;
- font-size: 1.4em!important;
-}
-.record-icon.svelte-1thnwz {
- display: flex;
- position: relative;
- margin-right: var(--size-2);
- width: unset;
- height: unset;
-}
-span.record-icon > span.dot.svelte-1thnwz {
- width: 20px!important;
- height: 20px!important;
-}
-.animate-spin {
- animation: spin 1s linear infinite;
-}
-@keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
-}
-#share-btn-container {
- display: flex;
- padding-left: 0.5rem !important;
- padding-right: 0.5rem !important;
- background-color: #000000;
- justify-content: center;
- align-items: center;
- border-radius: 9999px !important;
- max-width: 15rem;
- height: 36px;
-}
-div#share-btn-container > div {
- flex-direction: row;
- background: black;
- align-items: center;
-}
-#share-btn-container:hover {
- background-color: #060606;
-}
-#share-btn {
- all: initial;
- color: #ffffff;
- font-weight: 600;
- cursor:pointer;
- font-family: 'IBM Plex Sans', sans-serif;
- margin-left: 0.5rem !important;
- padding-top: 0.5rem !important;
- padding-bottom: 0.5rem !important;
- right:0;
-}
-#share-btn * {
- all: unset;
-}
-#share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
-}
-#share-btn-container .wrap {
- display: none !important;
-}
-#share-btn-container.hidden {
- display: none!important;
-}
-img[src*='#center'] {
- display: block;
- margin: auto;
-}
-.footer {
- margin-bottom: 45px;
- margin-top: 10px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
-
-.disclaimer {
- text-align: left;
-}
-.disclaimer > p {
- font-size: .8rem;
-}
-"""
-
-with gr.Blocks(css=css) as demo:
- with gr.Column(elem_id="col-container"):
-
- gr.Markdown("""
- Coqui + Bark Voice Cloning
-
- Mimic any voice character in less than 2 minutes with this Coqui TTS + Bark demo!
- Upload a clean 20-second WAV file of the vocal persona you want to mimic,
- type your text-to-speech prompt and hit submit!
-
-
- [Duplicate this Space](https://huggingface.co/spaces/fffiloni/instant-TTS-Bark-cloning?duplicate=true)
-
- """)
- with gr.Row():
- with gr.Column():
- prompt = gr.Textbox(
- label = "Text to speech prompt",
- info = "One or two sentences at a time is better* (max: 10)",
- placeholder = "Hello friend! How are you today?",
- elem_id = "tts-prompt"
- )
-
- with gr.Tab("File upload"):
-
- with gr.Column():
-
- if file_upload_available == "True":
- audio_in = gr.Audio(
- label="WAV voice to clone",
- type="filepath",
- source="upload"
- )
- else:
- audio_in = gr.Audio(
- label="WAV voice to clone",
- type="filepath",
- source="upload",
- interactive = False
- )
- clean_sample = gr.Checkbox(label="Clean sample ?", value=False)
- hidden_audio_numpy = gr.Audio(type="numpy", visible=False)
- submit_btn = gr.Button("Submit")
-
- with gr.Tab("Microphone"):
- texts_samples = gr.Textbox(label = "Helpers",
- info = "You can read out loud one of these sentences if you do not know what to record :)",
- value = """"Jazz, a quirky mix of groovy saxophones and wailing trumpets, echoes through the vibrant city streets."
-———
-"A majestic orchestra plays enchanting melodies, filling the air with harmony."
-———
-"The exquisite aroma of freshly baked bread wafts from a cozy bakery, enticing passersby."
-———
-"A thunderous roar shakes the ground as a massive jet takes off into the sky, leaving trails of white behind."
-———
-"Laughter erupts from a park where children play, their innocent voices rising like tinkling bells."
-———
-"Waves crash on the beach, and seagulls caw as they soar overhead, a symphony of nature's sounds."
-———
-"In the distance, a blacksmith hammers red-hot metal, the rhythmic clang punctuating the day."
-———
-"As evening falls, a soft hush blankets the world, crickets chirping in a soothing rhythm."
- """,
- interactive = False,
- lines = 5
- )
- micro_in = gr.Audio(
- label="Record voice to clone",
- type="filepath",
- source="microphone",
- interactive = True
- )
- clean_micro = gr.Checkbox(label="Clean sample ?", value=False)
- micro_submit_btn = gr.Button("Submit")
-
- audio_in.upload(fn=load_hidden, inputs=[audio_in], outputs=[hidden_audio_numpy], queue=False)
- micro_in.stop_recording(fn=load_hidden_mic, inputs=[micro_in], outputs=[hidden_audio_numpy], queue=False)
-
-
- with gr.Tab("Voices Characters"):
- selected_state = gr.State()
- gallery_in = gr.Gallery(
- label="Character Gallery",
- value=[(item["image"], item["title"]) for item in characters],
- interactive = True,
- allow_preview=False,
- columns=3,
- elem_id="gallery",
- show_share_button=False
- )
- c_submit_btn = gr.Button("Submit")
-
-
- with gr.Column():
-
- cloned_out = gr.Audio(
- label="Text to speech output",
- visible = False
- )
-
- video_out = gr.Video(
- label = "Waveform video",
- elem_id = "voice-video-out"
- )
-
- npz_file = gr.File(
- label = ".npz file",
- visible = False
- )
-
- folder_path = gr.Textbox(visible=False)
-
-
-
- character_name = gr.Textbox(
- label="Character Name",
- placeholder="Name that voice character",
- elem_id = "character-name"
- )
-
- voice_description = gr.Textbox(
- label="description",
- placeholder="How would you describe that voice ? ",
- elem_id = "voice-description"
- )
-
- with gr.Group(elem_id="share-btn-container", visible=False) as share_group:
- community_icon = gr.HTML(community_icon_html)
- loading_icon = gr.HTML(loading_icon_html)
- share_button = gr.Button("Share with Community", elem_id="share-btn")
-
- share_button.click(None, [], [], _js=share_js, queue=False)
-
- gallery_in.select(
- update_selection,
- outputs=[character_name, selected_state],
- queue=False,
- show_progress=False,
- )
-
- audio_in.change(fn=wipe_npz_file, inputs=[folder_path], queue=False)
- micro_in.clear(fn=wipe_npz_file, inputs=[folder_path], queue=False)
-
- gr.Examples(
- examples = [
- [
- "Once upon a time, in a cozy little shell, lived a friendly crab named Crabby. Crabby loved his cozy home, but he always felt like something was missing.",
- "./examples/en_speaker_6.wav",
- False,
- None
- ],
- [
- "It was a typical afternoon in the bustling city, the sun shining brightly through the windows of the packed courtroom. Three people sat at the bar, their faces etched with worry and anxiety. ",
- "./examples/en_speaker_9.wav",
- False,
- None
- ],
- ],
- fn = infer,
- inputs = [
- prompt,
- audio_in,
- clean_sample,
- hidden_audio_numpy
- ],
- outputs = [
- cloned_out,
- video_out,
- npz_file,
- share_group,
- folder_path
- ],
- cache_examples = False
- )
-
- gr.HTML("""
-
-
-
* DISCLAIMER
-
- I hold no responsibility for the utilization or outcomes of audio content produced using the semantic constructs generated by this model.
- Please ensure that any application of this technology remains within legal and ethical boundaries.
- It is important to utilize this technology for ethical and legal purposes, upholding the standards of creativity and innovation.
-
-
- """)
-
- submit_btn.click(
- fn = infer,
- inputs = [
- prompt,
- audio_in,
- clean_sample,
- hidden_audio_numpy
- ],
- outputs = [
- cloned_out,
- video_out,
- npz_file,
- share_group,
- folder_path
- ]
- )
-
- micro_submit_btn.click(
- fn = infer,
- inputs = [
- prompt,
- micro_in,
- clean_micro,
- hidden_audio_numpy
- ],
- outputs = [
- cloned_out,
- video_out,
- npz_file,
- share_group,
- folder_path
- ]
- )
-
- c_submit_btn.click(
- fn = infer_from_c,
- inputs = [
- prompt,
- character_name
- ],
- outputs = [
- cloned_out,
- video_out,
- npz_file,
- share_group
- ]
- )
-
-demo.queue(api_open=False, max_size=10).launch()
\ No newline at end of file
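
Both `infer` and `infer_from_c` above cap the prompt at `MAX_NUMBER_SENTENCES` by splitting on sentence-final punctuation. A standalone sketch of that step, using only the standard library (the helper name is illustrative):

```python
import re

MAX_NUMBER_SENTENCES = 10

def limit_sentences(prompt: str, max_sentences: int = MAX_NUMBER_SENTENCES) -> str:
    # Split on whitespace that follows ., ! or ? (same regex as the demo).
    sentences = re.split(r'(?<=[.!?])\s+', prompt)
    if len(sentences) <= max_sentences:
        return prompt
    # Keep only the first max_sentences sentences and re-join them.
    return ' '.join(sentences[:max_sentences])

print(limit_sentences("One. Two! Three? Four.", max_sentences=2))  # "One. Two!"
```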
diff --git a/spaces/flax-community/image-captioning/README.md b/spaces/flax-community/image-captioning/README.md
deleted file mode 100644
index 8bc3b65c3d828ce1635bf0748b0e771b21f14989..0000000000000000000000000000000000000000
--- a/spaces/flax-community/image-captioning/README.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Image Captioning
-emoji: 🖼️
-colorFrom: pink
-colorTo: red
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/flax-community/netherformer/app.py b/spaces/flax-community/netherformer/app.py
deleted file mode 100644
index 52b88a07c3763bc4a819dde762bc0bf6ae96fac0..0000000000000000000000000000000000000000
--- a/spaces/flax-community/netherformer/app.py
+++ /dev/null
@@ -1,174 +0,0 @@
-import streamlit as st
-import os
-
-from transformers import AutoTokenizer
-from transformers import AutoModelForSeq2SeqLM
-from transformers import pipeline
-from transformers import set_seed
-
-debug = False
-
-MODELS = [
- "flax-community/t5-base-dutch-demo",
- "yhavinga/t5-v1.1-base-dutch-cnn-test",
- "yhavinga/t5-v1.1-large-dutch-cnn-test",
-]
-
-DEFAULT_TEXT: str = """Het Van Goghmuseum in Amsterdam heeft vier kostbare prenten verworven van Mary Cassatt, de Amerikaanse impressionistische kunstenaar en tijdgenoot van Vincent van Gogh. Dat heeft het museum woensdagmiddag op een persconferentie bekendgemaakt.
-
-Het gaat om drie grote kleurenetsen en een zwart-wit litho met voorstellingen van vrouwen. Voor deze prenten, die afkomstig zijn van een Amerikaanse verzamelaar, betaalde het museum ruim 1,4 miljoen euro. Drie grote fondsen en een aantal particulieren hebben samen de aankoopsom beschikbaar gesteld.
-
-Mary Stevenson Cassatt (1844-1926) woonde en werkte lange tijd in Frankrijk. Ze staat met haar impressionistische schilderijen en tekeningen te boek als een van de vernieuwers van de Parijse kunstwereld in de late negentiende eeuw. Het Van Goghmuseum rekent haar prenten „tot het mooiste wat op grafisch gebied in het fin de siècle is geproduceerd”.
-
-De drie aangekochte kleurenetsen – Het doorpassen, De brief en Badende vrouw – komen uit een serie van tien waarmee Cassatt haar naam als (prent)kunstenaar definitief vestigde. Ze maakte de etsen na een bezoek in 1890 aan een tentoonstelling van Japanse prenten in Parijs. Over die expositie schreef de Amerikaanse aan haar vriendin Berthe Morisot, een andere vrouwelijke impressionist: „We kunnen de Japanse prenten in de Beaux-Arts gaan bekijken. Echt, die mag je niet missen. Als je kleurenprenten wilt maken, is er niets mooiers voorstelbaar. Ik droom ervan en denk nergens anders meer aan dan aan kleur op koper.”"""
-
-
-class TextSummarizer:
- def __init__(self):
- self.tokenizer = None
- self.model = None
- self.generator = None
- self.model_loaded = None
- set_seed(42)
-
- def load(self, model_name):
- os.environ["TOKENIZERS_PARALLELISM"] = "false"
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
- self.generator = pipeline(
- "text2text-generation", model=self.model, tokenizer=self.tokenizer
- )
- self.model_loaded = model_name
-
- def summarize(self, model_name, input_text, generate_kwargs) -> str:
- if not self.generator or self.model_loaded != model_name:
- with st.spinner(f"Loading model {model_name}..."):
- self.load(model_name)
- return self.generator(
- input_text, return_tensors=False, return_text=True, **generate_kwargs
- )[0].get("generated_text")
-
-
-@st.cache(allow_output_mutation=True)
-def instantiate_generator():
- summarizer = TextSummarizer()
- return summarizer
-
-
-def main():
- st.set_page_config( # Alternate names: setup_page, page, layout
- page_title="Netherformer", # String or None. Strings get appended with "• Streamlit".
- layout="wide", # Can be "centered" or "wide". In the future also "dashboard", etc.
- initial_sidebar_state="expanded", # Can be "auto", "expanded", "collapsed"
- page_icon="📰", # String, anything supported by st.image, or None.
- )
-
- with open("style.css") as f:
- st.markdown(f"<style>{f.read()}</style>", unsafe_allow_html=True)
-
- generator = instantiate_generator()
-
- st.markdown(
- """
-
- """,
- unsafe_allow_html=True,
- )
- st.sidebar.image("NewsCovid-19-512.png", width=200)
- st.sidebar.markdown(
- """# Netherformer
-* Create summaries of Dutch news stories.
-* Copy-paste any Dutch news text and press the Generate Summary button.
-* This is a demo of several Dutch T5 models finetuned for summarization:
- * [T5 Base Dutch finetuned on CNN DM and XSUM](https://huggingface.co/flax-community/t5-base-dutch-demo) (Jul '21)
- * [T5 v1.1 Base Dutch finetuned on CNN DM](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) (Jan '22)
- * [T5 v1.1 Large Dutch finetuned on CNN DM](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cnn-test) (Jan '22)
-"""
- )
- st.sidebar.title("Parameters:")
-
- MODEL = st.sidebar.selectbox("Choose model", index=1, options=MODELS)
-
- min_length = st.sidebar.number_input(
- "Min length", min_value=10, max_value=150, value=75
- )
- max_length = st.sidebar.number_input(
- "Max length", min_value=50, max_value=250, value=142
- )
- no_repeat_ngram_size = st.sidebar.number_input(
- "No repeat NGram size", min_value=1, max_value=5, value=3
- )
-
- if sampling_mode := st.sidebar.selectbox(
- "select a Mode", index=0, options=["Beam Search", "Top-k Sampling"]
- ):
- if sampling_mode == "Beam Search":
- num_beams = st.sidebar.number_input(
- "Num beams", min_value=1, max_value=10, value=10
- )
- length_penalty = st.sidebar.number_input(
- "Length penalty", min_value=0.0, max_value=5.0, value=1.5, step=0.1
- )
- params = {
- "min_length": min_length,
- "max_length": max_length,
- "no_repeat_ngram_size": no_repeat_ngram_size,
- "num_beams": num_beams,
- "early_stopping": True,
- "length_penalty": length_penalty,
- "num_return_sequences": 1,
- }
- else:
- top_k = st.sidebar.number_input(
- "Top K", min_value=0, max_value=100, value=40
- )
- top_p = st.sidebar.number_input(
- "Top P", min_value=0.0, max_value=1.0, value=0.95, step=0.05
- )
- temperature = st.sidebar.number_input(
- "Temperature", min_value=0.0, max_value=1.0, value=0.3, step=0.05
- )
- params = {
- "min_length": min_length,
- "max_length": max_length,
- "no_repeat_ngram_size": no_repeat_ngram_size,
- "do_sample": True,
- "top_k": top_k,
- "top_p": top_p,
- "temperature": temperature,
- "num_return_sequences": 1,
- }
-
- st.sidebar.markdown(
- """For an explanation of the parameters, please refer to the [Huggingface blog post about text generation](https://huggingface.co/blog/how-to-generate)
- and the [Huggingface text generation interface doc](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate).
- """
- )
-
- input_text = st.text_area("Enter a Dutch news text", DEFAULT_TEXT, height=500)
-
- if st.button("Generate summary"):
-
- with st.spinner("Generating summary ..."):
- if debug:
- from time import sleep
-
- sleep(2)
- response = """Een bedrijventerrein aan de Prins Willem Alexanderhaven in Roermond heeft de komende dagen weg te blijven uit de regio. De burgemeester van Roermond roept alle ramptoeristen er opnieuw toe op meer dan 10.00 uur weg."""
- else:
- response = generator.summarize(MODEL, input_text, params)
-
- st.header("Summary:")
- st.markdown(response)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/flowers-team/SocialAISchool/stester.py b/spaces/flowers-team/SocialAISchool/stester.py
deleted file mode 100644
index 51dcc639e850db9c0a6a160af87bbd9266f7a40c..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/stester.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import os
-import numpy as np
-import re
-from pathlib import Path
-from collections import defaultdict
-from scipy import stats
-
-experiments = Path("./results_1000/")
-
-results_dict = {}
-
-def label_parser(label):
- label_parser_dict = {
- "VIGIL4_WizardGuide_lang64_no_explo": "ABL_MH-BabyAI",
- "VIGIL4_WizardTwoGuides_lang64_no_explo": "FULL_MH-BabyAI",
-
- "VIGIL4_WizardGuide_lang64_mm": "ABL_MH-BabyAI-ExpBonus",
- "VIGIL4_WizardTwoGuides_lang64_mm": "FULL_MH-BabyAI-ExpBonus",
-
- "VIGIL4_WizardGuide_lang64_deaf_no_explo": "ABL_Deaf-MH-BabyAI",
- "VIGIL4_WizardTwoGuides_lang64_deaf_no_explo": "FULL_Deaf-MH-BabyAI",
-
- "VIGIL4_WizardGuide_lang64_bow": "ABL_MH-BabyAI-BOW",
- "VIGIL4_WizardTwoGuides_lang64_bow": "FULL_MH-BabyAI-BOW",
-
- "VIGIL4_WizardGuide_lang64_no_mem": "ABL_MH-BabyAI-no-mem",
- "VIGIL4_WizardTwoGuides_lang64_no_mem": "FULL_MH-BabyAI-no-mem",
-
- "VIGIL5_WizardGuide_lang64_bigru": "ABL_MH-BabyAI-bigru",
- "VIGIL5_WizardTwoGuides_lang64_bigru": "FULL_MH-BabyAI-bigru",
-
- "VIGIL5_WizardGuide_lang64_attgru": "ABL_MH-BabyAI-attgru",
- "VIGIL5_WizardTwoGuides_lang64_attgru": "FULL_MH-BabyAI-attgru",
-
- "VIGIL4_WizardGuide_lang64_curr_dial": "ABL_MH-BabyAI-current-dialogue",
- "VIGIL4_WizardTwoGuides_lang64_curr_dial": "FULL_MH-BabyAI-current-dialogue",
-
- "random_WizardGuide": "ABL_Random-agent",
- "random_WizardTwoGuides": "FULL_Random-agent",
- }
- if sum([1 for k, v in label_parser_dict.items() if k in label]) != 1:
- print("ERROR")
- print(label)
- exit()
-
- for k, v in label_parser_dict.items():
- if k in label: return v
-
- return label
-
-for experiment_out_file in experiments.iterdir():
- results_dict[label_parser(str(experiment_out_file))] = []
- with open(experiment_out_file) as f:
- for line in f:
- if "seed success rate" in line:
- seed_success_rate = float(re.search(r'[0-9]\.[0-9]*', line).group())
- results_dict[label_parser(str(experiment_out_file))].append(seed_success_rate)
-
-assert set([len(v) for v in results_dict.values()]) == set([16])
-
-test_p = 0.05
-compare = {
- "ABL_MH-BabyAI-ExpBonus": "ABL_MH-BabyAI",
- "ABL_MH-BabyAI": "ABL_Deaf-MH-BabyAI",
- "ABL_Deaf-MH-BabyAI": "ABL_Random-agent",
- "FULL_MH-BabyAI-ExpBonus": "FULL_MH-BabyAI",
- "FULL_MH-BabyAI": "FULL_Deaf-MH-BabyAI",
- "FULL_Deaf-MH-BabyAI": "FULL_Random-agent",
-}
-for k, v in compare.items():
- p = stats.ttest_ind(
- results_dict[k],
- results_dict[v],
- equal_var=False
- ).pvalue
- if np.isnan(p):
- from IPython import embed; embed()
- print("{} (m:{}) <---> {} (m:{}) = p: {} result: {}".format(
- k, np.mean(results_dict[k]), v, np.mean(results_dict[v]), p,
- "Distributions different(p={})".format(test_p) if p < test_p else "Distributions same(p={})".format(test_p)
- ))
- print()
-# from IPython import embed; embed()
\ No newline at end of file
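
The loop above compares each pair of conditions with Welch's t-test (`equal_var=False`) and reports whether the per-seed success-rate distributions differ at p < 0.05. A minimal, self-contained illustration of that test on made-up success rates:

```python
import numpy as np
from scipy import stats

test_p = 0.05
# Toy per-seed success rates for two conditions (illustrative values only).
with_bonus = np.array([0.82, 0.79, 0.85, 0.80, 0.83, 0.81, 0.84, 0.78])
without_bonus = np.array([0.61, 0.66, 0.58, 0.63, 0.60, 0.64, 0.59, 0.62])

# equal_var=False selects Welch's t-test, as in the script above.
p = stats.ttest_ind(with_bonus, without_bonus, equal_var=False).pvalue
verdict = "different" if p < test_p else "same"
print(f"p={p:.2e} -> distributions {verdict} at alpha={test_p}")
```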
diff --git a/spaces/froginsect/Lama-Cleaner-lama/README.md b/spaces/froginsect/Lama-Cleaner-lama/README.md
deleted file mode 100644
index 34fec6eb0c7e0b523863096b4835b8e25bb4ba52..0000000000000000000000000000000000000000
--- a/spaces/froginsect/Lama-Cleaner-lama/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Lama Cleaner Lama
-emoji: ⚡
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: Sanster/Lama-Cleaner-lama
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gagan3012/summarization/src/models/evaluate_model.py b/spaces/gagan3012/summarization/src/models/evaluate_model.py
deleted file mode 100644
index 7cf7cef7e021a810eff6b963191e5e61edf9d572..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/summarization/src/models/evaluate_model.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from dagshub import dagshub_logger
-import yaml
-
-from model import Summarization
-import pandas as pd
-
-
-def evaluate_model():
- """
- Evaluate the model using the ROUGE metric.
- """
- with open("model_params.yml") as f:
- params = yaml.safe_load(f)
-
- test_df = pd.read_csv("data/processed/test.csv")
- test_df = test_df.sample(n=25, random_state=42)
- model = Summarization()
- model.load_model(model_type=params["model_type"], model_dir=params["model_dir"])
- results = model.evaluate(test_df=test_df, metrics=params["metric"])
-
- with dagshub_logger(
- metrics_path="reports/evaluation_metrics.csv", should_log_hparams=False
- ) as logger:
- logger.log_metrics(results)
-
-
-if __name__ == "__main__":
- evaluate_model()
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/dataset_wrappers.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/dataset_wrappers.py
deleted file mode 100644
index d6a5e957ec3b44465432617cf6e8f0b86a8a5efa..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/dataset_wrappers.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from torch.utils.data.dataset import ConcatDataset as _ConcatDataset
-
-from .builder import DATASETS
-
-
-@DATASETS.register_module()
-class ConcatDataset(_ConcatDataset):
- """A wrapper of concatenated dataset.
-
- Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but also
- concatenates the group flag for image aspect ratio.
-
- Args:
- datasets (list[:obj:`Dataset`]): A list of datasets.
- """
-
- def __init__(self, datasets):
- super(ConcatDataset, self).__init__(datasets)
- self.CLASSES = datasets[0].CLASSES
- self.PALETTE = datasets[0].PALETTE
-
-
-@DATASETS.register_module()
-class RepeatDataset(object):
- """A wrapper of repeated dataset.
-
- The length of the repeated dataset will be `times` larger than that of the
- original dataset. This is useful when the data loading time is long but the
- dataset is small. Using RepeatDataset can reduce the data loading time
- between epochs.
-
- Args:
- dataset (:obj:`Dataset`): The dataset to be repeated.
- times (int): Repeat times.
- """
-
- def __init__(self, dataset, times):
- self.dataset = dataset
- self.times = times
- self.CLASSES = dataset.CLASSES
- self.PALETTE = dataset.PALETTE
- self._ori_len = len(self.dataset)
-
- def __getitem__(self, idx):
- """Get item from original dataset."""
- return self.dataset[idx % self._ori_len]
-
- def __len__(self):
- """The length is multiplied by ``times``"""
- return self.times * self._ori_len
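
`RepeatDataset` above works purely by index arithmetic: `__getitem__` wraps the index with a modulo, so no data is ever copied. A toy sketch of the same idea without the mmseg registry (class and variable names are illustrative):

```python
class ToyRepeatDataset:
    """Repeat a dataset `times` times by wrapping the index with a modulo."""

    def __init__(self, dataset, times):
        self.dataset = dataset
        self.times = times
        self._ori_len = len(dataset)

    def __getitem__(self, idx):
        # Indices 0.._ori_len-1 repeat `times` times without copying any data.
        return self.dataset[idx % self._ori_len]

    def __len__(self):
        return self.times * self._ori_len


data = ToyRepeatDataset(["a", "b", "c"], times=2)
print(len(data))                             # 6
print([data[i] for i in range(len(data))])   # ['a', 'b', 'c', 'a', 'b', 'c']
```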
diff --git a/spaces/ghuron/artist/dataset/astro.sql b/spaces/ghuron/artist/dataset/astro.sql
deleted file mode 100644
index a9695c0e80efedd3a4bd28d976d386c387ce5363..0000000000000000000000000000000000000000
--- a/spaces/ghuron/artist/dataset/astro.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a6385e7e85ecd17269f37eb746213fedc057a09d6f9b76f157ed609f8666e785
-size 549064704
diff --git a/spaces/gordonchan/h2oo/gradio_utils/grclient.py b/spaces/gordonchan/h2oo/gradio_utils/grclient.py
deleted file mode 100644
index 8346a61cad99d492f8a10de17851454488364b83..0000000000000000000000000000000000000000
--- a/spaces/gordonchan/h2oo/gradio_utils/grclient.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import traceback
-from typing import Callable
-import os
-
-from gradio_client.client import Job
-
-os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1'
-
-from gradio_client import Client
-
-
-class GradioClient(Client):
- """
- Subclass of the gradio Client that automatically refreshes itself
- when it detects that the gradio server has changed.
- """
-
- def __init__(self, *args, **kwargs):
- self.args = args
- self.kwargs = kwargs
- super().__init__(*args, **kwargs)
- self.server_hash = self.get_server_hash()
-
- def get_server_hash(self):
- """
- Get server hash using super without any refresh action triggered
- Returns: git hash of gradio server
- """
- return super().submit(api_name='/system_hash').result()
-
- def refresh_client_if_should(self):
- # get current hash in order to update api_name -> fn_index map in case gradio server changed
- # FIXME: Could add cli api as hash
- server_hash = self.get_server_hash()
- if self.server_hash != server_hash:
- self.refresh_client()
- self.server_hash = server_hash
- else:
- self.reset_session()
-
- def refresh_client(self):
- """
- Ensure every client call is independent
- Also ensure map between api_name and fn_index is updated in case server changed (e.g. restarted with new code)
- Returns:
- """
- # need session hash to be new every time, to avoid "generator already executing"
- self.reset_session()
-
- client = Client(*self.args, **self.kwargs)
- for k, v in client.__dict__.items():
- setattr(self, k, v)
-
- def submit(
- self,
- *args,
- api_name: str | None = None,
- fn_index: int | None = None,
- result_callbacks: Callable | list[Callable] | None = None,
- ) -> Job:
- # Note predict calls submit
- try:
- self.refresh_client_if_should()
- job = super().submit(*args, api_name=api_name, fn_index=fn_index)
- except Exception as e:
- print("Hit e=%s" % str(e), flush=True)
- # force reconfig in case only that
- self.refresh_client()
- job = super().submit(*args, api_name=api_name, fn_index=fn_index)
-
- # see if immediately failed
- e = job.future._exception
- if e is not None:
- print("GR job failed: %s %s" % (str(e), ''.join(traceback.format_tb(e.__traceback__))), flush=True)
- # force reconfig in case only that
- self.refresh_client()
- job = super().submit(*args, api_name=api_name, fn_index=fn_index)
- e2 = job.future._exception
- if e2 is not None:
- print("GR job failed again: %s\n%s" % (str(e2), ''.join(traceback.format_tb(e2.__traceback__))), flush=True)
-
- return job
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Bal Brahmachari 720p 1080p The Legend of Mahavir and His Special Powers.md b/spaces/gotiQspiryo/whisper-ui/examples/Bal Brahmachari 720p 1080p The Legend of Mahavir and His Special Powers.md
deleted file mode 100644
index 67ab99b1d136febd84a9b116425f6c755772bc61..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Bal Brahmachari 720p 1080p The Legend of Mahavir and His Special Powers.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Bal Brahmachari 720p 1080p DOWNLOAD — https://urlgoal.com/2uyNFB
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gradio/HuBERT/examples/laser/laser_src/multitask_data_utils.py b/spaces/gradio/HuBERT/examples/laser/laser_src/multitask_data_utils.py
deleted file mode 100644
index b05caea26793bf5112a7abc29d76225f578f3ebe..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/laser/laser_src/multitask_data_utils.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import OrderedDict
-
-import numpy as np
-
-from fairseq.data import BaseWrapperDataset, FairseqDataset, iterators
-
-
-class MultiItr(object):
- def __init__(self, itr):
- self.itr = itr
- self._counts = [0 for x in itr]
-
- def __len__(self):
- return sum(len(itr) for itr in self.itr)
-
- def __iter__(self):
- return self
-
- def __next__(self):
- ratios = [count / len(itr) for count, itr in zip(self._counts, self.itr)]
- idx = ratios.index(min(ratios))
- self._counts[idx] += 1
- return next(self.itr[idx])
-
-
-class MultidatasetEpochBatchIterator(iterators.EpochBatchIterating):
- """A wrapper around multiple epoch batch iterators."""
-
- def __init__(
- self,
- dataset,
- batch_sampler,
- seed=1,
- num_shards=1,
- shard_id=0,
- num_workers=0,
- epoch=1,
- ):
-
- assert isinstance(dataset, OrderedDict)
- assert len(dataset)
- assert isinstance(dataset[next(iter(dataset))], FairseqDataset)
-
- self.iterators = []
-
- self.epoch = epoch
- for key, dt in dataset.items():
- epoch_iter = iterators.EpochBatchIterator(
- dataset=dt,
- collate_fn=dt.collater,
- batch_sampler=batch_sampler[key],
- seed=seed,
- num_shards=num_shards,
- shard_id=shard_id,
- num_workers=0,
- epoch=epoch,
- )
- self.iterators.append(epoch_iter)
-
- def __len__(self):
- return sum(len(itr) for itr in self.iterators)
-
- def next_epoch_itr(self, shuffle=True, fix_batches_to_gpus=False):
- # `self.epoch += 1` should be handled by underlying `EpochBatchIterator`s.
- return MultiItr(
- [
- itr.next_epoch_itr(
- shuffle=shuffle, fix_batches_to_gpus=fix_batches_to_gpus
- )
- for itr in self.iterators
- ]
- )
-
- def end_of_epoch(self):
- return all(itr.end_of_epoch() for itr in self.iterators)
-
- @property
- def next_epoch_idx(self):
- """Return the epoch index after *next_epoch_itr* is called."""
-
- epochs = [itr.next_epoch_idx for itr in self.iterators]
- self.epoch = epochs[0]
- assert all(epoch == self.epoch for epoch in epochs)
-
- return self.epoch
-
- @property
- def iterations_in_epoch(self):
- return sum(itr.iterations_in_epoch for itr in self.iterators)
-
- def state_dict(self):
- return {
- "iterators": [it.state_dict() for it in self.iterators],
- "epoch": self.epoch,
- }
-
- def load_state_dict(self, state_dict):
- self.epoch = state_dict["epoch"]
- for it, d in zip(self.iterators, state_dict["iterators"]):
- it.load_state_dict(d)
-
-
-class MultitaskDatasetWrapper(BaseWrapperDataset):
- """A wrapper for a multitask dataset."""
-
- def __init__(self, dataset, target_language_id, sample=1.0, name=""):
- super().__init__(dataset)
- self.target_language_id = target_language_id
- self.sample = sample
- self.name = name
-
- def collater(self, *args, **kwargs):
- ans = self.dataset.collater(*args, **kwargs)
- if "net_input" in ans:
- ans["net_input"]["target_language_id"] = self.target_language_id
- ans["net_input"]["dataset_name"] = self.name
- return ans
-
- def num_tokens(self, *args, **kwargs):
- return self.dataset.num_tokens(*args, **kwargs)
-
- def ordered_indices(self, *args, **kwargs):
- indices = self.dataset.ordered_indices(*args, **kwargs)
- # Hacky solution for sampling
- size = int(self.sample * indices.shape[0])
-
- return indices.take(np.sort(np.random.permutation(indices.shape[0])[:size]))
-
- def size(self, index: int):
- return self.dataset.size(index)
-
- @property
- def supports_prefetch(self):
- """Whether this dataset supports prefetching."""
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.dataset.prefetch(indices)
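
The `MultiItr` wrapper above interleaves its child iterators by always advancing the one whose consumed fraction is smallest. A self-contained sketch of that selection rule, with plain Python lists standing in for the fairseq epoch iterators (an assumption for illustration):

```python
# Minimal sketch of MultiItr's proportional interleaving, using plain lists
# in place of fairseq epoch batch iterators (an assumption for illustration).
class ToyMultiItr:
    def __init__(self, iterables):
        self.lengths = [len(it) for it in iterables]
        self.iters = [iter(it) for it in iterables]
        self.counts = [0] * len(iterables)

    def __iter__(self):
        return self

    def __next__(self):
        # Advance the iterator whose consumed fraction is currently smallest.
        ratios = [c / n for c, n in zip(self.counts, self.lengths)]
        idx = ratios.index(min(ratios))
        self.counts[idx] += 1
        return next(self.iters[idx])


print(list(ToyMultiItr([["a1", "a2", "a3", "a4"], ["b1", "b2"]])))
# -> ['a1', 'b1', 'a2', 'a3', 'b2', 'a4']
```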
diff --git a/spaces/gradio/HuBERT/fairseq/models/hubert/__init__.py b/spaces/gradio/HuBERT/fairseq/models/hubert/__init__.py
deleted file mode 100644
index a1b0eabbdbcaf12b15bb96b329ab1e276256f79a..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/models/hubert/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .hubert import * # noqa
-from .hubert_asr import * # noqa
diff --git a/spaces/gradio/longformer/tvm/_ffi/libinfo.py b/spaces/gradio/longformer/tvm/_ffi/libinfo.py
deleted file mode 100644
index a8ef9cfb2df3726b3a31ff675163503f3ef0c288..0000000000000000000000000000000000000000
--- a/spaces/gradio/longformer/tvm/_ffi/libinfo.py
+++ /dev/null
@@ -1,205 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-"""Library information."""
-from __future__ import absolute_import
-import sys
-import os
-
-def split_env_var(env_var, split):
- """Splits environment variable string.
-
- Parameters
- ----------
- env_var : str
- Name of environment variable.
-
- split : str
- String to split env_var on.
-
- Returns
- -------
- splits : list(string)
- If env_var exists, split env_var. Otherwise, empty list.
- """
- if os.environ.get(env_var, None):
- return [p.strip() for p in os.environ[env_var].split(split)]
- return []
-
-def find_lib_path(name=None, search_path=None, optional=False):
- """Find dynamic library files.
-
- Parameters
- ----------
- name : list of str
- List of names to be found.
-
- Returns
- -------
- lib_path : list(string)
- List of all found path to the libraries
- """
- use_runtime = os.environ.get("TVM_USE_RUNTIME_LIB", False)
-
- # See https://github.com/dmlc/tvm/issues/281 for some background.
-
- # NB: This will either be the source directory (if TVM is run
- # inplace) or the install directory (if TVM is installed).
- # An installed TVM's curr_path will look something like:
- # $PREFIX/lib/python3.6/site-packages/tvm/_ffi
- ffi_dir = os.path.dirname(os.path.realpath(os.path.expanduser(__file__)))
- source_dir = os.path.join(ffi_dir, "..", "..", "..")
- install_lib_dir = os.path.join(ffi_dir, "..", "..", "..", "..")
-
- dll_path = []
-
- if os.environ.get('TVM_LIBRARY_PATH', None):
- dll_path.append(os.environ['TVM_LIBRARY_PATH'])
-
- if sys.platform.startswith('linux'):
- dll_path.extend(split_env_var('LD_LIBRARY_PATH', ':'))
- dll_path.extend(split_env_var('PATH', ':'))
- elif sys.platform.startswith('darwin'):
- dll_path.extend(split_env_var('DYLD_LIBRARY_PATH', ':'))
- dll_path.extend(split_env_var('PATH', ':'))
- elif sys.platform.startswith('win32'):
- dll_path.extend(split_env_var('PATH', ';'))
-
- # Pip lib directory
- dll_path.append(os.path.join(ffi_dir, ".."))
- # Default cmake build directory
- dll_path.append(os.path.join(source_dir, "build"))
- dll_path.append(os.path.join(source_dir, "build", "Release"))
- # Default make build directory
- dll_path.append(os.path.join(source_dir, "lib"))
-
- dll_path.append(install_lib_dir)
-
- dll_path = [os.path.realpath(x) for x in dll_path]
- if search_path is not None:
-        if isinstance(search_path, list):
- dll_path = dll_path + search_path
- else:
- dll_path.append(search_path)
- if name is not None:
- if isinstance(name, list):
- lib_dll_path = []
- for n in name:
- lib_dll_path += [os.path.join(p, n) for p in dll_path]
- else:
- lib_dll_path = [os.path.join(p, name) for p in dll_path]
- runtime_dll_path = []
- else:
- if sys.platform.startswith('win32'):
- lib_dll_path = [os.path.join(p, 'libtvm.dll') for p in dll_path] +\
- [os.path.join(p, 'tvm.dll') for p in dll_path]
- runtime_dll_path = [os.path.join(p, 'libtvm_runtime.dll') for p in dll_path] +\
- [os.path.join(p, 'tvm_runtime.dll') for p in dll_path]
- elif sys.platform.startswith('darwin'):
- lib_dll_path = [os.path.join(p, 'libtvm.dylib') for p in dll_path]
- runtime_dll_path = [os.path.join(p, 'libtvm_runtime.dylib') for p in dll_path]
- else:
- lib_dll_path = [os.path.join(p, 'libtvm.so') for p in dll_path]
- runtime_dll_path = [os.path.join(p, 'libtvm_runtime.so') for p in dll_path]
-
- if not use_runtime:
- # try to find lib_dll_path
- lib_found = [p for p in lib_dll_path if os.path.exists(p) and os.path.isfile(p)]
- lib_found += [p for p in runtime_dll_path if os.path.exists(p) and os.path.isfile(p)]
- else:
- # try to find runtime_dll_path
- use_runtime = True
- lib_found = [p for p in runtime_dll_path if os.path.exists(p) and os.path.isfile(p)]
-
- if not lib_found:
- message = ('Cannot find the files.\n' +
- 'List of candidates:\n' +
- str('\n'.join(lib_dll_path + runtime_dll_path)))
- if not optional:
- raise RuntimeError(message)
- return None
-
- if use_runtime:
- sys.stderr.write("Loading runtime library %s... exec only\n" % lib_found[0])
- sys.stderr.flush()
- return lib_found
-
-
-def find_include_path(name=None, search_path=None, optional=False):
- """Find header files for C compilation.
-
- Parameters
- ----------
- name : list of str
- List of directory names to be searched.
-
- Returns
- -------
- include_path : list(string)
- List of all found paths to header files.
- """
- ffi_dir = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
- source_dir = os.path.join(ffi_dir, "..", "..", "..")
- install_include_dir = os.path.join(ffi_dir, "..", "..", "..", "..")
- third_party_dir = os.path.join(source_dir, "3rdparty")
-
- header_path = []
-
- if os.environ.get('TVM_INCLUDE_PATH', None):
- header_path.append(os.environ['TVM_INCLUDE_PATH'])
-
- header_path.append(install_include_dir)
- header_path.append(source_dir)
- header_path.append(third_party_dir)
-
- header_path = [os.path.abspath(x) for x in header_path]
- if search_path is not None:
-        if isinstance(search_path, list):
- header_path = header_path + search_path
- else:
- header_path.append(search_path)
- if name is not None:
- if isinstance(name, list):
- tvm_include_path = []
- for n in name:
- tvm_include_path += [os.path.join(p, n) for p in header_path]
- else:
- tvm_include_path = [os.path.join(p, name) for p in header_path]
- dlpack_include_path = []
- else:
- tvm_include_path = [os.path.join(p, 'include') for p in header_path]
- dlpack_include_path = [os.path.join(p, 'dlpack/include') for p in header_path]
-
- # try to find include path
- include_found = [p for p in tvm_include_path if os.path.exists(p) and os.path.isdir(p)]
- include_found += [p for p in dlpack_include_path if os.path.exists(p) and os.path.isdir(p)]
-
- if not include_found:
- message = ('Cannot find the files.\n' +
- 'List of candidates:\n' +
- str('\n'.join(tvm_include_path + dlpack_include_path)))
- if not optional:
- raise RuntimeError(message)
- return None
-
- return include_found
-
-
-# current version
-# We use the version of the incoming release for code
-# that is under development.
-# The following line is set by tvm/python/update_version.py
-__version__ = "0.6.dev"
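
A short sketch of exercising the search helpers above; the module is loaded directly by file path, and the relative path and the `TVM_LIBRARY_PATH` value are assumptions for illustration:

```python
# Sketch of exercising the helpers above; the relative file path and the
# TVM_LIBRARY_PATH value are assumptions for illustration.
import importlib.util
import os

spec = importlib.util.spec_from_file_location("libinfo", "tvm/_ffi/libinfo.py")
libinfo = importlib.util.module_from_spec(spec)
spec.loader.exec_module(libinfo)

os.environ["TVM_LIBRARY_PATH"] = "/opt/tvm/build"      # hypothetical build dir
print(libinfo.split_env_var("TVM_LIBRARY_PATH", ":"))  # -> ['/opt/tvm/build']
print(libinfo.find_lib_path(optional=True))            # None if libtvm is not found
print(libinfo.find_include_path(optional=True))        # None if headers are not found
```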
diff --git a/spaces/gradio/neon-tts-plugin-coqui/app.py b/spaces/gradio/neon-tts-plugin-coqui/app.py
deleted file mode 100644
index b5a78d625ffde42934edbbb31bbffe70081218c0..0000000000000000000000000000000000000000
--- a/spaces/gradio/neon-tts-plugin-coqui/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import tempfile
-import gradio as gr
-from neon_tts_plugin_coqui import CoquiTTS
-
-LANGUAGES = list(CoquiTTS.langs.keys())
-coquiTTS = CoquiTTS()
-
-def tts(text: str, language: str):
- with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
- coquiTTS.get_tts(text, fp, speaker = {"language" : language})
- return fp.name
-
-inputs = [gr.Textbox(label="Input", value=CoquiTTS.langs["en"]["sentence"], max_lines=3),
- gr.Radio(label="Language", choices=LANGUAGES, value="en")]
-outputs = gr.Audio(label="Output")
-
-demo = gr.Interface(fn=tts, inputs=inputs, outputs=outputs)
-
-demo.launch()
\ No newline at end of file
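
For a quick check of the Coqui plugin used above without launching the Gradio UI, a small sketch (the text and language code are arbitrary examples):

```python
# Quick check of the Coqui TTS plugin used above, without the Gradio UI
# (the text and language code are arbitrary examples).
import tempfile

from neon_tts_plugin_coqui import CoquiTTS

coqui = CoquiTTS()
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
    coqui.get_tts("Hello from Coqui TTS", fp, speaker={"language": "en"})
print("Wrote", fp.name)   # path to the synthesized audio file
```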
diff --git a/spaces/grey1227/experiment_terminator/src/experiment_terminator.py b/spaces/grey1227/experiment_terminator/src/experiment_terminator.py
deleted file mode 100644
index c11af0b6c54faf46e55744b1483904ec779d309d..0000000000000000000000000000000000000000
--- a/spaces/grey1227/experiment_terminator/src/experiment_terminator.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import numpy as np
-
-
-class ExperimentTerminator():
-
- def __init__(self):
-
- # The number of samples to be used in Monte Carlo elements of the code. Increasing this number
- # improves accuracy of estimates at the expense of run time.
- self.mc_samples = 5_000
-
- # The Type I Error rate for the experiment. This determines the credible interval size used
- # for all calculations (e.g., alpha = 0.05 produces 95% credible intervals)
- self.alpha = 0.05
-
- def get_posterior_samples(self,
- completed_trials_a,
- completed_trials_b,
- successes_a,
- successes_b):
- """
- Get samples from the posterior distribution of the probability of success in the
- control group and the test group.
-
- :param completed_trials_a: Integer giving the number of trials completed in the control group
- :param completed_trials_b: Integer giving the number of trials completed in the test group
- :param successes_a: Integer giving the number of successes observed so far in the control group
- :param successes_b: Integer giving the number of successes observed so far in the test group
-
- :return: A list with two arrays of samples. The first array is a set of posterior samples from
- the distribution of the probability of success in the control group. The second array is the
- same for the test group.
- """
-
- posterior_samples_a = np.random.beta(successes_a,
- completed_trials_a - successes_a,
- self.mc_samples)
- posterior_samples_b = np.random.beta(successes_b,
- completed_trials_b - successes_b,
- self.mc_samples)
- return [posterior_samples_a, posterior_samples_b]
-
- def get_prob_reject_null(self,
- planned_trials_a,
- planned_trials_b,
- completed_trials_a,
- completed_trials_b,
- successes_a,
- successes_b,
- posterior_samples_a,
- posterior_samples_b):
- """
- Calculate the probability that the null hypothesis will be rejected by the planned end
- of the experiment
-
- :param planned_trials_a: Integer giving the number of trials planned to be completed in
- the control group in the experiment
- :param planned_trials_b: Integer giving the number of trials planned to be completed in
- the test group in the experiment
- :param completed_trials_a: Integer giving the number of trials completed in the control group
-        :param completed_trials_b: Integer giving the number of trials completed in the test group
- :param successes_a: Integer giving the number of successes observed so far in the control group
- :param successes_b: Integer giving the number of successes observed so far in the test group
- :param posterior_samples_a: Posterior samples for the control group returned by get_posterior_samples
-        :param posterior_samples_b: Posterior samples for the test group returned by get_posterior_samples
-
- :return: Float with the posterior predictive probability of rejecting the null hypothesis.
- """
-
- potential_successes_a = np.random.binomial(planned_trials_a - completed_trials_a,
- posterior_samples_a,
- self.mc_samples)
- potential_successes_a += successes_a
- potential_successes_b = np.random.binomial(planned_trials_b - completed_trials_b,
- posterior_samples_b,
- self.mc_samples)
- potential_successes_b += successes_b
-
- rejection = np.zeros(self.mc_samples)
- for i in range(self.mc_samples):
- post_samples_a = np.random.beta(potential_successes_a[i] + 1,
- planned_trials_a - potential_successes_a[i] + 1,
- self.mc_samples)
- post_samples_b = np.random.beta(potential_successes_b[i] + 1,
- planned_trials_b - potential_successes_b[i] + 1,
- self.mc_samples)
- post_samples_b_minus_a = post_samples_b - post_samples_a
- interval = np.quantile(post_samples_b_minus_a, [self.alpha / 2, 1 - (self.alpha / 2)])
- if (interval[0] > 0 or interval[1] < 0):
- rejection[i] = 1
-
- return np.mean(rejection)
-
- def analyze_experiment(self,
- planned_trials_a,
- planned_trials_b,
- completed_trials_a,
- completed_trials_b,
- successes_a,
- successes_b):
- """
-        Based on the number of planned trials, completed trials, and successes observed so far in the
-        control and test groups, calculate a set of summary measures of the posterior distribution
-        and the posterior predictive distribution.
-
- :param planned_trials_a: Integer giving the number of trials planned to be completed in
- the control group in the experiment
- :param planned_trials_b: Integer giving the number of trials planned to be completed in
- the test group in the experiment
- :param completed_trials_a: Integer giving the number of trials completed in the control group
-        :param completed_trials_b: Integer giving the number of trials completed in the test group
- :param successes_a: Integer giving the number of successes observed so far in the control group
- :param successes_b: Integer giving the number of successes observed so far in the test group
- """
-
- posterior_samples = self.get_posterior_samples(completed_trials_a,
- completed_trials_b,
- successes_a,
- successes_b)
- posterior_lift = (posterior_samples[1] - posterior_samples[0]) / posterior_samples[0]
- conversion_rate_a = successes_a / completed_trials_a
- conversion_rate_b = successes_b / completed_trials_b
- expected_lift = np.mean(posterior_lift)
- pr_b_gt_a = np.mean(posterior_lift >= 0)
- pr_reject_null = self.get_prob_reject_null(planned_trials_a,
- planned_trials_b,
- completed_trials_a,
- completed_trials_b,
- successes_a,
- successes_b,
- posterior_samples[0],
- posterior_samples[1])
- return [conversion_rate_a,
- conversion_rate_b,
- expected_lift,
- pr_b_gt_a,
- pr_reject_null,
- posterior_lift]
-
-
-
-if __name__ == "__main__":
- et = ExperimentTerminator()
- exp_outcomes = et.analyze_experiment(2000, 2000, 1000, 1000, 250, 270)
- print(exp_outcomes)
\ No newline at end of file
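
A small sketch of reading the summaries returned by `analyze_experiment`, following the example call in the `__main__` block above (the import path assumes the Space's `src/` layout):

```python
# Sketch of unpacking analyze_experiment() output, following the example call
# in the __main__ block above; the import path assumes the src/ layout.
from src.experiment_terminator import ExperimentTerminator

et = ExperimentTerminator()
(cr_a, cr_b, expected_lift, pr_b_gt_a,
 pr_reject_null, posterior_lift) = et.analyze_experiment(
    planned_trials_a=2000, planned_trials_b=2000,
    completed_trials_a=1000, completed_trials_b=1000,
    successes_a=250, successes_b=270)

print(f"conversion A={cr_a:.3f}, B={cr_b:.3f}, expected lift={expected_lift:.1%}")
print(f"P(B > A) = {pr_b_gt_a:.2f}")
# A low posterior predictive probability of ever rejecting H0 by the planned
# sample size is the signal that the experiment could be terminated early.
print(f"P(reject H0 at 2000 trials per arm) = {pr_reject_null:.2f}")
```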
diff --git a/spaces/gulabpatel/Real-ESRGAN/Training.md b/spaces/gulabpatel/Real-ESRGAN/Training.md
deleted file mode 100644
index 64704e1d2e1f334984232afd12b245235b274a9e..0000000000000000000000000000000000000000
--- a/spaces/gulabpatel/Real-ESRGAN/Training.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# :computer: How to Train Real-ESRGAN
-
-The training codes have been released.
-Note that the codes have been heavily refactored, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models.
-
-## Overview
-
-The training has been divided into two stages. These two stages have the same data synthesis process and training pipeline, except for the loss functions. Specifically,
-
-1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN.
-1. We then use the trained Real-ESRNet model as an initialization of the generator, and train the Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss.
-
-## Dataset Preparation
-
-We use DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required.
-You can download them from:
-
-1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
-2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
-3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip
-
-For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales.
-
-We then crop DF2K images into sub-images for faster IO and processing.
-
-You need to prepare a txt file containing the image paths. The following are some example entries from `meta_info_DF2Kmultiscale+OST_sub.txt` (as different users may have different sub-image partitions, this file will not match your data and you need to prepare your own txt file):
-
-```txt
-DF2K_HR_sub/000001_s001.png
-DF2K_HR_sub/000001_s002.png
-DF2K_HR_sub/000001_s003.png
-...
-```
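
If helpful, the meta info file can be generated with a few lines of Python; the dataset root and sub-image folder names below are assumptions, so adapt them to your own layout:

```python
# Hypothetical helper to generate the meta info txt; the dataset root and the
# sub-image folder names are assumptions; adapt them to your own layout.
import os

root = "datasets"
folders = ["DF2K_HR_sub", "OST_sub"]

with open("meta_info_DF2Kmultiscale+OST_sub.txt", "w") as f:
    for folder in folders:
        for name in sorted(os.listdir(os.path.join(root, folder))):
            if name.endswith(".png"):
                f.write(f"{folder}/{name}\n")
```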
-
-## Train Real-ESRNet
-
-1. Download pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`.
- ```bash
- wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models
- ```
-1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly:
- ```yml
- train:
- name: DF2K+OST
- type: RealESRGANDataset
- dataroot_gt: datasets/DF2K # modify to the root path of your folder
-        meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt  # modify to your own generated meta info txt
- io_backend:
- type: disk
- ```
-1. If you want to perform validation during training, uncomment those lines and modify accordingly:
- ```yml
- # Uncomment these for validation
- # val:
- # name: validation
- # type: PairedImageDataset
- # dataroot_gt: path_to_gt
- # dataroot_lq: path_to_lq
- # io_backend:
- # type: disk
-
- ...
-
- # Uncomment these for validation
- # validation settings
- # val:
- # val_freq: !!float 5e3
- # save_img: True
-
- # metrics:
- # psnr: # metric name, can be arbitrary
- # type: calculate_psnr
- # crop_border: 4
- # test_y_channel: false
- ```
-1. Before the formal training, you may run with the `--debug` option to check whether everything is OK. We use four GPUs for training:
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
- ```
-1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
- ```
-
-## Train Real-ESRGAN
-
-1. After the training of Real-ESRNet, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify the pre-trained path to other files, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
-1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above.
-1. Before the formal training, you may run with the `--debug` option to check whether everything is OK. We use four GPUs for training:
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
- ```
-1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary.
- ```bash
- CUDA_VISIBLE_DEVICES=0,1,2,3 \
- python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
- ```
diff --git a/spaces/gylleus/icongen/torch_utils/persistence.py b/spaces/gylleus/icongen/torch_utils/persistence.py
deleted file mode 100644
index 0186cfd97bca0fcb397a7b73643520c1d1105a02..0000000000000000000000000000000000000000
--- a/spaces/gylleus/icongen/torch_utils/persistence.py
+++ /dev/null
@@ -1,251 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Facilities for pickling Python code alongside other data.
-
-The pickled code is automatically imported into a separate Python module
-during unpickling. This way, any previously exported pickles will remain
-usable even if the original code is no longer available, or if the current
-version of the code is not consistent with what was originally pickled."""
-
-import sys
-import pickle
-import io
-import inspect
-import copy
-import uuid
-import types
-import dnnlib
-
-#----------------------------------------------------------------------------
-
-_version = 6 # internal version number
-_decorators = set() # {decorator_class, ...}
-_import_hooks = [] # [hook_function, ...]
-_module_to_src_dict = dict() # {module: src, ...}
-_src_to_module_dict = dict() # {src: module, ...}
-
-#----------------------------------------------------------------------------
-
-def persistent_class(orig_class):
- r"""Class decorator that extends a given class to save its source code
- when pickled.
-
- Example:
-
- from torch_utils import persistence
-
- @persistence.persistent_class
- class MyNetwork(torch.nn.Module):
- def __init__(self, num_inputs, num_outputs):
- super().__init__()
- self.fc = MyLayer(num_inputs, num_outputs)
- ...
-
- @persistence.persistent_class
- class MyLayer(torch.nn.Module):
- ...
-
- When pickled, any instance of `MyNetwork` and `MyLayer` will save its
- source code alongside other internal state (e.g., parameters, buffers,
- and submodules). This way, any previously exported pickle will remain
- usable even if the class definitions have been modified or are no
- longer available.
-
- The decorator saves the source code of the entire Python module
- containing the decorated class. It does *not* save the source code of
- any imported modules. Thus, the imported modules must be available
- during unpickling, also including `torch_utils.persistence` itself.
-
- It is ok to call functions defined in the same module from the
- decorated class. However, if the decorated class depends on other
- classes defined in the same module, they must be decorated as well.
- This is illustrated in the above example in the case of `MyLayer`.
-
- It is also possible to employ the decorator just-in-time before
- calling the constructor. For example:
-
- cls = MyLayer
- if want_to_make_it_persistent:
- cls = persistence.persistent_class(cls)
- layer = cls(num_inputs, num_outputs)
-
- As an additional feature, the decorator also keeps track of the
- arguments that were used to construct each instance of the decorated
- class. The arguments can be queried via `obj.init_args` and
- `obj.init_kwargs`, and they are automatically pickled alongside other
- object state. A typical use case is to first unpickle a previous
- instance of a persistent class, and then upgrade it to use the latest
- version of the source code:
-
- with open('old_pickle.pkl', 'rb') as f:
- old_net = pickle.load(f)
-        new_net = MyNetwork(*old_net.init_args, **old_net.init_kwargs)
- misc.copy_params_and_buffers(old_net, new_net, require_all=True)
- """
- assert isinstance(orig_class, type)
- if is_persistent(orig_class):
- return orig_class
-
- assert orig_class.__module__ in sys.modules
- orig_module = sys.modules[orig_class.__module__]
- orig_module_src = _module_to_src(orig_module)
-
- class Decorator(orig_class):
- _orig_module_src = orig_module_src
- _orig_class_name = orig_class.__name__
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self._init_args = copy.deepcopy(args)
- self._init_kwargs = copy.deepcopy(kwargs)
- assert orig_class.__name__ in orig_module.__dict__
- _check_pickleable(self.__reduce__())
-
- @property
- def init_args(self):
- return copy.deepcopy(self._init_args)
-
- @property
- def init_kwargs(self):
- return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs))
-
- def __reduce__(self):
- fields = list(super().__reduce__())
- fields += [None] * max(3 - len(fields), 0)
- if fields[0] is not _reconstruct_persistent_obj:
- meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2])
- fields[0] = _reconstruct_persistent_obj # reconstruct func
- fields[1] = (meta,) # reconstruct args
- fields[2] = None # state dict
- return tuple(fields)
-
- Decorator.__name__ = orig_class.__name__
- _decorators.add(Decorator)
- return Decorator
-
-#----------------------------------------------------------------------------
-
-def is_persistent(obj):
- r"""Test whether the given object or class is persistent, i.e.,
- whether it will save its source code when pickled.
- """
- try:
- if obj in _decorators:
- return True
- except TypeError:
- pass
- return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck
-
-#----------------------------------------------------------------------------
-
-def import_hook(hook):
- r"""Register an import hook that is called whenever a persistent object
- is being unpickled. A typical use case is to patch the pickled source
- code to avoid errors and inconsistencies when the API of some imported
- module has changed.
-
- The hook should have the following signature:
-
- hook(meta) -> modified meta
-
- `meta` is an instance of `dnnlib.EasyDict` with the following fields:
-
- type: Type of the persistent object, e.g. `'class'`.
- version: Internal version number of `torch_utils.persistence`.
-        module_src:  Original source code of the Python module.
- class_name: Class name in the original Python module.
- state: Internal state of the object.
-
- Example:
-
- @persistence.import_hook
- def wreck_my_network(meta):
- if meta.class_name == 'MyNetwork':
- print('MyNetwork is being imported. I will wreck it!')
- meta.module_src = meta.module_src.replace("True", "False")
- return meta
- """
- assert callable(hook)
- _import_hooks.append(hook)
-
-#----------------------------------------------------------------------------
-
-def _reconstruct_persistent_obj(meta):
- r"""Hook that is called internally by the `pickle` module to unpickle
- a persistent object.
- """
- meta = dnnlib.EasyDict(meta)
- meta.state = dnnlib.EasyDict(meta.state)
- for hook in _import_hooks:
- meta = hook(meta)
- assert meta is not None
-
- assert meta.version == _version
- module = _src_to_module(meta.module_src)
-
- assert meta.type == 'class'
- orig_class = module.__dict__[meta.class_name]
- decorator_class = persistent_class(orig_class)
- obj = decorator_class.__new__(decorator_class)
-
- setstate = getattr(obj, '__setstate__', None)
- if callable(setstate):
- setstate(meta.state) # pylint: disable=not-callable
- else:
- obj.__dict__.update(meta.state)
- return obj
-
-#----------------------------------------------------------------------------
-
-def _module_to_src(module):
- r"""Query the source code of a given Python module.
- """
- src = _module_to_src_dict.get(module, None)
- if src is None:
- src = inspect.getsource(module)
- _module_to_src_dict[module] = src
- _src_to_module_dict[src] = module
- return src
-
-def _src_to_module(src):
- r"""Get or create a Python module for the given source code.
- """
- module = _src_to_module_dict.get(src, None)
- if module is None:
- module_name = "_imported_module_" + uuid.uuid4().hex
- module = types.ModuleType(module_name)
- sys.modules[module_name] = module
- _module_to_src_dict[module] = src
- _src_to_module_dict[src] = module
- exec(src, module.__dict__) # pylint: disable=exec-used
- return module
-
-#----------------------------------------------------------------------------
-
-def _check_pickleable(obj):
- r"""Check that the given object is pickleable, raising an exception if
- it is not. This function is expected to be considerably more efficient
- than actually pickling the object.
- """
- def recurse(obj):
- if isinstance(obj, (list, tuple, set)):
- return [recurse(x) for x in obj]
- if isinstance(obj, dict):
- return [[recurse(x), recurse(y)] for x, y in obj.items()]
- if isinstance(obj, (str, int, float, bool, bytes, bytearray)):
- return None # Python primitive types are pickleable.
- if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor']:
- return None # NumPy arrays and PyTorch tensors are pickleable.
- if is_persistent(obj):
- return None # Persistent objects are pickleable, by virtue of the constructor check.
- return obj
- with io.BytesIO() as f:
- pickle.dump(recurse(obj), f)
-
-#----------------------------------------------------------------------------
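
A short round-trip sketch for the decorator above; it assumes the snippet lives in an importable module file (so `inspect.getsource` can read it) and that `dnnlib` and `torch` are available, as in the original codebase:

```python
# Round-trip sketch for persistence.persistent_class; assumes this code lives
# in an importable module file (inspect.getsource must be able to read it)
# and that dnnlib / torch are available, as in the original codebase.
import pickle

import torch
from torch_utils import persistence


@persistence.persistent_class
class TinyNet(torch.nn.Module):
    def __init__(self, width):
        super().__init__()
        self.fc = torch.nn.Linear(width, width)


net = TinyNet(width=8)
blob = pickle.dumps(net)        # module source code is stored alongside the state
restored = pickle.loads(blob)   # usable even if TinyNet's code later changes
print(restored.init_kwargs)     # {'width': 8}
```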
diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/helpers.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/helpers.py
deleted file mode 100644
index c4a58b34ea5ca6912fe53c63dede0a8696f5c024..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/e4e/encoders/helpers.py
+++ /dev/null
@@ -1,140 +0,0 @@
-from collections import namedtuple
-import torch
-import torch.nn.functional as F
-from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module
-
-"""
-ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Flatten(Module):
- def forward(self, input):
- return input.view(input.size(0), -1)
-
-
-def l2_norm(input, axis=1):
- norm = torch.norm(input, 2, axis, True)
- output = torch.div(input, norm)
- return output
-
-
-class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])):
- """ A named tuple describing a ResNet block. """
-
-
-def get_block(in_channel, depth, num_units, stride=2):
- return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)]
-
-
-def get_blocks(num_layers):
- if num_layers == 50:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=4),
- get_block(in_channel=128, depth=256, num_units=14),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 100:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=13),
- get_block(in_channel=128, depth=256, num_units=30),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 152:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=8),
- get_block(in_channel=128, depth=256, num_units=36),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- else:
- raise ValueError("Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers))
- return blocks
-
-
-class SEModule(Module):
- def __init__(self, channels, reduction):
- super(SEModule, self).__init__()
- self.avg_pool = AdaptiveAvgPool2d(1)
- self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False)
- self.relu = ReLU(inplace=True)
- self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False)
- self.sigmoid = Sigmoid()
-
- def forward(self, x):
- module_input = x
- x = self.avg_pool(x)
- x = self.fc1(x)
- x = self.relu(x)
- x = self.fc2(x)
- x = self.sigmoid(x)
- return module_input * x
-
-
-class bottleneck_IR(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-class bottleneck_IR_SE(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR_SE, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False),
- PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False),
- BatchNorm2d(depth),
- SEModule(depth, 16)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-def _upsample_add(x, y):
- """Upsample and add two feature maps.
- Args:
- x: (Variable) top feature map to be upsampled.
- y: (Variable) lateral feature map.
- Returns:
- (Variable) added feature map.
-    Note that in PyTorch, when the input size is odd, the upsampled feature map
-    produced with `F.upsample(..., scale_factor=2, mode='nearest')`
-    may not be equal to the lateral feature map size.
- e.g.
- original input size: [N,_,15,15] ->
- conv2d feature map size: [N,_,8,8] ->
- upsampled feature map size: [N,_,16,16]
- So we choose bilinear upsample which supports arbitrary output sizes.
- """
- _, _, H, W = y.size()
- return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
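
For reference, a quick sanity check of `get_blocks` and one IR-SE bottleneck from the helpers above; the tensor sizes are arbitrary examples and the import path assumes the `PTI` package root is on `sys.path`:

```python
# Quick sanity check of the helpers above; tensor sizes are arbitrary examples
# and the import path assumes the PTI package root is on sys.path.
import torch

from models.e4e.encoders.helpers import bottleneck_IR_SE, get_blocks

blocks = get_blocks(num_layers=50)
print([len(stage) for stage in blocks])      # -> [3, 4, 14, 3]

unit = blocks[1][0]                          # Bottleneck(in_channel=64, depth=128, stride=2)
layer = bottleneck_IR_SE(unit.in_channel, unit.depth, unit.stride)
x = torch.randn(1, 64, 56, 56)
print(layer(x).shape)                        # -> torch.Size([1, 128, 28, 28])
```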
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/__init__.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/pti/training/projectors/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hamelcubsfan/AutoGPT/tests/unit/test_chat.py b/spaces/hamelcubsfan/AutoGPT/tests/unit/test_chat.py
deleted file mode 100644
index 774f4103762c28d5a02e89c14b224fae0bc0756a..0000000000000000000000000000000000000000
--- a/spaces/hamelcubsfan/AutoGPT/tests/unit/test_chat.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Generated by CodiumAI
-import time
-import unittest
-from unittest.mock import patch
-
-from autogpt.chat import create_chat_message, generate_context
-
-
-class TestChat(unittest.TestCase):
- # Tests that the function returns a dictionary with the correct keys and values when valid strings are provided for role and content.
- def test_happy_path_role_content(self):
- result = create_chat_message("system", "Hello, world!")
- self.assertEqual(result, {"role": "system", "content": "Hello, world!"})
-
- # Tests that the function returns a dictionary with the correct keys and values when empty strings are provided for role and content.
- def test_empty_role_content(self):
- result = create_chat_message("", "")
- self.assertEqual(result, {"role": "", "content": ""})
-
- # Tests the behavior of the generate_context function when all input parameters are empty.
- @patch("time.strftime")
- def test_generate_context_empty_inputs(self, mock_strftime):
- # Mock the time.strftime function to return a fixed value
- mock_strftime.return_value = "Sat Apr 15 00:00:00 2023"
- # Arrange
- prompt = ""
- relevant_memory = ""
- full_message_history = []
- model = "gpt-3.5-turbo-0301"
-
- # Act
- result = generate_context(prompt, relevant_memory, full_message_history, model)
-
- # Assert
- expected_result = (
- -1,
- 47,
- 3,
- [
- {"role": "system", "content": ""},
- {
- "role": "system",
- "content": f"The current time and date is {time.strftime('%c')}",
- },
- {
- "role": "system",
- "content": f"This reminds you of these events from your past:\n\n\n",
- },
- ],
- )
- self.assertEqual(result, expected_result)
-
- # Tests that the function successfully generates a current_context given valid inputs.
- def test_generate_context_valid_inputs(self):
- # Given
- prompt = "What is your favorite color?"
- relevant_memory = "You once painted your room blue."
- full_message_history = [
- create_chat_message("user", "Hi there!"),
- create_chat_message("assistant", "Hello! How can I assist you today?"),
- create_chat_message("user", "Can you tell me a joke?"),
- create_chat_message(
- "assistant",
- "Why did the tomato turn red? Because it saw the salad dressing!",
- ),
- create_chat_message("user", "Haha, that's funny."),
- ]
- model = "gpt-3.5-turbo-0301"
-
- # When
- result = generate_context(prompt, relevant_memory, full_message_history, model)
-
- # Then
- self.assertIsInstance(result[0], int)
- self.assertIsInstance(result[1], int)
- self.assertIsInstance(result[2], int)
- self.assertIsInstance(result[3], list)
- self.assertGreaterEqual(result[0], 0)
- self.assertGreaterEqual(result[1], 0)
- self.assertGreaterEqual(result[2], 0)
- self.assertGreaterEqual(
- len(result[3]), 3
- ) # current_context should have at least 3 messages
- self.assertLessEqual(
- result[1], 2048
- ) # token limit for GPT-3.5-turbo-0301 is 2048 tokens
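
Outside the test harness, the two helpers exercised above can be driven directly; a minimal sketch, assuming the AutoGPT package in this Space is importable:

```python
# Minimal sketch of the helpers exercised by the tests above, assuming the
# AutoGPT package in this Space is importable.
from autogpt.chat import create_chat_message, generate_context

history = [create_chat_message("user", "Hi there!")]
next_index, tokens_used, insert_index, context = generate_context(
    "You are a helpful assistant.", "No relevant memory.", history,
    "gpt-3.5-turbo-0301",
)
print(tokens_used, [message["role"] for message in context])
```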
diff --git a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/conv2d_resample.py b/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/conv2d_resample.py
deleted file mode 100644
index 5eb5877d7ffe4af74a2165f1d8d8c39dfac2476b..0000000000000000000000000000000000000000
--- a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/conv2d_resample.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""2D convolution with optional up/downsampling."""
-
-import torch
-
-from .. import misc
-from . import conv2d_gradfix
-from . import upfirdn2d
-from .upfirdn2d import _parse_padding
-from .upfirdn2d import _get_filter_size
-
-#----------------------------------------------------------------------------
-
-def _get_weight_shape(w):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- shape = [int(sz) for sz in w.shape]
- misc.assert_shape(w, shape)
- return shape
-
-#----------------------------------------------------------------------------
-
-def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True):
- """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations.
- """
- _out_channels, _in_channels_per_group, kh, kw = _get_weight_shape(w)
-
- # Flip weight if requested.
- # Note: conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False).
- if not flip_weight and (kw > 1 or kh > 1):
- w = w.flip([2, 3])
-
- # Execute using conv2d_gradfix.
- op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d
- return op(x, w, stride=stride, padding=padding, groups=groups)
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False):
- r"""2D convolution with optional up/downsampling.
-
- Padding is performed only once at the beginning, not between the operations.
-
- Args:
- x: Input tensor of shape
- `[batch_size, in_channels, in_height, in_width]`.
- w: Weight tensor of shape
- `[out_channels, in_channels//groups, kernel_height, kernel_width]`.
- f: Low-pass filter for up/downsampling. Must be prepared beforehand by
- calling upfirdn2d.setup_filter(). None = identity (default).
- up: Integer upsampling factor (default: 1).
- down: Integer downsampling factor (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- groups: Split input channels into N groups (default: 1).
- flip_weight: False = convolution, True = correlation (default: True).
- flip_filter: False = convolution, True = correlation (default: False).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and (x.ndim == 4)
- assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype)
- assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32)
- assert isinstance(up, int) and (up >= 1)
- assert isinstance(down, int) and (down >= 1)
- assert isinstance(groups, int) and (groups >= 1)
- out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
- fw, fh = _get_filter_size(f)
- px0, px1, py0, py1 = _parse_padding(padding)
-
- # Adjust padding to account for up/downsampling.
- if up > 1:
- px0 += (fw + up - 1) // 2
- px1 += (fw - up) // 2
- py0 += (fh + up - 1) // 2
- py1 += (fh - up) // 2
- if down > 1:
- px0 += (fw - down + 1) // 2
- px1 += (fw - down) // 2
- py0 += (fh - down + 1) // 2
- py1 += (fh - down) // 2
-
- # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve.
- if kw == 1 and kh == 1 and (down > 1 and up == 1):
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- return x
-
- # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample.
- if kw == 1 and kh == 1 and (up > 1 and down == 1):
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter)
- return x
-
- # Fast path: downsampling only => use strided convolution.
- if down > 1 and up == 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight)
- return x
-
- # Fast path: upsampling with optional downsampling => use transpose strided convolution.
- if up > 1:
- if groups == 1:
- w = w.transpose(0, 1)
- else:
- w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw)
- w = w.transpose(1, 2)
- w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw)
- px0 -= kw - 1
- px1 -= kw - up
- py0 -= kh - 1
- py1 -= kh - up
- pxt = max(min(-px0, -px1), 0)
- pyt = max(min(-py0, -py1), 0)
- x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight))
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
- return x
-
- # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d.
- if up == 1 and down == 1:
- if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0:
- return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight)
-
- # Fallback: Generic reference implementation.
- x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
- return x
-
-#----------------------------------------------------------------------------
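
A brief usage sketch for `conv2d_resample`; the shapes and the low-pass filter are illustrative, and the import path assumes the `models` package root is on `sys.path`:

```python
# Usage sketch for conv2d_resample; shapes and the low-pass filter are
# illustrative, and the import path assumes the models package root is on sys.path.
import torch

from torch_utils.ops import upfirdn2d
from torch_utils.ops.conv2d_resample import conv2d_resample

x = torch.randn(1, 3, 64, 64)              # [batch_size, in_channels, H, W]
w = torch.randn(8, 3, 3, 3)                # [out_channels, in_channels, kh, kw]
f = upfirdn2d.setup_filter([1, 3, 3, 1])   # low-pass filter for up/downsampling

y = conv2d_resample(x=x, w=w, f=f, up=2, padding=1)   # 2x upsampling path
print(y.shape)                             # -> torch.Size([1, 8, 128, 128])
```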
diff --git a/spaces/hebert2099/MusicGen/tests/models/test_musicgen.py b/spaces/hebert2099/MusicGen/tests/models/test_musicgen.py
deleted file mode 100644
index 53eff4405ab7de18e0ae18df8c8f9959a1c9e031..0000000000000000000000000000000000000000
--- a/spaces/hebert2099/MusicGen/tests/models/test_musicgen.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.models import MusicGen
-
-
-class TestSEANetModel:
- def get_musicgen(self):
- mg = MusicGen.get_pretrained(name='debug', device='cpu')
- mg.set_generation_params(duration=2.0)
- return mg
-
- def test_base(self):
- mg = self.get_musicgen()
- assert mg.frame_rate == 25
- assert mg.sample_rate == 32000
- assert mg.audio_channels == 1
-
- def test_generate_unconditional(self):
- mg = self.get_musicgen()
- wav = mg.generate_unconditional(3)
- assert list(wav.shape) == [3, 1, 64000]
-
- def test_generate_continuation(self):
- mg = self.get_musicgen()
- prompt = torch.randn(3, 1, 32000)
- wav = mg.generate_continuation(prompt, 32000)
- assert list(wav.shape) == [3, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
-
- prompt = torch.randn(2, 1, 32000)
- with pytest.raises(AssertionError):
- wav = mg.generate_continuation(
- prompt, 32000, ['youpi', 'lapin dort', 'one too many'])
-
- def test_generate(self):
- mg = self.get_musicgen()
- wav = mg.generate(
- ['youpi', 'lapin dort'])
- assert list(wav.shape) == [2, 1, 64000]
diff --git a/spaces/hf4all/web-ui/_next/static/css/aa52c84dc63fe0c2.css b/spaces/hf4all/web-ui/_next/static/css/aa52c84dc63fe0c2.css
deleted file mode 100644
index a5906762da87944d1af55175437321b2b935f7a6..0000000000000000000000000000000000000000
--- a/spaces/hf4all/web-ui/_next/static/css/aa52c84dc63fe0c2.css
+++ /dev/null
@@ -1,21 +0,0 @@
-@font-face{font-family:Inter;font-style:normal;font-weight:400;font-display:swap;src:url(_next/static/media/Inter-Regular.f1f0c35b.woff2) format("woff2"),url(_next/static/media/Inter-Regular.f356e84a.woff) format("woff")}@font-face{font-family:Inter;font-style:normal;font-weight:500;font-display:swap;src:url(_next/static/media/Inter-Medium.dc792b50.woff2) format("woff2"),url(_next/static/media/Inter-Medium.ec7dd2d9.woff) format("woff")}@font-face{font-family:Inter;font-style:normal;font-weight:600;font-display:swap;src:url(_next/static/media/Inter-SemiBold.fcb100c7.woff2) format("woff2"),url(_next/static/media/Inter-SemiBold.55027e47.woff) format("woff")}@font-face{font-family:Inter;font-style:normal;font-weight:700;font-display:swap;src:url(_next/static/media/Inter-Bold.579e0f95.woff2) format("woff2"),url(_next/static/media/Inter-Bold.b1234477.woff) format("woff")}@font-face{font-family:Inter var;font-weight:100 900;font-style:normal;font-named-instance:"Regular";font-display:swap;src:url(_next/static/media/Inter-roman.var.b2129c00.woff2) format("woff2 supports variations(gvar)"),url(_next/static/media/Inter-roman.var.b2129c00.woff2) format("woff2-variations"),url(_next/static/media/Inter-roman.var.b2129c00.woff2) format("woff2")}/*
-! tailwindcss v3.3.2 | MIT License | https://tailwindcss.com
-*/*,:after,:before{box-sizing:border-box;border:0 solid #e5e7eb}:after,:before{--tw-content:""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,pre,samp{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dd,dl,figure,h1,h2,h3,h4,h5,h6,hr,p,pre{margin:0}fieldset{margin:0}fieldset,legend{padding:0}menu,ol,ul{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}[role=button],button{cursor:pointer}:disabled{cursor:default}audio,canvas,embed,iframe,img,object,svg,video{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*{scrollbar-color:auto;scrollbar-width:auto}:root{opacity:.88}*,:after,:before{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: 
;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width:640px){.container{max-width:640px}}@media (min-width:768px){.container{max-width:768px}}@media (min-width:1024px){.container{max-width:1024px}}@media (min-width:1280px){.container{max-width:1280px}}@media (min-width:1536px){.container{max-width:1536px}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border-width:0}.pointer-events-none{pointer-events:none}.invisible{visibility:hidden}.collapse{visibility:collapse}.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.inset-0{inset:0}.inset-y-0{top:0;bottom:0}.left-0{left:0}.right-0{right:0}.right-4{right:1rem}.right-\[-8px\]{right:-8px}.top-4{top:1rem}.top-\[-8px\]{top:-8px}.isolate{isolation:isolate}.z-10{z-index:10}.z-50{z-index:50}.m-5{margin:1.25rem}.-mx-1{margin-left:-.25rem;margin-right:-.25rem}.mx-10{margin-left:2.5rem;margin-right:2.5rem}.mx-3{margin-left:.75rem;margin-right:.75rem}.mx-5{margin-left:1.25rem;margin-right:1.25rem}.mx-auto{margin-left:auto;margin-right:auto}.my-3{margin-top:.75rem;margin-bottom:.75rem}.my-5{margin-top:1.25rem;margin-bottom:1.25rem}.my-8{margin-top:2rem;margin-bottom:2rem}.-ml-px{margin-left:-1px}.-mr-1{margin-right:-.25rem}.mb-0{margin-bottom:0}.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-3{margin-bottom:.75rem}.mb-4{margin-bottom:1rem}.mb-\[5px\]{margin-bottom:5px}.ml-1{margin-left:.25rem}.ml-2{margin-left:.5rem}.ml-6{margin-left:1.5rem}.ml-auto{margin-left:auto}.mr-1{margin-right:.25rem}.mr-2{margin-right:.5rem}.mr-\[10px\]{margin-right:10px}.mr-\[6px\]{margin-right:6px}.mt-1{margin-top:.25rem}.mt-2{margin-top:.5rem}.mt-3{margin-top:.75rem}.mt-5{margin-top:1.25rem}.mt-\[12px\]{margin-top:12px}.mt-auto{margin-top:auto}.block{display:block}.inline-block{display:inline-block}.inline{display:inline}.flex{display:flex}.inline-flex{display:inline-flex}.grid{display:grid}.hidden{display:none}.h-11{height:2.75rem}.h-4{height:1rem}.h-5{height:1.25rem}.h-6{height:1.5rem}.h-8{height:2rem}.h-9{height:2.25rem}.h-\[18px\]{height:18px}.h-\[1px\]{height:1px}.h-\[250px\]{height:250px}.h-\[400px\]{height:400px}.h-\[45px\]{height:45px}.h-full{height:100%}.h-px{height:1px}.h-screen{height:100vh}.max-h-60{max-height:15rem}.max-h-\[300px\]{max-height:300px}.max-h-full{max-height:100%}.max-h-screen{max-height:100vh}.min-h-\[2rem\]{min-height:2rem}.min-h-\[300px\]{min-height:300px}.min-h-\[400px\]{min-height:400px}.w-1\/6{width:16.666667%}.w-11{width:2.75rem}.w-11\/12{width:91.666667%}.w-3\/6{width:50%}.w-4{width:1rem}.w-5{width:1.25rem}.w-56{width:14rem}.w-6{width:1.5rem}.w-8{width:2rem}.w-\[1000px\]{width:1000px}.w-\[18px\]{width:18px}.w-\[200px\]{width:200
px}.w-\[230px\]{width:230px}.w-\[300px\]{width:300px}.w-\[30px\]{width:30px}.w-\[400px\]{width:400px}.w-\[600px\]{width:600px}.w-\[79px\]{width:79px}.w-\[800px\]{width:800px}.w-fit{width:-moz-fit-content;width:fit-content}.w-full{width:100%}.min-w-0{min-width:0}.min-w-\[150px\]{min-width:150px}.min-w-\[2rem\]{min-width:2rem}.min-w-max{min-width:-moz-max-content;min-width:max-content}.max-w-fit{max-width:-moz-fit-content;max-width:fit-content}.flex-1{flex:1 1 0%}.shrink-0{flex-shrink:0}.grow{flex-grow:1}.origin-top-right{transform-origin:top right}.translate-x-1{--tw-translate-x:0.25rem}.translate-x-1,.translate-x-6{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.translate-x-6{--tw-translate-x:1.5rem}.rotate-180{--tw-rotate:180deg}.rotate-180,.scale-100{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.scale-100{--tw-scale-x:1;--tw-scale-y:1}.scale-95{--tw-scale-x:.95;--tw-scale-y:.95}.scale-95,.transform{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.cursor-default{cursor:default}.cursor-not-allowed{cursor:not-allowed}.cursor-pointer{cursor:pointer}.cursor-wait{cursor:wait}.select-none{-webkit-user-select:none;-moz-user-select:none;user-select:none}.resize-none{resize:none}.auto-rows-fr{grid-auto-rows:minmax(0,1fr)}.grid-cols-1{grid-template-columns:repeat(1,minmax(0,1fr))}.grid-cols-2{grid-template-columns:repeat(2,minmax(0,1fr))}.grid-cols-3{grid-template-columns:repeat(3,minmax(0,1fr))}.grid-cols-\[auto_1fr\]{grid-template-columns:auto 1fr}.flex-row{flex-direction:row}.flex-row-reverse{flex-direction:row-reverse}.flex-col{flex-direction:column}.flex-col-reverse{flex-direction:column-reverse}.flex-wrap{flex-wrap:wrap}.items-start{align-items:flex-start}.items-center{align-items:center}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.gap-1{gap:.25rem}.gap-2{gap:.5rem}.gap-3{gap:.75rem}.gap-4{gap:1rem}.gap-5{gap:1.25rem}.gap-\[10px\]{gap:10px}.gap-\[12px\]{gap:12px}.gap-\[5px\]{gap:5px}.space-x-3>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(.75rem * var(--tw-space-x-reverse));margin-left:calc(.75rem * calc(1 - var(--tw-space-x-reverse)))}.space-x-4>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(1rem * var(--tw-space-x-reverse));margin-left:calc(1rem * calc(1 - var(--tw-space-x-reverse)))}.space-y-2>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(.5rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(.5rem * var(--tw-space-y-reverse))}.space-y-4>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(1rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(1rem * var(--tw-space-y-reverse))}.divide-y>:not([hidden])~:not([hidden]){--tw-divide-y-reverse:0;border-top-width:calc(1px * calc(1 - var(--tw-divide-y-reverse)));border-bottom-width:calc(1px * var(--tw-divide-y-reverse))}.divide-gray-100>:not([hidden])~:not([hidden]){--tw-divide-opacity:1;border-color:rgb(243 244 
246/var(--tw-divide-opacity))}.self-end{align-self:flex-end}.overflow-auto{overflow:auto}.overflow-hidden{overflow:hidden}.overflow-y-auto{overflow-y:auto}.overflow-x-hidden{overflow-x:hidden}.truncate{overflow:hidden;text-overflow:ellipsis}.truncate,.whitespace-nowrap{white-space:nowrap}.whitespace-pre-wrap{white-space:pre-wrap}.break-all{word-break:break-all}.rounded{border-radius:.25rem}.rounded-2xl{border-radius:1rem}.rounded-3xl{border-radius:1.5rem}.rounded-\[10px\]{border-radius:10px}.rounded-\[15px\]{border-radius:15px}.rounded-\[20px\]{border-radius:20px}.rounded-\[30px\]{border-radius:30px}.rounded-\[6px\]{border-radius:6px}.rounded-full{border-radius:9999px}.rounded-lg{border-radius:.5rem}.rounded-md{border-radius:.375rem}.rounded-sm{border-radius:.125rem}.rounded-xl{border-radius:.75rem}.rounded-b-lg{border-bottom-right-radius:.5rem;border-bottom-left-radius:.5rem}.rounded-l-md{border-top-left-radius:.375rem;border-bottom-left-radius:.375rem}.rounded-r-md{border-top-right-radius:.375rem;border-bottom-right-radius:.375rem}.border{border-width:1px}.border-0{border-width:0}.border-2{border-width:2px}.border-b{border-bottom-width:1px}.border-t{border-top-width:1px}.border-solid{border-style:solid}.border-dashed{border-style:dashed}.border-\[\#ffffff4d\]{border-color:#ffffff4d}.border-gray-200{--tw-border-opacity:1;border-color:rgb(229 231 235/var(--tw-border-opacity))}.border-gray-300{--tw-border-opacity:1;border-color:rgb(209 213 219/var(--tw-border-opacity))}.border-primary-border{--tw-border-opacity:1;border-color:rgb(var(--primary-border)/var(--tw-border-opacity))}.border-b-slate-100{--tw-border-opacity:1;border-bottom-color:rgb(241 245 249/var(--tw-border-opacity))}.bg-\[\#00000014\]{background-color:#00000014}.bg-\[\#e6e7e8\]{--tw-bg-opacity:1;background-color:rgb(230 231 232/var(--tw-bg-opacity))}.bg-black{--tw-bg-opacity:1;background-color:rgb(0 0 0/var(--tw-bg-opacity))}.bg-black\/30{background-color:rgba(0,0,0,.3)}.bg-black\/50{background-color:rgba(0,0,0,.5)}.bg-blue-600{--tw-bg-opacity:1;background-color:rgb(37 99 235/var(--tw-bg-opacity))}.bg-gray-100{--tw-bg-opacity:1;background-color:rgb(243 244 246/var(--tw-bg-opacity))}.bg-gray-200{--tw-bg-opacity:1;background-color:rgb(229 231 235/var(--tw-bg-opacity))}.bg-primary-background{--tw-bg-opacity:1;background-color:rgb(var(--primary-background)/var(--tw-bg-opacity))}.bg-primary-blue{--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}.bg-primary-border{--tw-bg-opacity:1;background-color:rgb(var(--primary-border)/var(--tw-bg-opacity))}.bg-secondary{--tw-bg-opacity:1;background-color:rgb(var(--color-secondary)/var(--tw-bg-opacity))}.bg-slate-100{--tw-bg-opacity:1;background-color:rgb(241 245 249/var(--tw-bg-opacity))}.bg-transparent{background-color:transparent}.bg-violet-500{--tw-bg-opacity:1;background-color:rgb(139 92 246/var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity:1;background-color:rgb(255 255 
255/var(--tw-bg-opacity))}.bg-opacity-20{--tw-bg-opacity:0.2}.bg-opacity-40{--tw-bg-opacity:0.4}.bg-opacity-70{--tw-bg-opacity:0.7}.bg-opacity-90{--tw-bg-opacity:0.9}.object-contain{-o-object-fit:contain;object-fit:contain}.\!p-0{padding:0!important}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-5{padding:1.25rem}.p-6{padding:1.5rem}.p-\[6px\]{padding:6px}.px-1{padding-left:.25rem;padding-right:.25rem}.px-10{padding-left:2.5rem;padding-right:2.5rem}.px-2{padding-left:.5rem;padding-right:.5rem}.px-2\.5{padding-left:.625rem;padding-right:.625rem}.px-3{padding-left:.75rem;padding-right:.75rem}.px-4{padding-left:1rem;padding-right:1rem}.px-5{padding-left:1.25rem;padding-right:1.25rem}.px-6{padding-left:1.5rem;padding-right:1.5rem}.px-\[14px\]{padding-left:14px;padding-right:14px}.px-\[15px\]{padding-left:15px;padding-right:15px}.py-1{padding-top:.25rem;padding-bottom:.25rem}.py-1\.5{padding-top:.375rem;padding-bottom:.375rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.py-3{padding-top:.75rem;padding-bottom:.75rem}.py-4{padding-top:1rem;padding-bottom:1rem}.py-5{padding-top:1.25rem;padding-bottom:1.25rem}.py-6{padding-top:1.5rem;padding-bottom:1.5rem}.py-\[10px\]{padding-top:10px;padding-bottom:10px}.py-\[5px\]{padding-top:5px;padding-bottom:5px}.py-\[6px\]{padding-top:6px;padding-bottom:6px}.pb-10{padding-bottom:2.5rem}.pb-4{padding-bottom:1rem}.pb-\[10px\]{padding-bottom:10px}.pl-3{padding-left:.75rem}.pr-10{padding-right:2.5rem}.pr-2{padding-right:.5rem}.pr-4{padding-right:1rem}.pr-8{padding-right:2rem}.pr-9{padding-right:2.25rem}.pt-2{padding-top:.5rem}.pt-3{padding-top:.75rem}.text-left{text-align:left}.text-center{text-align:center}.text-right{text-align:right}.\!text-base{font-size:1rem!important;line-height:1.5rem!important}.text-base{font-size:1rem;line-height:1.5rem}.text-lg{font-size:1.125rem;line-height:1.75rem}.text-sm{font-size:.875rem;line-height:1.25rem}.text-xs{font-size:.75rem;line-height:1rem}.font-bold{font-weight:700}.font-medium{font-weight:500}.font-normal{font-weight:400}.font-semibold{font-weight:600}.leading-6{line-height:1.5rem}.leading-none{line-height:1}.leading-tight{line-height:1.25}.tracking-widest{letter-spacing:.1em}.text-\[\#303030\]{--tw-text-opacity:1;color:rgb(48 48 48/var(--tw-text-opacity))}.text-gray-400{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.text-gray-600{--tw-text-opacity:1;color:rgb(75 85 99/var(--tw-text-opacity))}.text-gray-800{--tw-text-opacity:1;color:rgb(31 41 55/var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity:1;color:rgb(17 24 39/var(--tw-text-opacity))}.text-indigo-600{--tw-text-opacity:1;color:rgb(79 70 229/var(--tw-text-opacity))}.text-light-text{--tw-text-opacity:1;color:rgb(var(--light-text)/var(--tw-text-opacity))}.text-primary-text{--tw-text-opacity:1;color:rgb(var(--primary-text)/var(--tw-text-opacity))}.text-red-500{--tw-text-opacity:1;color:rgb(239 68 68/var(--tw-text-opacity))}.text-secondary-text{--tw-text-opacity:1;color:rgb(var(--secondary-text)/var(--tw-text-opacity))}.text-slate-500{--tw-text-opacity:1;color:rgb(100 116 139/var(--tw-text-opacity))}.text-slate-700{--tw-text-opacity:1;color:rgb(51 65 85/var(--tw-text-opacity))}.text-slate-900{--tw-text-opacity:1;color:rgb(15 23 42/var(--tw-text-opacity))}.text-violet-200{--tw-text-opacity:1;color:rgb(221 214 254/var(--tw-text-opacity))}.text-violet-400{--tw-text-opacity:1;color:rgb(167 139 250/var(--tw-text-opacity))}.text-white{--tw-text-opacity:1;color:rgb(255 255 
255/var(--tw-text-opacity))}.underline{text-decoration-line:underline}.opacity-0{opacity:0}.opacity-100{opacity:1}.opacity-30{opacity:.3}.opacity-50{opacity:.5}.opacity-70{opacity:.7}.opacity-80{opacity:.8}.shadow-2xl{--tw-shadow:0 25px 50px -12px rgba(0,0,0,.25);--tw-shadow-colored:0 25px 50px -12px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-\[hsl\(206_22\%_7\%_\/_35\%\)_0px_10px_38px_-10px\2c _hsl\(206_22\%_7\%_\/_20\%\)_0px_10px_20px_-15px\]{--tw-shadow:rgba(14,18,22,.35) 0px 10px 38px -10px,rgba(14,18,22,.2) 0px 10px 20px -15px;--tw-shadow-colored:0px 10px 38px -10px var(--tw-shadow-color),0px 10px 20px -15px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-lg{--tw-shadow:0 10px 15px -3px rgba(0,0,0,.1),0 4px 6px -4px rgba(0,0,0,.1);--tw-shadow-colored:0 10px 15px -3px var(--tw-shadow-color),0 4px 6px -4px var(--tw-shadow-color)}.shadow-lg,.shadow-sm{box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-sm{--tw-shadow:0 1px 2px 0 rgba(0,0,0,.05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color)}.outline-none{outline:2px solid transparent;outline-offset:2px}.ring-1{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.ring-inset{--tw-ring-inset:inset}.ring-black{--tw-ring-opacity:1;--tw-ring-color:rgb(0 0 0/var(--tw-ring-opacity))}.ring-gray-300{--tw-ring-opacity:1;--tw-ring-color:rgb(209 213 219/var(--tw-ring-opacity))}.ring-primary-border{--tw-ring-opacity:1;--tw-ring-color:rgb(var(--primary-border)/var(--tw-ring-opacity))}.ring-opacity-5{--tw-ring-opacity:0.05}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.backdrop-blur-sm{--tw-backdrop-blur:blur(4px);-webkit-backdrop-filter:var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia);backdrop-filter:var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) 
var(--tw-backdrop-sepia)}.transition{transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,-webkit-backdrop-filter;transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter,-webkit-backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.transition-opacity{transition-property:opacity;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.duration-100{transition-duration:.1s}.duration-75{transition-duration:75ms}.ease-in{transition-timing-function:cubic-bezier(.4,0,1,1)}.ease-out{transition-timing-function:cubic-bezier(0,0,.2,1)}.will-change-\[transform\2c opacity\]{will-change:transform,opacity}.scrollbar-thin{scrollbar-color:var(--scrollbar-thumb,initial) var(--scrollbar-track,initial)}.scrollbar-thin::-webkit-scrollbar-track{background-color:var(--scrollbar-track);border-radius:var(--scrollbar-track-radius)}.scrollbar-thin::-webkit-scrollbar-track:hover{background-color:var(--scrollbar-track-hover,var(--scrollbar-track))}.scrollbar-thin::-webkit-scrollbar-track:active{background-color:var(--scrollbar-track-active,var(--scrollbar-track-hover,var(--scrollbar-track)))}.scrollbar-thin::-webkit-scrollbar-thumb{background-color:var(--scrollbar-thumb);border-radius:var(--scrollbar-thumb-radius)}.scrollbar-thin::-webkit-scrollbar-thumb:hover{background-color:var(--scrollbar-thumb-hover,var(--scrollbar-thumb))}.scrollbar-thin::-webkit-scrollbar-thumb:active{background-color:var(--scrollbar-thumb-active,var(--scrollbar-thumb-hover,var(--scrollbar-thumb)))}.scrollbar-thin::-webkit-scrollbar-corner{background-color:var(--scrollbar-corner);border-radius:var(--scrollbar-corner-radius)}.scrollbar-thin::-webkit-scrollbar-corner:hover{background-color:var(--scrollbar-corner-hover,var(--scrollbar-corner))}.scrollbar-thin::-webkit-scrollbar-corner:active{background-color:var(--scrollbar-corner-active,var(--scrollbar-corner-hover,var(--scrollbar-corner)))}.scrollbar-thin{scrollbar-width:thin}.scrollbar-thin::-webkit-scrollbar{display:block;width:8px;height:8px}.scrollbar-none{scrollbar-width:none}.scrollbar-none::-webkit-scrollbar{display:none}body,html{font-family:Inter,"system-ui"}@supports(font-variation-settings:normal){body,html{font-family:Inter var,"system-ui"}}body{font-size:100%}:focus-visible{outline:none}:root.light{color-scheme:light;--color-primary-blue:73 135 252;--color-secondary:242 242 242;--color-primary-purple:103 86 189;--primary-background:255 255 255;--primary-text:48 48 48;--secondary-text:128 128 128;--light-text:190 190 190;--primary-border:237 237 237/*!
- Theme: GitHub
- Description: Light theme as seen on github.com
- Author: github.com
- Maintainer: @Hirse
- Updated: 2021-05-15
-
- Outdated base version: https://github.com/primer/github-syntax-light
- Current colors taken from GitHub's CSS
-*/}:root.light pre code.hljs{display:block;overflow-x:auto;padding:1em}:root.light code.hljs{padding:3px 5px}:root.light .hljs{color:#24292e;background:#fff}:root.light .hljs-doctag,:root.light .hljs-keyword,:root.light .hljs-meta .hljs-keyword,:root.light .hljs-template-tag,:root.light .hljs-template-variable,:root.light .hljs-type,:root.light .hljs-variable.language_{color:#d73a49}:root.light .hljs-title,:root.light .hljs-title.class_,:root.light .hljs-title.class_.inherited__,:root.light .hljs-title.function_{color:#6f42c1}:root.light .hljs-attr,:root.light .hljs-attribute,:root.light .hljs-literal,:root.light .hljs-meta,:root.light .hljs-number,:root.light .hljs-operator,:root.light .hljs-selector-attr,:root.light .hljs-selector-class,:root.light .hljs-selector-id,:root.light .hljs-variable{color:#005cc5}:root.light .hljs-meta .hljs-string,:root.light .hljs-regexp,:root.light .hljs-string{color:#032f62}:root.light .hljs-built_in,:root.light .hljs-symbol{color:#e36209}:root.light .hljs-code,:root.light .hljs-comment,:root.light .hljs-formula{color:#6a737d}:root.light .hljs-name,:root.light .hljs-quote,:root.light .hljs-selector-pseudo,:root.light .hljs-selector-tag{color:#22863a}:root.light .hljs-subst{color:#24292e}:root.light .hljs-section{color:#005cc5;font-weight:700}:root.light .hljs-bullet{color:#735c0f}:root.light .hljs-emphasis{color:#24292e;font-style:italic}:root.light .hljs-strong{color:#24292e;font-weight:700}:root.light .hljs-addition{color:#22863a;background-color:#f0fff4}:root.light .hljs-deletion{color:#b31d28;background-color:#ffeef0}:root.dark{color-scheme:dark;--color-primary-blue:50 104 206;--color-secondary:46 46 46;--color-primary-purple:57 41 141;--primary-background:25 25 25;--primary-text:223 223 223;--secondary-text:127 127 127;--light-text:79 79 79;--primary-border:53 53 53/*!
- Theme: GitHub Dark
- Description: Dark theme as seen on github.com
- Author: github.com
- Maintainer: @Hirse
- Updated: 2021-05-15
-
- Outdated base version: https://github.com/primer/github-syntax-dark
- Current colors taken from GitHub's CSS
-*/}:root.dark pre code.hljs{display:block;overflow-x:auto;padding:1em}:root.dark code.hljs{padding:3px 5px}:root.dark .hljs{color:#c9d1d9;background:#0d1117}:root.dark .hljs-doctag,:root.dark .hljs-keyword,:root.dark .hljs-meta .hljs-keyword,:root.dark .hljs-template-tag,:root.dark .hljs-template-variable,:root.dark .hljs-type,:root.dark .hljs-variable.language_{color:#ff7b72}:root.dark .hljs-title,:root.dark .hljs-title.class_,:root.dark .hljs-title.class_.inherited__,:root.dark .hljs-title.function_{color:#d2a8ff}:root.dark .hljs-attr,:root.dark .hljs-attribute,:root.dark .hljs-literal,:root.dark .hljs-meta,:root.dark .hljs-number,:root.dark .hljs-operator,:root.dark .hljs-selector-attr,:root.dark .hljs-selector-class,:root.dark .hljs-selector-id,:root.dark .hljs-variable{color:#79c0ff}:root.dark .hljs-meta .hljs-string,:root.dark .hljs-regexp,:root.dark .hljs-string{color:#a5d6ff}:root.dark .hljs-built_in,:root.dark .hljs-symbol{color:#ffa657}:root.dark .hljs-code,:root.dark .hljs-comment,:root.dark .hljs-formula{color:#8b949e}:root.dark .hljs-name,:root.dark .hljs-quote,:root.dark .hljs-selector-pseudo,:root.dark .hljs-selector-tag{color:#7ee787}:root.dark .hljs-subst{color:#c9d1d9}:root.dark .hljs-section{color:#1f6feb;font-weight:700}:root.dark .hljs-bullet{color:#f2cc60}:root.dark .hljs-emphasis{color:#c9d1d9;font-style:italic}:root.dark .hljs-strong{color:#c9d1d9;font-weight:700}:root.dark .hljs-addition{color:#aff5b4;background-color:#033a16}:root.dark .hljs-deletion{color:#ffdcd7;background-color:#67060c}.placeholder\:text-gray-400::-moz-placeholder{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.placeholder\:text-gray-400::placeholder{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.placeholder\:text-slate-400::-moz-placeholder{--tw-text-opacity:1;color:rgb(148 163 184/var(--tw-text-opacity))}.placeholder\:text-slate-400::placeholder{--tw-text-opacity:1;color:rgb(148 163 184/var(--tw-text-opacity))}.group:hover .group-hover\:visible{visibility:visible}.group:hover .group-hover\:block{display:block}.aria-selected\:bg-slate-100[aria-selected=true]{--tw-bg-opacity:1;background-color:rgb(241 245 249/var(--tw-bg-opacity))}.data-\[disabled\]\:pointer-events-none[data-disabled]{pointer-events:none}@keyframes slideUpAndFade{0%{opacity:0;transform:translateY(2px)}to{opacity:1;transform:translateY(0)}}.data-\[state\=delayed-open\]\:data-\[side\=bottom\]\:animate-slideUpAndFade[data-side=bottom][data-state=delayed-open]{animation:slideUpAndFade .4s cubic-bezier(.16,1,.3,1)}@keyframes slideRightAndFade{0%{opacity:0;transform:translateX(2px)}to{opacity:1;transform:translateX(0)}}.data-\[state\=delayed-open\]\:data-\[side\=left\]\:animate-slideRightAndFade[data-side=left][data-state=delayed-open]{animation:slideRightAndFade .4s cubic-bezier(.16,1,.3,1)}@keyframes slideLeftAndFade{0%{opacity:0;transform:translateX(2px)}to{opacity:1;transform:translateX(0)}}.data-\[state\=delayed-open\]\:data-\[side\=right\]\:animate-slideLeftAndFade[data-side=right][data-state=delayed-open]{animation:slideLeftAndFade .4s cubic-bezier(.16,1,.3,1)}@keyframes slideDownAndFade{0%{opacity:0;transform:translateY(-2px)}to{opacity:1;transform:translateY(0)}}.data-\[state\=delayed-open\]\:data-\[side\=top\]\:animate-slideDownAndFade[data-side=top][data-state=delayed-open]{animation:slideDownAndFade .4s cubic-bezier(.16,1,.3,1)}.data-\[state\=open\]\:bg-slate-100[data-state=open]{--tw-bg-opacity:1;background-color:rgb(241 245 
249/var(--tw-bg-opacity))}.data-\[disabled\]\:opacity-50[data-disabled]{opacity:.5}.ui-active\:bg-primary-blue[data-headlessui-state~=active]{--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}.ui-active\:text-white[data-headlessui-state~=active]{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}:where([data-headlessui-state~=active]) .ui-active\:bg-primary-blue{--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}:where([data-headlessui-state~=active]) .ui-active\:text-white{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}.ui-not-active\:text-secondary-text[data-headlessui-state]:not([data-headlessui-state~=active]){--tw-text-opacity:1;color:rgb(var(--secondary-text)/var(--tw-text-opacity))}:where([data-headlessui-state]:not([data-headlessui-state~=active])) .ui-not-active\:text-secondary-text:not([data-headlessui-state]){--tw-text-opacity:1;color:rgb(var(--secondary-text)/var(--tw-text-opacity))}.hover\:border-gray-400:hover{--tw-border-opacity:1;border-color:rgb(156 163 175/var(--tw-border-opacity))}.hover\:bg-gray-50:hover{--tw-bg-opacity:1;background-color:rgb(249 250 251/var(--tw-bg-opacity))}.hover\:bg-opacity-100:hover{--tw-bg-opacity:1}.hover\:text-primary-text:hover{--tw-text-opacity:1;color:rgb(var(--primary-text)/var(--tw-text-opacity))}.hover\:text-violet-100:hover{--tw-text-opacity:1;color:rgb(237 233 254/var(--tw-text-opacity))}.hover\:opacity-100:hover{opacity:1}.hover\:opacity-80:hover{opacity:.8}.focus\:z-10:focus{z-index:10}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.focus\:ring-2:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.focus\:ring-inset:focus{--tw-ring-inset:inset}.focus\:ring-indigo-600:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(79 70 229/var(--tw-ring-opacity))}.focus\:ring-slate-400:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(148 163 184/var(--tw-ring-opacity))}.focus\:ring-offset-2:focus{--tw-ring-offset-width:2px}.focus-visible\:ring-2:focus-visible{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.focus-visible\:ring-white:focus-visible{--tw-ring-opacity:1;--tw-ring-color:rgb(255 255 255/var(--tw-ring-opacity))}.focus-visible\:ring-opacity-75:focus-visible{--tw-ring-opacity:0.75}.disabled\:pointer-events-none:disabled{pointer-events:none}.disabled\:cursor-not-allowed:disabled{cursor:not-allowed}.disabled\:opacity-50:disabled{opacity:.5}:is(.dark .dark\:border-gray-500){--tw-border-opacity:1;border-color:rgb(107 114 128/var(--tw-border-opacity))}:is(.dark .dark\:border-b-slate-700){--tw-border-opacity:1;border-bottom-color:rgb(51 65 85/var(--tw-border-opacity))}:is(.dark .dark\:bg-\[\#444a5354\]){background-color:#444a5354}:is(.dark .dark\:bg-\[\#ffffff26\]){background-color:#ffffff26}:is(.dark .dark\:bg-gray-600){--tw-bg-opacity:1;background-color:rgb(75 85 99/var(--tw-bg-opacity))}:is(.dark .dark\:bg-primary-blue){--tw-bg-opacity:1;background-color:rgb(var(--color-primary-blue)/var(--tw-bg-opacity))}:is(.dark 
.dark\:bg-slate-700){--tw-bg-opacity:1;background-color:rgb(51 65 85/var(--tw-bg-opacity))}:is(.dark .dark\:bg-slate-800){--tw-bg-opacity:1;background-color:rgb(30 41 59/var(--tw-bg-opacity))}:is(.dark .dark\:bg-slate-900){--tw-bg-opacity:1;background-color:rgb(15 23 42/var(--tw-bg-opacity))}:is(.dark .dark\:text-gray-100){--tw-text-opacity:1;color:rgb(243 244 246/var(--tw-text-opacity))}:is(.dark .dark\:text-gray-300){--tw-text-opacity:1;color:rgb(209 213 219/var(--tw-text-opacity))}:is(.dark .dark\:text-primary-text){--tw-text-opacity:1;color:rgb(var(--primary-text)/var(--tw-text-opacity))}:is(.dark .dark\:text-slate-400){--tw-text-opacity:1;color:rgb(148 163 184/var(--tw-text-opacity))}:is(.dark .dark\:text-slate-50){--tw-text-opacity:1;color:rgb(248 250 252/var(--tw-text-opacity))}:is(.dark .dark\:aria-selected\:bg-slate-700[aria-selected=true]){--tw-bg-opacity:1;background-color:rgb(51 65 85/var(--tw-bg-opacity))}:is(.dark .dark\:data-\[state\=open\]\:bg-slate-800[data-state=open]){--tw-bg-opacity:1;background-color:rgb(30 41 59/var(--tw-bg-opacity))}:is(.dark .dark\:focus\:ring-slate-400:focus){--tw-ring-opacity:1;--tw-ring-color:rgb(148 163 184/var(--tw-ring-opacity))}:is(.dark .dark\:focus\:ring-offset-slate-900:focus){--tw-ring-offset-color:#0f172a}@media (min-width:640px){.sm\:flex{display:flex}.sm\:max-w-lg{max-width:32rem}.sm\:grid-cols-2{grid-template-columns:repeat(2,minmax(0,1fr))}.sm\:flex-row{flex-direction:row}.sm\:items-center{align-items:center}.sm\:justify-end{justify-content:flex-end}.sm\:space-x-2>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(.5rem * var(--tw-space-x-reverse));margin-left:calc(.5rem * calc(1 - var(--tw-space-x-reverse)))}.sm\:space-x-3>:not([hidden])~:not([hidden]){--tw-space-x-reverse:0;margin-right:calc(.75rem * var(--tw-space-x-reverse));margin-left:calc(.75rem * calc(1 - var(--tw-space-x-reverse)))}.sm\:space-y-0>:not([hidden])~:not([hidden]){--tw-space-y-reverse:0;margin-top:calc(0px * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(0px * var(--tw-space-y-reverse))}.sm\:rounded-lg{border-radius:.5rem}.sm\:text-left{text-align:left}.sm\:text-sm{font-size:.875rem;line-height:1.25rem}.sm\:leading-6{line-height:1.5rem}}.\[\&_\[cmdk-group-heading\]\]\:px-2 [cmdk-group-heading]{padding-left:.5rem;padding-right:.5rem}.\[\&_\[cmdk-group-heading\]\]\:pb-1\.5 [cmdk-group-heading]{padding-bottom:.375rem}.\[\&_\[cmdk-group-heading\]\]\:text-sm [cmdk-group-heading]{font-size:.875rem;line-height:1.25rem}.\[\&_\[cmdk-group-heading\]\]\:font-medium [cmdk-group-heading]{font-weight:500}.\[\&_\[cmdk-group-heading\]\]\:font-semibold [cmdk-group-heading]{font-weight:600}.\[\&_\[cmdk-group-heading\]\]\:text-slate-500 [cmdk-group-heading]{--tw-text-opacity:1;color:rgb(100 116 139/var(--tw-text-opacity))}.\[\&_\[cmdk-group-heading\]\]\:text-slate-900 [cmdk-group-heading]{--tw-text-opacity:1;color:rgb(15 23 42/var(--tw-text-opacity))}:is(.dark .\[\&_\[cmdk-group-heading\]\]\:dark\:text-slate-300) [cmdk-group-heading]{--tw-text-opacity:1;color:rgb(203 213 225/var(--tw-text-opacity))}.\[\&_\[cmdk-group\]\]\:px-2 [cmdk-group]{padding-left:.5rem;padding-right:.5rem}.\[\&_\[cmdk-input-wrapper\]_svg\]\:h-5 [cmdk-input-wrapper] svg{height:1.25rem}.\[\&_\[cmdk-input-wrapper\]_svg\]\:w-5 [cmdk-input-wrapper] svg{width:1.25rem}.\[\&_\[cmdk-input\]\]\:h-12 [cmdk-input]{height:3rem}.\[\&_\[cmdk-item\]\]\:px-2 [cmdk-item]{padding-left:.5rem;padding-right:.5rem}.\[\&_\[cmdk-item\]\]\:py-3 
[cmdk-item]{padding-top:.75rem;padding-bottom:.75rem}.\[\&_\[cmdk-item\]_svg\]\:h-5 [cmdk-item] svg{height:1.25rem}.\[\&_\[cmdk-item\]_svg\]\:w-5 [cmdk-item] svg{width:1.25rem}.\[\&_\[dialog-overlay\]\]\:bg-red-100 [dialog-overlay]{--tw-bg-opacity:1;background-color:rgb(254 226 226/var(--tw-bg-opacity))}
\ No newline at end of file
diff --git a/spaces/huang4414/White-box-Cartoonization/wbc/network.py b/spaces/huang4414/White-box-Cartoonization/wbc/network.py
deleted file mode 100644
index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000
--- a/spaces/huang4414/White-box-Cartoonization/wbc/network.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import tensorflow as tf
-import numpy as np
-import tensorflow.contrib.slim as slim
-
-
-
-def resblock(inputs, out_channel=32, name='resblock'):
-
-    with tf.variable_scope(name):
-
-        x = slim.convolution2d(inputs, out_channel, [3, 3],
-                               activation_fn=None, scope='conv1')
-        x = tf.nn.leaky_relu(x)
-        x = slim.convolution2d(x, out_channel, [3, 3],
-                               activation_fn=None, scope='conv2')
-
-        return x + inputs
-
-
-
-
-def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False):
-    with tf.variable_scope(name, reuse=reuse):
-
-        x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None)
-        x0 = tf.nn.leaky_relu(x0)
-
-        x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None)
-        x1 = tf.nn.leaky_relu(x1)
-        x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None)
-        x1 = tf.nn.leaky_relu(x1)
-
-        x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None)
-        x2 = tf.nn.leaky_relu(x2)
-        x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None)
-        x2 = tf.nn.leaky_relu(x2)
-
-        for idx in range(num_blocks):
-            x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx))
-
-        x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None)
-        x2 = tf.nn.leaky_relu(x2)
-
-        h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2]
-        x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2))
-        x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None)
-        x3 = tf.nn.leaky_relu(x3)
-        x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None)
-        x3 = tf.nn.leaky_relu(x3)
-
-        h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2]
-        x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2))
-        x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None)
-        x4 = tf.nn.leaky_relu(x4)
-        x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None)
-
-        return x4
-
-if __name__ == '__main__':
-
-
-    pass
\ No newline at end of file
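The deleted network.py above builds the White-box Cartoonization generator as a TensorFlow 1.x graph. Below is a minimal sketch of how such a generator might be instantiated and run, assuming TF 1.x with tf.contrib.slim installed and the wbc package layout shown in the diff; the checkpoint path and dummy input are placeholders, not anything taken from the original Space.

```python
# Sketch only: feed a preprocessed photo through unet_generator under TF 1.x.
import numpy as np
import tensorflow as tf

from wbc.network import unet_generator  # module deleted in the diff above (path assumed)

input_photo = tf.placeholder(tf.float32, [1, None, None, 3], name="input_photo")
output = unet_generator(input_photo)                 # cartoonized output, roughly in [-1, 1]
output = tf.clip_by_value(output, -0.999999, 0.999999)

saver = tf.train.Saver(var_list=tf.trainable_variables())
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, "saved_models/model")        # hypothetical checkpoint location
    dummy = np.zeros([1, 256, 256, 3], dtype=np.float32)  # stand-in for a photo scaled to [-1, 1]
    cartoon = sess.run(output, feed_dict={input_photo: dummy})
```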
diff --git a/spaces/huggingchat/chat-ui/src/lib/actions/snapScrollToBottom.ts b/spaces/huggingchat/chat-ui/src/lib/actions/snapScrollToBottom.ts
deleted file mode 100644
index b22a0648221f6b58853a910fb6286f79574a0246..0000000000000000000000000000000000000000
--- a/spaces/huggingchat/chat-ui/src/lib/actions/snapScrollToBottom.ts
+++ /dev/null
@@ -1,54 +0,0 @@
-import { navigating } from "$app/stores";
-import { tick } from "svelte";
-import { get } from "svelte/store";
-
-const detachedOffset = 10;
-
-/**
- * @param node element to snap scroll to bottom
- * @param dependency pass in a dependency to update scroll on changes.
- */
-export const snapScrollToBottom = (node: HTMLElement, dependency: unknown) => {
-  let prevScrollValue = node.scrollTop;
-  let isDetached = false;
-
-  const handleScroll = () => {
-    // if user scrolled up, we detach
-    if (node.scrollTop < prevScrollValue) {
-      isDetached = true;
-    }
-
-    // if user scrolled back to within 10px of bottom, we reattach
-    if (node.scrollTop - (node.scrollHeight - node.clientHeight) >= -detachedOffset) {
-      isDetached = false;
-    }
-
-    prevScrollValue = node.scrollTop;
-  };
-
-  const updateScroll = async (_options: { force?: boolean } = {}) => {
-    const defaultOptions = { force: false };
-    const options = { ...defaultOptions, ..._options };
-    const { force } = options;
-
-    if (!force && isDetached && !get(navigating)) return;
-
-    // wait for next tick to ensure that the DOM is updated
-    await tick();
-
-    node.scrollTo({ top: node.scrollHeight });
-  };
-
-  node.addEventListener("scroll", handleScroll);
-
-  if (dependency) {
-    updateScroll({ force: true });
-  }
-
-  return {
-    update: updateScroll,
-    destroy: () => {
-      node.removeEventListener("scroll", handleScroll);
-    },
-  };
-};
diff --git a/spaces/huggingchat/chat-ui/src/lib/utils/template.ts b/spaces/huggingchat/chat-ui/src/lib/utils/template.ts
deleted file mode 100644
index 87360c88fe6c655fff39f7947da9c6b345402a60..0000000000000000000000000000000000000000
--- a/spaces/huggingchat/chat-ui/src/lib/utils/template.ts
+++ /dev/null
@@ -1,28 +0,0 @@
-import type { Message } from "$lib/types/Message";
-import type { LegacyParamatersTemplateInput } from "$lib/types/Template";
-import Handlebars from "handlebars";
-
-Handlebars.registerHelper("ifUser", function (this: Pick, options) {
- if (this.from == "user") return options.fn(this);
-});
-
-Handlebars.registerHelper(
- "ifAssistant",
- function (this: Pick, options) {
- if (this.from == "assistant") return options.fn(this);
- }
-);
-
-export function compileTemplate(input: string, model: LegacyParamatersTemplateInput) {
- const template = Handlebars.compile(input, {
- knownHelpers: { ifUser: true, ifAssistant: true },
- knownHelpersOnly: true,
- noEscape: true,
- strict: true,
- preventIndent: true,
- });
-
- return function render(inputs: T, options?: RuntimeOptions) {
- return template({ ...model, ...inputs }, options);
- };
-}
diff --git a/spaces/hysts/ControlNet-v1-1/app_openpose.py b/spaces/hysts/ControlNet-v1-1/app_openpose.py
deleted file mode 100644
index bcf6dc9d9bc53dcdc9b0d8d72498d6a626065060..0000000000000000000000000000000000000000
--- a/spaces/hysts/ControlNet-v1-1/app_openpose.py
+++ /dev/null
@@ -1,95 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from settings import (
- DEFAULT_IMAGE_RESOLUTION,
- DEFAULT_NUM_IMAGES,
- MAX_IMAGE_RESOLUTION,
- MAX_NUM_IMAGES,
- MAX_SEED,
-)
-from utils import randomize_seed_fn
-
-
-def create_demo(process):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column():
-                image = gr.Image()
-                prompt = gr.Textbox(label="Prompt")
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    preprocessor_name = gr.Radio(
-                        label="Preprocessor", choices=["Openpose", "None"], type="value", value="Openpose"
-                    )
-                    num_samples = gr.Slider(
-                        label="Number of images", minimum=1, maximum=MAX_NUM_IMAGES, value=DEFAULT_NUM_IMAGES, step=1
-                    )
-                    image_resolution = gr.Slider(
-                        label="Image resolution",
-                        minimum=256,
-                        maximum=MAX_IMAGE_RESOLUTION,
-                        value=DEFAULT_IMAGE_RESOLUTION,
-                        step=256,
-                    )
-                    preprocess_resolution = gr.Slider(
-                        label="Preprocess resolution", minimum=128, maximum=512, value=512, step=1
-                    )
-                    num_steps = gr.Slider(label="Number of steps", minimum=1, maximum=100, value=20, step=1)
-                    guidance_scale = gr.Slider(label="Guidance scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
-                    seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0)
-                    randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
-                    a_prompt = gr.Textbox(label="Additional prompt", value="best quality, extremely detailed")
-                    n_prompt = gr.Textbox(
-                        label="Negative prompt",
-                        value="longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
-                    )
-            with gr.Column():
-                result = gr.Gallery(label="Output", show_label=False, columns=2, object_fit="scale-down")
-        inputs = [
-            image,
-            prompt,
-            a_prompt,
-            n_prompt,
-            num_samples,
-            image_resolution,
-            preprocess_resolution,
-            num_steps,
-            guidance_scale,
-            seed,
-            preprocessor_name,
-        ]
-        prompt.submit(
-            fn=randomize_seed_fn,
-            inputs=[seed, randomize_seed],
-            outputs=seed,
-            queue=False,
-            api_name=False,
-        ).then(
-            fn=process,
-            inputs=inputs,
-            outputs=result,
-            api_name=False,
-        )
-        run_button.click(
-            fn=randomize_seed_fn,
-            inputs=[seed, randomize_seed],
-            outputs=seed,
-            queue=False,
-            api_name=False,
-        ).then(
-            fn=process,
-            inputs=inputs,
-            outputs=result,
-            api_name="openpose",
-        )
-    return demo
-
-
-if __name__ == "__main__":
-    from model import Model
-
-    model = Model(task_name="Openpose")
-    demo = create_demo(model.process_openpose)
-    demo.queue().launch()
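create_demo expects a process callable whose positional arguments follow the inputs list above (image, prompt, additional prompt, negative prompt, sample count, image resolution, preprocess resolution, steps, guidance scale, seed, preprocessor name). A minimal sketch of wiring the demo to a stand-in callable follows; fake_process is a hypothetical echo stub, not the Model.process_openpose used by the Space.

```python
# Illustrative stub only: mirrors the argument order of the `inputs` list in create_demo().
from app_openpose import create_demo  # file deleted in the diff above


def fake_process(image, prompt, a_prompt, n_prompt, num_samples, image_resolution,
                 preprocess_resolution, num_steps, guidance_scale, seed, preprocessor_name):
    # Return a list of images for the gr.Gallery output; here we simply echo the input image.
    return [image] * int(num_samples)


demo = create_demo(fake_process)
demo.queue().launch()
```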
diff --git a/spaces/hysts/LoRA-SD-training/uploader.py b/spaces/hysts/LoRA-SD-training/uploader.py
deleted file mode 100644
index a4d177595c31dfea57c968af0e28478c889dc4f2..0000000000000000000000000000000000000000
--- a/spaces/hysts/LoRA-SD-training/uploader.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-from huggingface_hub import HfApi
-
-
-def upload(model_name: str, hf_token: str) -> dict:
-    api = HfApi(token=hf_token)
-    user_name = api.whoami()['name']
-    model_id = f'{user_name}/{model_name}'
-    try:
-        api.create_repo(model_id, repo_type='model', private=True)
-        api.upload_folder(repo_id=model_id,
-                          folder_path='results',
-                          path_in_repo='results',
-                          repo_type='model')
-        url = f'https://huggingface.co/{model_id}'
-        message = f'Your model was successfully uploaded to [{url}]({url}).'
-    except Exception as e:
-        message = str(e)
-
-    return gr.update(value=message, visible=True)
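The helper above creates a private model repo and pushes the local results/ folder to it, returning a Gradio update for a status component. A sketch of how it could be exposed in a small Gradio UI follows; the Blocks layout and component names are assumptions for illustration, not code from the original Space.

```python
# Hypothetical wiring for the upload() helper above; not part of the deleted Space.
import gradio as gr

from uploader import upload

with gr.Blocks() as demo:
    model_name = gr.Textbox(label="Model name")
    hf_token = gr.Textbox(label="Hugging Face token", type="password")
    upload_button = gr.Button("Upload results/")
    status = gr.Markdown(visible=False)
    # upload() returns gr.update(value=..., visible=True), which Gradio applies to `status`.
    upload_button.click(fn=upload, inputs=[model_name, hf_token], outputs=status)

demo.launch()
```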
diff --git a/spaces/imperialwool/funapi/Dockerfile b/spaces/imperialwool/funapi/Dockerfile
deleted file mode 100644
index 22aa2ba66f2fa45503d77424d20d8fca2d5616dc..0000000000000000000000000000000000000000
--- a/spaces/imperialwool/funapi/Dockerfile
+++ /dev/null
@@ -1,29 +0,0 @@
-FROM python:3.11.1-bullseye
-
-USER root
-
-ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
-
-COPY . /app
-
-RUN chmod -R 777 /app
-
-WORKDIR /app
-
-RUN apt update && apt install ffmpeg -y
-
-RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
-
-RUN python3 -m pip install -U --no-cache-dir pip setuptools wheel
-
-RUN python3 -m pip install --force-reinstall --no-cache-dir https://github.com/yt-dlp/yt-dlp/archive/master.tar.gz
-
-RUN mkdir -p /.cache/huggingface/hub
-RUN chown -R root:root /.cache/huggingface/hub
-RUN chmod -R 777 /.cache/huggingface/hub
-
-RUN mkdir -p /.cache/yt-dlp
-RUN chown -R root:root /.cache/yt-dlp
-RUN chmod -R 777 /.cache/yt-dlp
-
-CMD ["python", "app.py"]
\ No newline at end of file
diff --git a/spaces/inamXcontru/PoeticTTS/Decopedia Volume 1 Book Pdf !LINK!.md b/spaces/inamXcontru/PoeticTTS/Decopedia Volume 1 Book Pdf !LINK!.md
deleted file mode 100644
index 6d1357947f3fdf1f559b560f2b6c27253e8aeafa..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Decopedia Volume 1 Book Pdf !LINK!.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-Review: Decopedia Volume 1 by Felbrigg Herriot
-Decopedia Volume 1 is a book written for use with the roleplaying game Traveller. It is designed to give the referee "drop ins" for their game, such as planets, alien animals, people of interest, and story seeds. Each entry is separate and self-contained, so you can easily adapt it to your own game.
-Decopedia Volume 1 Book Pdf Download File ››› https://gohhs.com/2uz3iy
-The book contains 10 planets, each with a detailed description of their culture, politics, religion, and environment. The planets are diverse and interesting, ranging from a frozen world with a hidden alien base to a tropical paradise with a deadly secret. The book also provides information on the star systems, trade codes, and UWP of each planet.
-The book also features 10 alien animals, each with a full-color illustration and stats for Traveller. The animals are varied and exotic, such as a six-legged herbivore that can sense emotions, a flying predator that can spit acid, and a giant worm that can swallow a person whole. The book gives some hints on how to use the animals in your game, such as encounter tables, habitats, and behaviors.
-The book also introduces 10 people of interest, each with a portrait and a brief background. The people are colorful and intriguing, such as a rogue trader with a mysterious past, a psychic spy with a hidden agenda, and a rebel leader with a noble cause. The book suggests some ways to involve the people in your game, such as plot hooks, motivations, and secrets.
-
-The book also offers 10 story seeds, each with a premise and a twist. The story seeds are creative and inspiring, such as a heist gone wrong, a diplomatic crisis, and a haunted ship. The book leaves the details of the story up to you, but gives you enough to spark your imagination.
-Decopedia Volume 1 is a useful and enjoyable resource for any Traveller referee who wants to add some variety and flavor to their game. It is well-written and well-illustrated, and it covers a lot of ground in a concise way. It is available as a PDF file from Storytellers Vault[^2^] or Yumpu[^3^].
-
-If you are looking for more "drop ins" for your Traveller game, you might want to check out Decopedia Volume 2 by Felbrigg Herriot. This book follows the same format as Decopedia Volume 1, but with new and different content. It includes 10 more planets, 10 more alien animals, 10 more people of interest, and 10 more story seeds. Some of the highlights are a planet with a sentient ocean, an animal that can manipulate gravity, a person who is a clone of a famous explorer, and a story involving a time-traveling artifact.
-Decopedia Volume 2 is another great addition to your Traveller library. It is packed with ideas and inspiration for your game, and it is easy to use and adapt. It is also available as a PDF file from Storytellers Vault.
-
-Decopedia Volume 1 and 2 are not the only books by Felbrigg Herriot that you can use for your Traveller game. He has also written several other books that cover different aspects of the Traveller universe, such as starships, factions, careers, and equipment. Some of his other titles are Starship Geomorphs, The Traveller Bibliography, The Traveller Companion, and The Traveller Gazetteer. You can find all of his books on Storytellers Vault or on his website.
-Felbrigg Herriot is a prolific and talented writer who has a passion for Traveller and science fiction. He has been playing and writing for Traveller since 1977, and he has contributed to many official and fan-made publications. He is also a podcaster who hosts the Half Hour Wargames podcast and the Traveller RPG podcast. He is always working on new projects and sharing his ideas with the Traveller community.
-In conclusion, Decopedia Volume 1 and 2 are excellent books that can enhance your Traveller game with a lot of variety and flavor. They are well-written and well-illustrated, and they are easy to use and adapt. They are also affordable and accessible, as they are available as PDF files from Storytellers Vault. If you are a fan of Traveller or science fiction in general, you should definitely give them a try.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/1 Touch Laser Photo !!BETTER!! Crack 16.md b/spaces/inplisQlawa/anything-midjourney-v4-1/1 Touch Laser Photo !!BETTER!! Crack 16.md
deleted file mode 100644
index 4d5a8d01640a6d0b651e4b7ce00135142b5c1d28..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/1 Touch Laser Photo !!BETTER!! Crack 16.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-How to Use 1-Touch Laser Photo to Convert Photos into Bitmaps for Laser Engraving
-1-Touch Laser Photo is a software application that allows you to convert digital photos into bitmap files that can be used for laser engraving. It is compatible with all brands of laser engravers and supports hundreds of materials. In this article, we will show you how to use 1-Touch Laser Photo to create high-quality laser engraved photos in three easy steps.
-1 touch laser photo crack 16 DOWNLOAD → https://urlin.us/2uEykn
-Step 1: Open your photo
-Launch 1-Touch Laser Photo and click on the Open button to browse for your photo. You can use any common image format such as BMP, JPEG, PNG or TIFF. Once you open your photo, you can resize and crop it using the Adjust Image button.
-Step 2: Select your material
-Click on the Select Material button and choose your material from the drop-down menu. 1-Touch Laser Photo will automatically apply the best bitmap filter and image enhancements for your chosen material. You can also adjust the contrast, brightness and gamma settings manually if you wish.
-Step 3: Export your bitmap
-Click on the Export Bitmap button and save your bitmap file to your computer. You can then import it into your laser engraver software and set the speed and power parameters according to your laser system. Then you are ready to engrave your photo onto your material.
-That's it! You have just created a professional-quality laser engraved photo using 1-Touch Laser Photo. You can use this software to engrave photos onto wood, stone, metal, glass, acrylic, leather and many more materials. For more information about 1-Touch Laser Photo, please visit [^2^].
-
Tips and Tricks for Laser Engraving Photos
-Laser engraving photos can be a fun and rewarding way to create personalized gifts, decorations and artworks. However, it can also be challenging to get the best results. Here are some tips and tricks to help you improve your laser engraving skills.
-
-Choose high-quality photos. The quality of your photo will affect the quality of your engraving. Choose photos that are clear, sharp and have good contrast. Avoid photos that are blurry, pixelated or have too much noise.
-Choose suitable materials. Different materials have different properties and reactions to laser engraving. Some materials are easier to engrave than others, and some may require special treatments or coatings. For example, wood is a popular material for laser engraving photos because it has a natural grain and texture that enhances the image. However, wood also tends to burn and char when exposed to high heat, so you may need to apply a masking tape or a protective spray to prevent scorch marks.
-Test your settings. Before you engrave your final product, it is always a good idea to test your settings on a scrap piece of the same material. This will help you avoid wasting time and materials on unsuccessful attempts. You can use 1-Touch Laser Photo to preview how your photo will look on different materials and adjust the settings accordingly.
-Clean your material and lens. Make sure your material is clean and free of dust, dirt or fingerprints before you engrave it. Also, make sure your laser lens is clean and aligned properly. A dirty or misaligned lens can affect the quality and accuracy of your engraving.
-
-By following these tips and tricks, you can improve your laser engraving skills and create stunning photos that will impress your friends and family. Have fun and be creative with 1-Touch Laser Photo!
d5da3c52bf
-
-
\ No newline at end of file
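The three steps described above (open a photo, select a material, export a bitmap) amount to grayscale conversion, contrast adjustment, and 1-bit dithering. Below is a rough stand-alone sketch of that pipeline using Pillow, offered as an illustration of the general technique rather than the 1-Touch Laser Photo software itself; the file names are placeholders.

```python
# Conceptual photo-to-engraving-bitmap sketch with Pillow; not the actual 1-Touch Laser Photo pipeline.
from PIL import Image, ImageOps

img = Image.open("photo.jpg")          # placeholder input file
img = ImageOps.grayscale(img)          # laser engraving works from luminance only
img = ImageOps.autocontrast(img)       # rough stand-in for a material-specific enhancement
bitmap = img.convert("1")              # 1-bit conversion with Floyd-Steinberg dithering (Pillow default)
bitmap.save("photo_engraving.bmp")     # import this bitmap into the laser engraver software
```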
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargaraspelprod30crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Descargaraspelprod30crack.md
deleted file mode 100644
index cf0a1e20015ed1d0909d80c957ad50d8b196397a..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargaraspelprod30crack.md
+++ /dev/null
@@ -1,15 +0,0 @@
-descargaraspelprod30crack DOWNLOAD ……… https://urlin.us/2uExqa
-
-berphil 94fbb0d496 · |WORK|. 5 point. Descargaraspelprod30crack |WORK| · ##VERIFIED## Artmani registration code 8.02. ✓ DOWNLOAD HERE. http://dedicacaballar.ru/?p=17552.
-Download Driver Pack Solution 15 Online for free..
-Download DriverPack Solution Online for Windows XP, Vista, 7, 8, 8.1, 10.
-DriverPack Solution Online. http://www.d-wave.com/driverpacksolution...
-Download DriverPack Solution Online.
-Driver Pack Solution Online.
-Software version: 15.126.
-Interface language: Russian, English and others.
-Treatment: No.
-Driver Pack Solution for Windows 7 free download in Russian without registration and sms.. 8a78ff9644
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Foundations Of Christian Doctrine Kevin Conner Pdf Download LINK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Foundations Of Christian Doctrine Kevin Conner Pdf Download LINK.md
deleted file mode 100644
index 245b94b7a2afbc20bc4f8a72fb5d9fbf51f5b57c..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Foundations Of Christian Doctrine Kevin Conner Pdf Download LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Foundations Of Christian Doctrine Kevin Conner Pdf Download DOWNLOAD ===== https://urlin.us/2uEwm6
-
-Click on the link below and register for free to download the eBook Foundations of Christian Doctrine by Kevin J. Conner from our online library. Conner, Kevin J., 1927-; Format: Book; 308 pages; 25 cm. Volume: 275.000; paper: 1.500.00; in electronic form: 1.500.00; in English; ISBN: 978-0-244-81696-9; Language: English; Size: 14 x 21; Binding: soft; Printing: Digital; Color: black and white; Paper: matte; Pages: 275.000. 8a78ff9644
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Itoo Forest Pack Pro V4.3.6 For 3ds Max Crack [UPD].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Itoo Forest Pack Pro V4.3.6 For 3ds Max Crack [UPD].md
deleted file mode 100644
index 26283f9e648d8ba08b57b0aac5a5f5d8a817d967..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Itoo Forest Pack Pro V4.3.6 For 3ds Max Crack [UPD].md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-A comprehensive source of information on the Canadian mineral industry is the Canadian Mineral Yearbook. The 1993 edition of this publication contains a chapter on "Asbestos", which provides an overview of the production, consumption, trade, and outlook of this mineral in Canada. The chapter was written by the Mineral Policy Sector of the Energy, Mines and Resources Canada (EMR Canada), a federal department responsible for developing and implementing policies and programs related to Canada's natural resources. Another relevant publication is Workplace Safety and Health: The Role of Workers' Compensation, authored by J.R. Chelius in 1977. This book examines the effects of workers' compensation systems on the prevention and compensation of occupational injuries and diseases, with a focus on the United States. The book was published by the American Enterprise Institute for Public Policy Research (AEI), a think tank that conducts research and analysis on economic, social, and political issues.
-
-Asbestos is a term that refers to a group of naturally occurring fibrous minerals that have high tensile strength, resistance to heat and chemicals, and ability to be woven into fabrics. Asbestos has been widely used in various industries and products, such as construction, insulation, fireproofing, automotive, and shipbuilding. However, asbestos also poses serious health risks to humans and animals, as exposure to its fibers can cause lung cancer, mesothelioma, asbestosis, and other diseases. Asbestos is classified as a carcinogen by the World Health Organization (WHO) and many countries have banned or restricted its use.
-Itoo Forest Pack Pro V4.3.6 For 3ds Max Crack DOWNLOAD → https://urlin.us/2uExNR
-
-In Canada, asbestos was once a major export commodity and a source of employment for many workers. According to the Canadian Mineral Yearbook (1993), Canada produced 1.1 million tonnes of asbestos in 1992, accounting for 17% of the world production. The main producing provinces were Quebec and Newfoundland. The majority of the Canadian asbestos was exported to developing countries, especially in Asia and Latin America, where the demand for cheap and durable building materials was high. However, the Canadian asbestos industry faced increasing challenges in the 1990s due to the growing awareness of the health hazards of asbestos, the decline in world prices, and the emergence of alternative products.
-
-One of the key issues related to asbestos is the protection of workers who are exposed to it in their occupations. Workplace safety and health is influenced by various factors, such as regulations, enforcement, incentives, education, and compensation. Workers' compensation is a system that provides benefits to workers who suffer from work-related injuries or illnesses, such as medical care, income replacement, rehabilitation, and survivor benefits. Workers' compensation also serves as a form of social insurance that spreads the costs of occupational hazards among employers, workers, and society. In Workplace Safety and Health: The Role of Workers' Compensation (1977), J.R. Chelius analyzes how different workers' compensation schemes affect the behavior and outcomes of employers and workers in terms of preventing and compensating occupational harms.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/La Transformacion Total De Su Dinero Dave Ramsey Pdf 77.md b/spaces/inplisQlawa/anything-midjourney-v4-1/La Transformacion Total De Su Dinero Dave Ramsey Pdf 77.md
deleted file mode 100644
index 4ed50e9e7910c018c197234bfa975a72dacd6f8f..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/La Transformacion Total De Su Dinero Dave Ramsey Pdf 77.md
+++ /dev/null
@@ -1,24 +0,0 @@
-la transformacion total de su dinero dave ramsey pdf 77 DOWNLOAD ✔ https://urlin.us/2uEx7D
-
-Payday loan borrowers in all states are granted in writing, on the internet. Maximum finance for good checking account at SBA.
-
-Good credit check
-
-The personal loans. Find great deals on eBay for morton good as kjour and morton good as kjor. This policy shall be incorporated in the policy of mr good unless and good ct.
-
-If you search on Google, you will see hundreds of different pages about things to consider when ordering a good belt. Good this guide to buying good premium options. With the help of a good belt you can choose from the various sizes and models. We recommend you to be careful when you buy a good belt.
-
-Moreover, you need to know the differences between various types of belts that are offered on the market. Are you looking for an ideal knife which will serve you for all the possible occasions, a razor which will never let you down or a good knife which will suit the latest trends.
-
-The good belt offered by mr good will not disappoint you. There are various belts on the market but you need to choose the right one. The mr good has various types of good belts and therefore you have a chance of finding the right one.
-
-Whether you are looking for a belt which will help you to keep up with the latest trends, a good belt which will help you to work for longer hours, a good belt which will help you to achieve better workouts, a good belt which will make you a successful competitor in sports, a good belt which will keep you comfortable during jogging, a good belt which will make you stand out from others or a good belt which will help you to do everything the mr good does in its every section, you need not worry because mr good has a good belt for all your needs.
-
-The good belts by mr good are made of the best material, so they are durable. Therefore, you can use the mr good belt during various activities without any problem.
-
-On the market, you can see the different brands and you have to choose the right one. Not all the brands on the market are the same, but mr good has the best, the best quality, the best and the best design.
-
-Therefore, whether you are looking for a good razor, a good knife, a good belt, mr good can offer the best to you. The mr good belts have various sizes which will help you to choose the right belt. 4fefd39f24
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Bloody Rally Show Free Download LINK.md b/spaces/inreVtussa/clothingai/Examples/Bloody Rally Show Free Download LINK.md
deleted file mode 100644
index 0b004db2f8e8e319ff7cc044fbdfb34b4fbd0122..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Bloody Rally Show Free Download LINK.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-```html
-Bloody Rally Show Free Download - The Ultimate Racing Game
-
-
-
-Bloody Rally Show Free Download - The Ultimate Racing Game
-
-If you are looking for a racing game that is not only fun but also challenging, violent, and unpredictable, then you should try Bloody Rally Show. This is a game that combines the best elements of arcade racing, roguelike, and car combat. You can download it for free from the official website or from Steam.
-Bloody Rally Show Free Download Download Zip >>>>> https://tiurll.com/2uCmjS
-What is Bloody Rally Show?
-Bloody Rally Show is a game developed by Bloodyscreen, an independent studio based in Estonia. The game was released in 2020 and has received positive reviews from players and critics alike. The game features:
-
-Over 100 procedurally generated tracks with different themes, obstacles, and hazards.
-A dynamic campaign mode where you can create your own driver and car, and compete in various events and championships.
-A multiplayer mode where you can race against up to 8 players online or locally.
-A sandbox mode where you can customize your own tracks and scenarios.
-A level editor where you can create your own maps and share them with other players.
-A variety of cars and weapons to choose from, each with their own strengths and weaknesses.
-A realistic physics engine that simulates damage, collisions, and explosions.
-Retro-style pixel art graphics that create a nostalgic and immersive atmosphere.
-An original soundtrack composed by Teetow, featuring rock, metal, synthwave, and chiptune genres.
-
-How to Get Bloody Rally Show Free Download?
-There are two ways to get Bloody Rally Show for free. One is to download it from the official website, where you can get the latest version of the game without any DRM or restrictions. The other way is to download it from Steam, where you can also access the Steam Workshop and the Steam Achievements. To download the game from Steam, you need to have a Steam account and the Steam client installed on your computer. Here are the steps to follow:
-
-Go to the Bloody Rally Show page on Steam.
-Click on the "Play Game" button. This will add the game to your library and start downloading it.
-Once the download is complete, launch the game from your library or from the desktop shortcut.
-Enjoy the game!
-
-Why Should You Play Bloody Rally Show?
-Bloody Rally Show is a game that will appeal to anyone who loves racing games, especially those who grew up with classics like Micro Machines, Death Rally, or Carmageddon. The game offers a lot of variety and replay value, as well as a high level of challenge and excitement. Here are some of the reasons why you should play Bloody Rally Show:
-
-
-You can experience a different race every time, thanks to the procedurally generated tracks and the random events that can happen during the race.
-You can customize your car and driver according to your preferences and play style. You can upgrade your engine, tires, armor, weapons, and more. You can also choose your driver's name, appearance, skills, perks, and backstory.
-You can compete against other players in online or local d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Call Of Duty WWII Digital Deluxe Edition (English Pack Only) Unlimited Gems.md b/spaces/inreVtussa/clothingai/Examples/Call Of Duty WWII Digital Deluxe Edition (English Pack Only) Unlimited Gems.md
deleted file mode 100644
index ba4ea826ce631af4e473026d82980ef9cce4c4be..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Call Of Duty WWII Digital Deluxe Edition (English Pack Only) Unlimited Gems.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Call of Duty: WWII Digital Deluxe Edition (English Pack Only) unlimited gems DOWNLOAD ►►►►► https://tiurll.com/2uCjGT
-
-Overwolf does not support games in its version of the Microsoft Store. Download games for Windows for free via torrent.
-A user-friendly interface and navigation make it easy to find your favorite games for your computer at any time.
-Here are all the most popular games for Windows 10/8/7/XP. You can download them to your computer absolutely free.
-To install, you will need to download and install the Windows Media Center program.
-Download games via torrent on PC for free without registration.
-You can download games via torrent for free at Torrent3.org. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Dablin Strit Knjiga Online Pdf-adds Sakura Editar Renta.md b/spaces/inreVtussa/clothingai/Examples/Dablin Strit Knjiga Online Pdf-adds Sakura Editar Renta.md
deleted file mode 100644
index c3b661ea808c31abf0dc57e65480719a65d8f3e2..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Dablin Strit Knjiga Online Pdf-adds Sakura Editar Renta.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-Dablin Strit: a novel that will delight you
-If you are looking for a passionate and romantic story that will take you to Scotland, don't miss Dablin Strit, the bestseller by Samanta Jang. The book is available online in PDF format, and you can download it and add it to your collection with the help of the sakura editor and a rental service.
-Dablin Strit Knjiga Online Pdf-adds sakura editar renta Download Zip ––– https://tiurll.com/2uCl2L
-Dablin Strit follows the life of Džoslin Batler, a young American woman who fled her painful past and started a new life in Edinburgh. There she meets Brejden Karmajkl, a handsome and wealthy Scot who is used to getting everything he wants. Brejden is determined to win Džoslin over and offers her a relationship without commitments, but she is not ready to open up to anyone. Their attraction, however, is too strong to ignore, and they soon embark on a fiery affair that will change them both.
-Dablin Strit is a captivating novel about the redeeming power of love, one that will draw you in with its characters, plot, and erotic scenes. The book is the first part of a series that follows different couples who meet and fall in love on the streets of Edinburgh. If you enjoyed Fifty Shades of Grey, you will certainly enjoy Dablin Strit and the other works of Samanta Jang.
-
-If you want to read Dablin Strit in PDF format, you can download it from various sites that offer free or inexpensive e-books. For example, you can visit the Online Knjige page on Facebook, where you will find a download link for the book. You can also look it up on Bookmate, where you can read it online and see other readers' impressions and quotes.
-If you want to add Dablin Strit to your book collection, you can use the sakura editor and a rental service. The sakura editor is a program that lets you edit and customize PDF files as you wish: you can change the font, color, size, margins, and other options to improve the book's readability and appearance. The rental service is a platform that lets you rent or buy e-books at affordable prices; you can choose a rental period that suits you and then decide whether to keep the book or return it.
-
-Dablin Strit is only the first part of a series that has five more installments and a short novella. Each part follows a different couple who meet and fall in love in Edinburgh, and characters from the earlier books also reappear. If you liked Dablin Strit, you will surely want to read the rest of the series: London Roud, Jamajka Lejn, Indija Plejs, Skotland Strit, and Najtingejl Vei. These books are also available in PDF format online, and you can download and add them with the help of the sakura editor and a rental service.
-Don't miss the chance to enjoy this irresistible love saga, which will win you over with its passion, romance, and humor. Dablin Strit is a novel that will warm you on cold winter days and remind you of the power of love.
-
-If you have already read Dablin Strit and the rest of the series, you may want to learn more about the author, Samanta Jang. She was born in Stirling, Scotland, where she still lives with her husband and two children. Her passion for writing began in childhood, when she wrote short stories for her friends. She published her first novel, Lunarmorte, in 2011, and went on to write several more books in the fantasy and paranormal romance genres. Her greatest success came with the Dablin Strit series, which has been translated into more than thirty languages and has sold millions of copies worldwide. Samanta Jang has also received numerous awards and honors for her writing.
-If you want to follow news about Samanta Jang and her books, you can visit her official website or follow her on social networks such as Facebook, Twitter, and Instagram. You can also sign up for her newsletter to receive exclusive news, tips, recommendations, and giveaways. Samanta Jang loves to hear from her readers and to learn their impressions and opinions of her novels; you can write to her at her e-mail address or leave a comment on her blog.
-If you are a fan of romance novels with a dose of erotica, you can't go wrong choosing Dablin Strit and the other books by Samanta Jang. They will take you into a world of passion, romance, and humor, where you will meet unforgettable characters who battle their demons and search for their soulmate. Dablin Strit is a novel that will win you over with its warmth, emotion, and suspense. Don't miss the chance to read it in PDF format online and then add it to your collection with the help of the sakura editor and a rental service.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/isabel/pet-project/app.py b/spaces/isabel/pet-project/app.py
deleted file mode 100644
index 6eee7c56206ca67b3594ecb0b71070f1d2e6fdfe..0000000000000000000000000000000000000000
--- a/spaces/isabel/pet-project/app.py
+++ /dev/null
@@ -1,141 +0,0 @@
-### ----------------------------- ###
-### libraries ###
-### ----------------------------- ###
-
-import gradio as gr
-import pandas as pd
-import numpy as np
-from sklearn.model_selection import train_test_split
-from sklearn.linear_model import LogisticRegression
-from sklearn import metrics
-from reader import get_article
-
-
-### ------------------------------ ###
-### data transformation ###
-### ------------------------------ ###
-
-# load dataset
-uncleaned_data = pd.read_csv('data.csv')
-
-# remove timestamp from dataset (always first column)
-uncleaned_data = uncleaned_data.iloc[: , 1:]
-data = pd.DataFrame()
-
-# keep track of which columns are categorical and what
-# those columns' value mappings are
-# structure: {colname1: {...}, colname2: {...} }
-cat_value_dicts = {}
-final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1]
-
-# for each column...
-for (colname, colval) in uncleaned_data.items():
-
- # check if col is already a number; if so, add col directly
- # to new dataframe and skip to next column
- if isinstance(colval.values[0], (np.integer, float)):
- data[colname] = uncleaned_data[colname].copy()
- continue
-
- # structure: {0: "lilac", 1: "blue", ...}
- new_dict = {}
- val = 0 # first index per column
- transformed_col_vals = [] # new numeric datapoints
-
- # if not, for each item in that column...
- for (row, item) in enumerate(colval.values):
-
- # if item is not in this col's dict...
- if item not in new_dict:
- new_dict[item] = val
- val += 1
-
- # then add numerical value to transformed dataframe
- transformed_col_vals.append(new_dict[item])
-
- # reverse dictionary only for final col (0, 1) => (vals)
- if colname == final_colname:
- new_dict = {value : key for (key, value) in new_dict.items()}
-
- cat_value_dicts[colname] = new_dict
- data[colname] = transformed_col_vals
-
-
-### -------------------------------- ###
-### model training ###
-### -------------------------------- ###
-
-# select features and prediction; automatically selects last column as prediction
-cols = len(data.columns)
-num_features = cols - 1
-x = data.iloc[: , :num_features]
-y = data.iloc[: , num_features:]
-
-# split data into training and testing sets
-x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
-
-# instantiate the model (using default parameters)
-model = LogisticRegression()
-model.fit(x_train, y_train.values.ravel())
-y_pred = model.predict(x_test)
-
-
-### -------------------------------- ###
-### article generation ###
-### -------------------------------- ###
-# borrow file reading function from reader.py
-
-def get_feat():
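-    # name of the feature whose logistic-regression coefficient has the largest absolute value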
- feats = [abs(x) for x in model.coef_[0]]
- max_val = max(feats)
- idx = feats.index(max_val)
- return data.columns[idx]
-
-acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + '%**'
-most_imp_feat = get_feat() + "**"
-info = get_article(acc, most_imp_feat)
-
-
-
-### ------------------------------- ###
-### interface creation ###
-### ------------------------------- ###
-
-
-# predictor for generic number of features
-def general_predictor(*args):
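-    # map categorical inputs to their stored numeric codes, predict on the single
-    # datapoint, and translate the predicted class index back into its original label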
- features = []
-
- # transform categorical input
- for colname, arg in zip(data.columns, args):
- if (colname in cat_value_dicts):
- features.append(cat_value_dicts[colname][arg])
- else:
- features.append(arg)
-
- # predict single datapoint
- new_input = [features]
- result = model.predict(new_input)
- return cat_value_dicts[final_colname][result[0]]
-
-# add data labels to replace those lost via star-args
-inputls = []
-for colname in data.columns:
- # skip last column
- if colname == final_colname:
- continue
-
- # access categories dict if data is categorical
- # otherwise, just use a number input
- if colname in cat_value_dicts:
- radio_options = list(cat_value_dicts[colname].keys())
- inputls.append(gr.inputs.Radio(choices=radio_options, type="value", label=colname))
- else:
- # add numerical input
- inputls.append(gr.inputs.Number(label=colname))
-
-# generate gradio interface
-interface = gr.Interface(general_predictor, inputs=inputls, outputs="text", article=info['article'], css=info['css'], theme="grass", title=info['title'], allow_flagging='never', description=info['description'])
-
-# show the interface
-interface.launch()
\ No newline at end of file
diff --git a/spaces/jaimin/Paraphrase/README.md b/spaces/jaimin/Paraphrase/README.md
deleted file mode 100644
index 6b6d83d8a377098525187d6a58566c7e68d02e31..0000000000000000000000000000000000000000
--- a/spaces/jaimin/Paraphrase/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Paraphrase
-emoji: 🦀
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/utils.py b/spaces/jimschat/VITS-Umamusume-voice-synthesizer/utils.py
deleted file mode 100644
index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000
--- a/spaces/jimschat/VITS-Umamusume-voice-synthesizer/utils.py
+++ /dev/null
@@ -1,226 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.ERROR)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
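-    # parse -c/--config and -m/--model, copy the config into ./logs/<model>/config.json
-    # when init is True (otherwise reuse the saved copy), and return it as an HParams
-    # object with model_dir attached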
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
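-    # minimal attribute-style wrapper around a config dict; nested dicts become nested HParams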
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
\ No newline at end of file
diff --git a/spaces/jitesh/storytelling/run.sh b/spaces/jitesh/storytelling/run.sh
deleted file mode 100644
index 1754cdd698f357d6860284d29f613b32a6a2fe30..0000000000000000000000000000000000000000
--- a/spaces/jitesh/storytelling/run.sh
+++ /dev/null
@@ -1 +0,0 @@
-streamlit run app.py
\ No newline at end of file
diff --git a/spaces/jkim1238/predictive_analysis/app - Copy.py b/spaces/jkim1238/predictive_analysis/app - Copy.py
deleted file mode 100644
index 6931416c7fed1d65c22e52eaca7a856c7bba7911..0000000000000000000000000000000000000000
--- a/spaces/jkim1238/predictive_analysis/app - Copy.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import pandas as pd
-
-from utils import *
-from st_aggrid import AgGrid
-from st_aggrid.grid_options_builder import GridOptionsBuilder
-from st_aggrid.shared import GridUpdateMode, DataReturnMode
-
-
-def main():
- # Initialization of session state variables
- if 'technology' not in st.session_state:
- st.session_state['technology'] = '-'
- if 'df' not in st.session_state:
- st.session_state['df'] = None
- if 'selected_df' not in st.session_state:
- st.session_state['selected_df'] = None
- if 'df_tech' not in st.session_state:
- st.session_state['df_tech'] = None
- if 'articles_count' not in st.session_state:
- st.session_state['articles_count'] = None
- if 'date' not in st.session_state:
- st.session_state['date'] = None
-
- # Set sidebar
- st.session_state['technology'], select_date = set_sidebar()
-
- if st.session_state['technology'] == '-':
- # Print title
- st.title(
- body='❓ AL - CFIUS Over the Horizon Forecasting for Critical and Emerging Technologies',
- anchor='title'
- )
-
- # Print team member names
- st.write("""
- 08/Sponsor: OUSD (I&S) CL&S\n
- USG Champion: Kristoffer Buquet, Chief TechProject Div\n
- Faculty Mentor: Christopher Nissen, UMD Applied Research Laboratory for Intelligence and Security\n
- Interns: Lauren Shanley-DeBuse, Danielle Mixon, Jiin Kim
- """)
-
- # Summary header
- st.header(body='Summary')
-
- # Print project summary
- st.write("""
- The Foreign Ownership, Control, and Influence (FOCI) threat to our current and future Critical
- and Emerging Technologies continues to grow and become more invasive. The is further
- compounded by the direct foreign investment, both private sector and Foreign Government
- Control, in U.S. companies as highlighted by the Committee for the Foreign Investment in the
- U.S. (CFIUS) process.\n
- This project would focus on an identified technology area/sector, and the U.S. companies
- working on or directly supporting the identified technology. The scoped example would be to
- define companies working on our Critical and Emerging Technologies (C&ET) down to the CAGE
- code as they relate to the identified technology, and as determined based on a defined
- proximity to a DoD facility/installation.
- """)
-
- # Getting started header
- st.header('Getting Started')
-
- # Print instructions
- st.write("""
- Select a technology, subfield, and date from the sidebar.
- """)
- else:
- # Display technology title
- st.title(st.session_state['technology'])
-
- # Convert datetime object to date string
- date_string = select_date.strftime('%Y%m%d')
-
- # Make technology lowercase
- technology_string = st.session_state['technology'].lower()
-
- # Replace spaces with underscore
- technology_string = technology_string.replace(' ', '_')
-
- # The collection name
- collection_name = f'{date_string}_{technology_string}'
-
- if st.session_state['df_tech'] != st.session_state['technology'] or st.session_state['date'] != date_string:
- # Clear page
- st.empty()
- st.empty()
- st.empty()
- st.empty()
- st.empty()
-
- # Get articles from mongoDB collection
- # articles = get_collection(collection_name=collection_name)
- articles = db[collection_name].find({}, {'_id': False})
-
- # Get articles count
- #st.session_state['articles_count'] = count_documents(collection_name=collection_name)
- st.session_state['articles_count'] = db[collection_name].count_documents({})
-
- # If there wasn't any previous articles in the database, use newscatcherapi and store in mongoDB Atlas
- # database
- if f'{collection_name}' not in db.list_collection_names():
- # Get articles
- articles, st.session_state['articles_count'] = consume_api(
- date=select_date,
- technology=st.session_state['technology']
- )
-
- # If there wasn't a previous prediction, calculate new prediction
- if f'{collection_name}_prediction' not in db.list_collection_names():
- with st.spinner('Please wait...'):
- # Get company names using Name Entity Recognition and perform Sentiment Analysis on article text
- companies = natural_language_processing(articles=articles)
-
- # Convert dictionary to list
- companies_list = dictionary_to_list(companies)
-
- # Store companies in the database
- store_documents(
- documents=companies_list,
- collection_name=f'{collection_name}_prediction'
- )
-
- # Convert dictionary to pandas dataframe
- df = pd.concat({k: pd.DataFrame.from_dict(v, 'index') for k, v in companies.items()},
- axis=0).reset_index()
-
- # Drop empty columns
- df.drop(
- columns=['level_1'],
- inplace=True
- )
-
- # Rename columns
- df.columns = ['Name', 'Count']
- else:
- # Check if there was a previous prediction
- companies = get_collection(collection_name=f'{collection_name}_prediction')
-
- # Convert cursor to dataframe
- df = pd.DataFrame(list(companies))
-
- # Sort dataframe by count
- if not df.empty:
- df.sort_values(
- by='Count',
- ascending=False,
- inplace=True
- )
-
- # Set session state
- st.session_state['df_tech'] = st.session_state['technology']
- st.session_state['df'] = df
- st.session_state['date'] = date_string
-
- # Display statistics
- st.write(
- f'There are {st.session_state["articles_count"]} articles on {st.session_state["technology"]} on {select_date.strftime("%Y/%m/%d")}.\n'
- f'Found {len(st.session_state["df"])} companies total.')
-
- # Grid options
- gb = GridOptionsBuilder.from_dataframe(st.session_state['df'])
- gb.configure_selection(
- selection_mode='multiple',
- use_checkbox=True
- )
- gb.configure_pagination()
- gb.configure_side_bar()
- gb.configure_default_column(
- groupable=True,
- value=True,
- enableRowGroup=True,
- aggFunc='sum',
- editable=True
- )
- gridOptions = gb.build()
-
- # Display dataframe
- selected_data = AgGrid(
- dataframe=st.session_state['df'],
- gridOptions=gridOptions,
- enable_enterprise_modules=True,
- data_return_mode=DataReturnMode.FILTERED_AND_SORTED,
- update_mode=GridUpdateMode.MODEL_CHANGED
- )
-
- # Convert selected rows as dataframe
- st.session_state['selected_df'] = pd.DataFrame(selected_data['selected_rows'])
-
- # Set 2 columns for the options.
- col1, col2 = st.columns(2)
-
- with col1:
- # Save all raw data button to save DataFrame as CSV file.
- st.download_button(
- label='Download All Data',
- data=st.session_state['df'].to_csv(),
- file_name=f'{date_string}_{technology_string}.csv',
- mime='text/csv'
- )
- with col2:
- # Save selected raw data button to save DataFrame as CSV file.
- st.download_button(
- label='Download Selected Data',
- data=st.session_state['selected_df'].to_csv(),
- file_name=f'{date_string}_{technology_string}.csv',
- mime='text/csv'
- )
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/jlmarrugom/voice_fixer_app/voicefixer/vocoder/model/util.py b/spaces/jlmarrugom/voice_fixer_app/voicefixer/vocoder/model/util.py
deleted file mode 100644
index c4559699b8627e76abf38ca0768bfa26e92d70e5..0000000000000000000000000000000000000000
--- a/spaces/jlmarrugom/voice_fixer_app/voicefixer/vocoder/model/util.py
+++ /dev/null
@@ -1,135 +0,0 @@
-from voicefixer.vocoder.config import Config
-from voicefixer.tools.pytorch_util import try_tensor_cuda, check_cuda_availability
-import torch
-import librosa
-import numpy as np
-
-
-def tr_normalize(S):
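-    # torch counterpart of normalize(): map a dB-scaled spectrogram into the configured mel value range (optionally clipped)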
- if Config.allow_clipping_in_normalization:
- if Config.symmetric_mels:
- return torch.clip(
- (2 * Config.max_abs_value) * ((S - Config.min_db) / (-Config.min_db))
- - Config.max_abs_value,
- -Config.max_abs_value,
- Config.max_abs_value,
- )
- else:
- return torch.clip(
- Config.max_abs_value * ((S - Config.min_db) / (-Config.min_db)),
- 0,
- Config.max_abs_value,
- )
-
- assert S.max() <= 0 and S.min() - Config.min_db >= 0
- if Config.symmetric_mels:
- return (2 * Config.max_abs_value) * (
- (S - Config.min_db) / (-Config.min_db)
- ) - Config.max_abs_value
- else:
- return Config.max_abs_value * ((S - Config.min_db) / (-Config.min_db))
-
-
-def tr_amp_to_db(x):
- min_level = torch.exp(Config.min_level_db / 20 * torch.log(torch.tensor(10.0)))
- min_level = min_level.type_as(x)
- return 20 * torch.log10(torch.maximum(min_level, x))
-
-
-def normalize(S):
- if Config.allow_clipping_in_normalization:
- if Config.symmetric_mels:
- return np.clip(
- (2 * Config.max_abs_value) * ((S - Config.min_db) / (-Config.min_db))
- - Config.max_abs_value,
- -Config.max_abs_value,
- Config.max_abs_value,
- )
- else:
- return np.clip(
- Config.max_abs_value * ((S - Config.min_db) / (-Config.min_db)),
- 0,
- Config.max_abs_value,
- )
-
- assert S.max() <= 0 and S.min() - Config.min_db >= 0
- if Config.symmetric_mels:
- return (2 * Config.max_abs_value) * (
- (S - Config.min_db) / (-Config.min_db)
- ) - Config.max_abs_value
- else:
- return Config.max_abs_value * ((S - Config.min_db) / (-Config.min_db))
-
-
-def amp_to_db(x):
- min_level = np.exp(Config.min_level_db / 20 * np.log(10))
- return 20 * np.log10(np.maximum(min_level, x))
-
-
-def tr_pre(npy):
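-    # arrange mel conditions as (batch, num_mels, frames) and pad the time axis with four or
-    # five frames of the floor value -4.0 so that the padded length comes out even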
- # conditions = torch.FloatTensor(npy).type_as(npy) # to(device)
- conditions = npy.transpose(1, 2)
- l = conditions.size(-1)
- pad_tail = l % 2 + 4
- zeros = (
- torch.zeros([conditions.size()[0], Config.num_mels, pad_tail]).type_as(
- conditions
- )
- + -4.0
- )
- return torch.cat([conditions, zeros], dim=-1)
-
-
-def pre(npy):
- conditions = npy
- ## padding tail
- if type(conditions) == np.ndarray:
- conditions = torch.FloatTensor(conditions).unsqueeze(0)
- else:
- conditions = torch.FloatTensor(conditions.float()).unsqueeze(0)
- conditions = conditions.transpose(1, 2)
- l = conditions.size(-1)
- pad_tail = l % 2 + 4
- zeros = torch.zeros([1, Config.num_mels, pad_tail]) + -4.0
- return torch.cat([conditions, zeros], dim=-1)
-
-
-def load_try(state, model):
- model_dict = model.state_dict()
- try:
- model_dict.update(state)
- model.load_state_dict(model_dict)
- except RuntimeError as e:
- print(str(e))
- model_dict = model.state_dict()
- for k, v in state.items():
- model_dict[k] = v
- model.load_state_dict(model_dict)
-
-
-def load_checkpoint(checkpoint_path, device):
- checkpoint = torch.load(checkpoint_path, map_location=device)
- return checkpoint
-
-
-def build_mel_basis():
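-    # mel filterbank (HTK formula) spanning 0 Hz to half the sample rate, matching the vocoder config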
- return librosa.filters.mel(
-        sr=Config.sample_rate,
-        n_fft=Config.n_fft,
- htk=True,
- n_mels=Config.num_mels,
- fmin=0,
- fmax=int(Config.sample_rate // 2),
- )
-
-
-def linear_to_mel(spectogram):
- _mel_basis = build_mel_basis()
- return np.dot(_mel_basis, spectogram)
-
-
-if __name__ == "__main__":
- data = torch.randn((3, 5, 100))
- b = normalize(amp_to_db(data.numpy()))
- a = tr_normalize(tr_amp_to_db(data)).numpy()
- print(a - b)
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/click/_termui_impl.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/click/_termui_impl.py
deleted file mode 100644
index f744657753caa6cdef1dcc41a4f0b5e3e9503ab8..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/click/_termui_impl.py
+++ /dev/null
@@ -1,739 +0,0 @@
-"""
-This module contains implementations for the termui module. To keep the
-import time of Click down, some infrequently used functionality is
-placed in this module and only imported as needed.
-"""
-import contextlib
-import math
-import os
-import sys
-import time
-import typing as t
-from gettext import gettext as _
-from io import StringIO
-from types import TracebackType
-
-from ._compat import _default_text_stdout
-from ._compat import CYGWIN
-from ._compat import get_best_encoding
-from ._compat import isatty
-from ._compat import open_stream
-from ._compat import strip_ansi
-from ._compat import term_len
-from ._compat import WIN
-from .exceptions import ClickException
-from .utils import echo
-
-V = t.TypeVar("V")
-
-if os.name == "nt":
- BEFORE_BAR = "\r"
- AFTER_BAR = "\n"
-else:
- BEFORE_BAR = "\r\033[?25l"
- AFTER_BAR = "\033[?25h\n"
-
-
-class ProgressBar(t.Generic[V]):
- def __init__(
- self,
- iterable: t.Optional[t.Iterable[V]],
- length: t.Optional[int] = None,
- fill_char: str = "#",
- empty_char: str = " ",
- bar_template: str = "%(bar)s",
- info_sep: str = " ",
- show_eta: bool = True,
- show_percent: t.Optional[bool] = None,
- show_pos: bool = False,
- item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None,
- label: t.Optional[str] = None,
- file: t.Optional[t.TextIO] = None,
- color: t.Optional[bool] = None,
- update_min_steps: int = 1,
- width: int = 30,
- ) -> None:
- self.fill_char = fill_char
- self.empty_char = empty_char
- self.bar_template = bar_template
- self.info_sep = info_sep
- self.show_eta = show_eta
- self.show_percent = show_percent
- self.show_pos = show_pos
- self.item_show_func = item_show_func
- self.label: str = label or ""
-
- if file is None:
- file = _default_text_stdout()
-
- # There are no standard streams attached to write to. For example,
- # pythonw on Windows.
- if file is None:
- file = StringIO()
-
- self.file = file
- self.color = color
- self.update_min_steps = update_min_steps
- self._completed_intervals = 0
- self.width: int = width
- self.autowidth: bool = width == 0
-
- if length is None:
- from operator import length_hint
-
- length = length_hint(iterable, -1)
-
- if length == -1:
- length = None
- if iterable is None:
- if length is None:
- raise TypeError("iterable or length is required")
- iterable = t.cast(t.Iterable[V], range(length))
- self.iter: t.Iterable[V] = iter(iterable)
- self.length = length
- self.pos = 0
- self.avg: t.List[float] = []
- self.last_eta: float
- self.start: float
- self.start = self.last_eta = time.time()
- self.eta_known: bool = False
- self.finished: bool = False
- self.max_width: t.Optional[int] = None
- self.entered: bool = False
- self.current_item: t.Optional[V] = None
- self.is_hidden: bool = not isatty(self.file)
- self._last_line: t.Optional[str] = None
-
- def __enter__(self) -> "ProgressBar[V]":
- self.entered = True
- self.render_progress()
- return self
-
- def __exit__(
- self,
- exc_type: t.Optional[t.Type[BaseException]],
- exc_value: t.Optional[BaseException],
- tb: t.Optional[TracebackType],
- ) -> None:
- self.render_finish()
-
- def __iter__(self) -> t.Iterator[V]:
- if not self.entered:
- raise RuntimeError("You need to use progress bars in a with block.")
- self.render_progress()
- return self.generator()
-
- def __next__(self) -> V:
- # Iteration is defined in terms of a generator function,
- # returned by iter(self); use that to define next(). This works
- # because `self.iter` is an iterable consumed by that generator,
- # so it is re-entry safe. Calling `next(self.generator())`
- # twice works and does "what you want".
- return next(iter(self))
-
- def render_finish(self) -> None:
- if self.is_hidden:
- return
- self.file.write(AFTER_BAR)
- self.file.flush()
-
- @property
- def pct(self) -> float:
- if self.finished:
- return 1.0
- return min(self.pos / (float(self.length or 1) or 1), 1.0)
-
- @property
- def time_per_iteration(self) -> float:
- if not self.avg:
- return 0.0
- return sum(self.avg) / float(len(self.avg))
-
- @property
- def eta(self) -> float:
- if self.length is not None and not self.finished:
- return self.time_per_iteration * (self.length - self.pos)
- return 0.0
-
- def format_eta(self) -> str:
- if self.eta_known:
- t = int(self.eta)
- seconds = t % 60
- t //= 60
- minutes = t % 60
- t //= 60
- hours = t % 24
- t //= 24
- if t > 0:
- return f"{t}d {hours:02}:{minutes:02}:{seconds:02}"
- else:
- return f"{hours:02}:{minutes:02}:{seconds:02}"
- return ""
-
- def format_pos(self) -> str:
- pos = str(self.pos)
- if self.length is not None:
- pos += f"/{self.length}"
- return pos
-
- def format_pct(self) -> str:
- return f"{int(self.pct * 100): 4}%"[1:]
-
- def format_bar(self) -> str:
- if self.length is not None:
- bar_length = int(self.pct * self.width)
- bar = self.fill_char * bar_length
- bar += self.empty_char * (self.width - bar_length)
- elif self.finished:
- bar = self.fill_char * self.width
- else:
- chars = list(self.empty_char * (self.width or 1))
- if self.time_per_iteration != 0:
- chars[
- int(
- (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5)
- * self.width
- )
- ] = self.fill_char
- bar = "".join(chars)
- return bar
-
- def format_progress_line(self) -> str:
- show_percent = self.show_percent
-
- info_bits = []
- if self.length is not None and show_percent is None:
- show_percent = not self.show_pos
-
- if self.show_pos:
- info_bits.append(self.format_pos())
- if show_percent:
- info_bits.append(self.format_pct())
- if self.show_eta and self.eta_known and not self.finished:
- info_bits.append(self.format_eta())
- if self.item_show_func is not None:
- item_info = self.item_show_func(self.current_item)
- if item_info is not None:
- info_bits.append(item_info)
-
- return (
- self.bar_template
- % {
- "label": self.label,
- "bar": self.format_bar(),
- "info": self.info_sep.join(info_bits),
- }
- ).rstrip()
-
- def render_progress(self) -> None:
- import shutil
-
- if self.is_hidden:
- # Only output the label as it changes if the output is not a
- # TTY. Use file=stderr if you expect to be piping stdout.
- if self._last_line != self.label:
- self._last_line = self.label
- echo(self.label, file=self.file, color=self.color)
-
- return
-
- buf = []
- # Update width in case the terminal has been resized
- if self.autowidth:
- old_width = self.width
- self.width = 0
- clutter_length = term_len(self.format_progress_line())
- new_width = max(0, shutil.get_terminal_size().columns - clutter_length)
- if new_width < old_width:
- buf.append(BEFORE_BAR)
- buf.append(" " * self.max_width) # type: ignore
- self.max_width = new_width
- self.width = new_width
-
- clear_width = self.width
- if self.max_width is not None:
- clear_width = self.max_width
-
- buf.append(BEFORE_BAR)
- line = self.format_progress_line()
- line_len = term_len(line)
- if self.max_width is None or self.max_width < line_len:
- self.max_width = line_len
-
- buf.append(line)
- buf.append(" " * (clear_width - line_len))
- line = "".join(buf)
- # Render the line only if it changed.
-
- if line != self._last_line:
- self._last_line = line
- echo(line, file=self.file, color=self.color, nl=False)
- self.file.flush()
-
- def make_step(self, n_steps: int) -> None:
- self.pos += n_steps
- if self.length is not None and self.pos >= self.length:
- self.finished = True
-
- if (time.time() - self.last_eta) < 1.0:
- return
-
- self.last_eta = time.time()
-
- # self.avg is a rolling list of length <= 7 of steps where steps are
- # defined as time elapsed divided by the total progress through
- # self.length.
- if self.pos:
- step = (time.time() - self.start) / self.pos
- else:
- step = time.time() - self.start
-
- self.avg = self.avg[-6:] + [step]
-
- self.eta_known = self.length is not None
-
- def update(self, n_steps: int, current_item: t.Optional[V] = None) -> None:
- """Update the progress bar by advancing a specified number of
- steps, and optionally set the ``current_item`` for this new
- position.
-
- :param n_steps: Number of steps to advance.
- :param current_item: Optional item to set as ``current_item``
- for the updated position.
-
- .. versionchanged:: 8.0
- Added the ``current_item`` optional parameter.
-
- .. versionchanged:: 8.0
- Only render when the number of steps meets the
- ``update_min_steps`` threshold.
- """
- if current_item is not None:
- self.current_item = current_item
-
- self._completed_intervals += n_steps
-
- if self._completed_intervals >= self.update_min_steps:
- self.make_step(self._completed_intervals)
- self.render_progress()
- self._completed_intervals = 0
-
- def finish(self) -> None:
- self.eta_known = False
- self.current_item = None
- self.finished = True
-
- def generator(self) -> t.Iterator[V]:
- """Return a generator which yields the items added to the bar
- during construction, and updates the progress bar *after* the
- yielded block returns.
- """
- # WARNING: the iterator interface for `ProgressBar` relies on
- # this and only works because this is a simple generator which
- # doesn't create or manage additional state. If this function
- # changes, the impact should be evaluated both against
- # `iter(bar)` and `next(bar)`. `next()` in particular may call
- # `self.generator()` repeatedly, and this must remain safe in
- # order for that interface to work.
- if not self.entered:
- raise RuntimeError("You need to use progress bars in a with block.")
-
- if self.is_hidden:
- yield from self.iter
- else:
- for rv in self.iter:
- self.current_item = rv
-
- # This allows show_item_func to be updated before the
- # item is processed. Only trigger at the beginning of
- # the update interval.
- if self._completed_intervals == 0:
- self.render_progress()
-
- yield rv
- self.update(1)
-
- self.finish()
- self.render_progress()
-
-
-def pager(generator: t.Iterable[str], color: t.Optional[bool] = None) -> None:
- """Decide what method to use for paging through text."""
- stdout = _default_text_stdout()
-
- # There are no standard streams attached to write to. For example,
- # pythonw on Windows.
- if stdout is None:
- stdout = StringIO()
-
- if not isatty(sys.stdin) or not isatty(stdout):
- return _nullpager(stdout, generator, color)
- pager_cmd = (os.environ.get("PAGER", None) or "").strip()
- if pager_cmd:
- if WIN:
- return _tempfilepager(generator, pager_cmd, color)
- return _pipepager(generator, pager_cmd, color)
- if os.environ.get("TERM") in ("dumb", "emacs"):
- return _nullpager(stdout, generator, color)
- if WIN or sys.platform.startswith("os2"):
- return _tempfilepager(generator, "more <", color)
- if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0:
- return _pipepager(generator, "less", color)
-
- import tempfile
-
- fd, filename = tempfile.mkstemp()
- os.close(fd)
- try:
- if hasattr(os, "system") and os.system(f'more "{filename}"') == 0:
- return _pipepager(generator, "more", color)
- return _nullpager(stdout, generator, color)
- finally:
- os.unlink(filename)
-
-
-def _pipepager(generator: t.Iterable[str], cmd: str, color: t.Optional[bool]) -> None:
- """Page through text by feeding it to another program. Invoking a
- pager through this might support colors.
- """
- import subprocess
-
- env = dict(os.environ)
-
- # If we're piping to less we might support colors under the
- # condition that
- cmd_detail = cmd.rsplit("/", 1)[-1].split()
- if color is None and cmd_detail[0] == "less":
- less_flags = f"{os.environ.get('LESS', '')}{' '.join(cmd_detail[1:])}"
- if not less_flags:
- env["LESS"] = "-R"
- color = True
- elif "r" in less_flags or "R" in less_flags:
- color = True
-
- c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env)
- stdin = t.cast(t.BinaryIO, c.stdin)
- encoding = get_best_encoding(stdin)
- try:
- for text in generator:
- if not color:
- text = strip_ansi(text)
-
- stdin.write(text.encode(encoding, "replace"))
- except (OSError, KeyboardInterrupt):
- pass
- else:
- stdin.close()
-
- # Less doesn't respect ^C, but catches it for its own UI purposes (aborting
- # search or other commands inside less).
- #
- # That means when the user hits ^C, the parent process (click) terminates,
- # but less is still alive, paging the output and messing up the terminal.
- #
- # If the user wants to make the pager exit on ^C, they should set
- # `LESS='-K'`. It's not our decision to make.
- while True:
- try:
- c.wait()
- except KeyboardInterrupt:
- pass
- else:
- break
-
-
-def _tempfilepager(
- generator: t.Iterable[str], cmd: str, color: t.Optional[bool]
-) -> None:
- """Page through text by invoking a program on a temporary file."""
- import tempfile
-
- fd, filename = tempfile.mkstemp()
- # TODO: This never terminates if the passed generator never terminates.
- text = "".join(generator)
- if not color:
- text = strip_ansi(text)
- encoding = get_best_encoding(sys.stdout)
- with open_stream(filename, "wb")[0] as f:
- f.write(text.encode(encoding))
- try:
- os.system(f'{cmd} "{filename}"')
- finally:
- os.close(fd)
- os.unlink(filename)
-
-
-def _nullpager(
- stream: t.TextIO, generator: t.Iterable[str], color: t.Optional[bool]
-) -> None:
- """Simply print unformatted text. This is the ultimate fallback."""
- for text in generator:
- if not color:
- text = strip_ansi(text)
- stream.write(text)
-
-
-class Editor:
- def __init__(
- self,
- editor: t.Optional[str] = None,
- env: t.Optional[t.Mapping[str, str]] = None,
- require_save: bool = True,
- extension: str = ".txt",
- ) -> None:
- self.editor = editor
- self.env = env
- self.require_save = require_save
- self.extension = extension
-
- def get_editor(self) -> str:
- if self.editor is not None:
- return self.editor
- for key in "VISUAL", "EDITOR":
- rv = os.environ.get(key)
- if rv:
- return rv
- if WIN:
- return "notepad"
- for editor in "sensible-editor", "vim", "nano":
- if os.system(f"which {editor} >/dev/null 2>&1") == 0:
- return editor
- return "vi"
-
- def edit_file(self, filename: str) -> None:
- import subprocess
-
- editor = self.get_editor()
- environ: t.Optional[t.Dict[str, str]] = None
-
- if self.env:
- environ = os.environ.copy()
- environ.update(self.env)
-
- try:
- c = subprocess.Popen(f'{editor} "{filename}"', env=environ, shell=True)
- exit_code = c.wait()
- if exit_code != 0:
- raise ClickException(
- _("{editor}: Editing failed").format(editor=editor)
- )
- except OSError as e:
- raise ClickException(
- _("{editor}: Editing failed: {e}").format(editor=editor, e=e)
- ) from e
-
- def edit(self, text: t.Optional[t.AnyStr]) -> t.Optional[t.AnyStr]:
- import tempfile
-
- if not text:
- data = b""
- elif isinstance(text, (bytes, bytearray)):
- data = text
- else:
- if text and not text.endswith("\n"):
- text += "\n"
-
- if WIN:
- data = text.replace("\n", "\r\n").encode("utf-8-sig")
- else:
- data = text.encode("utf-8")
-
- fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension)
- f: t.BinaryIO
-
- try:
- with os.fdopen(fd, "wb") as f:
- f.write(data)
-
- # If the filesystem resolution is 1 second, like Mac OS
- # 10.12 Extended, or 2 seconds, like FAT32, and the editor
- # closes very fast, require_save can fail. Set the modified
- # time to be 2 seconds in the past to work around this.
- os.utime(name, (os.path.getatime(name), os.path.getmtime(name) - 2))
- # Depending on the resolution, the exact value might not be
- # recorded, so get the new recorded value.
- timestamp = os.path.getmtime(name)
-
- self.edit_file(name)
-
- if self.require_save and os.path.getmtime(name) == timestamp:
- return None
-
- with open(name, "rb") as f:
- rv = f.read()
-
- if isinstance(text, (bytes, bytearray)):
- return rv
-
- return rv.decode("utf-8-sig").replace("\r\n", "\n") # type: ignore
- finally:
- os.unlink(name)
-
-
-def open_url(url: str, wait: bool = False, locate: bool = False) -> int:
- import subprocess
-
- def _unquote_file(url: str) -> str:
- from urllib.parse import unquote
-
- if url.startswith("file://"):
- url = unquote(url[7:])
-
- return url
-
- if sys.platform == "darwin":
- args = ["open"]
- if wait:
- args.append("-W")
- if locate:
- args.append("-R")
- args.append(_unquote_file(url))
- null = open("/dev/null", "w")
- try:
- return subprocess.Popen(args, stderr=null).wait()
- finally:
- null.close()
- elif WIN:
- if locate:
- url = _unquote_file(url.replace('"', ""))
- args = f'explorer /select,"{url}"'
- else:
- url = url.replace('"', "")
- wait_str = "/WAIT" if wait else ""
- args = f'start {wait_str} "" "{url}"'
- return os.system(args)
- elif CYGWIN:
- if locate:
- url = os.path.dirname(_unquote_file(url).replace('"', ""))
- args = f'cygstart "{url}"'
- else:
- url = url.replace('"', "")
- wait_str = "-w" if wait else ""
- args = f'cygstart {wait_str} "{url}"'
- return os.system(args)
-
- try:
- if locate:
- url = os.path.dirname(_unquote_file(url)) or "."
- else:
- url = _unquote_file(url)
- c = subprocess.Popen(["xdg-open", url])
- if wait:
- return c.wait()
- return 0
- except OSError:
- if url.startswith(("http://", "https://")) and not locate and not wait:
- import webbrowser
-
- webbrowser.open(url)
- return 0
- return 1
-
-
-def _translate_ch_to_exc(ch: str) -> t.Optional[BaseException]:
- if ch == "\x03":
- raise KeyboardInterrupt()
-
- if ch == "\x04" and not WIN: # Unix-like, Ctrl+D
- raise EOFError()
-
- if ch == "\x1a" and WIN: # Windows, Ctrl+Z
- raise EOFError()
-
- return None
-
-
-if WIN:
- import msvcrt
-
- @contextlib.contextmanager
- def raw_terminal() -> t.Iterator[int]:
- yield -1
-
- def getchar(echo: bool) -> str:
- # The function `getch` will return a bytes object corresponding to
- # the pressed character. Since Windows 10 build 1803, it will also
- # return \x00 when called a second time after pressing a regular key.
- #
- # `getwch` does not share this probably-bugged behavior. Moreover, it
- # returns a Unicode object by default, which is what we want.
- #
- # Either of these functions will return \x00 or \xe0 to indicate
- # a special key, and you need to call the same function again to get
- # the "rest" of the code. The fun part is that \u00e0 is
- # "latin small letter a with grave", so if you type that on a French
- # keyboard, you _also_ get a \xe0.
- # E.g., consider the Up arrow. This returns \xe0 and then \x48. The
- # resulting Unicode string reads as "a with grave" + "capital H".
- # This is indistinguishable from when the user actually types
- # "a with grave" and then "capital H".
- #
- # When \xe0 is returned, we assume it's part of a special-key sequence
- # and call `getwch` again, but that means that when the user types
- # the \u00e0 character, `getchar` doesn't return until a second
- # character is typed.
- # The alternative is returning immediately, but that would mess up
- # cross-platform handling of arrow keys and others that start with
- # \xe0. Another option is using `getch`, but then we can't reliably
- # read non-ASCII characters, because return values of `getch` are
- # limited to the current 8-bit codepage.
- #
- # Anyway, Click doesn't claim to do this Right(tm), and using `getwch`
- # is doing the right thing in more situations than with `getch`.
- func: t.Callable[[], str]
-
- if echo:
- func = msvcrt.getwche # type: ignore
- else:
- func = msvcrt.getwch # type: ignore
-
- rv = func()
-
- if rv in ("\x00", "\xe0"):
- # \x00 and \xe0 are control characters that indicate special key,
- # see above.
- rv += func()
-
- _translate_ch_to_exc(rv)
- return rv
-
-else:
- import tty
- import termios
-
- @contextlib.contextmanager
- def raw_terminal() -> t.Iterator[int]:
- f: t.Optional[t.TextIO]
- fd: int
-
- if not isatty(sys.stdin):
- f = open("/dev/tty")
- fd = f.fileno()
- else:
- fd = sys.stdin.fileno()
- f = None
-
- try:
- old_settings = termios.tcgetattr(fd)
-
- try:
- tty.setraw(fd)
- yield fd
- finally:
- termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
- sys.stdout.flush()
-
- if f is not None:
- f.close()
- except termios.error:
- pass
-
- def getchar(echo: bool) -> str:
- with raw_terminal() as fd:
- ch = os.read(fd, 32).decode(get_best_encoding(sys.stdin), "replace")
-
- if echo and isatty(sys.stdout):
- sys.stdout.write(ch)
-
- _translate_ch_to_exc(ch)
- return ch
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/response/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/response/__init__.py
deleted file mode 100644
index c637335013c599b07de054fba07b47ecb86ad3e8..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/response/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Init params."""
diff --git a/spaces/joushe/moe-tts/modules.py b/spaces/joushe/moe-tts/modules.py
deleted file mode 100644
index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000
--- a/spaces/joushe/moe-tts/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
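-    # layer normalization applied over the channel dimension of (batch, channels, time) tensors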
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
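-    # stack of Conv1d -> LayerNorm -> ReLU -> Dropout blocks whose output is projected and added back to the input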
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
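-    # WaveNet-style stack of dilated 1-D convolutions with gated tanh/sigmoid activations,
-    # residual/skip connections, and optional global conditioning through gin_channels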
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
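- """
- Invertible log transform: forward returns log(x) and its log-determinant,
- reverse applies exp(x).
- """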
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
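- """
- Flips the channel dimension; volume-preserving, so the log-determinant is zero.
- """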
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
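- """
- Learned per-channel affine transform y = m + exp(logs) * x with analytic log-determinant.
- """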
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
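- """
- Affine coupling layer: half of the channels condition a WN network that predicts
- the mean (and optionally log-scale) used to transform the other half.
- """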
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
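- """
- Coupling layer using a piecewise rational-quadratic spline transform whose
- parameters are predicted by a dilated depth-separable convolution stack.
- """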
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, c*(3*num_bins-1), t] -> [b, c, t, 3*num_bins-1]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/justYu2001/furniture-detection/utils/wandb_logging/log_dataset.py b/spaces/justYu2001/furniture-detection/utils/wandb_logging/log_dataset.py
deleted file mode 100644
index 74cd6c6cd3b182572a6e5bec68de02a9bd0d552d..0000000000000000000000000000000000000000
--- a/spaces/justYu2001/furniture-detection/utils/wandb_logging/log_dataset.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import argparse
-
-import yaml
-
-from wandb_utils import WandbLogger
-
-WANDB_ARTIFACT_PREFIX = 'wandb-artifact://'
-
-
-def create_dataset_artifact(opt):
- with open(opt.data) as f:
- data = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- logger = WandbLogger(opt, '', None, data, job_type='Dataset Creation')
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
- parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
- parser.add_argument('--project', type=str, default='YOLOR', help='name of W&B Project')
- opt = parser.parse_args()
- opt.resume = False # Explicitly disallow resume check for dataset upload job
-
- create_dataset_artifact(opt)
diff --git a/spaces/jvde/sovits-webui/models.py b/spaces/jvde/sovits-webui/models.py
deleted file mode 100644
index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000
--- a/spaces/jvde/sovits-webui/models.py
+++ /dev/null
@@ -1,534 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
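- """
- Flow-based duration predictor: returns the duration negative log-likelihood
- during training and sampled log-durations when run in reverse.
- """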
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # this override should be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
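- """
- Deterministic duration predictor: two Conv1d + ReLU + LayerNorm blocks followed by a 1x1 projection.
- """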
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
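- """
- Embeds input tokens, encodes them with the self-attention encoder and projects
- to the prior mean and log-variance.
- """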
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
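- """
- Encodes the input spectrogram with a WN stack, projects to posterior mean and
- log-variance, and returns a sampled latent z.
- """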
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
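- """
- HiFi-GAN-style decoder: transposed-convolution upsampling interleaved with
- multi-receptive-field residual blocks, ending in a tanh output.
- """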
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
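- """
- Ensemble of one scale discriminator and period discriminators (periods 2, 3, 5, 7, 11);
- returns outputs and feature maps for both real and generated audio.
- """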
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 0, "n_speakers has to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
diff --git a/spaces/kangvcar/RealChar/README.md b/spaces/kangvcar/RealChar/README.md
deleted file mode 100644
index 25bc16e455b66246f4595da58aec0641a767d640..0000000000000000000000000000000000000000
--- a/spaces/kangvcar/RealChar/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: RealChar - Your Realtime AI Character
-emoji: 🎙️🤖
-colorFrom: blue
-colorTo: indigo
-sdk: docker
-sdk_version: 1.24.0
-duplicated_from: pycui/RealChar
----
diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/audio.py b/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/audio.py
deleted file mode 100644
index 83dc96c63c962bc8e13c446d05e27c009fb3239f..0000000000000000000000000000000000000000
--- a/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/audio.py
+++ /dev/null
@@ -1,206 +0,0 @@
-import librosa
-import librosa.filters
-import numpy as np
-from scipy import signal
-from scipy.io import wavfile
-import soundfile as sf
-
-
-def load_wav(path, sr):
- return librosa.core.load(path, sr=sr)[0]
-
-def save_wav(wav, path, sr):
- wav *= 32767 / max(0.01, np.max(np.abs(wav)))
- #proposed by @dsmiller
- wavfile.write(path, sr, wav.astype(np.int16))
-
-def save_wavenet_wav(wav, path, sr):
- sf.write(path, wav.astype(np.float32), sr)
-
-def preemphasis(wav, k, preemphasize=True):
- if preemphasize:
- return signal.lfilter([1, -k], [1], wav)
- return wav
-
-def inv_preemphasis(wav, k, inv_preemphasize=True):
- if inv_preemphasize:
- return signal.lfilter([1], [1, -k], wav)
- return wav
-
-#From https://github.com/r9y9/wavenet_vocoder/blob/master/audio.py
-def start_and_end_indices(quantized, silence_threshold=2):
- for start in range(quantized.size):
- if abs(quantized[start] - 127) > silence_threshold:
- break
- for end in range(quantized.size - 1, 1, -1):
- if abs(quantized[end] - 127) > silence_threshold:
- break
-
- assert abs(quantized[start] - 127) > silence_threshold
- assert abs(quantized[end] - 127) > silence_threshold
-
- return start, end
-
-def get_hop_size(hparams):
- hop_size = hparams.hop_size
- if hop_size is None:
- assert hparams.frame_shift_ms is not None
- hop_size = int(hparams.frame_shift_ms / 1000 * hparams.sample_rate)
- return hop_size
-
-def linearspectrogram(wav, hparams):
- D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams)
- S = _amp_to_db(np.abs(D), hparams) - hparams.ref_level_db
-
- if hparams.signal_normalization:
- return _normalize(S, hparams)
- return S
-
-def melspectrogram(wav, hparams):
- D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams)
- S = _amp_to_db(_linear_to_mel(np.abs(D), hparams), hparams) - hparams.ref_level_db
-
- if hparams.signal_normalization:
- return _normalize(S, hparams)
- return S
-
-def inv_linear_spectrogram(linear_spectrogram, hparams):
- """Converts linear spectrogram to waveform using librosa"""
- if hparams.signal_normalization:
- D = _denormalize(linear_spectrogram, hparams)
- else:
- D = linear_spectrogram
-
- S = _db_to_amp(D + hparams.ref_level_db) #Convert back to linear
-
- if hparams.use_lws:
- processor = _lws_processor(hparams)
- D = processor.run_lws(S.astype(np.float64).T ** hparams.power)
- y = processor.istft(D).astype(np.float32)
- return inv_preemphasis(y, hparams.preemphasis, hparams.preemphasize)
- else:
- return inv_preemphasis(_griffin_lim(S ** hparams.power, hparams), hparams.preemphasis, hparams.preemphasize)
-
-def inv_mel_spectrogram(mel_spectrogram, hparams):
- """Converts mel spectrogram to waveform using librosa"""
- if hparams.signal_normalization:
- D = _denormalize(mel_spectrogram, hparams)
- else:
- D = mel_spectrogram
-
- S = _mel_to_linear(_db_to_amp(D + hparams.ref_level_db), hparams) # Convert back to linear
-
- if hparams.use_lws:
- processor = _lws_processor(hparams)
- D = processor.run_lws(S.astype(np.float64).T ** hparams.power)
- y = processor.istft(D).astype(np.float32)
- return inv_preemphasis(y, hparams.preemphasis, hparams.preemphasize)
- else:
- return inv_preemphasis(_griffin_lim(S ** hparams.power, hparams), hparams.preemphasis, hparams.preemphasize)
-
-def _lws_processor(hparams):
- import lws
- return lws.lws(hparams.n_fft, get_hop_size(hparams), fftsize=hparams.win_size, mode="speech")
-
-def _griffin_lim(S, hparams):
- """librosa implementation of Griffin-Lim
- Based on https://github.com/librosa/librosa/issues/434
- """
- angles = np.exp(2j * np.pi * np.random.rand(*S.shape))
- S_complex = np.abs(S).astype(np.complex128)
- y = _istft(S_complex * angles, hparams)
- for i in range(hparams.griffin_lim_iters):
- angles = np.exp(1j * np.angle(_stft(y, hparams)))
- y = _istft(S_complex * angles, hparams)
- return y
-
-def _stft(y, hparams):
- if hparams.use_lws:
- return _lws_processor(hparams).stft(y).T
- else:
- return librosa.stft(y=y, n_fft=hparams.n_fft, hop_length=get_hop_size(hparams), win_length=hparams.win_size)
-
-def _istft(y, hparams):
- return librosa.istft(y, hop_length=get_hop_size(hparams), win_length=hparams.win_size)
-
-##########################################################
-#Those are only correct when using lws!!! (This was messing with Wavenet quality for a long time!)
-def num_frames(length, fsize, fshift):
- """Compute number of time frames of spectrogram
- """
- pad = (fsize - fshift)
- if length % fshift == 0:
- M = (length + pad * 2 - fsize) // fshift + 1
- else:
- M = (length + pad * 2 - fsize) // fshift + 2
- return M
-
-
-def pad_lr(x, fsize, fshift):
- """Compute left and right padding
- """
- M = num_frames(len(x), fsize, fshift)
- pad = (fsize - fshift)
- T = len(x) + 2 * pad
- r = (M - 1) * fshift + fsize - T
- return pad, pad + r
-##########################################################
-#Librosa correct padding
-def librosa_pad_lr(x, fsize, fshift):
- return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0]
-
-# Conversions
-_mel_basis = None
-_inv_mel_basis = None
-
-def _linear_to_mel(spectogram, hparams):
- global _mel_basis
- if _mel_basis is None:
- _mel_basis = _build_mel_basis(hparams)
- return np.dot(_mel_basis, spectogram)
-
-def _mel_to_linear(mel_spectrogram, hparams):
- global _inv_mel_basis
- if _inv_mel_basis is None:
- _inv_mel_basis = np.linalg.pinv(_build_mel_basis(hparams))
- return np.maximum(1e-10, np.dot(_inv_mel_basis, mel_spectrogram))
-
-def _build_mel_basis(hparams):
- assert hparams.fmax <= hparams.sample_rate // 2
- return librosa.filters.mel(sr=hparams.sample_rate, n_fft=hparams.n_fft, n_mels=hparams.num_mels,
- fmin=hparams.fmin, fmax=hparams.fmax)
-
-def _amp_to_db(x, hparams):
- min_level = np.exp(hparams.min_level_db / 20 * np.log(10))
- return 20 * np.log10(np.maximum(min_level, x))
-
-def _db_to_amp(x):
- return np.power(10.0, (x) * 0.05)
-
-def _normalize(S, hparams):
- if hparams.allow_clipping_in_normalization:
- if hparams.symmetric_mels:
- return np.clip((2 * hparams.max_abs_value) * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - hparams.max_abs_value,
- -hparams.max_abs_value, hparams.max_abs_value)
- else:
- return np.clip(hparams.max_abs_value * ((S - hparams.min_level_db) / (-hparams.min_level_db)), 0, hparams.max_abs_value)
-
- assert S.max() <= 0 and S.min() - hparams.min_level_db >= 0
- if hparams.symmetric_mels:
- return (2 * hparams.max_abs_value) * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - hparams.max_abs_value
- else:
- return hparams.max_abs_value * ((S - hparams.min_level_db) / (-hparams.min_level_db))
-
-def _denormalize(D, hparams):
- if hparams.allow_clipping_in_normalization:
- if hparams.symmetric_mels:
- return (((np.clip(D, -hparams.max_abs_value,
- hparams.max_abs_value) + hparams.max_abs_value) * -hparams.min_level_db / (2 * hparams.max_abs_value))
- + hparams.min_level_db)
- else:
- return ((np.clip(D, 0, hparams.max_abs_value) * -hparams.min_level_db / hparams.max_abs_value) + hparams.min_level_db)
-
- if hparams.symmetric_mels:
- return (((D + hparams.max_abs_value) * -hparams.min_level_db / (2 * hparams.max_abs_value)) + hparams.min_level_db)
- else:
- return ((D * -hparams.min_level_db / hparams.max_abs_value) + hparams.min_level_db)
diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/H2o.py b/spaces/kepl/gpt/g4f/Provider/Providers/H2o.py
deleted file mode 100644
index eabf94e2dc1e6167f746a820e34c335f2aa8578e..0000000000000000000000000000000000000000
--- a/spaces/kepl/gpt/g4f/Provider/Providers/H2o.py
+++ /dev/null
@@ -1,106 +0,0 @@
-from requests import Session
-from uuid import uuid4
-from json import loads
-import os
-import json
-import requests
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt-gm.h2o.ai'
-model = ['falcon-40b', 'falcon-7b', 'llama-13b']
-supports_stream = True
-needs_auth = False
-
-models = {
- 'falcon-7b': 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3',
- 'falcon-40b': 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1',
- 'llama-13b': 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b'
-}
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- conversation = 'instruction: this is a conversation between a user and an AI assistant, respond to the latest message, referring to the conversation if needed\n'
- for message in messages:
- conversation += '%s: %s\n' % (message['role'], message['content'])
- conversation += 'assistant:'
-
- client = Session()
- client.headers = {
- 'authority': 'gpt-gm.h2o.ai',
- 'origin': 'https://gpt-gm.h2o.ai',
- 'referer': 'https://gpt-gm.h2o.ai/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest': 'document',
- 'sec-fetch-mode': 'navigate',
- 'sec-fetch-site': 'same-origin',
- 'sec-fetch-user': '?1',
- 'upgrade-insecure-requests': '1',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- }
-
- client.get('https://gpt-gm.h2o.ai/')
- response = client.post('https://gpt-gm.h2o.ai/settings', data={
- 'ethicsModalAccepted': 'true',
- 'shareConversationsWithModelAuthors': 'true',
- 'ethicsModalAcceptedAt': '',
- 'activeModel': 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1',
- 'searchEnabled': 'true',
- })
-
- headers = {
- 'authority': 'gpt-gm.h2o.ai',
- 'accept': '*/*',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'origin': 'https://gpt-gm.h2o.ai',
- 'referer': 'https://gpt-gm.h2o.ai/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- }
-
- json_data = {
- 'model': models[model]
- }
-
- response = client.post('https://gpt-gm.h2o.ai/conversation',
- headers=headers, json=json_data)
- conversationId = response.json()['conversationId']
-
-
- completion = client.post(f'https://gpt-gm.h2o.ai/conversation/{conversationId}', stream=True, json = {
- 'inputs': conversation,
- 'parameters': {
- 'temperature': kwargs.get('temperature', 0.4),
- 'truncate': kwargs.get('truncate', 2048),
- 'max_new_tokens': kwargs.get('max_new_tokens', 1024),
- 'do_sample': kwargs.get('do_sample', True),
- 'repetition_penalty': kwargs.get('repetition_penalty', 1.2),
- 'return_full_text': kwargs.get('return_full_text', False)
- },
- 'stream': True,
- 'options': {
- 'id': kwargs.get('id', str(uuid4())),
- 'response_id': kwargs.get('response_id', str(uuid4())),
- 'is_retry': False,
- 'use_cache': False,
- 'web_search_id': ''
- }
- })
-
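- # each streamed line prefixed with 'data' carries one generated token; stop at the end-of-text marker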
- for line in completion.iter_lines():
- if b'data' in line:
- line = loads(line.decode('utf-8').replace('data:', ''))
- token = line['token']['text']
-
- if token == '<|endoftext|>':
- break
- else:
- yield (token)
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/keras-dreambooth/dreambooth-bioshock/app.py b/spaces/keras-dreambooth/dreambooth-bioshock/app.py
deleted file mode 100644
index 179b2da6e0ad8ca59730431cca57fa42cefff9c6..0000000000000000000000000000000000000000
--- a/spaces/keras-dreambooth/dreambooth-bioshock/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from huggingface_hub import from_pretrained_keras
-from keras_cv import models
-import gradio as gr
-
-sd_dreambooth_model = models.StableDiffusion(
- img_width=512, img_height=512
-)
-db_diffusion_model = from_pretrained_keras("merve/dreambooth_bioshock_v2")
-sd_dreambooth_model._diffusion_model = db_diffusion_model
-
-# generate images
-def infer(prompt, negative_prompt, num_imgs_to_gen, num_steps, guidance_scale):
- generated_images = sd_dreambooth_model.text_to_image(
- prompt,
- negative_prompt=negative_prompt,
- batch_size=num_imgs_to_gen,
- num_steps=num_steps,
- unconditional_guidance_scale=guidance_scale
- )
- return generated_images
-
-
-# output = gr.Gallery(label="Outputs").style(grid=(2,2))
-
-# build the Gradio interface: the inference function, prompt and settings inputs, and a gallery output for multiple images
-gr.Interface(
- infer, [
- gr.Textbox(label="Positive Prompt", value="a cafe in sks bskscnry style, rendered in unreal engine, trending on art station, 8k, hyperfocus, hyperrealistic, saturated colors, art deco"),
- gr.Textbox(label="Negative Prompt", value="ugly, low quality, blurry"),
- gr.Slider(label='Number of gen image', minimum=1, maximum=4, value=1, step=1),
- gr.Slider(label="Inference Steps",value=100),
- gr.Number(label='Guidance scale', value=10),
- ], [
- gr.Gallery(show_label=False),
- ],
- title="Dreambooth Bioshock",
- description = "This is a dreambooth model fine-tuned on images of Bioshock sceneries. To play with the demo, input the concept with {sks bskscnry}.",
- examples = [["a cafe in sks bskscnry style, rendered in unreal engine, trending on art station, 8k, hyperfocus, hyperrealistic, saturated colors, art deco", "ugly, low quality, blurry", 2, 100, 7.5]],
- cache_examples=True).launch()
\ No newline at end of file
diff --git a/spaces/keras-io/keras-image-classifier/app.py b/spaces/keras-io/keras-image-classifier/app.py
deleted file mode 100644
index 8b6f017cc79a27325abc19529d877e46da49c1eb..0000000000000000000000000000000000000000
--- a/spaces/keras-io/keras-image-classifier/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import gradio as gr
-import tensorflow as tf
-import numpy as np
-import requests
-
-inception_net = tf.keras.applications.InceptionV3() # load the model
-response = requests.get("https://git.io/JJkYN")
-labels = response.text.split("\n")
-
-def classify_image(inp):
- inp = inp.reshape((-1, 299, 299, 3))
- inp = tf.keras.applications.inception_v3.preprocess_input(inp)
- prediction = inception_net.predict(inp).flatten()
- return {labels[i]: float(prediction[i]) for i in range(1000)}
-
-image = gr.inputs.Image(shape=(299, 299))
-label = gr.outputs.Label(num_top_classes=3)
-
-gr.Interface(fn=classify_image, inputs=image, outputs=label, capture_session=True, theme = "grass", examples = [["cat.jpeg"]]).launch()
\ No newline at end of file
diff --git a/spaces/keras-io/vit-small-ds/app.py b/spaces/keras-io/vit-small-ds/app.py
deleted file mode 100644
index f029b067b4c9b90d39db864926410ea94a514154..0000000000000000000000000000000000000000
--- a/spaces/keras-io/vit-small-ds/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from huggingface_hub import from_pretrained_keras
-import tensorflow as tf
-from tensorflow_addons.optimizers import AdamW
-import numpy as np
-import gradio as gr
-
-tf.keras.optimizers.AdamW = AdamW
-model = from_pretrained_keras("keras-io/vit_small_ds_v2")
-
-
-
-def softmax(x):
- f_x = np.exp(x) / np.sum(np.exp(x))
- return f_x
-
-labels = ["apple", "aquarium_fish", "baby", "bear", "beaver", "bed", "bee", "beetle", "bicycle", "bottle", "bowl", "boy", "bridge", "bus", "butterfly", "camel", "can", "castle", "caterpillar", "cattle", "chair", "chimpanzee", "clock", "cloud", "cockroach", "couch", "cra", "crocodile", "cup", "dinosaur", "dolphin", "elephant", "flatfish", "forest", "fox", "girl", "hamster", "house", "kangaroo", "keyboard", "lamp", "lawn_mower", "leopard", "lion", "lizard", "lobster", "man", "maple_tree", "motorcycle", "mountain", "mouse", "mushroom", "oak_tree", "orange", "orchid", "otter", "palm_tree", "pear", "pickup_truck", "pine_tree", "plain", "plate", "poppy", "porcupine", "possum", "rabbit", "raccoon", "ray", "road", "rocket", "rose", "sea", "seal", "shark", "shrew", "skunk", "skyscraper", "snail", "snake", "spider", "squirrel", "streetcar", "sunflower", "sweet_pepper", "table", "tank", "telephone", "television", "tiger", "tractor", "train", "trout", "tulip", "turtle", "wardrobe", "whale", "willow_tree", "wolf", "woman", "worm"]
-
-def classify_image(image):
- image = image.reshape((-1, 32, 32, 3))
- pred = model.predict(image)
- prediction = softmax(pred)[0]
- return {labels[i]: float(prediction[i]) for i in range(100)}
-
-image = gr.inputs.Image(shape=(32,32))
-label = gr.outputs.Label(num_top_classes=5)
-
-iface = gr.Interface(classify_image, image, label,
- #outputs=[
- # gr.outputs.Textbox(label="Engine issue"),
- # gr.outputs.Textbox(label="Engine issue score")],
- examples=["dinosaur.jpg"],
- title="Image classification on CIFAR-100",
- description = "Model for classifying images from the CIFAR dataset using a vision transformer trained with small data.",
- article = "Author: Jónathan Heras "
-# examples = ["sample.csv"],
-)
-
-
-iface.launch()
-
diff --git a/spaces/kermitt2/grobid-crf/README.md b/spaces/kermitt2/grobid-crf/README.md
deleted file mode 100644
index 8492e866b34c1f5ff6d898897510fd3851724849..0000000000000000000000000000000000000000
--- a/spaces/kermitt2/grobid-crf/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Grobid CRF only
-emoji: 🌍
-colorFrom: purple
-colorTo: green
-sdk: docker
-pinned: false
-license: apache-2.0
-app_port: 8070
-fullWidth: true
----
-
-Fresh from https://github.com/kermitt2/grobid
\ No newline at end of file
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/modules/util.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/modules/util.py
deleted file mode 100644
index b916deefbb8b957ad6ab3cd7403c28513e5ae18e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/facerender/modules/util.py
+++ /dev/null
@@ -1,564 +0,0 @@
-from torch import nn
-
-import torch.nn.functional as F
-import torch
-
-from src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d
-from src.facerender.sync_batchnorm import SynchronizedBatchNorm3d as BatchNorm3d
-
-import torch.nn.utils.spectral_norm as spectral_norm
-
-
-def kp2gaussian(kp, spatial_size, kp_variance):
- """
- Transform a keypoint into gaussian like representation
- """
- mean = kp['value']
-
- coordinate_grid = make_coordinate_grid(spatial_size, mean.type())
- number_of_leading_dimensions = len(mean.shape) - 1
- shape = (1,) * number_of_leading_dimensions + coordinate_grid.shape
- coordinate_grid = coordinate_grid.view(*shape)
- repeats = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 1)
- coordinate_grid = coordinate_grid.repeat(*repeats)
-
- # Preprocess kp shape
- shape = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 3)
- mean = mean.view(*shape)
-
- mean_sub = (coordinate_grid - mean)
-
- out = torch.exp(-0.5 * (mean_sub ** 2).sum(-1) / kp_variance)
-
- return out
-
-def make_coordinate_grid_2d(spatial_size, type):
- """
- Create a meshgrid [-1,1] x [-1,1] of given spatial_size.
- """
- h, w = spatial_size
- x = torch.arange(w).type(type)
- y = torch.arange(h).type(type)
-
- x = (2 * (x / (w - 1)) - 1)
- y = (2 * (y / (h - 1)) - 1)
-
- yy = y.view(-1, 1).repeat(1, w)
- xx = x.view(1, -1).repeat(h, 1)
-
- meshed = torch.cat([xx.unsqueeze_(2), yy.unsqueeze_(2)], 2)
-
- return meshed
-
-
-def make_coordinate_grid(spatial_size, type):
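- """
- Create a (d, h, w) meshgrid over [-1, 1] in each dimension.
- """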
- d, h, w = spatial_size
- x = torch.arange(w).type(type)
- y = torch.arange(h).type(type)
- z = torch.arange(d).type(type)
-
- x = (2 * (x / (w - 1)) - 1)
- y = (2 * (y / (h - 1)) - 1)
- z = (2 * (z / (d - 1)) - 1)
-
- yy = y.view(1, -1, 1).repeat(d, 1, w)
- xx = x.view(1, 1, -1).repeat(d, h, 1)
- zz = z.view(-1, 1, 1).repeat(1, h, w)
-
- meshed = torch.cat([xx.unsqueeze_(3), yy.unsqueeze_(3), zz.unsqueeze_(3)], 3)
-
- return meshed
-
-
-class ResBottleneck(nn.Module):
- def __init__(self, in_features, stride):
- super(ResBottleneck, self).__init__()
- self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features//4, kernel_size=1)
- self.conv2 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features//4, kernel_size=3, padding=1, stride=stride)
- self.conv3 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features, kernel_size=1)
- self.norm1 = BatchNorm2d(in_features//4, affine=True)
- self.norm2 = BatchNorm2d(in_features//4, affine=True)
- self.norm3 = BatchNorm2d(in_features, affine=True)
-
- self.stride = stride
- if self.stride != 1:
- self.skip = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=1, stride=stride)
- self.norm4 = BatchNorm2d(in_features, affine=True)
-
- def forward(self, x):
- out = self.conv1(x)
- out = self.norm1(out)
- out = F.relu(out)
- out = self.conv2(out)
- out = self.norm2(out)
- out = F.relu(out)
- out = self.conv3(out)
- out = self.norm3(out)
- if self.stride != 1:
- x = self.skip(x)
- x = self.norm4(x)
- out += x
- out = F.relu(out)
- return out
-
-
-class ResBlock2d(nn.Module):
- """
- Res block, preserve spatial resolution.
- """
-
- def __init__(self, in_features, kernel_size, padding):
- super(ResBlock2d, self).__init__()
- self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.conv2 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.norm1 = BatchNorm2d(in_features, affine=True)
- self.norm2 = BatchNorm2d(in_features, affine=True)
-
- def forward(self, x):
- out = self.norm1(x)
- out = F.relu(out)
- out = self.conv1(out)
- out = self.norm2(out)
- out = F.relu(out)
- out = self.conv2(out)
- out += x
- return out
-
-
-class ResBlock3d(nn.Module):
- """
- Res block, preserve spatial resolution.
- """
-
- def __init__(self, in_features, kernel_size, padding):
- super(ResBlock3d, self).__init__()
- self.conv1 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.conv2 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size,
- padding=padding)
- self.norm1 = BatchNorm3d(in_features, affine=True)
- self.norm2 = BatchNorm3d(in_features, affine=True)
-
- def forward(self, x):
- out = self.norm1(x)
- out = F.relu(out)
- out = self.conv1(out)
- out = self.norm2(out)
- out = F.relu(out)
- out = self.conv2(out)
- out += x
- return out
-
-
-class UpBlock2d(nn.Module):
- """
- Upsampling block for use in decoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(UpBlock2d, self).__init__()
-
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm2d(out_features, affine=True)
-
- def forward(self, x):
- out = F.interpolate(x, scale_factor=2)
- out = self.conv(out)
- out = self.norm(out)
- out = F.relu(out)
- return out
-
-class UpBlock3d(nn.Module):
- """
- Upsampling block for use in decoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(UpBlock3d, self).__init__()
-
- self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm3d(out_features, affine=True)
-
- def forward(self, x):
- # out = F.interpolate(x, scale_factor=(1, 2, 2), mode='trilinear')
- out = F.interpolate(x, scale_factor=(1, 2, 2))
- out = self.conv(out)
- out = self.norm(out)
- out = F.relu(out)
- return out
-
-
-class DownBlock2d(nn.Module):
- """
- Downsampling block for use in encoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(DownBlock2d, self).__init__()
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm2d(out_features, affine=True)
- self.pool = nn.AvgPool2d(kernel_size=(2, 2))
-
- def forward(self, x):
- out = self.conv(x)
- out = self.norm(out)
- out = F.relu(out)
- out = self.pool(out)
- return out
-
-
-class DownBlock3d(nn.Module):
- """
- Downsampling block for use in encoder.
- """
-
- def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1):
- super(DownBlock3d, self).__init__()
- '''
- self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups, stride=(1, 2, 2))
- '''
- self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size,
- padding=padding, groups=groups)
- self.norm = BatchNorm3d(out_features, affine=True)
- self.pool = nn.AvgPool3d(kernel_size=(1, 2, 2))
-
- def forward(self, x):
- out = self.conv(x)
- out = self.norm(out)
- out = F.relu(out)
- out = self.pool(out)
- return out
-
-
-class SameBlock2d(nn.Module):
- """
- Simple block, preserve spatial resolution.
- """
-
- def __init__(self, in_features, out_features, groups=1, kernel_size=3, padding=1, lrelu=False):
- super(SameBlock2d, self).__init__()
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features,
- kernel_size=kernel_size, padding=padding, groups=groups)
- self.norm = BatchNorm2d(out_features, affine=True)
- if lrelu:
- self.ac = nn.LeakyReLU()
- else:
- self.ac = nn.ReLU()
-
- def forward(self, x):
- out = self.conv(x)
- out = self.norm(out)
- out = self.ac(out)
- return out
-
-
-class Encoder(nn.Module):
- """
- Hourglass Encoder
- """
-
- def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256):
- super(Encoder, self).__init__()
-
- down_blocks = []
- for i in range(num_blocks):
- down_blocks.append(DownBlock3d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)),
- min(max_features, block_expansion * (2 ** (i + 1))),
- kernel_size=3, padding=1))
- self.down_blocks = nn.ModuleList(down_blocks)
-
- def forward(self, x):
- outs = [x]
- for down_block in self.down_blocks:
- outs.append(down_block(outs[-1]))
- return outs
-
-
-class Decoder(nn.Module):
- """
- Hourglass Decoder
- """
-
- def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256):
- super(Decoder, self).__init__()
-
- up_blocks = []
-
- for i in range(num_blocks)[::-1]:
- in_filters = (1 if i == num_blocks - 1 else 2) * min(max_features, block_expansion * (2 ** (i + 1)))
- out_filters = min(max_features, block_expansion * (2 ** i))
- up_blocks.append(UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1))
-
- self.up_blocks = nn.ModuleList(up_blocks)
- # self.out_filters = block_expansion
- self.out_filters = block_expansion + in_features
-
- self.conv = nn.Conv3d(in_channels=self.out_filters, out_channels=self.out_filters, kernel_size=3, padding=1)
- self.norm = BatchNorm3d(self.out_filters, affine=True)
-
- def forward(self, x):
- out = x.pop()
- # for up_block in self.up_blocks[:-1]:
- for up_block in self.up_blocks:
- out = up_block(out)
- skip = x.pop()
- out = torch.cat([out, skip], dim=1)
- # out = self.up_blocks[-1](out)
- out = self.conv(out)
- out = self.norm(out)
- out = F.relu(out)
- return out
-
-
-class Hourglass(nn.Module):
- """
- Hourglass architecture.
- """
-
- def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256):
- super(Hourglass, self).__init__()
- self.encoder = Encoder(block_expansion, in_features, num_blocks, max_features)
- self.decoder = Decoder(block_expansion, in_features, num_blocks, max_features)
- self.out_filters = self.decoder.out_filters
-
- def forward(self, x):
- return self.decoder(self.encoder(x))
-
-
-class KPHourglass(nn.Module):
- """
- Hourglass architecture.
- """
-
- def __init__(self, block_expansion, in_features, reshape_features, reshape_depth, num_blocks=3, max_features=256):
- super(KPHourglass, self).__init__()
-
- self.down_blocks = nn.Sequential()
- for i in range(num_blocks):
- self.down_blocks.add_module('down'+ str(i), DownBlock2d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)),
- min(max_features, block_expansion * (2 ** (i + 1))),
- kernel_size=3, padding=1))
-
- in_filters = min(max_features, block_expansion * (2 ** num_blocks))
- self.conv = nn.Conv2d(in_channels=in_filters, out_channels=reshape_features, kernel_size=1)
-
- self.up_blocks = nn.Sequential()
- for i in range(num_blocks):
- in_filters = min(max_features, block_expansion * (2 ** (num_blocks - i)))
- out_filters = min(max_features, block_expansion * (2 ** (num_blocks - i - 1)))
- self.up_blocks.add_module('up'+ str(i), UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1))
-
- self.reshape_depth = reshape_depth
- self.out_filters = out_filters
-
- def forward(self, x):
- out = self.down_blocks(x)
- out = self.conv(out)
- bs, c, h, w = out.shape
- out = out.view(bs, c//self.reshape_depth, self.reshape_depth, h, w)
- out = self.up_blocks(out)
-
- return out
-
-
-
-class AntiAliasInterpolation2d(nn.Module):
- """
- Band-limited downsampling, for better preservation of the input signal.
- """
- def __init__(self, channels, scale):
- super(AntiAliasInterpolation2d, self).__init__()
- sigma = (1 / scale - 1) / 2
- kernel_size = 2 * round(sigma * 4) + 1
- self.ka = kernel_size // 2
- self.kb = self.ka - 1 if kernel_size % 2 == 0 else self.ka
-
- kernel_size = [kernel_size, kernel_size]
- sigma = [sigma, sigma]
- # The gaussian kernel is the product of the
- # gaussian function of each dimension.
- kernel = 1
- meshgrids = torch.meshgrid(
- [
- torch.arange(size, dtype=torch.float32)
- for size in kernel_size
- ]
- )
- for size, std, mgrid in zip(kernel_size, sigma, meshgrids):
- mean = (size - 1) / 2
- kernel *= torch.exp(-(mgrid - mean) ** 2 / (2 * std ** 2))
-
- # Make sure sum of values in gaussian kernel equals 1.
- kernel = kernel / torch.sum(kernel)
- # Reshape to depthwise convolutional weight
- kernel = kernel.view(1, 1, *kernel.size())
- kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1))
-
- self.register_buffer('weight', kernel)
- self.groups = channels
- self.scale = scale
- inv_scale = 1 / scale
- self.int_inv_scale = int(inv_scale)
-
- def forward(self, input):
- if self.scale == 1.0:
- return input
-
- out = F.pad(input, (self.ka, self.kb, self.ka, self.kb))
- out = F.conv2d(out, weight=self.weight, groups=self.groups)
- out = out[:, :, ::self.int_inv_scale, ::self.int_inv_scale]
-
- return out
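
As a quick illustration of the band-limited downsampling module above, here is a minimal, hypothetical usage sketch. It assumes only that PyTorch is installed and that the `AntiAliasInterpolation2d` class defined above is importable; the tensor shape and scale value are illustrative rather than taken from the original pipeline.

```python
import torch

# Downsample a batch of 3-channel maps to half resolution (scale=0.5 gives an
# integer stride of 2). The module first blurs with a Gaussian kernel whose size
# is derived from the scale, then keeps every int(1/scale)-th pixel.
aa = AntiAliasInterpolation2d(channels=3, scale=0.5)
x = torch.randn(2, 3, 256, 256)   # (batch, channels, height, width)
y = aa(x)
print(y.shape)                    # torch.Size([2, 3, 128, 128])
```

Because the Gaussian blur suppresses frequencies above the new Nyquist limit before the strided slice, the result aliases far less than plain subsampling.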
-
-
-class SPADE(nn.Module):
- def __init__(self, norm_nc, label_nc):
- super().__init__()
-
- self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)
- nhidden = 128
-
- self.mlp_shared = nn.Sequential(
- nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1),
- nn.ReLU())
- self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
- self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
-
- def forward(self, x, segmap):
- normalized = self.param_free_norm(x)
- segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest')
- actv = self.mlp_shared(segmap)
- gamma = self.mlp_gamma(actv)
- beta = self.mlp_beta(actv)
- out = normalized * (1 + gamma) + beta
- return out
-
-
-class SPADEResnetBlock(nn.Module):
- def __init__(self, fin, fout, norm_G, label_nc, use_se=False, dilation=1):
- super().__init__()
- # Attributes
- self.learned_shortcut = (fin != fout)
- fmiddle = min(fin, fout)
- self.use_se = use_se
- # create conv layers
- self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=dilation, dilation=dilation)
- self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=dilation, dilation=dilation)
- if self.learned_shortcut:
- self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)
- # apply spectral norm if specified
- if 'spectral' in norm_G:
- self.conv_0 = spectral_norm(self.conv_0)
- self.conv_1 = spectral_norm(self.conv_1)
- if self.learned_shortcut:
- self.conv_s = spectral_norm(self.conv_s)
- # define normalization layers
- self.norm_0 = SPADE(fin, label_nc)
- self.norm_1 = SPADE(fmiddle, label_nc)
- if self.learned_shortcut:
- self.norm_s = SPADE(fin, label_nc)
-
- def forward(self, x, seg1):
- x_s = self.shortcut(x, seg1)
- dx = self.conv_0(self.actvn(self.norm_0(x, seg1)))
- dx = self.conv_1(self.actvn(self.norm_1(dx, seg1)))
- out = x_s + dx
- return out
-
- def shortcut(self, x, seg1):
- if self.learned_shortcut:
- x_s = self.conv_s(self.norm_s(x, seg1))
- else:
- x_s = x
- return x_s
-
- def actvn(self, x):
- return F.leaky_relu(x, 2e-1)
-
-class audio2image(nn.Module):
- def __init__(self, generator, kp_extractor, he_estimator_video, he_estimator_audio, train_params):
- super().__init__()
- # Attributes
- self.generator = generator
- self.kp_extractor = kp_extractor
- self.he_estimator_video = he_estimator_video
- self.he_estimator_audio = he_estimator_audio
- self.train_params = train_params
-
- def headpose_pred_to_degree(self, pred):
- device = pred.device
- idx_tensor = [idx for idx in range(66)]
- idx_tensor = torch.FloatTensor(idx_tensor).to(device)
-        pred = F.softmax(pred, dim=1)
- degree = torch.sum(pred*idx_tensor, 1) * 3 - 99
-
- return degree
-
- def get_rotation_matrix(self, yaw, pitch, roll):
- yaw = yaw / 180 * 3.14
- pitch = pitch / 180 * 3.14
- roll = roll / 180 * 3.14
-
- roll = roll.unsqueeze(1)
- pitch = pitch.unsqueeze(1)
- yaw = yaw.unsqueeze(1)
-
- roll_mat = torch.cat([torch.ones_like(roll), torch.zeros_like(roll), torch.zeros_like(roll),
- torch.zeros_like(roll), torch.cos(roll), -torch.sin(roll),
- torch.zeros_like(roll), torch.sin(roll), torch.cos(roll)], dim=1)
- roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3)
-
- pitch_mat = torch.cat([torch.cos(pitch), torch.zeros_like(pitch), torch.sin(pitch),
- torch.zeros_like(pitch), torch.ones_like(pitch), torch.zeros_like(pitch),
- -torch.sin(pitch), torch.zeros_like(pitch), torch.cos(pitch)], dim=1)
- pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3)
-
- yaw_mat = torch.cat([torch.cos(yaw), -torch.sin(yaw), torch.zeros_like(yaw),
- torch.sin(yaw), torch.cos(yaw), torch.zeros_like(yaw),
- torch.zeros_like(yaw), torch.zeros_like(yaw), torch.ones_like(yaw)], dim=1)
- yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3)
-
-        # Compose the per-sample rotations as roll @ pitch @ yaw with a batched einsum
-        rot_mat = torch.einsum('bij,bjk,bkm->bim', roll_mat, pitch_mat, yaw_mat)
-
- return rot_mat
-
- def keypoint_transformation(self, kp_canonical, he):
- kp = kp_canonical['value'] # (bs, k, 3)
- yaw, pitch, roll = he['yaw'], he['pitch'], he['roll']
- t, exp = he['t'], he['exp']
-
- yaw = self.headpose_pred_to_degree(yaw)
- pitch = self.headpose_pred_to_degree(pitch)
- roll = self.headpose_pred_to_degree(roll)
-
- rot_mat = self.get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3)
-
- # keypoint rotation
- kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp)
-
-
-
- # keypoint translation
- t = t.unsqueeze_(1).repeat(1, kp.shape[1], 1)
- kp_t = kp_rotated + t
-
- # add expression deviation
- exp = exp.view(exp.shape[0], -1, 3)
- kp_transformed = kp_t + exp
-
- return {'value': kp_transformed}
-
- def forward(self, source_image, target_audio):
- pose_source = self.he_estimator_video(source_image)
- pose_generated = self.he_estimator_audio(target_audio)
- kp_canonical = self.kp_extractor(source_image)
- kp_source = self.keypoint_transformation(kp_canonical, pose_source)
- kp_transformed_generated = self.keypoint_transformation(kp_canonical, pose_generated)
- generated = self.generator(source_image, kp_source=kp_source, kp_driving=kp_transformed_generated)
- return generated
\ No newline at end of file
diff --git a/spaces/kevinwang676/test-1/config.py b/spaces/kevinwang676/test-1/config.py
deleted file mode 100644
index e07d93cf81ea0d72ffe318cc37bc1064bc94533b..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/test-1/config.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import torch
-
-import util
-
-device = (
- 'cuda:0' if torch.cuda.is_available()
- else (
- 'mps' if util.has_mps()
- else 'cpu'
- )
-)
-is_half = util.is_half(device)
-
-x_pad = 3 if is_half else 1
-x_query = 10 if is_half else 6
-x_center = 60 if is_half else 38
-x_max = 65 if is_half else 41
diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/toolbox/__init__.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/toolbox/__init__.py
deleted file mode 100644
index b51164f3537a6b19cb2a00fb44b38855c4ba1c49..0000000000000000000000000000000000000000
--- a/spaces/kira4424/Tacotron-zero-short-voice-clone/toolbox/__init__.py
+++ /dev/null
@@ -1,476 +0,0 @@
-from toolbox.ui import UI
-from encoder import inference as encoder
-from synthesizer.inference import Synthesizer
-from vocoder.wavernn import inference as rnn_vocoder
-from vocoder.hifigan import inference as gan_vocoder
-from vocoder.fregan import inference as fgan_vocoder
-from pathlib import Path
-from time import perf_counter as timer
-from toolbox.utterance import Utterance
-import numpy as np
-import traceback
-import sys
-import torch
-import re
-
-# Use WaveRNN by default
-vocoder = rnn_vocoder
-
-# Use this directory structure for your datasets, or modify it to fit your needs
-recognized_datasets = [
- "LibriSpeech/dev-clean",
- "LibriSpeech/dev-other",
- "LibriSpeech/test-clean",
- "LibriSpeech/test-other",
- "LibriSpeech/train-clean-100",
- "LibriSpeech/train-clean-360",
- "LibriSpeech/train-other-500",
- "LibriTTS/dev-clean",
- "LibriTTS/dev-other",
- "LibriTTS/test-clean",
- "LibriTTS/test-other",
- "LibriTTS/train-clean-100",
- "LibriTTS/train-clean-360",
- "LibriTTS/train-other-500",
- "LJSpeech-1.1",
- "VoxCeleb1/wav",
- "VoxCeleb1/test_wav",
- "VoxCeleb2/dev/aac",
- "VoxCeleb2/test/aac",
- "VCTK-Corpus/wav48",
- "aidatatang_200zh/corpus/dev",
- "aidatatang_200zh/corpus/test",
- "aishell3/test/wav",
- "magicdata/train",
-]
-
-# Maximum number of generated wavs to keep in memory
-MAX_WAVES = 15
-
-class Toolbox:
- def __init__(self, datasets_root, enc_models_dir, syn_models_dir, voc_models_dir, extractor_models_dir, convertor_models_dir, seed, no_mp3_support, vc_mode):
- self.no_mp3_support = no_mp3_support
- self.vc_mode = vc_mode
- sys.excepthook = self.excepthook
- self.datasets_root = datasets_root
- self.utterances = set()
- self.current_generated = (None, None, None, None) # speaker_name, spec, breaks, wav
-
- self.synthesizer = None # type: Synthesizer
-
- # for ppg-based voice conversion
- self.extractor = None
- self.convertor = None # ppg2mel
-
- self.current_wav = None
- self.waves_list = []
- self.waves_count = 0
- self.waves_namelist = []
-
- # Check for webrtcvad (enables removal of silences in vocoder output)
- try:
- import webrtcvad
- self.trim_silences = True
-        except ImportError:
- self.trim_silences = False
-
- # Initialize the events and the interface
- self.ui = UI(vc_mode)
- self.style_idx = 0
- self.reset_ui(enc_models_dir, syn_models_dir, voc_models_dir, extractor_models_dir, convertor_models_dir, seed)
- self.setup_events()
- self.ui.start()
-
- def excepthook(self, exc_type, exc_value, exc_tb):
- traceback.print_exception(exc_type, exc_value, exc_tb)
- self.ui.log("Exception: %s" % exc_value)
-
- def setup_events(self):
- # Dataset, speaker and utterance selection
- self.ui.browser_load_button.clicked.connect(lambda: self.load_from_browser())
- random_func = lambda level: lambda: self.ui.populate_browser(self.datasets_root,
- recognized_datasets,
- level)
- self.ui.random_dataset_button.clicked.connect(random_func(0))
- self.ui.random_speaker_button.clicked.connect(random_func(1))
- self.ui.random_utterance_button.clicked.connect(random_func(2))
- self.ui.dataset_box.currentIndexChanged.connect(random_func(1))
- self.ui.speaker_box.currentIndexChanged.connect(random_func(2))
-
- # Model selection
- self.ui.encoder_box.currentIndexChanged.connect(self.init_encoder)
- def func():
- self.synthesizer = None
- if self.vc_mode:
- self.ui.extractor_box.currentIndexChanged.connect(self.init_extractor)
- else:
- self.ui.synthesizer_box.currentIndexChanged.connect(func)
-
- self.ui.vocoder_box.currentIndexChanged.connect(self.init_vocoder)
-
- # Utterance selection
- func = lambda: self.load_from_browser(self.ui.browse_file())
- self.ui.browser_browse_button.clicked.connect(func)
- func = lambda: self.ui.draw_utterance(self.ui.selected_utterance, "current")
- self.ui.utterance_history.currentIndexChanged.connect(func)
- func = lambda: self.ui.play(self.ui.selected_utterance.wav, Synthesizer.sample_rate)
- self.ui.play_button.clicked.connect(func)
- self.ui.stop_button.clicked.connect(self.ui.stop)
- self.ui.record_button.clicked.connect(self.record)
-
- # Source Utterance selection
- if self.vc_mode:
- func = lambda: self.load_soruce_button(self.ui.selected_utterance)
- self.ui.load_soruce_button.clicked.connect(func)
-
- #Audio
- self.ui.setup_audio_devices(Synthesizer.sample_rate)
-
- #Wav playback & save
- func = lambda: self.replay_last_wav()
- self.ui.replay_wav_button.clicked.connect(func)
- func = lambda: self.export_current_wave()
- self.ui.export_wav_button.clicked.connect(func)
- self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav)
-
- # Generation
- self.ui.vocode_button.clicked.connect(self.vocode)
- self.ui.random_seed_checkbox.clicked.connect(self.update_seed_textbox)
-
- if self.vc_mode:
- func = lambda: self.convert() or self.vocode()
- self.ui.convert_button.clicked.connect(func)
- else:
- func = lambda: self.synthesize() or self.vocode()
- self.ui.generate_button.clicked.connect(func)
- self.ui.synthesize_button.clicked.connect(self.synthesize)
-
- # UMAP legend
- self.ui.clear_button.clicked.connect(self.clear_utterances)
-
- def set_current_wav(self, index):
- self.current_wav = self.waves_list[index]
-
- def export_current_wave(self):
- self.ui.save_audio_file(self.current_wav, Synthesizer.sample_rate)
-
- def replay_last_wav(self):
- self.ui.play(self.current_wav, Synthesizer.sample_rate)
-
- def reset_ui(self, encoder_models_dir, synthesizer_models_dir, vocoder_models_dir, extractor_models_dir, convertor_models_dir, seed):
- self.ui.populate_browser(self.datasets_root, recognized_datasets, 0, True)
- self.ui.populate_models(encoder_models_dir, synthesizer_models_dir, vocoder_models_dir, extractor_models_dir, convertor_models_dir, self.vc_mode)
- self.ui.populate_gen_options(seed, self.trim_silences)
-
- def load_from_browser(self, fpath=None):
- if fpath is None:
- fpath = Path(self.datasets_root,
- self.ui.current_dataset_name,
- self.ui.current_speaker_name,
- self.ui.current_utterance_name)
- name = str(fpath.relative_to(self.datasets_root))
- speaker_name = self.ui.current_dataset_name + '_' + self.ui.current_speaker_name
-
- # Select the next utterance
- if self.ui.auto_next_checkbox.isChecked():
- self.ui.browser_select_next()
- elif fpath == "":
- return
- else:
- name = fpath.name
- speaker_name = fpath.parent.name
-
- if fpath.suffix.lower() == ".mp3" and self.no_mp3_support:
- self.ui.log("Error: No mp3 file argument was passed but an mp3 file was used")
- return
-
- # Get the wav from the disk. We take the wav with the vocoder/synthesizer format for
- # playback, so as to have a fair comparison with the generated audio
- wav = Synthesizer.load_preprocess_wav(fpath)
- self.ui.log("Loaded %s" % name)
-
- self.add_real_utterance(wav, name, speaker_name)
-
- def load_soruce_button(self, utterance: Utterance):
- self.selected_source_utterance = utterance
-
- def record(self):
- wav = self.ui.record_one(encoder.sampling_rate, 5)
- if wav is None:
- return
- self.ui.play(wav, encoder.sampling_rate)
-
- speaker_name = "user01"
- name = speaker_name + "_rec_%05d" % np.random.randint(100000)
- self.add_real_utterance(wav, name, speaker_name)
-
- def add_real_utterance(self, wav, name, speaker_name):
- # Compute the mel spectrogram
- spec = Synthesizer.make_spectrogram(wav)
- self.ui.draw_spec(spec, "current")
-
- # Compute the embedding
- if not encoder.is_loaded():
- self.init_encoder()
- encoder_wav = encoder.preprocess_wav(wav)
- embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True)
-
- # Add the utterance
- utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, False)
- self.utterances.add(utterance)
- self.ui.register_utterance(utterance, self.vc_mode)
-
- # Plot it
- self.ui.draw_embed(embed, name, "current")
- self.ui.draw_umap_projections(self.utterances)
-
- def clear_utterances(self):
- self.utterances.clear()
- self.ui.draw_umap_projections(self.utterances)
-
- def synthesize(self):
- self.ui.log("Generating the mel spectrogram...")
- self.ui.set_loading(1)
-
- # Update the synthesizer random seed
- if self.ui.random_seed_checkbox.isChecked():
- seed = int(self.ui.seed_textbox.text())
- self.ui.populate_gen_options(seed, self.trim_silences)
- else:
- seed = None
-
- if seed is not None:
- torch.manual_seed(seed)
-
- # Synthesize the spectrogram
- if self.synthesizer is None or seed is not None:
- self.init_synthesizer()
-
- texts = self.ui.text_prompt.toPlainText().split("\n")
- punctuation = '!,。、,' # punctuate and split/clean text
- processed_texts = []
- for text in texts:
- for processed_text in re.sub(r'[{}]+'.format(punctuation), '\n', text).split('\n'):
- if processed_text:
- processed_texts.append(processed_text.strip())
- texts = processed_texts
- embed = self.ui.selected_utterance.embed
- embeds = [embed] * len(texts)
- min_token = int(self.ui.token_slider.value())
- specs = self.synthesizer.synthesize_spectrograms(texts, embeds, style_idx=int(self.ui.style_slider.value()), min_stop_token=min_token, steps=int(self.ui.length_slider.value())*200)
- breaks = [spec.shape[1] for spec in specs]
- spec = np.concatenate(specs, axis=1)
-
- self.ui.draw_spec(spec, "generated")
- self.current_generated = (self.ui.selected_utterance.speaker_name, spec, breaks, None)
- self.ui.set_loading(0)
-
- def vocode(self):
- speaker_name, spec, breaks, _ = self.current_generated
- assert spec is not None
-
-        # Initialize the vocoder model and make it deterministic, if the user provides a seed
- if self.ui.random_seed_checkbox.isChecked():
- seed = int(self.ui.seed_textbox.text())
- self.ui.populate_gen_options(seed, self.trim_silences)
- else:
- seed = None
-
- if seed is not None:
- torch.manual_seed(seed)
-
- # Synthesize the waveform
- if not vocoder.is_loaded() or seed is not None:
- self.init_vocoder()
-
- def vocoder_progress(i, seq_len, b_size, gen_rate):
- real_time_factor = (gen_rate / Synthesizer.sample_rate) * 1000
- line = "Waveform generation: %d/%d (batch size: %d, rate: %.1fkHz - %.2fx real time)" \
- % (i * b_size, seq_len * b_size, b_size, gen_rate, real_time_factor)
- self.ui.log(line, "overwrite")
- self.ui.set_loading(i, seq_len)
- if self.ui.current_vocoder_fpath is not None:
- self.ui.log("")
- wav, sample_rate = vocoder.infer_waveform(spec, progress_callback=vocoder_progress)
- else:
- self.ui.log("Waveform generation with Griffin-Lim... ")
- wav = Synthesizer.griffin_lim(spec)
- self.ui.set_loading(0)
- self.ui.log(" Done!", "append")
-
- # Add breaks
- b_ends = np.cumsum(np.array(breaks) * Synthesizer.hparams.hop_size)
- b_starts = np.concatenate(([0], b_ends[:-1]))
- wavs = [wav[start:end] for start, end, in zip(b_starts, b_ends)]
- breaks = [np.zeros(int(0.15 * sample_rate))] * len(breaks)
- wav = np.concatenate([i for w, b in zip(wavs, breaks) for i in (w, b)])
-
- # Trim excessive silences
- if self.ui.trim_silences_checkbox.isChecked():
- wav = encoder.preprocess_wav(wav)
-
- # Play it
- wav = wav / np.abs(wav).max() * 0.97
- self.ui.play(wav, sample_rate)
-
- # Name it (history displayed in combobox)
- # TODO better naming for the combobox items?
- wav_name = str(self.waves_count + 1)
-
- #Update waves combobox
- self.waves_count += 1
- if self.waves_count > MAX_WAVES:
- self.waves_list.pop()
- self.waves_namelist.pop()
- self.waves_list.insert(0, wav)
- self.waves_namelist.insert(0, wav_name)
-
- self.ui.waves_cb.disconnect()
- self.ui.waves_cb_model.setStringList(self.waves_namelist)
- self.ui.waves_cb.setCurrentIndex(0)
- self.ui.waves_cb.currentIndexChanged.connect(self.set_current_wav)
-
- # Update current wav
- self.set_current_wav(0)
-
- #Enable replay and save buttons:
- self.ui.replay_wav_button.setDisabled(False)
- self.ui.export_wav_button.setDisabled(False)
-
- # Compute the embedding
- # TODO: this is problematic with different sampling rates, gotta fix it
- if not encoder.is_loaded():
- self.init_encoder()
- encoder_wav = encoder.preprocess_wav(wav)
- embed, partial_embeds, _ = encoder.embed_utterance(encoder_wav, return_partials=True)
-
- # Add the utterance
- name = speaker_name + "_gen_%05d" % np.random.randint(100000)
- utterance = Utterance(name, speaker_name, wav, spec, embed, partial_embeds, True)
- self.utterances.add(utterance)
-
- # Plot it
- self.ui.draw_embed(embed, name, "generated")
- self.ui.draw_umap_projections(self.utterances)
-
- def convert(self):
- self.ui.log("Extract PPG and Converting...")
- self.ui.set_loading(1)
-
- # Init
- if self.convertor is None:
- self.init_convertor()
- if self.extractor is None:
- self.init_extractor()
-
- src_wav = self.selected_source_utterance.wav
-
- # Compute the ppg
-        if self.extractor is not None:
- ppg = self.extractor.extract_from_wav(src_wav)
-
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- ref_wav = self.ui.selected_utterance.wav
- # Import necessary dependency of Voice Conversion
- from utils.f0_utils import compute_f0, f02lf0, compute_mean_std, get_converted_lf0uv
- ref_lf0_mean, ref_lf0_std = compute_mean_std(f02lf0(compute_f0(ref_wav)))
- lf0_uv = get_converted_lf0uv(src_wav, ref_lf0_mean, ref_lf0_std, convert=True)
- min_len = min(ppg.shape[1], len(lf0_uv))
- ppg = ppg[:, :min_len]
- lf0_uv = lf0_uv[:min_len]
- _, mel_pred, att_ws = self.convertor.inference(
- ppg,
- logf0_uv=torch.from_numpy(lf0_uv).unsqueeze(0).float().to(device),
- spembs=torch.from_numpy(self.ui.selected_utterance.embed).unsqueeze(0).to(device),
- )
-        mel_pred = mel_pred.transpose(0, 1)
-        breaks = [mel_pred.shape[1]]
-        mel_pred = mel_pred.detach().cpu().numpy()
- self.ui.draw_spec(mel_pred, "generated")
- self.current_generated = (self.ui.selected_utterance.speaker_name, mel_pred, breaks, None)
- self.ui.set_loading(0)
-
- def init_extractor(self):
- if self.ui.current_extractor_fpath is None:
- return
- model_fpath = self.ui.current_extractor_fpath
- self.ui.log("Loading the extractor %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- import ppg_extractor as extractor
- self.extractor = extractor.load_model(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def init_convertor(self):
- if self.ui.current_convertor_fpath is None:
- return
- model_fpath = self.ui.current_convertor_fpath
- self.ui.log("Loading the convertor %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- import ppg2mel as convertor
-        self.convertor = convertor.load_model(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def init_encoder(self):
- model_fpath = self.ui.current_encoder_fpath
-
- self.ui.log("Loading the encoder %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- encoder.load_model(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def init_synthesizer(self):
- model_fpath = self.ui.current_synthesizer_fpath
-
- self.ui.log("Loading the synthesizer %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- self.synthesizer = Synthesizer(model_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def init_vocoder(self):
-
- global vocoder
- model_fpath = self.ui.current_vocoder_fpath
-        # Case of Griffin-Lim
- if model_fpath is None:
- return
-        # Select vocoder based on model name
- model_config_fpath = None
- if model_fpath.name is not None and model_fpath.name.find("hifigan") > -1:
- vocoder = gan_vocoder
- self.ui.log("set hifigan as vocoder")
- # search a config file
- model_config_fpaths = list(model_fpath.parent.rglob("*.json"))
- if self.vc_mode and self.ui.current_extractor_fpath is None:
- return
- if len(model_config_fpaths) > 0:
- model_config_fpath = model_config_fpaths[0]
- elif model_fpath.name is not None and model_fpath.name.find("fregan") > -1:
- vocoder = fgan_vocoder
- self.ui.log("set fregan as vocoder")
- # search a config file
- model_config_fpaths = list(model_fpath.parent.rglob("*.json"))
- if self.vc_mode and self.ui.current_extractor_fpath is None:
- return
- if len(model_config_fpaths) > 0:
- model_config_fpath = model_config_fpaths[0]
- else:
- vocoder = rnn_vocoder
- self.ui.log("set wavernn as vocoder")
-
- self.ui.log("Loading the vocoder %s... " % model_fpath)
- self.ui.set_loading(1)
- start = timer()
- vocoder.load_model(model_fpath, model_config_fpath)
- self.ui.log("Done (%dms)." % int(1000 * (timer() - start)), "append")
- self.ui.set_loading(0)
-
- def update_seed_textbox(self):
- self.ui.update_seed_textbox()
diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py b/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py
deleted file mode 100644
index 201359c4e743aed285694668e13da6dd5a40b621..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/VToonify/vtoonify/model/stylegan/lpips/networks_basic.py
+++ /dev/null
@@ -1,187 +0,0 @@
-
-from __future__ import absolute_import
-
-import sys
-import torch
-import torch.nn as nn
-import torch.nn.init as init
-from torch.autograd import Variable
-import numpy as np
-from pdb import set_trace as st
-from skimage import color
-from IPython import embed
-from model.stylegan.lpips import pretrained_networks as pn
-
-import model.stylegan.lpips as util
-
-def spatial_average(in_tens, keepdim=True):
- return in_tens.mean([2,3],keepdim=keepdim)
-
-def upsample(in_tens, out_H=64): # assumes scale factor is same for H and W
- in_H = in_tens.shape[2]
- scale_factor = 1.*out_H/in_H
-
- return nn.Upsample(scale_factor=scale_factor, mode='bilinear', align_corners=False)(in_tens)
-
-# Learned perceptual metric
-class PNetLin(nn.Module):
- def __init__(self, pnet_type='vgg', pnet_rand=False, pnet_tune=False, use_dropout=True, spatial=False, version='0.1', lpips=True):
- super(PNetLin, self).__init__()
-
- self.pnet_type = pnet_type
- self.pnet_tune = pnet_tune
- self.pnet_rand = pnet_rand
- self.spatial = spatial
- self.lpips = lpips
- self.version = version
- self.scaling_layer = ScalingLayer()
-
- if(self.pnet_type in ['vgg','vgg16']):
- net_type = pn.vgg16
- self.chns = [64,128,256,512,512]
- elif(self.pnet_type=='alex'):
- net_type = pn.alexnet
- self.chns = [64,192,384,256,256]
- elif(self.pnet_type=='squeeze'):
- net_type = pn.squeezenet
- self.chns = [64,128,256,384,384,512,512]
- self.L = len(self.chns)
-
- self.net = net_type(pretrained=not self.pnet_rand, requires_grad=self.pnet_tune)
-
- if(lpips):
- self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
- self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
- self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
- self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
- self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
- self.lins = [self.lin0,self.lin1,self.lin2,self.lin3,self.lin4]
- if(self.pnet_type=='squeeze'): # 7 layers for squeezenet
- self.lin5 = NetLinLayer(self.chns[5], use_dropout=use_dropout)
- self.lin6 = NetLinLayer(self.chns[6], use_dropout=use_dropout)
- self.lins+=[self.lin5,self.lin6]
-
- def forward(self, in0, in1, retPerLayer=False):
- # v0.0 - original release had a bug, where input was not scaled
- in0_input, in1_input = (self.scaling_layer(in0), self.scaling_layer(in1)) if self.version=='0.1' else (in0, in1)
- outs0, outs1 = self.net.forward(in0_input), self.net.forward(in1_input)
- feats0, feats1, diffs = {}, {}, {}
-
- for kk in range(self.L):
- feats0[kk], feats1[kk] = util.normalize_tensor(outs0[kk]), util.normalize_tensor(outs1[kk])
- diffs[kk] = (feats0[kk]-feats1[kk])**2
-
- if(self.lpips):
- if(self.spatial):
- res = [upsample(self.lins[kk].model(diffs[kk]), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(self.lins[kk].model(diffs[kk]), keepdim=True) for kk in range(self.L)]
- else:
- if(self.spatial):
- res = [upsample(diffs[kk].sum(dim=1,keepdim=True), out_H=in0.shape[2]) for kk in range(self.L)]
- else:
- res = [spatial_average(diffs[kk].sum(dim=1,keepdim=True), keepdim=True) for kk in range(self.L)]
-
- val = res[0]
- for l in range(1,self.L):
- val += res[l]
-
- if(retPerLayer):
- return (val, res)
- else:
- return val
-
-class ScalingLayer(nn.Module):
- def __init__(self):
- super(ScalingLayer, self).__init__()
- self.register_buffer('shift', torch.Tensor([-.030,-.088,-.188])[None,:,None,None])
- self.register_buffer('scale', torch.Tensor([.458,.448,.450])[None,:,None,None])
-
- def forward(self, inp):
- return (inp - self.shift) / self.scale
-
-
-class NetLinLayer(nn.Module):
- ''' A single linear layer which does a 1x1 conv '''
- def __init__(self, chn_in, chn_out=1, use_dropout=False):
- super(NetLinLayer, self).__init__()
-
- layers = [nn.Dropout(),] if(use_dropout) else []
- layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False),]
- self.model = nn.Sequential(*layers)
-
-
-class Dist2LogitLayer(nn.Module):
- ''' takes 2 distances, puts through fc layers, spits out value between [0,1] (if use_sigmoid is True) '''
- def __init__(self, chn_mid=32, use_sigmoid=True):
- super(Dist2LogitLayer, self).__init__()
-
- layers = [nn.Conv2d(5, chn_mid, 1, stride=1, padding=0, bias=True),]
- layers += [nn.LeakyReLU(0.2,True),]
- layers += [nn.Conv2d(chn_mid, chn_mid, 1, stride=1, padding=0, bias=True),]
- layers += [nn.LeakyReLU(0.2,True),]
- layers += [nn.Conv2d(chn_mid, 1, 1, stride=1, padding=0, bias=True),]
- if(use_sigmoid):
- layers += [nn.Sigmoid(),]
- self.model = nn.Sequential(*layers)
-
- def forward(self,d0,d1,eps=0.1):
- return self.model.forward(torch.cat((d0,d1,d0-d1,d0/(d1+eps),d1/(d0+eps)),dim=1))
-
-class BCERankingLoss(nn.Module):
- def __init__(self, chn_mid=32):
- super(BCERankingLoss, self).__init__()
- self.net = Dist2LogitLayer(chn_mid=chn_mid)
- # self.parameters = list(self.net.parameters())
- self.loss = torch.nn.BCELoss()
-
- def forward(self, d0, d1, judge):
- per = (judge+1.)/2.
- self.logit = self.net.forward(d0,d1)
- return self.loss(self.logit, per)
-
-# L2, DSSIM metrics
-class FakeNet(nn.Module):
- def __init__(self, use_gpu=True, colorspace='Lab'):
- super(FakeNet, self).__init__()
- self.use_gpu = use_gpu
- self.colorspace=colorspace
-
-class L2(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert(in0.size()[0]==1) # currently only supports batchSize 1
-
- if(self.colorspace=='RGB'):
- (N,C,X,Y) = in0.size()
- value = torch.mean(torch.mean(torch.mean((in0-in1)**2,dim=1).view(N,1,X,Y),dim=2).view(N,1,1,Y),dim=3).view(N)
- return value
- elif(self.colorspace=='Lab'):
- value = util.l2(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
- util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
- ret_var = Variable( torch.Tensor((value,) ) )
- if(self.use_gpu):
- ret_var = ret_var.cuda()
- return ret_var
-
-class DSSIM(FakeNet):
-
- def forward(self, in0, in1, retPerLayer=None):
- assert(in0.size()[0]==1) # currently only supports batchSize 1
-
- if(self.colorspace=='RGB'):
- value = util.dssim(1.*util.tensor2im(in0.data), 1.*util.tensor2im(in1.data), range=255.).astype('float')
- elif(self.colorspace=='Lab'):
- value = util.dssim(util.tensor2np(util.tensor2tensorlab(in0.data,to_norm=False)),
- util.tensor2np(util.tensor2tensorlab(in1.data,to_norm=False)), range=100.).astype('float')
- ret_var = Variable( torch.Tensor((value,) ) )
- if(self.use_gpu):
- ret_var = ret_var.cuda()
- return ret_var
-
-def print_network(net):
- num_params = 0
- for param in net.parameters():
- num_params += param.numel()
- print('Network',net)
- print('Total number of parameters: %d' % num_params)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Column-da0cdf3b.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Column-da0cdf3b.js
deleted file mode 100644
index e6514f1828badb5c12225286d71305fdda3eec11..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Column-da0cdf3b.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,i as w,s as b,V as C,G as r,C as o,af as v,M as _,g as j,X as q,Y as S,Z as y,p as G,t as M,q as V}from"./index-8c3da1d9.js";/* empty css */function X(a){let e,f,m,s;const u=a[8].default,t=C(u,a,a[7],null);return{c(){e=r("div"),t&&t.c(),o(e,"id",a[2]),o(e,"class",f=v(a[3].join(" "))+" svelte-vt1mxs"),o(e,"style",m=`min-width: min(${a[1]}px, 100%); flex-grow: ${a[0]}`),_(e,"gap",a[6].gap!==!1),_(e,"compact",a[5]==="compact"),_(e,"panel",a[5]==="panel"),_(e,"hide",!a[4])},m(l,n){j(l,e,n),t&&t.m(e,null),s=!0},p(l,[n]){t&&t.p&&(!s||n&128)&&q(t,u,l,l[7],s?y(u,l[7],n,null):S(l[7]),null),(!s||n&4)&&o(e,"id",l[2]),(!s||n&8&&f!==(f=v(l[3].join(" "))+" svelte-vt1mxs"))&&o(e,"class",f),(!s||n&3&&m!==(m=`min-width: min(${l[1]}px, 100%); flex-grow: ${l[0]}`))&&o(e,"style",m),(!s||n&72)&&_(e,"gap",l[6].gap!==!1),(!s||n&40)&&_(e,"compact",l[5]==="compact"),(!s||n&40)&&_(e,"panel",l[5]==="panel"),(!s||n&24)&&_(e,"hide",!l[4])},i(l){s||(G(t,l),s=!0)},o(l){M(t,l),s=!1},d(l){l&&V(e),t&&t.d(l)}}}function Y(a,e,f){let{$$slots:m={},$$scope:s}=e,{scale:u=1}=e,{min_width:t=0}=e,{elem_id:l=""}=e,{elem_classes:n=[]}=e,{visible:c=!0}=e,{variant:d="default"}=e,{style:g={}}=e;return a.$$set=i=>{"scale"in i&&f(0,u=i.scale),"min_width"in i&&f(1,t=i.min_width),"elem_id"in i&&f(2,l=i.elem_id),"elem_classes"in i&&f(3,n=i.elem_classes),"visible"in i&&f(4,c=i.visible),"variant"in i&&f(5,d=i.variant),"style"in i&&f(6,g=i.style),"$$scope"in i&&f(7,s=i.$$scope)},[u,t,l,n,c,d,g,s,m]}class z extends h{constructor(e){super(),w(this,e,Y,X,b,{scale:0,min_width:1,elem_id:2,elem_classes:3,visible:4,variant:5,style:6})}}export{z as C};
-//# sourceMappingURL=Column-da0cdf3b.js.map
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/decorators.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/decorators.py
deleted file mode 100644
index 483986c74858a22bfafcba7f675ced3a73f079ad..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/decorators.py
+++ /dev/null
@@ -1,525 +0,0 @@
-import contextlib
-import functools
-import inspect
-import os
-from pathlib import Path
-import shutil
-import string
-import sys
-import unittest
-import warnings
-
-from packaging.version import parse as parse_version
-
-import matplotlib.style
-import matplotlib.units
-import matplotlib.testing
-from matplotlib import (_api, _pylab_helpers, cbook, ft2font, pyplot as plt,
- ticker)
-from .compare import comparable_formats, compare_images, make_test_filename
-from .exceptions import ImageComparisonFailure
-
-
-@contextlib.contextmanager
-def _cleanup_cm():
- orig_units_registry = matplotlib.units.registry.copy()
- try:
- with warnings.catch_warnings(), matplotlib.rc_context():
- yield
- finally:
- matplotlib.units.registry.clear()
- matplotlib.units.registry.update(orig_units_registry)
- plt.close("all")
-
-
-@_api.deprecated("3.6", alternative="a vendored copy of the existing code, "
- "including the private function _cleanup_cm")
-class CleanupTestCase(unittest.TestCase):
- """A wrapper for unittest.TestCase that includes cleanup operations."""
- @classmethod
- def setUpClass(cls):
- cls._cm = _cleanup_cm().__enter__()
-
- @classmethod
- def tearDownClass(cls):
- cls._cm.__exit__(None, None, None)
-
-
-@_api.deprecated("3.6", alternative="a vendored copy of the existing code, "
- "including the private function _cleanup_cm")
-def cleanup(style=None):
- """
- A decorator to ensure that any global state is reset before
- running a test.
-
- Parameters
- ----------
- style : str, dict, or list, optional
- The style(s) to apply. Defaults to ``["classic",
- "_classic_test_patch"]``.
- """
-
- # If cleanup is used without arguments, *style* will be a callable, and we
-    # pass it directly to the wrapper generator. If cleanup is called with an
- # argument, it is a string naming a style, and the function will be passed
- # as an argument to what we return. This is a confusing, but somewhat
- # standard, pattern for writing a decorator with optional arguments.
-
- def make_cleanup(func):
- if inspect.isgeneratorfunction(func):
- @functools.wraps(func)
- def wrapped_callable(*args, **kwargs):
- with _cleanup_cm(), matplotlib.style.context(style):
- yield from func(*args, **kwargs)
- else:
- @functools.wraps(func)
- def wrapped_callable(*args, **kwargs):
- with _cleanup_cm(), matplotlib.style.context(style):
- func(*args, **kwargs)
-
- return wrapped_callable
-
- if callable(style):
- result = make_cleanup(style)
- # Default of mpl_test_settings fixture and image_comparison too.
- style = ["classic", "_classic_test_patch"]
- return result
- else:
- return make_cleanup
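
To make the dual calling convention described in the comments above concrete, here is a small hypothetical sketch; the test bodies are placeholders, `matplotlib.pyplot` is assumed to be imported as `plt`, and note that `cleanup` itself is deprecated in 3.6 in favour of a vendored copy, as the decorator above states.

```python
# Used bare: *style* arrives as the test function itself, so make_cleanup wraps it
# directly and the default ["classic", "_classic_test_patch"] styles apply.
@cleanup
def test_plain():
    plt.plot([1, 2, 3])

# Called with an argument: *style* is a style name, and make_cleanup is returned
# as the decorator that then wraps the test function.
@cleanup(style="classic")
def test_styled():
    plt.plot([1, 2, 3])
```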
-
-
-@_api.deprecated("3.6", alternative="a vendored copy of the existing code "
- "of _check_freetype_version")
-def check_freetype_version(ver):
- return _check_freetype_version(ver)
-
-
-def _check_freetype_version(ver):
- if ver is None:
- return True
-
- if isinstance(ver, str):
- ver = (ver, ver)
- ver = [parse_version(x) for x in ver]
- found = parse_version(ft2font.__freetype_version__)
-
- return ver[0] <= found <= ver[1]
-
-
-def _checked_on_freetype_version(required_freetype_version):
- import pytest
- return pytest.mark.xfail(
- not _check_freetype_version(required_freetype_version),
- reason=f"Mismatched version of freetype. "
- f"Test requires '{required_freetype_version}', "
- f"you have '{ft2font.__freetype_version__}'",
- raises=ImageComparisonFailure, strict=False)
-
-
-def remove_ticks_and_titles(figure):
- figure.suptitle("")
- null_formatter = ticker.NullFormatter()
- def remove_ticks(ax):
- """Remove ticks in *ax* and all its child Axes."""
- ax.set_title("")
- ax.xaxis.set_major_formatter(null_formatter)
- ax.xaxis.set_minor_formatter(null_formatter)
- ax.yaxis.set_major_formatter(null_formatter)
- ax.yaxis.set_minor_formatter(null_formatter)
- try:
- ax.zaxis.set_major_formatter(null_formatter)
- ax.zaxis.set_minor_formatter(null_formatter)
- except AttributeError:
- pass
- for child in ax.child_axes:
- remove_ticks(child)
- for ax in figure.get_axes():
- remove_ticks(ax)
-
-
-@contextlib.contextmanager
-def _collect_new_figures():
- """
- After::
-
- with _collect_new_figures() as figs:
- some_code()
-
- the list *figs* contains the figures that have been created during the
- execution of ``some_code``, sorted by figure number.
- """
- managers = _pylab_helpers.Gcf.figs
- preexisting = [manager for manager in managers.values()]
- new_figs = []
- try:
- yield new_figs
- finally:
- new_managers = sorted([manager for manager in managers.values()
- if manager not in preexisting],
- key=lambda manager: manager.num)
- new_figs[:] = [manager.canvas.figure for manager in new_managers]
-
-
-def _raise_on_image_difference(expected, actual, tol):
- __tracebackhide__ = True
-
- err = compare_images(expected, actual, tol, in_decorator=True)
- if err:
- for key in ["actual", "expected", "diff"]:
- err[key] = os.path.relpath(err[key])
- raise ImageComparisonFailure(
- ('images not close (RMS %(rms).3f):'
- '\n\t%(actual)s\n\t%(expected)s\n\t%(diff)s') % err)
-
-
-class _ImageComparisonBase:
- """
- Image comparison base class
-
- This class provides *just* the comparison-related functionality and avoids
- any code that would be specific to any testing framework.
- """
-
- def __init__(self, func, tol, remove_text, savefig_kwargs):
- self.func = func
- self.baseline_dir, self.result_dir = _image_directories(func)
- self.tol = tol
- self.remove_text = remove_text
- self.savefig_kwargs = savefig_kwargs
-
- def copy_baseline(self, baseline, extension):
- baseline_path = self.baseline_dir / baseline
- orig_expected_path = baseline_path.with_suffix(f'.{extension}')
- if extension == 'eps' and not orig_expected_path.exists():
- orig_expected_path = orig_expected_path.with_suffix('.pdf')
- expected_fname = make_test_filename(
- self.result_dir / orig_expected_path.name, 'expected')
- try:
- # os.symlink errors if the target already exists.
- with contextlib.suppress(OSError):
- os.remove(expected_fname)
- try:
- os.symlink(orig_expected_path, expected_fname)
- except OSError: # On Windows, symlink *may* be unavailable.
- shutil.copyfile(orig_expected_path, expected_fname)
- except OSError as err:
- raise ImageComparisonFailure(
- f"Missing baseline image {expected_fname} because the "
- f"following file cannot be accessed: "
- f"{orig_expected_path}") from err
- return expected_fname
-
- def compare(self, fig, baseline, extension, *, _lock=False):
- __tracebackhide__ = True
-
- if self.remove_text:
- remove_ticks_and_titles(fig)
-
- actual_path = (self.result_dir / baseline).with_suffix(f'.{extension}')
- kwargs = self.savefig_kwargs.copy()
- if extension == 'pdf':
- kwargs.setdefault('metadata',
- {'Creator': None, 'Producer': None,
- 'CreationDate': None})
-
- lock = (cbook._lock_path(actual_path)
- if _lock else contextlib.nullcontext())
- with lock:
- try:
- fig.savefig(actual_path, **kwargs)
- finally:
- # Matplotlib has an autouse fixture to close figures, but this
- # makes things more convenient for third-party users.
- plt.close(fig)
- expected_path = self.copy_baseline(baseline, extension)
- _raise_on_image_difference(expected_path, actual_path, self.tol)
-
-
-def _pytest_image_comparison(baseline_images, extensions, tol,
- freetype_version, remove_text, savefig_kwargs,
- style):
- """
- Decorate function with image comparison for pytest.
-
- This function creates a decorator that wraps a figure-generating function
- with image comparison code.
- """
- import pytest
-
- KEYWORD_ONLY = inspect.Parameter.KEYWORD_ONLY
-
- def decorator(func):
- old_sig = inspect.signature(func)
-
- @functools.wraps(func)
- @pytest.mark.parametrize('extension', extensions)
- @matplotlib.style.context(style)
- @_checked_on_freetype_version(freetype_version)
- @functools.wraps(func)
- def wrapper(*args, extension, request, **kwargs):
- __tracebackhide__ = True
- if 'extension' in old_sig.parameters:
- kwargs['extension'] = extension
- if 'request' in old_sig.parameters:
- kwargs['request'] = request
-
- if extension not in comparable_formats():
- reason = {
- 'pdf': 'because Ghostscript is not installed',
- 'eps': 'because Ghostscript is not installed',
- 'svg': 'because Inkscape is not installed',
- }.get(extension, 'on this system')
- pytest.skip(f"Cannot compare {extension} files {reason}")
-
- img = _ImageComparisonBase(func, tol=tol, remove_text=remove_text,
- savefig_kwargs=savefig_kwargs)
- matplotlib.testing.set_font_settings_for_testing()
-
- with _collect_new_figures() as figs:
- func(*args, **kwargs)
-
- # If the test is parametrized in any way other than applied via
- # this decorator, then we need to use a lock to prevent two
- # processes from touching the same output file.
- needs_lock = any(
- marker.args[0] != 'extension'
- for marker in request.node.iter_markers('parametrize'))
-
- if baseline_images is not None:
- our_baseline_images = baseline_images
- else:
- # Allow baseline image list to be produced on the fly based on
- # current parametrization.
- our_baseline_images = request.getfixturevalue(
- 'baseline_images')
-
- assert len(figs) == len(our_baseline_images), (
- "Test generated {} images but there are {} baseline images"
- .format(len(figs), len(our_baseline_images)))
- for fig, baseline in zip(figs, our_baseline_images):
- img.compare(fig, baseline, extension, _lock=needs_lock)
-
- parameters = list(old_sig.parameters.values())
- if 'extension' not in old_sig.parameters:
- parameters += [inspect.Parameter('extension', KEYWORD_ONLY)]
- if 'request' not in old_sig.parameters:
- parameters += [inspect.Parameter("request", KEYWORD_ONLY)]
- new_sig = old_sig.replace(parameters=parameters)
- wrapper.__signature__ = new_sig
-
- # Reach a bit into pytest internals to hoist the marks from our wrapped
- # function.
- new_marks = getattr(func, 'pytestmark', []) + wrapper.pytestmark
- wrapper.pytestmark = new_marks
-
- return wrapper
-
- return decorator
-
-
-def image_comparison(baseline_images, extensions=None, tol=0,
- freetype_version=None, remove_text=False,
- savefig_kwarg=None,
- # Default of mpl_test_settings fixture and cleanup too.
- style=("classic", "_classic_test_patch")):
- """
- Compare images generated by the test with those specified in
- *baseline_images*, which must correspond, else an `ImageComparisonFailure`
- exception will be raised.
-
- Parameters
- ----------
- baseline_images : list or None
- A list of strings specifying the names of the images generated by
- calls to `.Figure.savefig`.
-
- If *None*, the test function must use the ``baseline_images`` fixture,
- either as a parameter or with `pytest.mark.usefixtures`. This value is
- only allowed when using pytest.
-
- extensions : None or list of str
- The list of extensions to test, e.g. ``['png', 'pdf']``.
-
- If *None*, defaults to all supported extensions: png, pdf, and svg.
-
- When testing a single extension, it can be directly included in the
- names passed to *baseline_images*. In that case, *extensions* must not
- be set.
-
- In order to keep the size of the test suite from ballooning, we only
- include the ``svg`` or ``pdf`` outputs if the test is explicitly
- exercising a feature dependent on that backend (see also the
- `check_figures_equal` decorator for that purpose).
-
- tol : float, default: 0
- The RMS threshold above which the test is considered failed.
-
- Due to expected small differences in floating-point calculations, on
- 32-bit systems an additional 0.06 is added to this threshold.
-
- freetype_version : str or tuple
- The expected freetype version or range of versions for this test to
- pass.
-
- remove_text : bool
- Remove the title and tick text from the figure before comparison. This
- is useful to make the baseline images independent of variations in text
- rendering between different versions of FreeType.
-
- This does not remove other, more deliberate, text, such as legends and
- annotations.
-
- savefig_kwarg : dict
- Optional arguments that are passed to the savefig method.
-
- style : str, dict, or list
- The optional style(s) to apply to the image test. The test itself
- can also apply additional styles if desired. Defaults to ``["classic",
- "_classic_test_patch"]``.
- """
-
- if baseline_images is not None:
- # List of non-empty filename extensions.
- baseline_exts = [*filter(None, {Path(baseline).suffix[1:]
- for baseline in baseline_images})]
- if baseline_exts:
- if extensions is not None:
- raise ValueError(
- "When including extensions directly in 'baseline_images', "
- "'extensions' cannot be set as well")
- if len(baseline_exts) > 1:
- raise ValueError(
- "When including extensions directly in 'baseline_images', "
- "all baselines must share the same suffix")
- extensions = baseline_exts
- baseline_images = [ # Chop suffix out from baseline_images.
- Path(baseline).stem for baseline in baseline_images]
- if extensions is None:
- # Default extensions to test, if not set via baseline_images.
- extensions = ['png', 'pdf', 'svg']
- if savefig_kwarg is None:
- savefig_kwarg = dict() # default no kwargs to savefig
- if sys.maxsize <= 2**32:
- tol += 0.06
- return _pytest_image_comparison(
- baseline_images=baseline_images, extensions=extensions, tol=tol,
- freetype_version=freetype_version, remove_text=remove_text,
- savefig_kwargs=savefig_kwarg, style=style)
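
A minimal sketch of how this decorator is typically applied in a test module, assuming `matplotlib.pyplot` is available and that a baseline image named `simple_plot.png` exists in the module's `baseline_images` directory; the plotted data is purely illustrative.

```python
import matplotlib.pyplot as plt
from matplotlib.testing.decorators import image_comparison

@image_comparison(baseline_images=['simple_plot'], extensions=['png'],
                  remove_text=True)
def test_simple_plot():
    # One figure created here must correspond to one entry in baseline_images.
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
```

Each figure created inside the test body must have a matching entry in *baseline_images*, which is what the length assertion in `_pytest_image_comparison` enforces.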
-
-
-def check_figures_equal(*, extensions=("png", "pdf", "svg"), tol=0):
- """
- Decorator for test cases that generate and compare two figures.
-
- The decorated function must take two keyword arguments, *fig_test*
- and *fig_ref*, and draw the test and reference images on them.
- After the function returns, the figures are saved and compared.
-
- This decorator should be preferred over `image_comparison` when possible in
- order to keep the size of the test suite from ballooning.
-
- Parameters
- ----------
- extensions : list, default: ["png", "pdf", "svg"]
- The extensions to test.
- tol : float
- The RMS threshold above which the test is considered failed.
-
- Raises
- ------
- RuntimeError
- If any new figures are created (and not subsequently closed) inside
- the test function.
-
- Examples
- --------
- Check that calling `.Axes.plot` with a single argument plots it against
- ``[0, 1, 2, ...]``::
-
- @check_figures_equal()
- def test_plot(fig_test, fig_ref):
- fig_test.subplots().plot([1, 3, 5])
- fig_ref.subplots().plot([0, 1, 2], [1, 3, 5])
-
- """
- ALLOWED_CHARS = set(string.digits + string.ascii_letters + '_-[]()')
- KEYWORD_ONLY = inspect.Parameter.KEYWORD_ONLY
-
- def decorator(func):
- import pytest
-
- _, result_dir = _image_directories(func)
- old_sig = inspect.signature(func)
-
- if not {"fig_test", "fig_ref"}.issubset(old_sig.parameters):
- raise ValueError("The decorated function must have at least the "
- "parameters 'fig_test' and 'fig_ref', but your "
- f"function has the signature {old_sig}")
-
- @pytest.mark.parametrize("ext", extensions)
- def wrapper(*args, ext, request, **kwargs):
- if 'ext' in old_sig.parameters:
- kwargs['ext'] = ext
- if 'request' in old_sig.parameters:
- kwargs['request'] = request
-
- file_name = "".join(c for c in request.node.name
- if c in ALLOWED_CHARS)
- try:
- fig_test = plt.figure("test")
- fig_ref = plt.figure("reference")
- with _collect_new_figures() as figs:
- func(*args, fig_test=fig_test, fig_ref=fig_ref, **kwargs)
- if figs:
- raise RuntimeError('Number of open figures changed during '
- 'test. Make sure you are plotting to '
- 'fig_test or fig_ref, or if this is '
- 'deliberate explicitly close the '
- 'new figure(s) inside the test.')
- test_image_path = result_dir / (file_name + "." + ext)
- ref_image_path = result_dir / (file_name + "-expected." + ext)
- fig_test.savefig(test_image_path)
- fig_ref.savefig(ref_image_path)
- _raise_on_image_difference(
- ref_image_path, test_image_path, tol=tol
- )
- finally:
- plt.close(fig_test)
- plt.close(fig_ref)
-
- parameters = [
- param
- for param in old_sig.parameters.values()
- if param.name not in {"fig_test", "fig_ref"}
- ]
- if 'ext' not in old_sig.parameters:
- parameters += [inspect.Parameter("ext", KEYWORD_ONLY)]
- if 'request' not in old_sig.parameters:
- parameters += [inspect.Parameter("request", KEYWORD_ONLY)]
- new_sig = old_sig.replace(parameters=parameters)
- wrapper.__signature__ = new_sig
-
- # reach a bit into pytest internals to hoist the marks from
- # our wrapped function
- new_marks = getattr(func, "pytestmark", []) + wrapper.pytestmark
- wrapper.pytestmark = new_marks
-
- return wrapper
-
- return decorator
-
-
-def _image_directories(func):
- """
- Compute the baseline and result image directories for testing *func*.
-
- For test module ``foo.bar.test_baz``, the baseline directory is at
- ``foo/bar/baseline_images/test_baz`` and the result directory at
- ``$(pwd)/result_images/test_baz``. The result directory is created if it
- doesn't exist.
- """
- module_path = Path(inspect.getfile(func))
- baseline_dir = module_path.parent / "baseline_images" / module_path.stem
- result_dir = Path().resolve() / "result_images" / module_path.stem
- result_dir.mkdir(parents=True, exist_ok=True)
- return baseline_dir, result_dir
diff --git a/spaces/lIlIlllllmeng/zhaoyang/README.md b/spaces/lIlIlllllmeng/zhaoyang/README.md
deleted file mode 100644
index bd56881a2a7709591343e2f15af9a6a8133e115b..0000000000000000000000000000000000000000
--- a/spaces/lIlIlllllmeng/zhaoyang/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: QQsign
-emoji: 🦀
-colorFrom: blue
-colorTo: purple
-sdk: docker
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/leurez/moss/Dockerfile b/spaces/leurez/moss/Dockerfile
deleted file mode 100644
index a202a8887797edcc30bdf623e2a86948bb061df5..0000000000000000000000000000000000000000
--- a/spaces/leurez/moss/Dockerfile
+++ /dev/null
@@ -1,56 +0,0 @@
-# build front-end
-FROM node:lts-alpine AS frontend
-
-RUN npm install pnpm -g
-
-WORKDIR /app
-
-COPY ./package.json /app
-
-COPY ./pnpm-lock.yaml /app
-
-RUN pnpm install
-
-COPY . /app
-
-RUN pnpm run build
-
-# build backend
-FROM node:lts-alpine as backend
-
-RUN npm install pnpm -g
-
-WORKDIR /app
-
-COPY /service/package.json /app
-
-COPY /service/pnpm-lock.yaml /app
-
-RUN pnpm install
-
-COPY /service /app
-
-RUN pnpm build
-
-# service
-FROM node:lts-alpine
-
-RUN npm install pnpm -g
-
-WORKDIR /app
-
-COPY /service/package.json /app
-
-COPY /service/pnpm-lock.yaml /app
-
-RUN pnpm install --production && rm -rf /root/.npm /root/.pnpm-store /usr/local/share/.cache /tmp/*
-
-COPY /service /app
-
-COPY --from=frontend /app/dist /app/public
-
-COPY --from=backend /app/build /app/build
-
-EXPOSE 7860
-
-CMD ["pnpm", "run", "prod"]
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Cryptography And Network Security By Behrouz A Forouzan Pdf Free Downloadl.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Cryptography And Network Security By Behrouz A Forouzan Pdf Free Downloadl.md
deleted file mode 100644
index 40f51e6fd5a8f9327b74d2a57e7429423fe0a37a..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Cryptography And Network Security By Behrouz A Forouzan Pdf Free Downloadl.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Cryptography And Network Security By Behrouz A Forouzan Pdf Free Downloadl Download ✓ https://bytlly.com/2uGyFN
-
-... roggenrola, rogerio Cryptography And Network Security By Behrouz A Forouzan Pdf Free Downloadl. PUP. 3.55 Rogero Downgrader RSOD. 1fdad05405
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Folder Lock 5.2.6 With [Serial] Download [2021].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Folder Lock 5.2.6 With [Serial] Download [2021].md
deleted file mode 100644
index 42670cbaf4c6aed865a8761271dc631d7717a9b5..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Folder Lock 5.2.6 With [Serial] Download [2021].md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-How to Secure Your Files and Folders with Folder Lock 5.2.6
-If you have sensitive data on your computer that you want to keep away from prying eyes, you need a reliable software solution that can lock and encrypt your files and folders. One such program is Folder Lock 5.2.6, a powerful and easy-to-use tool that lets you protect your data with a password, a serial number, or both.
-Folder Lock 5.2.6 With [Serial] Download DOWNLOAD ✏ https://bytlly.com/2uGxMs
-In this article, we will show you how to download, install, and use Folder Lock 5.2.6 to secure your files and folders on Windows.
-How to Download Folder Lock 5.2.6
-Folder Lock 5.2.6 is not a free program, but you can download a trial version from the official website[^1^]. The trial version has some limitations, such as the maximum number of files and folders you can lock, the size of the encrypted lockers, and the duration of the online backup service.
-If you want to unlock the full features of Folder Lock 5.2.6, you need to purchase a license from the website[^1^] or use a serial number that you can find online. However, we do not recommend using serial numbers from untrusted sources, as they may contain malware or viruses that can harm your computer.
-
-How to Install Folder Lock 5.2.6
-Once you have downloaded the setup file of Folder Lock 5.2.6, double-click on it to start the installation process. Follow the instructions on the screen and choose a destination folder for the program files. You can also choose whether to create a desktop shortcut or not.
-After the installation is complete, you will be asked to enter a master password that will be used to access Folder Lock 5.2.6 and all your locked and encrypted data. Make sure you choose a strong and memorable password that cannot be guessed by anyone else.
-How to Use Folder Lock 5.2.6
-Folder Lock 5.2.6 has a user-friendly interface that allows you to perform various tasks with ease. Here are some of the main features of Folder Lock 5.2.6 and how to use them:
-
-Lock Files and Folders: You can lock any file or folder on your computer by dragging and dropping it into the Folder Lock window or by using the right-click menu in Windows Explorer[^1^]. Once locked, the file or folder will be hidden from its original location and can only be accessed through Folder Lock 5.2.6 with your master password.
-Encrypt Files and Folders: You can encrypt any file or folder on your computer by creating a locker in Folder Lock 5.2.6[^1^]. A locker is a secure container that stores your encrypted data with AES 256-bit military grade encryption[^1^]. You can create as many lockers as you want and customize their size, name, and password.
-Secure Backup: You can backup your encrypted lockers online with Folder Lock 5.2.6[^1^]. This feature allows you to restore your data in case of loss or damage of your computer or hard drive. You need to create an account on the website[^1^] and choose a backup plan that suits your needs.
-Protect USB Drives: You can protect your portable devices such as USB drives, CDs, DVDs, and external hard drives with Folder Lock 5.2.6[^1^]. You can convert your lockers into executable files that can run without installing Folder Lock 5.2.6 on any computer[^1^]. You can also create password-protected zips for email attachments[^1^].
-Password Wallets: You can create digital wallets in Folder Lock 5.2.6 to store confidential information such as banking details, credit cards, transactions, ATM PINs, and more[^1^]. You can create as many cards as you want in these wallets, and they are encrypted when saved[^1^].
-
-Folder Lock 5
-
-
\ No newline at end of file
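
The article above says lockers protect data with AES 256-bit encryption. As a rough, standalone illustration of what AES-256 file encryption looks like in practice (this is not Folder Lock's actual implementation; the file names, key handling, and the use of the `cryptography` package are assumptions made for the sketch):

```python
# Conceptual sketch only: AES-256-GCM file encryption with the `cryptography` package.
# This is NOT Folder Lock's implementation; paths and key handling are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(in_path: str, out_path: str, key: bytes) -> None:
    """Encrypt a single file into an opaque 'locker'-style blob (nonce + ciphertext)."""
    aesgcm = AESGCM(key)            # 32-byte key -> AES-256
    nonce = os.urandom(12)          # standard GCM nonce size
    with open(in_path, "rb") as f:
        plaintext = f.read()
    with open(out_path, "wb") as f:
        f.write(nonce + aesgcm.encrypt(nonce, plaintext, None))

def decrypt_file(in_path: str, out_path: str, key: bytes) -> None:
    aesgcm = AESGCM(key)
    with open(in_path, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]
    with open(out_path, "wb") as f:
        f.write(aesgcm.decrypt(nonce, ciphertext, None))

if __name__ == "__main__":
    # In a real product the key would be derived from the master password (e.g. PBKDF2),
    # not generated fresh on every run.
    key = AESGCM.generate_key(bit_length=256)
    encrypt_file("secret.txt", "secret.locker", key)
    decrypt_file("secret.locker", "secret_restored.txt", key)
```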
diff --git a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py b/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py
deleted file mode 100644
index 052df6220595a1b39b7e2aea37ca4872d113dfd2..0000000000000000000000000000000000000000
--- a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/groundingdino.py
+++ /dev/null
@@ -1,395 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR model and criterion classes.
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Modified from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)
-# Copyright (c) 2020 SenseTime. All Rights Reserved.
-# ------------------------------------------------------------------------
-import copy
-from typing import List
-
-import torch
-import torch.nn.functional as F
-from torch import nn
-from torchvision.ops.boxes import nms
-from transformers import AutoTokenizer, BertModel, BertTokenizer, RobertaModel, RobertaTokenizerFast
-
-from groundingdino.util import box_ops, get_tokenlizer
-from groundingdino.util.misc import (
- NestedTensor,
- accuracy,
- get_world_size,
- interpolate,
- inverse_sigmoid,
- is_dist_avail_and_initialized,
- nested_tensor_from_tensor_list,
-)
-from groundingdino.util.utils import get_phrases_from_posmap
-from groundingdino.util.visualizer import COCOVisualizer
-from groundingdino.util.vl_utils import create_positive_map_from_span
-
-from ..registry import MODULE_BUILD_FUNCS
-from .backbone import build_backbone
-from .bertwarper import (
- BertModelWarper,
- generate_masks_with_special_tokens,
- generate_masks_with_special_tokens_and_transfer_map,
-)
-from .transformer import build_transformer
-from .utils import MLP, ContrastiveEmbed, sigmoid_focal_loss
-
-
-class GroundingDINO(nn.Module):
- """This is the Cross-Attention Detector module that performs object detection"""
-
- def __init__(
- self,
- backbone,
- transformer,
- num_queries,
- aux_loss=False,
- iter_update=False,
- query_dim=2,
- num_feature_levels=1,
- nheads=8,
- # two stage
- two_stage_type="no", # ['no', 'standard']
- dec_pred_bbox_embed_share=True,
- two_stage_class_embed_share=True,
- two_stage_bbox_embed_share=True,
- num_patterns=0,
- dn_number=100,
- dn_box_noise_scale=0.4,
- dn_label_noise_ratio=0.5,
- dn_labelbook_size=100,
- text_encoder_type="bert-base-uncased",
- sub_sentence_present=True,
- max_text_len=256,
- ):
- """Initializes the model.
- Parameters:
- backbone: torch module of the backbone to be used. See backbone.py
- transformer: torch module of the transformer architecture. See transformer.py
- num_queries: number of object queries, i.e. detection slots. This is the maximal number of objects
- Conditional DETR can detect in a single image. For COCO, we recommend 100 queries.
- aux_loss: True if auxiliary decoding losses (loss at each decoder layer) are to be used.
- """
- super().__init__()
- self.num_queries = num_queries
- self.transformer = transformer
- self.hidden_dim = hidden_dim = transformer.d_model
- self.num_feature_levels = num_feature_levels
- self.nheads = nheads
- self.max_text_len = 256
- self.sub_sentence_present = sub_sentence_present
-
- # setting query dim
- self.query_dim = query_dim
- assert query_dim == 4
-
- # for dn training
- self.num_patterns = num_patterns
- self.dn_number = dn_number
- self.dn_box_noise_scale = dn_box_noise_scale
- self.dn_label_noise_ratio = dn_label_noise_ratio
- self.dn_labelbook_size = dn_labelbook_size
-
- # bert
- self.tokenizer = get_tokenlizer.get_tokenlizer(text_encoder_type)
- self.bert = get_tokenlizer.get_pretrained_language_model(text_encoder_type)
- self.bert.pooler.dense.weight.requires_grad_(False)
- self.bert.pooler.dense.bias.requires_grad_(False)
- self.bert = BertModelWarper(bert_model=self.bert)
-
- self.feat_map = nn.Linear(self.bert.config.hidden_size, self.hidden_dim, bias=True)
- nn.init.constant_(self.feat_map.bias.data, 0)
- nn.init.xavier_uniform_(self.feat_map.weight.data)
- # freeze
-
- # special tokens
- self.specical_tokens = self.tokenizer.convert_tokens_to_ids(["[CLS]", "[SEP]", ".", "?"])
-
- # prepare input projection layers
- if num_feature_levels > 1:
- num_backbone_outs = len(backbone.num_channels)
- input_proj_list = []
- for _ in range(num_backbone_outs):
- in_channels = backbone.num_channels[_]
- input_proj_list.append(
- nn.Sequential(
- nn.Conv2d(in_channels, hidden_dim, kernel_size=1),
- nn.GroupNorm(32, hidden_dim),
- )
- )
- for _ in range(num_feature_levels - num_backbone_outs):
- input_proj_list.append(
- nn.Sequential(
- nn.Conv2d(in_channels, hidden_dim, kernel_size=3, stride=2, padding=1),
- nn.GroupNorm(32, hidden_dim),
- )
- )
- in_channels = hidden_dim
- self.input_proj = nn.ModuleList(input_proj_list)
- else:
- assert two_stage_type == "no", "two_stage_type should be no if num_feature_levels=1 !!!"
- self.input_proj = nn.ModuleList(
- [
- nn.Sequential(
- nn.Conv2d(backbone.num_channels[-1], hidden_dim, kernel_size=1),
- nn.GroupNorm(32, hidden_dim),
- )
- ]
- )
-
- self.backbone = backbone
- self.aux_loss = aux_loss
- self.box_pred_damping = box_pred_damping = None
-
- self.iter_update = iter_update
- assert iter_update, "Why not iter_update?"
-
- # prepare pred layers
- self.dec_pred_bbox_embed_share = dec_pred_bbox_embed_share
- # prepare class & box embed
- _class_embed = ContrastiveEmbed()
-
- _bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3)
- nn.init.constant_(_bbox_embed.layers[-1].weight.data, 0)
- nn.init.constant_(_bbox_embed.layers[-1].bias.data, 0)
-
- if dec_pred_bbox_embed_share:
- box_embed_layerlist = [_bbox_embed for i in range(transformer.num_decoder_layers)]
- else:
- box_embed_layerlist = [
- copy.deepcopy(_bbox_embed) for i in range(transformer.num_decoder_layers)
- ]
- class_embed_layerlist = [_class_embed for i in range(transformer.num_decoder_layers)]
- self.bbox_embed = nn.ModuleList(box_embed_layerlist)
- self.class_embed = nn.ModuleList(class_embed_layerlist)
- self.transformer.decoder.bbox_embed = self.bbox_embed
- self.transformer.decoder.class_embed = self.class_embed
-
- # two stage
- self.two_stage_type = two_stage_type
- assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format(
- two_stage_type
- )
- if two_stage_type != "no":
- if two_stage_bbox_embed_share:
- assert dec_pred_bbox_embed_share
- self.transformer.enc_out_bbox_embed = _bbox_embed
- else:
- self.transformer.enc_out_bbox_embed = copy.deepcopy(_bbox_embed)
-
- if two_stage_class_embed_share:
- assert dec_pred_bbox_embed_share
- self.transformer.enc_out_class_embed = _class_embed
- else:
- self.transformer.enc_out_class_embed = copy.deepcopy(_class_embed)
-
- self.refpoint_embed = None
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- # init input_proj
- for proj in self.input_proj:
- nn.init.xavier_uniform_(proj[0].weight, gain=1)
- nn.init.constant_(proj[0].bias, 0)
-
- def init_ref_points(self, use_num_queries):
- self.refpoint_embed = nn.Embedding(use_num_queries, self.query_dim)
-
- def forward(self, samples: NestedTensor, targets: List = None, **kw):
- """The forward expects a NestedTensor, which consists of:
- - samples.tensor: batched images, of shape [batch_size x 3 x H x W]
- - samples.mask: a binary mask of shape [batch_size x H x W], containing 1 on padded pixels
-
- It returns a dict with the following elements:
- - "pred_logits": the classification logits (including no-object) for all queries.
- Shape= [batch_size x num_queries x num_classes]
- - "pred_boxes": The normalized boxes coordinates for all queries, represented as
- (center_x, center_y, width, height). These values are normalized in [0, 1],
- relative to the size of each individual image (disregarding possible padding).
- See PostProcess for information on how to retrieve the unnormalized bounding box.
- - "aux_outputs": Optional, only returned when auxiliary losses are activated. It is a list of
- dictionaries containing the two above keys for each decoder layer.
- """
- if targets is None:
- captions = kw["captions"]
- else:
- captions = [t["caption"] for t in targets]
- len(captions)
-
- # encoder texts
- tokenized = self.tokenizer(captions, padding="longest", return_tensors="pt").to(
- samples.device
- )
- (
- text_self_attention_masks,
- position_ids,
- cate_to_token_mask_list,
- ) = generate_masks_with_special_tokens_and_transfer_map(
- tokenized, self.specical_tokens, self.tokenizer
- )
-
- if text_self_attention_masks.shape[1] > self.max_text_len:
- text_self_attention_masks = text_self_attention_masks[
- :, : self.max_text_len, : self.max_text_len
- ]
- position_ids = position_ids[:, : self.max_text_len]
- tokenized["input_ids"] = tokenized["input_ids"][:, : self.max_text_len]
- tokenized["attention_mask"] = tokenized["attention_mask"][:, : self.max_text_len]
- tokenized["token_type_ids"] = tokenized["token_type_ids"][:, : self.max_text_len]
-
- # extract text embeddings
- if self.sub_sentence_present:
- tokenized_for_encoder = {k: v for k, v in tokenized.items() if k != "attention_mask"}
- tokenized_for_encoder["attention_mask"] = text_self_attention_masks
- tokenized_for_encoder["position_ids"] = position_ids
- else:
- # import ipdb; ipdb.set_trace()
- tokenized_for_encoder = tokenized
-
- bert_output = self.bert(**tokenized_for_encoder) # bs, 195, 768
-
- encoded_text = self.feat_map(bert_output["last_hidden_state"]) # bs, 195, d_model
- text_token_mask = tokenized.attention_mask.bool() # bs, 195
- # text_token_mask: True for nomask, False for mask
- # text_self_attention_masks: True for nomask, False for mask
-
- if encoded_text.shape[1] > self.max_text_len:
- encoded_text = encoded_text[:, : self.max_text_len, :]
- text_token_mask = text_token_mask[:, : self.max_text_len]
- position_ids = position_ids[:, : self.max_text_len]
- text_self_attention_masks = text_self_attention_masks[
- :, : self.max_text_len, : self.max_text_len
- ]
-
- text_dict = {
- "encoded_text": encoded_text, # bs, 195, d_model
- "text_token_mask": text_token_mask, # bs, 195
- "position_ids": position_ids, # bs, 195
- "text_self_attention_masks": text_self_attention_masks, # bs, 195,195
- }
-
- # import ipdb; ipdb.set_trace()
-
- if isinstance(samples, (list, torch.Tensor)):
- samples = nested_tensor_from_tensor_list(samples)
- features, poss = self.backbone(samples)
-
- srcs = []
- masks = []
- for l, feat in enumerate(features):
- src, mask = feat.decompose()
- srcs.append(self.input_proj[l](src))
- masks.append(mask)
- assert mask is not None
- if self.num_feature_levels > len(srcs):
- _len_srcs = len(srcs)
- for l in range(_len_srcs, self.num_feature_levels):
- if l == _len_srcs:
- src = self.input_proj[l](features[-1].tensors)
- else:
- src = self.input_proj[l](srcs[-1])
- m = samples.mask
- mask = F.interpolate(m[None].float(), size=src.shape[-2:]).to(torch.bool)[0]
- pos_l = self.backbone[1](NestedTensor(src, mask)).to(src.dtype)
- srcs.append(src)
- masks.append(mask)
- poss.append(pos_l)
-
- input_query_bbox = input_query_label = attn_mask = dn_meta = None
- hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(
- srcs, masks, input_query_bbox, poss, input_query_label, attn_mask, text_dict
- )
-
- # deformable-detr-like anchor update
- outputs_coord_list = []
- for dec_lid, (layer_ref_sig, layer_bbox_embed, layer_hs) in enumerate(
- zip(reference[:-1], self.bbox_embed, hs)
- ):
- layer_delta_unsig = layer_bbox_embed(layer_hs)
- layer_outputs_unsig = layer_delta_unsig + inverse_sigmoid(layer_ref_sig)
- layer_outputs_unsig = layer_outputs_unsig.sigmoid()
- outputs_coord_list.append(layer_outputs_unsig)
- outputs_coord_list = torch.stack(outputs_coord_list)
-
- # output
- outputs_class = torch.stack(
- [
- layer_cls_embed(layer_hs, text_dict)
- for layer_cls_embed, layer_hs in zip(self.class_embed, hs)
- ]
- )
- out = {"pred_logits": outputs_class[-1], "pred_boxes": outputs_coord_list[-1]}
-
- # # for intermediate outputs
- # if self.aux_loss:
- # out['aux_outputs'] = self._set_aux_loss(outputs_class, outputs_coord_list)
-
- # # for encoder output
- # if hs_enc is not None:
- # # prepare intermediate outputs
- # interm_coord = ref_enc[-1]
- # interm_class = self.transformer.enc_out_class_embed(hs_enc[-1], text_dict)
- # out['interm_outputs'] = {'pred_logits': interm_class, 'pred_boxes': interm_coord}
- # out['interm_outputs_for_matching_pre'] = {'pred_logits': interm_class, 'pred_boxes': init_box_proposal}
-
- return out
-
- @torch.jit.unused
- def _set_aux_loss(self, outputs_class, outputs_coord):
- # this is a workaround to make torchscript happy, as torchscript
- # doesn't support dictionary with non-homogeneous values, such
- # as a dict having both a Tensor and a list.
- return [
- {"pred_logits": a, "pred_boxes": b}
- for a, b in zip(outputs_class[:-1], outputs_coord[:-1])
- ]
-
-
-@MODULE_BUILD_FUNCS.registe_with_name(module_name="groundingdino")
-def build_groundingdino(args):
-
- backbone = build_backbone(args)
- transformer = build_transformer(args)
-
- dn_labelbook_size = args.dn_labelbook_size
- dec_pred_bbox_embed_share = args.dec_pred_bbox_embed_share
- sub_sentence_present = args.sub_sentence_present
-
- model = GroundingDINO(
- backbone,
- transformer,
- num_queries=args.num_queries,
- aux_loss=True,
- iter_update=True,
- query_dim=4,
- num_feature_levels=args.num_feature_levels,
- nheads=args.nheads,
- dec_pred_bbox_embed_share=dec_pred_bbox_embed_share,
- two_stage_type=args.two_stage_type,
- two_stage_bbox_embed_share=args.two_stage_bbox_embed_share,
- two_stage_class_embed_share=args.two_stage_class_embed_share,
- num_patterns=args.num_patterns,
- dn_number=0,
- dn_box_noise_scale=args.dn_box_noise_scale,
- dn_label_noise_ratio=args.dn_label_noise_ratio,
- dn_labelbook_size=dn_labelbook_size,
- text_encoder_type=args.text_encoder_type,
- sub_sentence_present=sub_sentence_present,
- max_text_len=args.max_text_len,
- )
-
- return model
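
The forward pass above applies a deformable-DETR-style anchor update: each decoder layer predicts an unsigmoided delta that is added to the inverse sigmoid of the previous reference boxes. A minimal, standalone sketch of that update rule (shapes and values are invented for illustration; this does not run the model itself):

```python
# Standalone illustration of the deformable-DETR-style box update used above.
# Tensors and shapes are made up for demonstration; this is not the model itself.
import torch

def inverse_sigmoid(x: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    x = x.clamp(min=0, max=1)
    x1 = x.clamp(min=eps)
    x2 = (1 - x).clamp(min=eps)
    return torch.log(x1 / x2)

batch, num_queries = 2, 900
layer_ref_sig = torch.rand(batch, num_queries, 4)             # previous reference boxes, in (0, 1)
layer_delta_unsig = 0.1 * torch.randn(batch, num_queries, 4)  # stand-in for layer_bbox_embed(layer_hs)

# same update as in the forward pass: sigmoid(delta + inverse_sigmoid(reference))
layer_outputs = (layer_delta_unsig + inverse_sigmoid(layer_ref_sig)).sigmoid()
print(layer_outputs.shape)  # torch.Size([2, 900, 4]): still normalized cxcywh boxes in [0, 1]
```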
diff --git a/spaces/lunarflu/HF-QA-Demo-3/app.py b/spaces/lunarflu/HF-QA-Demo-3/app.py
deleted file mode 100644
index 76cb450468bff3b310566717fce7dd20254ddda0..0000000000000000000000000000000000000000
--- a/spaces/lunarflu/HF-QA-Demo-3/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import threading
-
-import gradio as gr
-
-from qa_engine import logger, Config, QAEngine
-from discord_bot import DiscordClient
-
-
-
-config = Config()
-qa_engine = QAEngine(
- llm_model_id=config.question_answering_model_id,
- embedding_model_id=config.embedding_model_id,
- index_repo_id=config.index_repo_id,
- prompt_template=config.prompt_template,
- use_docs_for_context=config.use_docs_for_context,
- add_sources_to_response=config.add_sources_to_response,
- use_messages_for_context=config.use_messages_in_context,
- debug=config.debug
-)
-
-
-def gradio_interface():
- with gr.Blocks() as demo:
- chatbot = gr.Chatbot()
- msg = gr.Textbox()
- clear = gr.ClearButton([msg, chatbot])
-
- def respond(message, chat_history):
- context = ''.join(f'User: {msg} \nBot:{bot_msg}\n' for msg, bot_msg in chat_history)
- logger.info(f'Context: {context}')
- response = qa_engine.get_response(message, context)
- bot_message = response.get_answer() + response.get_sources_as_text() + '\n'
- chat_history.append((message, bot_message))
- return '', chat_history
-
- msg.submit(respond, [msg, chatbot], [msg, chatbot])
- demo.launch(share=True)
-
-
-def discord_bot_inference_thread():
- client = DiscordClient(
- qa_engine=qa_engine,
- num_last_messages=config.num_last_messages,
- use_names_in_context=config.use_names_in_context,
- enable_commands=config.enable_commands,
- debug=config.debug
- )
- client.run(config.discord_token)
-
-def discord_bot():
- thread = threading.Thread(target=discord_bot_inference_thread)
- thread.start()
- with gr.Blocks() as demo:
- gr.Markdown('Discord bot is running.')
- demo.queue(concurrency_count=100, max_size=100)
- demo.launch()
-
-
-if __name__ == '__main__':
- if config.app_mode == 'gradio':
- gradio_interface()
- elif config.app_mode == 'discord':
- discord_bot()
- else:
- raise ValueError(
- f'Invalid app mode: {config.app_mode}, '
- f'set APP_MODE to "gradio" or "discord"'
- )
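
The `respond` callback above folds the accumulated chat history into a plain-text context string before calling the QA engine. A small self-contained sketch of that pattern (the `fake_answer` helper stands in for `qa_engine.get_response` and is purely illustrative):

```python
# Minimal sketch of the history-to-context pattern used in `respond` above.
# `fake_answer` is a mock; it is not the real QA engine.
def history_to_context(chat_history: list[tuple[str, str]]) -> str:
    return ''.join(f'User: {user_msg} \nBot:{bot_msg}\n' for user_msg, bot_msg in chat_history)

def fake_answer(message: str, context: str) -> str:
    return f'(answer to "{message}" given {len(context)} chars of context)'

history = [('What is a Space?', 'A hosted app on the Hub.')]
context = history_to_context(history)
bot_message = fake_answer('How do I create one?', context)
history.append(('How do I create one?', bot_message))
print(context)
print(bot_message)
```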
diff --git a/spaces/luodian/LoRA-DreamBooth-Training-UI/trainer.py b/spaces/luodian/LoRA-DreamBooth-Training-UI/trainer.py
deleted file mode 100644
index e4e4469796a08b797ae70a641c2f5125dbd22c1e..0000000000000000000000000000000000000000
--- a/spaces/luodian/LoRA-DreamBooth-Training-UI/trainer.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from __future__ import annotations
-
-import datetime
-import os
-import pathlib
-import shlex
-import shutil
-import subprocess
-
-import gradio as gr
-import PIL.Image
-import slugify
-import torch
-from huggingface_hub import HfApi
-
-from app_upload import LoRAModelUploader
-from utils import save_model_card
-
-URL_TO_JOIN_LORA_LIBRARY_ORG = 'https://huggingface.co/organizations/lora-library/share/hjetHAcKjnPHXhHfbeEcqnBqmhgilFfpOL'
-
-
-def pad_image(image: PIL.Image.Image) -> PIL.Image.Image:
- w, h = image.size
- if w == h:
- return image
- elif w > h:
- new_image = PIL.Image.new(image.mode, (w, w), (0, 0, 0))
- new_image.paste(image, (0, (w - h) // 2))
- return new_image
- else:
- new_image = PIL.Image.new(image.mode, (h, h), (0, 0, 0))
- new_image.paste(image, ((h - w) // 2, 0))
- return new_image
-
-
-class Trainer:
- def __init__(self, hf_token: str | None = None):
- self.hf_token = hf_token
- self.api = HfApi(token=hf_token)
- self.model_uploader = LoRAModelUploader(hf_token)
-
- def prepare_dataset(self, instance_images: list, resolution: int,
- instance_data_dir: pathlib.Path) -> None:
- shutil.rmtree(instance_data_dir, ignore_errors=True)
- instance_data_dir.mkdir(parents=True)
- for i, temp_path in enumerate(instance_images):
- image = PIL.Image.open(temp_path.name)
- image = pad_image(image)
- image = image.resize((resolution, resolution))
- image = image.convert('RGB')
- out_path = instance_data_dir / f'{i:03d}.jpg'
- image.save(out_path, format='JPEG', quality=100)
-
- def join_lora_library_org(self) -> None:
- subprocess.run(
- shlex.split(
- f'curl -X POST -H "Authorization: Bearer {self.hf_token}" -H "Content-Type: application/json" {URL_TO_JOIN_LORA_LIBRARY_ORG}'
- ))
-
- def run(
- self,
- instance_images: list | None,
- instance_prompt: str,
- output_model_name: str,
- overwrite_existing_model: bool,
- validation_prompt: str,
- base_model: str,
- resolution_s: str,
- n_steps: int,
- learning_rate: float,
- gradient_accumulation: int,
- seed: int,
- fp16: bool,
- use_8bit_adam: bool,
- checkpointing_steps: int,
- use_wandb: bool,
- validation_epochs: int,
- upload_to_hub: bool,
- use_private_repo: bool,
- delete_existing_repo: bool,
- upload_to: str,
- remove_gpu_after_training: bool,
- ) -> str:
- if not torch.cuda.is_available():
- raise gr.Error('CUDA is not available.')
- if instance_images is None:
- raise gr.Error('You need to upload images.')
- if not instance_prompt:
- raise gr.Error('The instance prompt is missing.')
- if not validation_prompt:
- raise gr.Error('The validation prompt is missing.')
-
- resolution = int(resolution_s)
-
- if not output_model_name:
- timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
- output_model_name = f'lora-dreambooth-{timestamp}'
- output_model_name = slugify.slugify(output_model_name)
-
- repo_dir = pathlib.Path(__file__).parent
- output_dir = repo_dir / 'experiments' / output_model_name
- if overwrite_existing_model or upload_to_hub:
- shutil.rmtree(output_dir, ignore_errors=True)
- output_dir.mkdir(parents=True)
-
- instance_data_dir = repo_dir / 'training_data' / output_model_name
- self.prepare_dataset(instance_images, resolution, instance_data_dir)
-
- if upload_to_hub:
- self.join_lora_library_org()
-
- command = f'''
- accelerate launch train_dreambooth_lora.py \
- --pretrained_model_name_or_path={base_model} \
- --instance_data_dir={instance_data_dir} \
- --output_dir={output_dir} \
- --instance_prompt="{instance_prompt}" \
- --resolution={resolution} \
- --train_batch_size=1 \
- --gradient_accumulation_steps={gradient_accumulation} \
- --learning_rate={learning_rate} \
- --lr_scheduler=constant \
- --lr_warmup_steps=0 \
- --max_train_steps={n_steps} \
- --checkpointing_steps={checkpointing_steps} \
- --validation_prompt="{validation_prompt}" \
- --validation_epochs={validation_epochs} \
- --seed={seed}
- '''
- if fp16:
- command += ' --mixed_precision fp16'
- if use_8bit_adam:
- command += ' --use_8bit_adam'
- if use_wandb:
- command += ' --report_to wandb'
-
- with open(output_dir / 'train.sh', 'w') as f:
- command_s = ' '.join(command.split())
- f.write(command_s)
- subprocess.run(shlex.split(command))
- save_model_card(save_dir=output_dir,
- base_model=base_model,
- instance_prompt=instance_prompt,
- test_prompt=validation_prompt,
- test_image_dir='test_images')
-
- message = 'Training completed!'
- print(message)
-
- if upload_to_hub:
- upload_message = self.model_uploader.upload_lora_model(
- folder_path=output_dir.as_posix(),
- repo_name=output_model_name,
- upload_to=upload_to,
- private=use_private_repo,
- delete_existing_repo=delete_existing_repo)
- print(upload_message)
- message = message + '\n' + upload_message
-
- if remove_gpu_after_training:
- space_id = os.getenv('SPACE_ID')
- if space_id:
- self.api.request_space_hardware(repo_id=space_id,
- hardware='cpu-basic')
-
- return message
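
`Trainer.run` above assembles the `accelerate launch` command as a string and appends optional flags before writing it to `train.sh` and executing it. A reduced sketch of that conditional flag composition (the model id, output path, and step count are placeholders; only flags that appear in the code above are used):

```python
# Reduced sketch of how the training command is assembled in Trainer.run above.
# Values are placeholders; only the composition pattern matters here.
import shlex

def build_command(base_model: str, output_dir: str, n_steps: int,
                  fp16: bool, use_8bit_adam: bool, use_wandb: bool) -> list[str]:
    command = (
        'accelerate launch train_dreambooth_lora.py'
        f' --pretrained_model_name_or_path={base_model}'
        f' --output_dir={output_dir}'
        f' --max_train_steps={n_steps}'
    )
    # optional flags are appended exactly as in the Trainer above
    if fp16:
        command += ' --mixed_precision fp16'
    if use_8bit_adam:
        command += ' --use_8bit_adam'
    if use_wandb:
        command += ' --report_to wandb'
    return shlex.split(command)

print(build_command('stabilityai/stable-diffusion-2-1-base', 'experiments/demo',
                    1000, fp16=True, use_8bit_adam=False, use_wandb=False))
```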
diff --git a/spaces/manishjaiswal/05-SOTA-Question-Answer-From-TextFileContext-Demo/README.md b/spaces/manishjaiswal/05-SOTA-Question-Answer-From-TextFileContext-Demo/README.md
deleted file mode 100644
index ee997eba7cb0e4aec6fbc49d09f854c905e048fd..0000000000000000000000000000000000000000
--- a/spaces/manishjaiswal/05-SOTA-Question-Answer-From-TextFileContext-Demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 05 SOTA Question Answer From TextFileContext Demo
-emoji: 💩
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/marcusj83/MusicGenbruh/tests/data/test_audio_utils.py b/spaces/marcusj83/MusicGenbruh/tests/data/test_audio_utils.py
deleted file mode 100644
index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000
--- a/spaces/marcusj83/MusicGenbruh/tests/data/test_audio_utils.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import julius
-import torch
-import pytest
-
-from audiocraft.data.audio_utils import (
- _clip_wav,
- convert_audio_channels,
- convert_audio,
- normalize_audio
-)
-from ..common_utils import get_batch_white_noise
-
-
-class TestConvertAudioChannels:
-
- def test_convert_audio_channels_downmix(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=2)
- assert list(mixed.shape) == [b, 2, t]
-
- def test_convert_audio_channels_nochange(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=c)
- assert list(mixed.shape) == list(audio.shape)
-
- def test_convert_audio_channels_upmix(self):
- b, c, t = 2, 1, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=3)
- assert list(mixed.shape) == [b, 3, t]
-
- def test_convert_audio_channels_upmix_error(self):
- b, c, t = 2, 2, 100
- audio = get_batch_white_noise(b, c, t)
- with pytest.raises(ValueError):
- convert_audio_channels(audio, channels=3)
-
-
-class TestConvertAudio:
-
- def test_convert_audio_channels_downmix(self):
- b, c, dur = 2, 3, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2)
- assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]]
-
- def test_convert_audio_channels_upmix(self):
- b, c, dur = 2, 1, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3)
- assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]]
-
- def test_convert_audio_upsample(self):
- b, c, dur = 2, 1, 4.
- sr = 2
- new_sr = 3
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
- def test_convert_audio_resample(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- new_sr = 2
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
-
-class TestNormalizeAudio:
-
- def test_clip_wav(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- _clip_wav(audio)
- assert audio.abs().max() <= 1
-
- def test_normalize_audio_clip(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='clip')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_rms(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='rms')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_peak(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='peak')
- assert norm_audio.abs().max() <= 1
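
The tests above only assert that `normalize_audio` keeps samples inside [-1, 1] for the `clip`, `rms`, and `peak` strategies. As a rough illustration of what peak and RMS normalization typically compute (this is not audiocraft's exact implementation; the headroom and target levels are assumptions):

```python
# Rough illustration of peak vs. RMS normalization; not audiocraft's exact code.
import torch

def peak_normalize(wav: torch.Tensor, headroom_db: float = 1.0) -> torch.Tensor:
    # scale so the largest sample sits `headroom_db` below full scale
    gain = 10 ** (-headroom_db / 20) / (wav.abs().max() + 1e-8)
    return wav * gain

def rms_normalize(wav: torch.Tensor, target_db: float = -18.0) -> torch.Tensor:
    # scale so the RMS level matches the target, then clamp as a safety net (like `clip`)
    rms = wav.pow(2).mean().sqrt()
    gain = 10 ** (target_db / 20) / (rms + 1e-8)
    return (wav * gain).clamp(-1, 1)

wav = 10.0 * torch.randn(1, 1, 3 * 4)    # loud white noise, mirroring the tests above
print(peak_normalize(wav).abs().max())   # <= 1
print(rms_normalize(wav).abs().max())    # <= 1
```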
diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/download.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/download.py
deleted file mode 100644
index 2753733ed9a4721a8af19053b363fff96d97d673..0000000000000000000000000000000000000000
--- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/download.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from huggingface_hub import hf_hub_download
-from huggingface_hub import snapshot_download
-
-#download files
-def download_file(repo_name, filename, revision='main',repo_type='model'):
-
- file_location = hf_hub_download(repo_id=repo_name, filename=filename,revision=revision, repo_type=repo_type)
- return file_location
-
-#download a folder
-def download_folder(repo_name, revision='main'):
-
- folder_location = snapshot_download(repo_id=repo_name, revision=revision)
-
- return folder_location
-
-
-
\ No newline at end of file
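
The two helpers above are thin wrappers around `huggingface_hub`. A hedged usage sketch (the repo id and filename below are placeholders, not repositories this Space necessarily uses):

```python
# Usage sketch for the wrappers above; the repo id and filename are placeholders.
from huggingface_hub import hf_hub_download, snapshot_download

# single file: returns the local cached path of that file
config_path = hf_hub_download(repo_id="some-user/some-model", filename="config.json")

# whole repository snapshot: returns the local cached directory
repo_dir = snapshot_download(repo_id="some-user/some-model", revision="main")

print(config_path, repo_dir)
```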
diff --git a/spaces/matthoffner/starchat-ui/components/Chat/ChatMessage.tsx b/spaces/matthoffner/starchat-ui/components/Chat/ChatMessage.tsx
deleted file mode 100644
index b7d7abdd7d283c5c042a89cf0b23f6f11702a3f2..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/starchat-ui/components/Chat/ChatMessage.tsx
+++ /dev/null
@@ -1,288 +0,0 @@
-import {
- IconCheck,
- IconCopy,
- IconEdit,
- IconRobot,
- IconTrash,
- IconUser,
-} from '@tabler/icons-react';
-import { FC, memo, useContext, useEffect, useRef, useState } from 'react';
-
-import { useTranslation } from 'next-i18next';
-
-import { updateConversation } from '@/utils/app/conversation';
-
-import { Message } from '@/types/chat';
-
-import HomeContext from '@/pages/api/home/home.context';
-
-import { CodeBlock } from '../Markdown/CodeBlock';
-import { MemoizedReactMarkdown } from '../Markdown/MemoizedReactMarkdown';
-
-import rehypeMathjax from 'rehype-mathjax';
-import remarkGfm from 'remark-gfm';
-import remarkMath from 'remark-math';
-
-export interface Props {
- message: Message;
- messageIndex: number;
- onEdit?: (editedMessage: Message) => void
-}
-
-export const ChatMessage: FC = memo(({ message, messageIndex, onEdit }) => {
- const { t } = useTranslation('chat');
-
- const {
- state: { selectedConversation, conversations, currentMessage, messageIsStreaming },
- dispatch: homeDispatch,
- } = useContext(HomeContext);
-
- const [isEditing, setIsEditing] = useState(false);
- const [isTyping, setIsTyping] = useState(false);
- const [messageContent, setMessageContent] = useState(message.content);
- const [messagedCopied, setMessageCopied] = useState(false);
-
- const textareaRef = useRef(null);
-
- const toggleEditing = () => {
- setIsEditing(!isEditing);
- };
-
- const handleInputChange = (event: React.ChangeEvent) => {
- setMessageContent(event.target.value);
- if (textareaRef.current) {
- textareaRef.current.style.height = 'inherit';
- textareaRef.current.style.height = `${textareaRef.current.scrollHeight}px`;
- }
- };
-
- const handleEditMessage = () => {
- if (message.content != messageContent) {
- if (selectedConversation && onEdit) {
- onEdit({ ...message, content: messageContent });
- }
- }
- setIsEditing(false);
- };
-
- const handleDeleteMessage = () => {
- if (!selectedConversation) return;
-
- const { messages } = selectedConversation;
- const findIndex = messages.findIndex((elm) => elm === message);
-
- if (findIndex < 0) return;
-
- if (
- findIndex < messages.length - 1 &&
- messages[findIndex + 1].role === 'assistant'
- ) {
- messages.splice(findIndex, 2);
- } else {
- messages.splice(findIndex, 1);
- }
- const updatedConversation = {
- ...selectedConversation,
- messages,
- };
-
- const { single, all } = updateConversation(
- updatedConversation,
- conversations,
- );
- homeDispatch({ field: 'selectedConversation', value: single });
- homeDispatch({ field: 'conversations', value: all });
- };
-
- const handlePressEnter = (e: React.KeyboardEvent) => {
- if (e.key === 'Enter' && !isTyping && !e.shiftKey) {
- e.preventDefault();
- handleEditMessage();
- }
- };
-
- const copyOnClick = () => {
- if (!navigator.clipboard) return;
-
- navigator.clipboard.writeText(message.content).then(() => {
- setMessageCopied(true);
- setTimeout(() => {
- setMessageCopied(false);
- }, 2000);
- });
- };
-
- useEffect(() => {
- setMessageContent(message.content);
- }, [message.content]);
-
-
- useEffect(() => {
- if (textareaRef.current) {
- textareaRef.current.style.height = 'inherit';
- textareaRef.current.style.height = `${textareaRef.current.scrollHeight}px`;
- }
- }, [isEditing]);
-
- return (
-
-
-
- {message.role === 'assistant' ? (
-
- ) : (
-
- )}
-
-
-
- {message.role === 'user' ? (
-
- {isEditing ? (
-
-
setIsTyping(true)}
- onCompositionEnd={() => setIsTyping(false)}
- style={{
- fontFamily: 'inherit',
- fontSize: 'inherit',
- lineHeight: 'inherit',
- padding: '0',
- margin: '0',
- overflow: 'hidden',
- }}
- />
-
-
-
- {t('Save & Submit')}
-
- {
- setMessageContent(message.content);
- setIsEditing(false);
- }}
- >
- {t('Cancel')}
-
-
-
- ) : (
-
- {message.content}
-
- )}
-
- {!isEditing && (
-
-
-
-
-
-
-
-
- )}
-
- ) : (
-
-
▍
- }
- children[0] = (children[0] as string).replace("`▍`", "▍")
- }
- const match = /language-(\w+)/.exec(className || '');
- return !inline ? (
-
- ) : (
-
- {children}
-
- );
- },
- table({ children }) {
- return (
-
- );
- },
- th({ children }) {
- return (
-
- {children}
-
- );
- },
- td({ children }) {
- return (
-
- {children}
-
- );
- },
- }}
- >
- {`${message.content}${
- messageIsStreaming && messageIndex == (selectedConversation?.messages.length ?? 0) - 1 ? '`▍`' : ''
- }`}
-
-
-
- {messagedCopied ? (
-
- ) : (
-
-
-
- )}
-
-
- )}
-
-
-
- );
-});
-ChatMessage.displayName = 'ChatMessage';
diff --git a/spaces/megaaziib/RVC-V2-Huggingface-Version/i18n.py b/spaces/megaaziib/RVC-V2-Huggingface-Version/i18n.py
deleted file mode 100644
index 37f310fadd0b48b2f364877158fb2105d645fc03..0000000000000000000000000000000000000000
--- a/spaces/megaaziib/RVC-V2-Huggingface-Version/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = locale.getdefaultlocale()[0]  # getlocale can't identify the system's language ((None, None))
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "en_US"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- print("Use Language:", self.language)
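
`I18nAuto` above falls back to `en_US` when no translation file matches the detected locale, and unknown keys are returned unchanged via `language_map.get(key, key)`. A tiny standalone sketch of that lookup-with-fallback behaviour (the translation table is made up):

```python
# Standalone sketch of the lookup-with-fallback behaviour of I18nAuto above.
# The translation table below is made up for illustration.
language_map = {"Convert": "Convertir", "Model": "Modèle"}

def translate(key: str) -> str:
    # unknown keys are returned unchanged, exactly like language_map.get(key, key)
    return language_map.get(key, key)

print(translate("Convert"))        # "Convertir"
print(translate("Refresh list"))   # falls back to the key itself
```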
diff --git a/spaces/merle/PROTEIN_GENERATOR/model/SE3_network.py b/spaces/merle/PROTEIN_GENERATOR/model/SE3_network.py
deleted file mode 100644
index a1f01d116af573f19d84a91432498a68d5b47950..0000000000000000000000000000000000000000
--- a/spaces/merle/PROTEIN_GENERATOR/model/SE3_network.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import torch
-import torch.nn as nn
-
-#from equivariant_attention.modules import get_basis_and_r, GSE3Res, GNormBias
-#from equivariant_attention.modules import GConvSE3, GNormSE3
-#from equivariant_attention.fibers import Fiber
-
-from util_module import init_lecun_normal_param
-from se3_transformer.model import SE3Transformer
-from se3_transformer.model.fiber import Fiber
-
-class SE3TransformerWrapper(nn.Module):
- """SE(3) equivariant GCN with attention"""
- def __init__(self, num_layers=2, num_channels=32, num_degrees=3, n_heads=4, div=4,
- l0_in_features=32, l0_out_features=32,
- l1_in_features=3, l1_out_features=2,
- num_edge_features=32):
- super().__init__()
- # Build the network
- self.l1_in = l1_in_features
- #
- fiber_edge = Fiber({0: num_edge_features})
- if l1_out_features > 0:
- if l1_in_features > 0:
- fiber_in = Fiber({0: l0_in_features, 1: l1_in_features})
- fiber_hidden = Fiber.create(num_degrees, num_channels)
- fiber_out = Fiber({0: l0_out_features, 1: l1_out_features})
- else:
- fiber_in = Fiber({0: l0_in_features})
- fiber_hidden = Fiber.create(num_degrees, num_channels)
- fiber_out = Fiber({0: l0_out_features, 1: l1_out_features})
- else:
- if l1_in_features > 0:
- fiber_in = Fiber({0: l0_in_features, 1: l1_in_features})
- fiber_hidden = Fiber.create(num_degrees, num_channels)
- fiber_out = Fiber({0: l0_out_features})
- else:
- fiber_in = Fiber({0: l0_in_features})
- fiber_hidden = Fiber.create(num_degrees, num_channels)
- fiber_out = Fiber({0: l0_out_features})
-
- self.se3 = SE3Transformer(num_layers=num_layers,
- fiber_in=fiber_in,
- fiber_hidden=fiber_hidden,
- fiber_out = fiber_out,
- num_heads=n_heads,
- channels_div=div,
- fiber_edge=fiber_edge,
- use_layer_norm=True)
- #use_layer_norm=False)
-
- self.reset_parameter()
-
- def reset_parameter(self):
-
- # make sure linear layers before ReLU are initialized with kaiming_normal_
- for n, p in self.se3.named_parameters():
- if "bias" in n:
- nn.init.zeros_(p)
- elif len(p.shape) == 1:
- continue
- else:
- if "radial_func" not in n:
- p = init_lecun_normal_param(p)
- else:
- if "net.6" in n:
- nn.init.zeros_(p)
- else:
- nn.init.kaiming_normal_(p, nonlinearity='relu')
-
- # make last layers to be zero-initialized
- #self.se3.graph_modules[-1].to_kernel_self['0'] = init_lecun_normal_param(self.se3.graph_modules[-1].to_kernel_self['0'])
- #self.se3.graph_modules[-1].to_kernel_self['1'] = init_lecun_normal_param(self.se3.graph_modules[-1].to_kernel_self['1'])
- nn.init.zeros_(self.se3.graph_modules[-1].to_kernel_self['0'])
- nn.init.zeros_(self.se3.graph_modules[-1].to_kernel_self['1'])
-
- def forward(self, G, type_0_features, type_1_features=None, edge_features=None):
- if self.l1_in > 0:
- node_features = {'0': type_0_features, '1': type_1_features}
- else:
- node_features = {'0': type_0_features}
- edge_features = {'0': edge_features}
- return self.se3(G, node_features, edge_features)
diff --git a/spaces/merve/data-leak/source/style.css b/spaces/merve/data-leak/source/style.css
deleted file mode 100644
index ad619bacc7b5b7f61788de06850a80ccc7561b83..0000000000000000000000000000000000000000
--- a/spaces/merve/data-leak/source/style.css
+++ /dev/null
@@ -1,434 +0,0 @@
-/* Copyright 2020 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-
-
-
-html{
- background-color: #fff;
- font-weight: normal;
-}
-
-
-body{
- max-width: 850px;
- margin: 0px auto;
- font-family: 'Roboto Slab', serif;
- font-family: 'Roboto', Helvetica, sans-serif;
- font-weight: 300;
- line-height: 1.55em;
- font-size: 16px;
- margin-top: 5px;
- margin-bottom: 80px;
- color: #3C4043;
- font-smoothing: antialiased;
-}
-
-@media (max-width: 760px){
- body{
- padding: 5px;
- }
-}
-
-p{
- line-height: 1.55em;
- font-size: 16px;
- /*line-height: 28px;*/
- color: #3C4043;
- letter-spacing: 0.1px;
-}
-
-a{
- color: black;
-}
-
-.header{
- position: relative;
- color: black;
- font-size: 16px;
- height: 24px;
- overflow: visible;
- font-family: 'Google Sans', sans-serif;
- font-weight: 100;
- font-size: 20px;
- margin: 0px auto;
- margin-top: 15px;
- padding-left: 20px;
-}
-.header-left{
- vertical-align: middle;
- font-size: 20px;
- margin: 0px auto;
- width: 300px;
-}
-.header-left img{
- width: 100px;
- opacity: 1;
- top: 0px;
- position: relative;
-}
-.header-left a:first-child{
- float: left;
-}
-.header-left a:last-child{
- position: relative;
- top: 8px;
- margin-left: 20px;
- float: left;
-}
-.header-left a{
- line-height: 20px;
- -webkit-font-smoothing: antialiased;
- letter-spacing: 0.1px;
- font-size: 20px;
- text-transform: uppercase;
- font-family: "Google Sans";
- text-align: right;
- -webkit-tap-highlight-color: rgba(255,255,255,0);
- font-weight: 300;
- text-decoration: none;
- /*margin: 50px 0 0 50px;*/
- display: inline-block;
- color: #00695C !important;
-}
-.header-left a:hover{
- color: #ff4081 !important;
-}
-
-@media (max-width: 750px){
- .header-right span{
- opacity: 0;
- }
-}
-.header a{
- /*opacity: .5;*/
- text-decoration: none;
-}
-.header a:hover{
- opacity: 1
-}
-
-
-p{
- max-width: 750px;
- margin: 0px auto;
- margin-block-start: 1em;
- margin-block-end: 1em;
-}
-
-/*TODO mobile padding?*/
-
-h3{
- max-width: 750px;
- margin: 0px auto;
- font-weight: 100;
- line-height: 1.3em;
-}
-
-h1,h2,h3,h4,h5{
- font-family: 'Google Sans', sans-serif;
- font-weight: 100;
- margin-top: 1.5em;
- margin-bottom: .5em;
-}
-h1{
- font-weight: 100;
- font-size: 34px;
- margin-bottom: .5em;
- line-height: 1.3em;
- margin-top: 1.4em;
- text-align: center;
- font-family: "Google Sans";
- /*color: #00695C;*/
-}
-h2,h3,h4,h5{
- font-size: 22px;
-}
-
-/*wp classes*/
-img.aligncenter {
- display: block;
- margin: auto;
- max-width: 750px;
-}
-
-
-
-html{
- overflow-x: hidden;
-}
-
-.full-width{
- width: 100vw;
- position: relative;
- left: 50%;
- right: 50%;
- margin-left: -50vw;
- margin-right: -50vw;
- overflow: hidden;
-}
-
-.full-width img{
- max-width: 100%;
- display: block;
- margin: 0 auto;
-}
-
-.full-width.px980 img, .full-width.px980 div{
- max-width: 980px;
-}
-.full-width > div, .full-width > div > div{
- margin: 0px auto;
-}
-.full-width.px750 img, .full-width.px750 div{
- max-width: 750px;
-}
-
-draft{
- display: none;
- /*visibility: collapse;*/
-}
-
-
-h1, .post-summary{
- max-width: 750px;
- margin: 0px auto;
-}
-.post-summary{
- font-size: 19px;
- margin-bottom: 65px;
- line-height: 1.5em;
-}
-
-h1{
- margin-bottom: 40px;
- margin-top: 50px;
-}
-
-.post-tags{
- line-height: 1.55em;
- font-style: italic;
-}
-
-.thumbnail-caption{
- font-style: italic;
-}
-
-
-
-
-
-
-/*graph scroll stuff*/
-
-#container{
- position: relative;
- width: 900px;
- margin-left: -25px;
-}
-
-#container h3{
- line-height: 1.3em;
-}
-
-
-
-
-
-
-.tooltip {
- top: -1000px;
- position: fixed;
- padding: 10px;
- background: rgba(255, 255, 255, .90);
- border: 1px solid lightgray;
- pointer-events: none;
- width: 300px;
-}
-.tooltip-hidden{
- opacity: 0;
- transition: all .3s;
- transition-delay: .1s;
-}
-
-@media (max-width: 590px){
- div.tooltip{
- bottom: -1px;
- width: calc(100%);
- left: -1px !important;
- right: -1px !important;
- top: auto !important;
- width: auto !important;
- }
-}
-
-
-
-
-.footend{
- margin-left: -9px;
- width: 10px;
-}
-
-
-.footstart, .footend{
- text-decoration: none;
-}
-
-.footstart:hover, .footend:hover{
- text-decoration: underline;
-}
-
-
-
-
-#recirc{
-}
-
-#recirc .img{
- outline: 1px solid #ccc;
-}
-
-#recirc .post:hover .img{
- outline: 1px solid #333;
-}
-
-#recirc .title{
- /*color: #00695C;*/
- font-size: 18px;
- font-weight: 500;
- margin-bottom: -10px;
- /*height: 10px !important;*/
- /*opacity: 0;*/
-}
-
-#recirc .post:hover .title{
- text-decoration: underline !important;
-}
-
-#recirc .post{
- margin-bottom: 30px;
-}
-
-
-
-
-
-
-
-
-
-
-
-
-
-/*Nav Style*/
-#nav-container{
- width: 100vw;
- margin-left: calc(50% - 50vw);
- display: inline-block;
- /*display: none;*/
-}
-#navigation {
- margin: 0 auto;
- max-width: 1260px;
- -webkit-font-smoothing: antialiased;
- font-family: 'Open Sans', Helvetica, sans-serif;
- font-weight: 300;
- letter-spacing: 0.1px;
-
-
- color: rgba(0,0,0,.87);
- font-size: 14px;
- line-height: 20px;
- -webkit-font-smoothing: antialiased;
- font-family: 'Open Sans', Helvetica, sans-serif;
- font-weight: 300;
- letter-spacing: 0.1px;
- display: flex;
- flex-flow: row wrap;
- align-items: stretch;
- padding: 8px;
- margin: 0 auto;
- max-width: 1260px;
-}
-.mdl-grid {
- display: -webkit-flex;
- display: -ms-flexbox;
- display: flex;
- -webkit-flex-flow: row wrap;
- -ms-flex-flow: row wrap;
- flex-flow: row wrap;
- margin: 0 auto;
- -webkit-align-items: stretch;
- -ms-flex-align: stretch;
- align-items: stretch;
-}
-
-.mdl-cell {
- box-sizing: border-box;
-}
-
-.nav-links {
- font-size: 20px;
- text-transform: uppercase;
- font-family: "Google Sans";
- color: #4a4a4a;
- text-align: right;
-}
-
-.nav-logo-small {
- width: 110px;
- margin: 42px 0 0 0;
-}
-.nav-links .selected {
- color: #00695C !important;
-}
-/*.nav-links a:visited {
- color: #4a4a4a;
-}
-a:visited {
- color: #7B1FA2;
-}
-*/
-.nav-links a {
- color: inherit;
- text-decoration: none;
- margin: 50px 0 0 50px;
- display: inline-block;
-}
-
-
-@media screen and (max-width: 1035px){
- .nav-links {
- font-size: 16px;
- }
-}
-
-.nav-links{
- line-height: 20px;
- -webkit-font-smoothing: antialiased;
- font-weight: 300;
- letter-spacing: 0.1px;
- box-sizing: border-box;
- margin: 8px;
- width: calc(66.6666666667% - 16px);
- font-size: 20px;
- text-transform: uppercase;
- font-family: "Google Sans";
- color: #4a4a4a;
- text-align: right;
-}
-
diff --git a/spaces/merve/data-leak/source/third_party/swoopy-drag.js b/spaces/merve/data-leak/source/third_party/swoopy-drag.js
deleted file mode 100644
index 3c740601b5111efdf47f0fd5da9d41de58ceb757..0000000000000000000000000000000000000000
--- a/spaces/merve/data-leak/source/third_party/swoopy-drag.js
+++ /dev/null
@@ -1,193 +0,0 @@
-// https://github.com/1wheel/swoopy-drag Copyright (c) 2016 Adam Pearce
-
-(function (global, factory) {
- typeof exports === 'object' && typeof module !== 'undefined' ? factory(exports, require('d3')) :
- typeof define === 'function' && define.amd ? define(['exports', 'd3'], factory) :
- (factory((global.d3 = global.d3 || {}),global.d3));
-}(this, function (exports,d3) { 'use strict';
-
- function swoopyDrag(){
- var x = function(d){ return d }
- var y = function(d){ return d }
-
- var annotations = []
- var annotationSel
-
- var draggable = false
-
- var dispatch = d3.dispatch('drag')
-
- var textDrag = d3.drag()
- .on('drag', function(d){
- var x = d3.event.x
- var y = d3.event.y
- d.textOffset = [x, y].map(Math.round)
-
- d3.select(this).call(translate, d.textOffset)
-
- dispatch.call('drag')
- })
- .subject(function(d){ return {x: d.textOffset[0], y: d.textOffset[1]} })
-
- var circleDrag = d3.drag()
- .on('drag', function(d){
- var x = d3.event.x
- var y = d3.event.y
- d.pos = [x, y].map(Math.round)
-
- var parentSel = d3.select(this.parentNode)
-
- var path = ''
- var points = parentSel.selectAll('circle').data()
- if (points[0].type == 'A'){
- path = calcCirclePath(points)
- } else{
- points.forEach(function(d){ path = path + d.type + d.pos })
- }
-
- parentSel.select('path').attr('d', path).datum().path = path
- d3.select(this).call(translate, d.pos)
-
- dispatch.call('drag')
- })
- .subject(function(d){ return {x: d.pos[0], y: d.pos[1]} })
-
-
- var rv = function(sel){
- annotationSel = sel.html('').selectAll('g')
- .data(annotations).enter()
- .append('g')
- .call(translate, function(d){ return [x(d), y(d)] })
-
- var textSel = annotationSel.append('text')
- .call(translate, ƒ('textOffset'))
- .text(ƒ('text'))
-
- annotationSel.append('path')
- .attr('d', ƒ('path'))
-
- if (!draggable) return
-
- annotationSel.style('cursor', 'pointer')
- textSel.call(textDrag)
-
- annotationSel.selectAll('circle').data(function(d){
- var points = []
-
- if (~d.path.indexOf('A')){
- //handle arc paths separately -- only one circle supported
- var pathNode = d3.select(this).select('path').node()
- var l = pathNode.getTotalLength()
-
- points = [0, .5, 1].map(function(d){
- var p = pathNode.getPointAtLength(d*l)
- return {pos: [p.x, p.y], type: 'A'}
- })
- } else{
- var i = 1
- var type = 'M'
- var commas = 0
-
- for (var j = 1; j < d.path.length; j++){
- var curChar = d.path[j]
- if (curChar == ',') commas++
- if (curChar == 'L' || curChar == 'C' || commas == 2){
- points.push({pos: d.path.slice(i, j).split(','), type: type})
- type = curChar
- i = j + 1
- commas = 0
- }
- }
-
- points.push({pos: d.path.slice(i, j).split(','), type: type})
- }
-
- return points
- }).enter().append('circle')
- .attr('r', 8)
- .attr('fill', 'rgba(0,0,0,0)')
- .attr('stroke', '#333')
- .attr('stroke-dasharray', '2 2')
- .call(translate, ƒ('pos'))
- .call(circleDrag)
-
- dispatch.call('drag')
- }
-
-
- rv.annotations = function(_x){
- if (typeof(_x) == 'undefined') return annotations
- annotations = _x
- return rv
- }
- rv.x = function(_x){
- if (typeof(_x) == 'undefined') return x
- x = _x
- return rv
- }
- rv.y = function(_x){
- if (typeof(_x) == 'undefined') return y
- y = _x
- return rv
- }
- rv.draggable = function(_x){
- if (typeof(_x) == 'undefined') return draggable
- draggable = _x
- return rv
- }
- rv.on = function() {
- var value = dispatch.on.apply(dispatch, arguments);
- return value === dispatch ? rv : value;
- }
-
- return rv
-
- //convert 3 points to an Arc Path
- function calcCirclePath(points){
- var a = points[0].pos
- var b = points[2].pos
- var c = points[1].pos
-
- var A = dist(b, c)
- var B = dist(c, a)
- var C = dist(a, b)
-
- var angle = Math.acos((A*A + B*B - C*C)/(2*A*B))
-
- //calc radius of circle
- var K = .5*A*B*Math.sin(angle)
- var r = A*B*C/4/K
- r = Math.round(r*1000)/1000
-
- //large arc flag
- var laf = +(Math.PI/2 > angle)
-
- //sweep flag
- var saf = +((b[0] - a[0])*(c[1] - a[1]) - (b[1] - a[1])*(c[0] - a[0]) < 0)
-
- return ['M', a, 'A', r, r, 0, laf, saf, b].join(' ')
- }
-
- function dist(a, b){
- return Math.sqrt(
- Math.pow(a[0] - b[0], 2) +
- Math.pow(a[1] - b[1], 2))
- }
-
-
- //no jetpack dependency
- function translate(sel, pos){
- sel.attr('transform', function(d){
- var posStr = typeof(pos) == 'function' ? pos(d) : pos
- return 'translate(' + posStr + ')'
- })
- }
-
- function ƒ(str){ return function(d){ return d[str] } }
- }
-
- exports.swoopyDrag = swoopyDrag;
-
- Object.defineProperty(exports, '__esModule', { value: true });
-
-}));
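
`calcCirclePath` above converts three dragged control points into an SVG arc by computing the circumradius of the triangle they form: the law of cosines gives the angle at the middle point, and R = abc / 4K gives the radius. A quick Python re-derivation of the same geometry (the sample points are arbitrary):

```python
# Python re-derivation of the circumradius computation in calcCirclePath above.
# The three points are arbitrary; only the math is being checked.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def circumradius(a, b, c):
    A, B, C = dist(b, c), dist(c, a), dist(a, b)               # side lengths
    angle = math.acos((A * A + B * B - C * C) / (2 * A * B))   # angle at c, law of cosines
    K = 0.5 * A * B * math.sin(angle)                          # triangle area
    return A * B * C / (4 * K)                                 # R = abc / 4K

print(round(circumradius((0, 0), (4, 0), (2, 2)), 3))  # 2.0 for this symmetric example
```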
diff --git a/spaces/merve/data-leak/source/uncertainty-calibration/style.css b/spaces/merve/data-leak/source/uncertainty-calibration/style.css
deleted file mode 100644
index 8073cf0a59eac0be0e293b35af5255c40c063e21..0000000000000000000000000000000000000000
--- a/spaces/merve/data-leak/source/uncertainty-calibration/style.css
+++ /dev/null
@@ -1,89 +0,0 @@
-svg{
- overflow: visible;
-}
-
-text{
- fill: #202124;
- user-select: none;
-}
-
-.domain{
- display: none;
-}
-
-.thresholds, .threshold > g{
- cursor: pointer;
-}
-
-svg{
- user-select: none;
-}
-
-text.axis-label .legend-text{
- font-family: 'Roboto';
- font-style: normal;
- font-size: 16px;
- line-height: 20px;
- /* identical to box height, or 125% */
-
- fill: #000;
-}
-
-.axis text{
- font-size: 10px;
-}
-
-text{
- text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;
-}
-
-
-
-
-.bucket text{
- /*text-shadow: 0 1px 0 #000, 1px 0 0 #000, 0 -1px 0 #000, -1px 0 0 #000;*/
- /*fill: #fff;*/
- font-size: 11px;
-}
-
-
-.big-text{
- font-variant-numeric: tabular-nums;
- font-size: 16px;
-}
-
-#card{
- display: flex;
- flex-direction: column;
- align-items: flex-start;
- padding: 24px 24px;
- gap: 6px;
-
- background: #EDF4EC;
- border: 1px solid #34A853;
- box-sizing: border-box;
- border-radius: 4px;
-}
-
-text.val-text{
- background: #DFE9E1;
- border: 1px solid #476C63;
- box-sizing: border-box;
- border-radius: 4px;
- fill: #2A4C4A;
- text-shadow: none;
-}
-
-.val-box{
- fill: #DFE9E1;
- stroke: #476C63;
- opacity: 1;
-}
-
-.legend-title{
- fill: #002622;
-}
-
-h3 {
- color: #00695C;
-}
\ No newline at end of file
diff --git a/spaces/merve/measuring-fairness/public/private-and-fair/util.js b/spaces/merve/measuring-fairness/public/private-and-fair/util.js
deleted file mode 100644
index 76a4bccf20f893c87bcb5088391cd9aa73c312e2..0000000000000000000000000000000000000000
--- a/spaces/merve/measuring-fairness/public/private-and-fair/util.js
+++ /dev/null
@@ -1,125 +0,0 @@
-window.ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden')
-window.util = (function(){
-
- var data = window.__datacache = window.__datacache || {}
-
- async function getFile(path){
- var [slug, type] = path.split('.')
- if (data[slug]) return data[slug]
-
- var datadir = 'https://storage.googleapis.com/uncertainty-over-space/explore-dp/'
-
- var res = await fetch(datadir + path + '?t=5')
- if (type == 'csv'){
- var parsed = d3.csvParse(await res.text())
- } else if (type == 'npy'){
- var parsed = npyjs.parse(await(res).arrayBuffer())
- } else if (type == 'json'){
- var parsed = await res.json()
- } else{
- throw 'unknown type'
- }
-
- data[slug] = parsed
-
- return parsed
- }
-
- async function drawDigit(ctx, index, s=4, offsetX=0, offsetY=0){
- var digitMetadata = await util.getFile('mnist_train.csv')
- if (!digitMetadata[0].label) decorateDigitMetadata(digitMetadata)
-
- var {label, labelIndex} = digitMetadata[index]
-
- if (!label) console.log('missing ', index)
- var rawdigits = await util.getFile(`cns-cache/mnist_train_raw_${label}.npy`)
- if (!rawdigits) return console.log('digits not loaded')
-
- d3.cross(d3.range(28), d3.range(28)).forEach(([i, j]) => {
- var r = rawdigits.data[labelIndex*28*28 + j*28 + i + 0]
- var g = rawdigits.data[labelIndex*28*28 + j*28 + i + 0]
- var b = rawdigits.data[labelIndex*28*28 + j*28 + i + 0]
-
- ctx.beginPath()
- ctx.fillStyle = `rgb(${r},${g},${b})`
- ctx.rect(i*s + offsetX, j*s + offsetY, s, s)
- ctx.fill()
- })
- }
-
- function decorateDigitMetadata(digitMetadata){
- digitMetadata.forEach(d => {
- delete d['']
- d.i = +d.i
- d.label = +d.y
- d.priv_order = +d.priv_order
- })
-
- var byLabel = d3.nestBy(digitMetadata, d => d.y)
- byLabel = _.sortBy(byLabel, d => d.key)
- byLabel.forEach(digit => {
- digit.forEach((d, i) => d.labelIndex = i)
- })
-
- return {digitMetadata, byLabel}
- }
-
- var colors = [d3.interpolateTurbo(.15), d3.interpolateTurbo(.85)]
- var epsilonExtent = [400000, .01]
- // var epsilonExtent = [65, .01]
-
-
- var addAxisLabel = (c, xText, yText, xOffset=40, yOffset=-40) => {
- c.svg.select('.x').append('g')
- .translate([c.width/2, xOffset])
- .append('text.axis-label')
- .text(xText)
- .at({textAnchor: 'middle'})
- .st({fill: '#000', fontSize: 14})
-
- c.svg.select('.y')
- .append('g')
- .translate([yOffset, c.height/2])
- .append('text.axis-label')
- .text(yText)
- .at({textAnchor: 'middle', transform: 'rotate(-90)'})
- .st({fill: '#000', fontSize: 14})
- }
-
- var ggPlotBg = (c, isBlack=true) => {
- if (!isBlack){
- c.svg.append('rect')
- .at({width: c.width, height: c.height, fill: '#eee'})
- .lower()
- }
-
- c.svg.selectAll('.tick').selectAll('line').remove()
- c.svg.selectAll('.y .tick')
- .append('path').at({d: 'M 0 0 H ' + c.width, stroke: '#fff', strokeWidth: 1})
- c.svg.selectAll('.y text').at({x: -3})
- c.svg.selectAll('.x .tick')
- .append('path').at({d: 'M 0 0 V -' + c.height, stroke: '#fff', strokeWidth: 1})
- }
-
-
- return {data, getFile, drawDigit, colors, epsilonExtent, addAxisLabel, ggPlotBg, decorateDigitMetadata}
-})()
-
-
-
-
-
-
-// mnist_train.csv
-// mnist_train_raw.npy
-// umap_train_0.npy
-// umap_train_1.npy
-// umap_train_2.npy
-// umap_train_3.npy
-// umap_train_4.npy
-// umap_train_5.npy
-// umap_train_6.npy
-// umap_train_7.npy
-// umap_train_8.npy
-// umap_train_9.npy
-// umap_train_all.npy
diff --git a/spaces/mfernezir/VanillaChatbot/app.py b/spaces/mfernezir/VanillaChatbot/app.py
deleted file mode 100644
index a3c103572747c4917d287f0aec1f7cc71d35bf49..0000000000000000000000000000000000000000
--- a/spaces/mfernezir/VanillaChatbot/app.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from transformers import pipeline, Conversation
-import gradio as gr
-
-chatbot = pipeline(model="facebook/blenderbot-400M-distill")
-
-message_list = []
-response_list = []
-
-def vanilla_chatbot(message, history):
- conversation = Conversation(
- text=message, past_user_inputs=message_list,
- generated_responses=response_list)
- conversation = chatbot(conversation)
-
- return conversation.generated_responses[-1]
-
-demo_chatbot = gr.ChatInterface(
- vanilla_chatbot, title="Vanilla Chatbot",
- description="Enter text to start chatting.")
-
-demo_chatbot.launch(share=True)
diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/README.md b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/README.md
deleted file mode 100644
index 325c7b4fe1ee3e4b72f48c0849b0c4a7136f368d..0000000000000000000000000000000000000000
--- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/README.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# StyleGAN 2 in PyTorch
-
-Implementation of Analyzing and Improving the Image Quality of StyleGAN (https://arxiv.org/abs/1912.04958) in PyTorch
-
-## Notice
-
-I have tried to match the official implementation as closely as possible, but there may be some details I missed, so please use this implementation with care.
-
-## Requirements
-
-I have tested on:
-
-* PyTorch 1.3.1
-* CUDA 10.1/10.2
-
-## Usage
-
-First create lmdb datasets:
-
-> python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH
-
-This will convert images to JPEG and pre-resize them. This implementation does not use progressive growing, but you can create multiple-resolution datasets by passing a comma-separated list of sizes, in case you want to try other resolutions later.
-
-Then you can train the model in a distributed setting:
-
-> python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train.py --batch BATCH_SIZE LMDB_PATH
-
-train.py supports Weights & Biases logging. If you want to use it, add the --wandb argument to the script.
-
-### Convert weight from official checkpoints
-
-You need to clone the official repository (https://github.com/NVlabs/stylegan2), as it is required to load the official checkpoints.
-
-Next, create a conda environment with TF-GPU and Torch-CPU (using GPU for both results in CUDA version mismatches):
-`conda create -n tf_torch python=3.7 requests tensorflow-gpu=1.14 cudatoolkit=10.0 numpy=1.14 pytorch=1.6 torchvision cpuonly -c pytorch`
-
-For example, if you cloned the repository to ~/stylegan2 and downloaded stylegan2-ffhq-config-f.pkl, you can convert it like this:
-
-> python convert_weight.py --repo ~/stylegan2 stylegan2-ffhq-config-f.pkl
-
-This will create a converted stylegan2-ffhq-config-f.pt file.
-
-If using GCC, you might have to set `-D_GLIBCXX_USE_CXX11_ABI=1` in `~/stylegan2/dnnlib/tflib/custom_ops.py`.
-
-### Generate samples
-
-> python generate.py --sample N_FACES --pics N_PICS --ckpt PATH_CHECKPOINT
-
-You should pass the matching size argument (for example, --size 256) if you trained at a different resolution.
-
-### Project images to latent spaces
-
-> python projector.py --ckpt [CHECKPOINT] --size [GENERATOR_OUTPUT_SIZE] FILE1 FILE2 ...
-
-## Pretrained Checkpoints
-
-[Link](https://drive.google.com/open?id=1PQutd-JboOCOZqmd95XWxWrO8gGEvRcO)
-
-I have trained the 256px model on FFHQ for 550k iterations and got an FID of about 4.5. Differences in data preprocessing, resolution, or the training loop could account for the gap, but I currently don't know the exact reason for the FID differences.
-
-## Samples
-
-
-
-At 110,000 iterations. (trained on 3.52M images)
-
-### Samples from converted weights
-
-
-
-Sample from FFHQ (1024px)
-
-
-
-Sample from LSUN Church (256px)
-
-## License
-
-Model details and custom CUDA kernel code are from the official repository: https://github.com/NVlabs/stylegan2
-
-Code for Learned Perceptual Image Patch Similarity (LPIPS) came from https://github.com/richzhang/PerceptualSimilarity
-
-To match FID scores more closely to the official TensorFlow implementation, I have used the FID Inception V3 implementation from https://github.com/mseitzer/pytorch-fid
diff --git a/spaces/mila-quebec/SAI/src/app_utils.py b/spaces/mila-quebec/SAI/src/app_utils.py
deleted file mode 100644
index faa6e2ab3327542a119381749cf45169d8868c69..0000000000000000000000000000000000000000
--- a/spaces/mila-quebec/SAI/src/app_utils.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import logging
-import os
-import uuid
-from datetime import datetime, timezone
-from urllib.parse import quote_plus
-
-import gradio as gr
-import pandas as pd
-import pymongo
-from pymongo import MongoClient
-
-from buster.tokenizers import Tokenizer
-
-logger = logging.getLogger(__name__)
-logging.basicConfig(level=logging.INFO)
-
-
-class WordTokenizer(Tokenizer):
- """Naive word-level tokenizer
-
-    The original tokenizer from OpenAI eats way too much RAM.
- This is a naive word count tokenizer to be used instead."""
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
-
- def encode(self, string):
- return string.split()
-
- def decode(self, encoded):
- return " ".join(encoded)
-
-
-def get_logging_db_name(instance_type: str) -> str:
- assert instance_type in ["dev", "prod", "local", "test"], "Invalid instance_type declared."
- return f"ai4h-databank-{instance_type}"
-
-
-def get_session_id() -> str:
- """Generate a uuid for each user."""
- return str(uuid.uuid1())
-
-
-def verify_required_env_vars(required_vars: list[str]):
- unset_vars = [var for var in required_vars if os.getenv(var) is None]
- if len(unset_vars) > 0:
- logger.warning(f"Lisf of env. variables that weren't set: {unset_vars}")
- else:
- logger.info("All environment variables are set appropriately.")
-
-
-def make_uri(username: str, password: str, cluster: str) -> str:
- """Create mongodb uri."""
- uri = (
- "mongodb+srv://"
- + quote_plus(username)
- + ":"
- + quote_plus(password)
- + "@"
- + cluster
- + "/?retryWrites=true&w=majority"
- )
- return uri
-
-
-def init_db(mongo_uri: str, db_name: str) -> pymongo.database.Database:
- """
- Initialize and return a connection to the specified MongoDB database.
-
- Parameters:
- - mongo_uri (str): The connection string for the MongoDB. This can be formed using `make_uri` function.
- - db_name (str): The name of the MongoDB database to connect to.
-
- Returns:
- pymongo.database.Database: The connected database object.
-
- Note:
- If there's a problem with the connection, an exception will be logged and the program will terminate.
- """
-
- try:
- mongodb_client = MongoClient(mongo_uri)
- # Ping the database to make sure authentication is good
- mongodb_client.admin.command("ping")
- database = mongodb_client[db_name]
- logger.info("Succesfully connected to the MongoDB database")
- return database
-    except Exception:
-        logger.exception("Something went wrong connecting to mongodb")
-        raise
-
-
-def get_utc_time() -> str:
- return str(datetime.now(timezone.utc))
-
-
-def check_auth(username: str, password: str) -> bool:
- """Check if authentication succeeds or not.
-
- The authentication leverages the built-in gradio authentication. We use a shared password among users.
- It is temporary for developing the PoC. Proper authentication needs to be implemented in the future.
-    We allow a valid username to be any username beginning with 'databank-', which lets us differentiate between users easily.
- """
-
- # get auth information from env. vars, they need to be set
- USERNAME = os.environ["AI4H_APP_USERNAME"]
- PASSWORD = os.environ["AI4H_APP_PASSWORD"]
-
- valid_user = username.startswith(USERNAME)
- valid_password = password == PASSWORD
- is_auth = valid_user and valid_password
- logger.info(f"Log-in attempted by {username=}. {is_auth=}")
- return is_auth
-
-
-def format_sources(matched_documents: pd.DataFrame) -> list[str]:
- formatted_sources = []
-
-    # We first group on the title of the document, so that two chunks from the same doc get lumped together
- grouped_df = matched_documents.groupby("title")
-
- # Here we just rank the titles by highest to lowest similarity score...
- ranked_titles = (
- grouped_df.apply(lambda x: x.similarity_to_answer.max()).sort_values(ascending=False).index.to_list()
- )
-
- for title in ranked_titles:
- df = grouped_df.get_group(title)
-
-        # Prefix each chunk from the same source with a link emoji before joining them
- chunks = " ".join(["🔗 " + chunk for chunk in df.content.to_list()])
-
- url = df.url.to_list()[0]
- source = df.source.to_list()[0]
- year = df.year.to_list()[0]
- country = df.country.to_list()[0]
-
- formatted_sources.append(
- f"""
-
-### Publication: [{title}]({url})
-**Year of publication:** {year}
-**Source:** {source}
-**Country:** {country}
-
-**Identified sections**:
-{chunks}
-"""
- )
-
- return formatted_sources
-
-
-def pad_sources(sources: list[str], max_sources: int) -> list[str]:
- """Pad sources with empty strings to ensure that the number of sources is always max_sources."""
- k = len(sources)
- return sources + [""] * (max_sources - k)
-
-
-def add_sources(completion, max_sources: int):
- if not completion.question_relevant:
- # Question was not relevant, don't bother doing anything else...
- formatted_sources = [""]
- else:
- formatted_sources = format_sources(completion.matched_documents)
-
- formatted_sources = pad_sources(formatted_sources, max_sources)
-
- sources_textboxes = []
- for source in formatted_sources:
- visible = False if source == "" else True
- t = gr.Markdown(source, latex_delimiters=[], elem_classes="source", visible=visible)
- sources_textboxes.append(t)
- return sources_textboxes
diff --git a/spaces/monra/freegpt-webui/g4f/__init__.py b/spaces/monra/freegpt-webui/g4f/__init__.py
deleted file mode 100644
index a0b4bac6aa4de9c0449095a3874c2cb9716169d7..0000000000000000000000000000000000000000
--- a/spaces/monra/freegpt-webui/g4f/__init__.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import sys
-from . import Provider
-from g4f.models import Model, ModelUtils
-
-
-class ChatCompletion:
- @staticmethod
- def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs):
- kwargs['auth'] = auth
-
- if provider and provider.needs_auth and not auth:
- print(
- f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr)
- sys.exit(1)
-
- try:
- if isinstance(model, str):
- try:
- model = ModelUtils.convert[model]
- except KeyError:
- raise Exception(f'The model: {model} does not exist')
-
- engine = model.best_provider if not provider else provider
-
- if not engine.supports_stream and stream == True:
- print(
- f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr)
- sys.exit(1)
-
- print(f'Using {engine.__name__} provider')
-
- return (engine._create_completion(model.name, messages, stream, **kwargs)
- if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs)))
- except TypeError as e:
- print(e)
- arg: str = str(e).split("'")[1]
- print(
- f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr)
- sys.exit(1)
diff --git a/spaces/monster-labs/Controlnet-QRCode-Monster-V1/README.md b/spaces/monster-labs/Controlnet-QRCode-Monster-V1/README.md
deleted file mode 100644
index 63aa08fe133d54c8fe5971a76d431f34d4a662bc..0000000000000000000000000000000000000000
--- a/spaces/monster-labs/Controlnet-QRCode-Monster-V1/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Controlnet QRCode Monster V1
-emoji: ⚡
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: openrail++
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mvnhat/gpt-qa-demo/app.py b/spaces/mvnhat/gpt-qa-demo/app.py
deleted file mode 100644
index d79264b838af8bd1980090dd45c4eeb79e52ef7d..0000000000000000000000000000000000000000
--- a/spaces/mvnhat/gpt-qa-demo/app.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import os
-from dotenv import load_dotenv
-
-load_dotenv()
-
-os.environ["OPENAI_API_TYPE"]=os.getenv("OPENAI_API_TYPE")
-os.environ["OPENAI_API_VERSION"]=os.getenv("OPENAI_API_VERSION")
-os.environ["OPENAI_API_BASE"]=os.getenv("OPENAI_API_BASE")
-os.environ["OPENAI_API_KEY"]=os.getenv("OPENAI_API_KEY")
-
-import gradio as gr
-
-from langchain.document_loaders import UnstructuredPDFLoader
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.embeddings.openai import OpenAIEmbeddings
-from langchain.vectorstores import Chroma
-from langchain.chat_models import AzureChatOpenAI
-from langchain.chains import ChatVectorDBChain
-
-
-# Define models
-embeddings = OpenAIEmbeddings(model="my-embedding", chunk_size=1)
-llm35_chat = AzureChatOpenAI(
- model_name="gpt-35-turbo",
- deployment_name="my-chatbot-35",
- max_tokens=1200
-)
-
-# Define prompts
-from langchain.prompts.chat import (
- ChatPromptTemplate,
- SystemMessagePromptTemplate,
- AIMessagePromptTemplate,
- HumanMessagePromptTemplate,
-)
-from langchain.schema import (
- AIMessage,
- HumanMessage,
- SystemMessage
-)
-
-system_template="""Use the following pieces of context to answer the users question.
-If you don't know the answer, just say that you don't know, don't try to make up an answer.
-If the query is unrelated, just say that "The query is unrelated to the document".
-The query consists of previous conversation between the user and you.
-----------------
-{context}"""
-
-messages = [
- SystemMessagePromptTemplate.from_template(system_template),
- HumanMessagePromptTemplate.from_template("{question}")
-]
-prompt = ChatPromptTemplate.from_messages(messages)
-
-
-def convert_PDF(pdf_obj, chatbot_engine):
- # Check if pdf_obj is a `list`
- if isinstance(pdf_obj, list):
- pdf_obj = pdf_obj[0]
-
- # Load pdf using Unstructured
- file_path = pdf_obj.name
- loader = UnstructuredPDFLoader(file_path)
- data = loader.load()
-
- # Split into chunks
- text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
- texts = text_splitter.split_documents(data)
-
- print(texts[2])
-
- # Create a Chroma vector store (temp)
- vectorstore = Chroma.from_documents(texts, embeddings)
-
- # Create a ChatVectorDBChain
- chatbot_engine = ChatVectorDBChain.from_llm(
- llm35_chat,
- vectorstore,
- qa_prompt=prompt,
- return_source_documents=True,
- top_k_docs_for_context=3)
-
- # Character count as text output
- output = f'There are {len(data[0].page_content)} characters in your document, which is approximately {len(data[0].page_content) // 4} tokens.'
-
- return output, chatbot_engine
-
-
-def chat(message, chat_history, chatbot_engine):
- result = chatbot_engine({"question": message, "chat_history": chat_history})
- chat_history.append((message, result["answer"]))
- return chat_history, chat_history
-
-
-with gr.Blocks() as demo:
-    # Declaring states
- chat_history = gr.State([])
- chatbot_engine = gr.State()
-
- # Structuring interface
- text_file = gr.File(
- label="Download Text File",
- file_count="single",
- type="file"
- )
- convert_button = gr.Button("Let your bot skim through this real quick...")
- text_output = gr.Textbox()
-
- convert_button.click(
- fn=convert_PDF,
- inputs=[text_file, chatbot_engine],
- outputs=[text_output, chatbot_engine],
- )
-
- gr.Markdown("""Chat with your Book! """)
- chatbot = gr.Chatbot()
- message = gr.Textbox()
- submit = gr.Button("SEND")
- submit.click(chat, inputs=[message, chat_history, chatbot_engine], outputs=[chatbot, chat_history])
-
-if __name__ == "__main__":
- demo.launch(debug = True)
\ No newline at end of file
diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/static/js/jquery.fancybox.pack.js b/spaces/mygyasir/genious_bgremover/carvekit/web/static/js/jquery.fancybox.pack.js
deleted file mode 100644
index 73f757843855f1eb0b23d8c81a8fb7f001989b66..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/genious_bgremover/carvekit/web/static/js/jquery.fancybox.pack.js
+++ /dev/null
@@ -1,46 +0,0 @@
-/*! fancyBox v2.1.5 fancyapps.com | fancyapps.com/fancybox/#license */
-(function(r,G,f,v){var J=f("html"),n=f(r),p=f(G),b=f.fancybox=function(){b.open.apply(this,arguments)},I=navigator.userAgent.match(/msie/i),B=null,s=G.createTouch!==v,t=function(a){return a&&a.hasOwnProperty&&a instanceof f},q=function(a){return a&&"string"===f.type(a)},E=function(a){return q(a)&&0',image:' ',iframe:'",error:'The requested content cannot be loaded. Please try again later.
',closeBtn:' ',next:' ',prev:' '},openEffect:"fade",openSpeed:250,openEasing:"swing",openOpacity:!0,
-openMethod:"zoomIn",closeEffect:"fade",closeSpeed:250,closeEasing:"swing",closeOpacity:!0,closeMethod:"zoomOut",nextEffect:"elastic",nextSpeed:250,nextEasing:"swing",nextMethod:"changeIn",prevEffect:"elastic",prevSpeed:250,prevEasing:"swing",prevMethod:"changeOut",helpers:{overlay:!0,title:!0},onCancel:f.noop,beforeLoad:f.noop,afterLoad:f.noop,beforeShow:f.noop,afterShow:f.noop,beforeChange:f.noop,beforeClose:f.noop,afterClose:f.noop},group:{},opts:{},previous:null,coming:null,current:null,isActive:!1,
-isOpen:!1,isOpened:!1,wrap:null,skin:null,outer:null,inner:null,player:{timer:null,isActive:!1},ajaxLoad:null,imgPreload:null,transitions:{},helpers:{},open:function(a,d){if(a&&(f.isPlainObject(d)||(d={}),!1!==b.close(!0)))return f.isArray(a)||(a=t(a)?f(a).get():[a]),f.each(a,function(e,c){var k={},g,h,j,m,l;"object"===f.type(c)&&(c.nodeType&&(c=f(c)),t(c)?(k={href:c.data("fancybox-href")||c.attr("href"),title:c.data("fancybox-title")||c.attr("title"),isDom:!0,element:c},f.metadata&&f.extend(!0,k,
-c.metadata())):k=c);g=d.href||k.href||(q(c)?c:null);h=d.title!==v?d.title:k.title||"";m=(j=d.content||k.content)?"html":d.type||k.type;!m&&k.isDom&&(m=c.data("fancybox-type"),m||(m=(m=c.prop("class").match(/fancybox\.(\w+)/))?m[1]:null));q(g)&&(m||(b.isImage(g)?m="image":b.isSWF(g)?m="swf":"#"===g.charAt(0)?m="inline":q(c)&&(m="html",j=c)),"ajax"===m&&(l=g.split(/\s+/,2),g=l.shift(),l=l.shift()));j||("inline"===m?g?j=f(q(g)?g.replace(/.*(?=#[^\s]+$)/,""):g):k.isDom&&(j=c):"html"===m?j=g:!m&&(!g&&
-k.isDom)&&(m="inline",j=c));f.extend(k,{href:g,type:m,content:j,title:h,selector:l});a[e]=k}),b.opts=f.extend(!0,{},b.defaults,d),d.keys!==v&&(b.opts.keys=d.keys?f.extend({},b.defaults.keys,d.keys):!1),b.group=a,b._start(b.opts.index)},cancel:function(){var a=b.coming;a&&!1!==b.trigger("onCancel")&&(b.hideLoading(),b.ajaxLoad&&b.ajaxLoad.abort(),b.ajaxLoad=null,b.imgPreload&&(b.imgPreload.onload=b.imgPreload.onerror=null),a.wrap&&a.wrap.stop(!0,!0).trigger("onReset").remove(),b.coming=null,b.current||
-b._afterZoomOut(a))},close:function(a){b.cancel();!1!==b.trigger("beforeClose")&&(b.unbindEvents(),b.isActive&&(!b.isOpen||!0===a?(f(".fancybox-wrap").stop(!0).trigger("onReset").remove(),b._afterZoomOut()):(b.isOpen=b.isOpened=!1,b.isClosing=!0,f(".fancybox-item, .fancybox-nav").remove(),b.wrap.stop(!0,!0).removeClass("fancybox-opened"),b.transitions[b.current.closeMethod]())))},play:function(a){var d=function(){clearTimeout(b.player.timer)},e=function(){d();b.current&&b.player.isActive&&(b.player.timer=
-setTimeout(b.next,b.current.playSpeed))},c=function(){d();p.unbind(".player");b.player.isActive=!1;b.trigger("onPlayEnd")};if(!0===a||!b.player.isActive&&!1!==a){if(b.current&&(b.current.loop||b.current.index=c.index?"next":"prev"],b.router=e||"jumpto",c.loop&&(0>a&&(a=c.group.length+a%c.group.length),a%=c.group.length),c.group[a]!==v&&(b.cancel(),b._start(a)))},reposition:function(a,d){var e=b.current,c=e?e.wrap:null,k;c&&(k=b._getPosition(d),a&&"scroll"===a.type?(delete k.position,c.stop(!0,!0).animate(k,200)):(c.css(k),e.pos=f.extend({},e.dim,k)))},update:function(a){var d=
-a&&a.type,e=!d||"orientationchange"===d;e&&(clearTimeout(B),B=null);b.isOpen&&!B&&(B=setTimeout(function(){var c=b.current;c&&!b.isClosing&&(b.wrap.removeClass("fancybox-tmp"),(e||"load"===d||"resize"===d&&c.autoResize)&&b._setDimension(),"scroll"===d&&c.canShrink||b.reposition(a),b.trigger("onUpdate"),B=null)},e&&!s?0:300))},toggle:function(a){b.isOpen&&(b.current.fitToView="boolean"===f.type(a)?a:!b.current.fitToView,s&&(b.wrap.removeAttr("style").addClass("fancybox-tmp"),b.trigger("onUpdate")),
-b.update())},hideLoading:function(){p.unbind(".loading");f("#fancybox-loading").remove()},showLoading:function(){var a,d;b.hideLoading();a=f('').click(b.cancel).appendTo("body");p.bind("keydown.loading",function(a){if(27===(a.which||a.keyCode))a.preventDefault(),b.cancel()});b.defaults.fixed||(d=b.getViewport(),a.css({position:"absolute",top:0.5*d.h+d.y,left:0.5*d.w+d.x}))},getViewport:function(){var a=b.current&&b.current.locked||!1,d={x:n.scrollLeft(),
-y:n.scrollTop()};a?(d.w=a[0].clientWidth,d.h=a[0].clientHeight):(d.w=s&&r.innerWidth?r.innerWidth:n.width(),d.h=s&&r.innerHeight?r.innerHeight:n.height());return d},unbindEvents:function(){b.wrap&&t(b.wrap)&&b.wrap.unbind(".fb");p.unbind(".fb");n.unbind(".fb")},bindEvents:function(){var a=b.current,d;a&&(n.bind("orientationchange.fb"+(s?"":" resize.fb")+(a.autoCenter&&!a.locked?" scroll.fb":""),b.update),(d=a.keys)&&p.bind("keydown.fb",function(e){var c=e.which||e.keyCode,k=e.target||e.srcElement;
-if(27===c&&b.coming)return!1;!e.ctrlKey&&(!e.altKey&&!e.shiftKey&&!e.metaKey&&(!k||!k.type&&!f(k).is("[contenteditable]")))&&f.each(d,function(d,k){if(1h[0].clientWidth||h[0].clientHeight&&h[0].scrollHeight>h[0].clientHeight),h=f(h).parent();if(0!==c&&!j&&1g||0>k)b.next(0>g?"up":"right");d.preventDefault()}}))},trigger:function(a,d){var e,c=d||b.coming||b.current;if(c){f.isFunction(c[a])&&(e=c[a].apply(c,Array.prototype.slice.call(arguments,1)));if(!1===e)return!1;c.helpers&&f.each(c.helpers,function(d,e){if(e&&b.helpers[d]&&f.isFunction(b.helpers[d][a]))b.helpers[d][a](f.extend(!0,
-{},b.helpers[d].defaults,e),c)});p.trigger(a)}},isImage:function(a){return q(a)&&a.match(/(^data:image\/.*,)|(\.(jp(e|g|eg)|gif|png|bmp|webp|svg)((\?|#).*)?$)/i)},isSWF:function(a){return q(a)&&a.match(/\.(swf)((\?|#).*)?$/i)},_start:function(a){var d={},e,c;a=l(a);e=b.group[a]||null;if(!e)return!1;d=f.extend(!0,{},b.opts,e);e=d.margin;c=d.padding;"number"===f.type(e)&&(d.margin=[e,e,e,e]);"number"===f.type(c)&&(d.padding=[c,c,c,c]);d.modal&&f.extend(!0,d,{closeBtn:!1,closeClick:!1,nextClick:!1,arrows:!1,
-mouseWheel:!1,keys:null,helpers:{overlay:{closeClick:!1}}});d.autoSize&&(d.autoWidth=d.autoHeight=!0);"auto"===d.width&&(d.autoWidth=!0);"auto"===d.height&&(d.autoHeight=!0);d.group=b.group;d.index=a;b.coming=d;if(!1===b.trigger("beforeLoad"))b.coming=null;else{c=d.type;e=d.href;if(!c)return b.coming=null,b.current&&b.router&&"jumpto"!==b.router?(b.current.index=a,b[b.router](b.direction)):!1;b.isActive=!0;if("image"===c||"swf"===c)d.autoHeight=d.autoWidth=!1,d.scrolling="visible";"image"===c&&(d.aspectRatio=
-!0);"iframe"===c&&s&&(d.scrolling="scroll");d.wrap=f(d.tpl.wrap).addClass("fancybox-"+(s?"mobile":"desktop")+" fancybox-type-"+c+" fancybox-tmp "+d.wrapCSS).appendTo(d.parent||"body");f.extend(d,{skin:f(".fancybox-skin",d.wrap),outer:f(".fancybox-outer",d.wrap),inner:f(".fancybox-inner",d.wrap)});f.each(["Top","Right","Bottom","Left"],function(a,b){d.skin.css("padding"+b,w(d.padding[a]))});b.trigger("onReady");if("inline"===c||"html"===c){if(!d.content||!d.content.length)return b._error("content")}else if(!e)return b._error("href");
-"image"===c?b._loadImage():"ajax"===c?b._loadAjax():"iframe"===c?b._loadIframe():b._afterLoad()}},_error:function(a){f.extend(b.coming,{type:"html",autoWidth:!0,autoHeight:!0,minWidth:0,minHeight:0,scrolling:"no",hasError:a,content:b.coming.tpl.error});b._afterLoad()},_loadImage:function(){var a=b.imgPreload=new Image;a.onload=function(){this.onload=this.onerror=null;b.coming.width=this.width/b.opts.pixelRatio;b.coming.height=this.height/b.opts.pixelRatio;b._afterLoad()};a.onerror=function(){this.onload=
-this.onerror=null;b._error("image")};a.src=b.coming.href;!0!==a.complete&&b.showLoading()},_loadAjax:function(){var a=b.coming;b.showLoading();b.ajaxLoad=f.ajax(f.extend({},a.ajax,{url:a.href,error:function(a,e){b.coming&&"abort"!==e?b._error("ajax",a):b.hideLoading()},success:function(d,e){"success"===e&&(a.content=d,b._afterLoad())}}))},_loadIframe:function(){var a=b.coming,d=f(a.tpl.iframe.replace(/\{rnd\}/g,(new Date).getTime())).attr("scrolling",s?"auto":a.iframe.scrolling).attr("src",a.href);
-f(a.wrap).bind("onReset",function(){try{f(this).find("iframe").hide().attr("src","//about:blank").end().empty()}catch(a){}});a.iframe.preload&&(b.showLoading(),d.one("load",function(){f(this).data("ready",1);s||f(this).bind("load.fb",b.update);f(this).parents(".fancybox-wrap").width("100%").removeClass("fancybox-tmp").show();b._afterLoad()}));a.content=d.appendTo(a.inner);a.iframe.preload||b._afterLoad()},_preloadImages:function(){var a=b.group,d=b.current,e=a.length,c=d.preload?Math.min(d.preload,
-e-1):0,f,g;for(g=1;g<=c;g+=1)f=a[(d.index+g)%e],"image"===f.type&&f.href&&((new Image).src=f.href)},_afterLoad:function(){var a=b.coming,d=b.current,e,c,k,g,h;b.hideLoading();if(a&&!1!==b.isActive)if(!1===b.trigger("afterLoad",a,d))a.wrap.stop(!0).trigger("onReset").remove(),b.coming=null;else{d&&(b.trigger("beforeChange",d),d.wrap.stop(!0).removeClass("fancybox-opened").find(".fancybox-item, .fancybox-nav").remove());b.unbindEvents();e=a.content;c=a.type;k=a.scrolling;f.extend(b,{wrap:a.wrap,skin:a.skin,
-outer:a.outer,inner:a.inner,current:a,previous:d});g=a.href;switch(c){case "inline":case "ajax":case "html":a.selector?e=f("").html(e).find(a.selector):t(e)&&(e.data("fancybox-placeholder")||e.data("fancybox-placeholder",f('
').insertAfter(e).hide()),e=e.show().detach(),a.wrap.bind("onReset",function(){f(this).find(e).length&&e.hide().replaceAll(e.data("fancybox-placeholder")).data("fancybox-placeholder",!1)}));break;case "image":e=a.tpl.image.replace("{href}",
-g);break;case "swf":e='
',h="",f.each(a.swf,function(a,b){e+=' ';h+=" "+a+'="'+b+'"'}),e+='"}(!t(e)||!e.parent().is(a.inner))&&a.inner.append(e);b.trigger("beforeShow");a.inner.css("overflow","yes"===k?"scroll":
-"no"===k?"hidden":k);b._setDimension();b.reposition();b.isOpen=!1;b.coming=null;b.bindEvents();if(b.isOpened){if(d.prevMethod)b.transitions[d.prevMethod]()}else f(".fancybox-wrap").not(a.wrap).stop(!0).trigger("onReset").remove();b.transitions[b.isOpened?a.nextMethod:a.openMethod]();b._preloadImages()}},_setDimension:function(){var a=b.getViewport(),d=0,e=!1,c=!1,e=b.wrap,k=b.skin,g=b.inner,h=b.current,c=h.width,j=h.height,m=h.minWidth,u=h.minHeight,n=h.maxWidth,p=h.maxHeight,s=h.scrolling,q=h.scrollOutside?
-h.scrollbarWidth:0,x=h.margin,y=l(x[1]+x[3]),r=l(x[0]+x[2]),v,z,t,C,A,F,B,D,H;e.add(k).add(g).width("auto").height("auto").removeClass("fancybox-tmp");x=l(k.outerWidth(!0)-k.width());v=l(k.outerHeight(!0)-k.height());z=y+x;t=r+v;C=E(c)?(a.w-z)*l(c)/100:c;A=E(j)?(a.h-t)*l(j)/100:j;if("iframe"===h.type){if(H=h.content,h.autoHeight&&1===H.data("ready"))try{H[0].contentWindow.document.location&&(g.width(C).height(9999),F=H.contents().find("body"),q&&F.css("overflow-x","hidden"),A=F.outerHeight(!0))}catch(G){}}else if(h.autoWidth||
-h.autoHeight)g.addClass("fancybox-tmp"),h.autoWidth||g.width(C),h.autoHeight||g.height(A),h.autoWidth&&(C=g.width()),h.autoHeight&&(A=g.height()),g.removeClass("fancybox-tmp");c=l(C);j=l(A);D=C/A;m=l(E(m)?l(m,"w")-z:m);n=l(E(n)?l(n,"w")-z:n);u=l(E(u)?l(u,"h")-t:u);p=l(E(p)?l(p,"h")-t:p);F=n;B=p;h.fitToView&&(n=Math.min(a.w-z,n),p=Math.min(a.h-t,p));z=a.w-y;r=a.h-r;h.aspectRatio?(c>n&&(c=n,j=l(c/D)),j>p&&(j=p,c=l(j*D)),c
z||y>r)&&(c>m&&j>u)&&!(19n&&(c=n,j=l(c/D)),g.width(c).height(j),e.width(c+x),a=e.width(),y=e.height();else c=Math.max(m,Math.min(c,c-(a-z))),j=Math.max(u,Math.min(j,j-(y-r)));q&&("auto"===s&&jz||y>r)&&c>m&&j>u;c=h.aspectRatio?cu&&j
').appendTo(b.coming?b.coming.parent:a.parent);this.fixed=!1;a.fixed&&b.defaults.fixed&&(this.overlay.addClass("fancybox-overlay-fixed"),this.fixed=!0)},open:function(a){var d=this;a=f.extend({},this.defaults,a);this.overlay?this.overlay.unbind(".overlay").width("auto").height("auto"):this.create(a);this.fixed||(n.bind("resize.overlay",f.proxy(this.update,this)),this.update());a.closeClick&&this.overlay.bind("click.overlay",function(a){if(f(a.target).hasClass("fancybox-overlay"))return b.isActive?
-b.close():d.close(),!1});this.overlay.css(a.css).show()},close:function(){var a,b;n.unbind("resize.overlay");this.el.hasClass("fancybox-lock")&&(f(".fancybox-margin").removeClass("fancybox-margin"),a=n.scrollTop(),b=n.scrollLeft(),this.el.removeClass("fancybox-lock"),n.scrollTop(a).scrollLeft(b));f(".fancybox-overlay").remove().hide();f.extend(this,{overlay:null,fixed:!1})},update:function(){var a="100%",b;this.overlay.width(a).height("100%");I?(b=Math.max(G.documentElement.offsetWidth,G.body.offsetWidth),
-p.width()>b&&(a=p.width())):p.width()>n.width()&&(a=p.width());this.overlay.width(a).height(p.height())},onReady:function(a,b){var e=this.overlay;f(".fancybox-overlay").stop(!0,!0);e||this.create(a);a.locked&&(this.fixed&&b.fixed)&&(e||(this.margin=p.height()>n.height()?f("html").css("margin-right").replace("px",""):!1),b.locked=this.overlay.append(b.wrap),b.fixed=!1);!0===a.showEarly&&this.beforeShow.apply(this,arguments)},beforeShow:function(a,b){var e,c;b.locked&&(!1!==this.margin&&(f("*").filter(function(){return"fixed"===
-f(this).css("position")&&!f(this).hasClass("fancybox-overlay")&&!f(this).hasClass("fancybox-wrap")}).addClass("fancybox-margin"),this.el.addClass("fancybox-margin")),e=n.scrollTop(),c=n.scrollLeft(),this.el.addClass("fancybox-lock"),n.scrollTop(e).scrollLeft(c));this.open(a)},onUpdate:function(){this.fixed||this.update()},afterClose:function(a){this.overlay&&!b.coming&&this.overlay.fadeOut(a.speedOut,f.proxy(this.close,this))}};b.helpers.title={defaults:{type:"float",position:"bottom"},beforeShow:function(a){var d=
-b.current,e=d.title,c=a.type;f.isFunction(e)&&(e=e.call(d.element,d));if(q(e)&&""!==f.trim(e)){d=f(''+e+"
");switch(c){case "inside":c=b.skin;break;case "outside":c=b.wrap;break;case "over":c=b.inner;break;default:c=b.skin,d.appendTo("body"),I&&d.width(d.width()),d.wrapInner(' '),b.current.margin[2]+=Math.abs(l(d.css("margin-bottom")))}d["top"===a.position?"prependTo":"appendTo"](c)}}};f.fn.fancybox=function(a){var d,
-e=f(this),c=this.selector||"",k=function(g){var h=f(this).blur(),j=d,k,l;!g.ctrlKey&&(!g.altKey&&!g.shiftKey&&!g.metaKey)&&!h.is(".fancybox-wrap")&&(k=a.groupAttr||"data-fancybox-group",l=h.attr(k),l||(k="rel",l=h.get(0)[k]),l&&(""!==l&&"nofollow"!==l)&&(h=c.length?f(c):e,h=h.filter("["+k+'="'+l+'"]'),j=h.index(this)),a.index=j,!1!==b.open(h,a)&&g.preventDefault())};a=a||{};d=a.index||0;!c||!1===a.live?e.unbind("click.fb-start").bind("click.fb-start",k):p.undelegate(c,"click.fb-start").delegate(c+
-":not('.fancybox-item, .fancybox-nav')","click.fb-start",k);this.filter("[data-fancybox-start=1]").trigger("click");return this};p.ready(function(){var a,d;f.scrollbarWidth===v&&(f.scrollbarWidth=function(){var a=f('').appendTo("body"),b=a.children(),b=b.innerWidth()-b.height(99).innerWidth();a.remove();return b});if(f.support.fixedPosition===v){a=f.support;d=f('
').appendTo("body");var e=20===
-d[0].offsetTop||15===d[0].offsetTop;d.remove();a.fixedPosition=e}f.extend(b.defaults,{scrollbarWidth:f.scrollbarWidth(),fixed:f.support.fixedPosition,parent:f("body")});a=f(r).width();J.addClass("fancybox-lock-test");d=f(r).width();J.removeClass("fancybox-lock-test");f("").appendTo("head")})})(window,document,jQuery);
\ No newline at end of file
diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/docs/conf.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/docs/conf.py
deleted file mode 100644
index 7c505401b867dba9ac72a8119b038f2f60b74895..0000000000000000000000000000000000000000
--- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/docs/conf.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import os
-import sys
-
-sys.path.insert(0, os.path.abspath('..'))
-
-import pytorch_caney # noqa: E402
-
-project = 'pytorch-caney'
-copyright = '2023, Jordan A. Caraballo-Vega'
-author = 'Jordan A. Caraballo-Vega'
-
-extensions = [
- 'sphinx.ext.autodoc',
- 'sphinx_autodoc_typehints',
- 'jupyter_sphinx.execute',
- "sphinx.ext.intersphinx",
- "sphinx.ext.viewcode",
- "sphinx.ext.napoleon",
- "sphinx_click.ext",
- "sphinx.ext.githubpages",
- "nbsphinx",
-]
-
-intersphinx_mapping = {
- "pyproj": ("https://pyproj4.github.io/pyproj/stable/", None),
- "rasterio": ("https://rasterio.readthedocs.io/en/stable/", None),
- "xarray": ("http://xarray.pydata.org/en/stable/", None),
-}
-
-templates_path = ['_templates']
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-master_doc = "index"
-
-version = release = pytorch_caney.__version__
-
-pygments_style = "sphinx"
-
-todo_include_todos = False
-
-html_theme = 'sphinx_rtd_theme'
-html_logo = 'static/DSG_LOGO_REDESIGN.png'
-
-myst_enable_extensions = [
- "amsmath",
- "colon_fence",
- "deflist",
- "dollarmath",
- "html_image",
-]
-
-myst_url_schemes = ("http", "https", "mailto")
diff --git a/spaces/nedtheminx/nllb-translation/README.md b/spaces/nedtheminx/nllb-translation/README.md
deleted file mode 100644
index fbc441581cfbed7838be5c4b374bb315d3b34863..0000000000000000000000000000000000000000
--- a/spaces/nedtheminx/nllb-translation/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Nllb Translation
-emoji: 🚀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.8
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download PORTABLE Hana Studio For Mac.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download PORTABLE Hana Studio For Mac.md
deleted file mode 100644
index ed38d3fa27b28167bb5aeb2906bc56820ce8e279..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Download PORTABLE Hana Studio For Mac.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-Download Hana Studio for Mac: A Complete Guide
-If you are a developer or administrator working with SAP HANA, you might be wondering how to install and use Hana Studio on your Mac. Hana Studio is a powerful tool that allows you to manage, model, analyze, and integrate data in SAP HANA databases. It also enables you to build smart applications based on real-time data, in-memory computing, and machine learning technology.
-Download Hana Studio For Mac Download ☆☆☆ https://urlcod.com/2uIcc0
-In this article, we will show you how to download, install, use, update, and uninstall Hana Studio for Mac. We will also cover some of the benefits of using Hana Studio for Mac, as well as some common issues and solutions. By the end of this article, you will be able to use Hana Studio on your Mac with confidence and ease.
- How to download Hana Studio for Mac
-Unlike Windows and Linux versions, there is no standalone installer for Hana Studio for Mac. Instead, you need to download Eclipse Kepler, which is an open source integrated development environment (IDE), and then install SAP HANA Tools as an add-on. Here are the steps you need to follow:
-
-Download Eclipse Kepler from the official website. You can find it here. Choose the Eclipse IDE for Java EE Developers package that matches your Mac OS version (32-bit or 64-bit).
-Unzip and move Eclipse to the Applications folder. After downloading the zip file, double-click it to extract it. Then drag and drop the Eclipse folder into your Applications folder.
-Start Eclipse and install SAP HANA Tools from the repository. Launch Eclipse from your Applications folder or Dock. Go to Help -> Install New Software... Click Add... to add a new repository. Enter "SAP Development Tools" as the name and this URL as the location. Click OK. Select SAP HANA Tools (or more, depending on your needs) from the available software list. Click Next and follow the instructions to finish the installation. You may need to accept some license agreements and restart Eclipse.
-Restart Eclipse and switch to the HANA perspective. After restarting Eclipse, you should see a new icon on the toolbar that looks like a blue circle with an "H" inside. This is the HANA perspective switcher. Click on it to switch to the HANA perspective. This will give you access to various tools and views related to SAP HANA.
-
-Congratulations! You have successfully downloaded and installed Hana Studio for Mac.
How to use Hana Studio for Mac
-Now that you have installed Hana Studio for Mac, you can start using it to work with SAP HANA databases and applications. Here are some of the main tasks you can perform with Hana Studio:
- How to connect to SAP HANA databases and manage them
-To connect to a SAP HANA database, you need to create a system entry in the Systems view of Hana Studio. You can do this by right-clicking on the Systems node and selecting Add System... You will need to enter the host name, instance number, user name, and password of the database. You can also specify advanced options such as encryption, single sign-on, or tunneling.
-Once you have created a system entry, you can double-click on it to open the Administration Console. This is where you can monitor and manage various aspects of the database, such as:
-
-Status and alerts
-Configuration and parameters
-Backup and recovery
-Security and users
-Performance and workload
-Memory and disk usage
-SQL console and trace
-Catalog and content
-
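-If you prefer to script such administrative checks instead of clicking through the Administration Console, the minimal sketch below uses SAP's hdbcli Python client. It is only an illustration: the host name, port, credentials, and the system view being queried are placeholders you would replace with your own values.
-
-from hdbcli import dbapi  # SAP HANA Python client (pip install hdbcli)
-
-# Placeholder connection details -- use the same host, instance port, and user as your system entry.
-conn = dbapi.connect(address="hana.example.com", port=30015, user="SYSTEM", password="YourPassword1")
-cursor = conn.cursor()
-# One example check: memory used per service (see the SAP HANA system views reference for more).
-cursor.execute("SELECT SERVICE_NAME, TOTAL_MEMORY_USED_SIZE FROM M_SERVICE_MEMORY")
-for service_name, memory_used in cursor.fetchall():
-    print(service_name, memory_used)
-cursor.close()
-conn.close()
-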
- How to create and modify data models and views in Hana Studio
-To create and modify data models and views in Hana Studio, you need to use the Modeler perspective. You can switch to this perspective by clicking on the icon that looks like a green cube with an "M" inside. This will give you access to various tools and views related to data modeling, such as:
-
-
-Project Explorer: where you can create and manage modeling projects and files
-Data Source Explorer: where you can browse and import data sources from various systems
-Modeler: where you can design and edit data models and views using graphical or textual editors
-Quick Launch: where you can access common modeling tasks and wizards
-Outline: where you can see the structure and properties of the selected model or view
-Problems: where you can see the errors and warnings of the selected model or view
-
-To create a new data model or view, you need to right-click on a modeling project in the Project Explorer and select New -> Other... Then choose the type of model or view you want to create, such as:
-
-Attribute View: a view that defines the attributes or dimensions of a data set, such as product, customer, or location
-Analytic View: a view that defines the measures or facts of a data set, such as sales, revenue, or profit
-Calculation View: a view that defines complex calculations or transformations on one or more data sets, such as aggregations, joins, unions, or filters
-Table Function: a function that returns a table as output based on SQLScript logic
-Decision Table: a table that defines business rules or logic based on conditions and actions
-
- How to perform advanced analytics and data integration with Hana Studio
-Hana Studio also allows you to perform advanced analytics and data integration with SAP HANA databases. Some of the features you can use are:
- Predictive Analysis Library (PAL)
-PAL is a library of functions that enable you to perform various predictive analysis tasks, such as clustering, classification, regression, association, time series, and text analysis. You can access PAL functions from the SQL console or from the graphical calculation view editor. You can also use PAL functions in conjunction with other SAP HANA features, such as full-text search, spatial analysis, or streaming analytics.
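-As a rough illustration only: PAL algorithms can also be driven from Python through SAP's hana_ml package, which wraps the PAL functions. The connection details, table name, key, and feature columns below are made up, and the exact class and method names should be checked against the hana_ml documentation for your HANA version.
-
-from hana_ml import dataframe
-from hana_ml.algorithms.pal.clustering import KMeans
-
-# Hypothetical connection and table; adjust to your own landscape.
-cc = dataframe.ConnectionContext(address="hana.example.com", port=30015, user="SYSTEM", password="YourPassword1")
-df = cc.table("SALES_FEATURES")  # assumed table with an ID key and numeric feature columns
-
-kmeans = KMeans(n_clusters=3)
-result = kmeans.fit_predict(data=df, key="ID", features=["REVENUE", "QUANTITY"])
-print(result.collect().head())  # collect() pulls the assigned cluster labels into pandas
-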
- Data Provisioning Framework (DPF)
-DPF is a framework that enables you to provision data from various sources into SAP HANA databases. You can use DPF to perform batch or real-time data loading, replication, transformation, cleansing, and quality checks. You can access DPF from the Data Provisioning view in Hana Studio. You can also use DPF in conjunction with other SAP HANA features, such as smart data access, smart data integration, smart data quality, or remote data sync.
- SAP HANA Application Function Library (AFL)
-AFL is a library of functions that enable you to perform various application-specific tasks, such as business functions (BFL), graph processing (GPL), fuzzy search (FSL), text analysis (TAL), text mining (TML), or spatial analysis (SPL). You can access AFL functions from the SQL console or from the graphical calculation view editor. You can also use AFL functions in conjunction with other SAP HANA features, such as full-text search, spatial analysis, or streaming analytics.
- How to develop smart applications with Hana Studio
-Hana Studio also allows you to develop smart applications with SAP HANA databases. Some of the features you can use are:
- SAP HANA Extended Application Services (XS)
-XS is a framework that enables you to create and run web-based applications on SAP HANA databases. You can use XS to develop applications using various languages and technologies, such as HTML5, JavaScript, Node.js, Java, Python, or R. You can access XS from the Development perspective in Hana Studio. You can also use XS in conjunction with other SAP HANA features, such as OData services, RESTful APIs, or WebSockets.
- SAP HANA Cloud Platform (HCP)
-HCP is a platform that enables you to create and run cloud-based applications on SAP HANA databases. You can use HCP to develop applications using various languages and technologies, such as Java, JavaScript, HTML5, or SAPUI5. You can access HCP from the Cloud Foundry perspective in Hana Studio. You can also use HCP in conjunction with other SAP HANA features, such as smart data access, smart data integration, smart data quality, or remote data sync.
- SAP HANA Machine Learning (ML)
-ML is a library of functions that enable you to perform various machine learning tasks on SAP HANA databases. You can use ML to train and deploy models using various algorithms and frameworks, such as TensorFlow, Keras, Scikit-learn, or XGBoost. You can access ML from the SQL console or from the graphical calculation view editor. You can also use ML in conjunction with other SAP HANA features, such as predictive analysis library (PAL), graph processing library (GPL), or text analysis library (TAL).
- How to update and uninstall Hana Studio for Mac
-As SAP releases new versions and patches for Hana Studio, you may want to update your installation to get the latest features and fixes. You may also want to uninstall Hana Studio if you no longer need it or want to free up some space on your Mac. Here are the steps you need to follow:
- How to update Hana Studio using an update site
-To update Hana Studio using an update site, you need to go to Help -> Check for Updates in Eclipse. This will check if there are any available updates for the installed software components. If there are any updates, you can select them and click Next to proceed with the installation. You may need to accept some license agreements and restart Eclipse.
- How to uninstall Hana Studio using the Installation Manager or the command line
-To uninstall Hana Studio using the Installation Manager, you need to go to Help -> Installation Details in Eclipse. This will open a dialog box that shows the installed software components. Select SAP HANA Tools (or more) and click Uninstall... Follow the instructions to complete the uninstallation. You may need to restart Eclipse.
-To uninstall Hana Studio using the command line, you need to open a terminal window and navigate to the Eclipse folder in your Applications folder. Then type the following command:
-./eclipse -uninstallIU com.sap.ndb.studio.feature.group
-This will uninstall SAP HANA Tools from Eclipse. You may need to restart Eclipse.
- How to troubleshoot common issues with Hana Studio for Mac
-Sometimes you may encounter some issues or errors when using Hana Studio for Mac. Here are some of the common problems and solutions:
- How to fix connection problems and errors
-If you have trouble connecting to a SAP HANA database or get connection errors, you can try the following steps:
-
-Check if the database is running and reachable from your Mac
-Check if your network settings and firewall rules allow the connection
-Check if your user name and password are correct and have sufficient privileges
-Check if your encryption settings and certificates are valid and compatible
-Check if your tunneling settings and ports are correct and available
-Check if your single sign-on settings and tokens are valid and compatible
-
- How to optimize performance and memory usage of Hana Studio
-If you experience slow performance or high memory usage of Hana Studio, you can try the following steps:
-
-Close any unnecessary projects, files, views, or perspectives in Eclipse
-Clean up any unused or temporary files or folders in your workspace or Eclipse folder
-Adjust the memory allocation and garbage collection settings of Eclipse
-Update Hana Studio and Eclipse to the latest version and patch level
-Disable any unnecessary plugins or features of Eclipse
-
- How to find and use log files and support resources
-If you encounter any errors or issues that you cannot resolve by yourself, you can use the log files and support resources of Hana Studio to get more information and assistance. Here are some of the sources you can use:
- Log files
-Hana Studio generates various log files that record the events and errors that occur during its operation. You can find these log files in the following locations:
-
-Log file Location
-Eclipse log file /Applications/eclipse/configuration/.log
-Hana Studio log file /Applications/eclipse/workspace/.metadata/.log
-Hana Studio trace file /Applications/eclipse/workspace/.metadata/.plugins/com.sap.ndb.studio.log/trace.log
-SAP HANA client trace file /Applications/sap/hdbclient/hdbcli.trc
-SAP HANA server trace file /usr/sap//HDB//trace/indexserver.trc
-
-You can use these log files to analyze the root cause and details of the errors or issues. You can also send these log files to SAP support if you need further help.
- Support resources
-Hana Studio also provides various support resources that you can use to get more information and assistance. Here are some of the sources you can use:
-
-Help menu: where you can access the online help, tutorials, documentation, forums, blogs, feedback, and about information of Hana Studio and Eclipse
-SAP Support Portal: where you can access the product page, knowledge base, downloads, patches, notes, incidents, and contact information of SAP HANA and Hana Studio
-SAP Community: where you can access the Q&A, blogs, wiki, events, learning journeys, and badges of SAP HANA and Hana Studio
-SAP Developer Center: where you can access the tutorials, samples, guides, tools, trials, and certifications of SAP HANA and Hana Studio
-SAP Learning Hub: where you can access the courses, curricula, certifications, and learning rooms of SAP HANA and Hana Studio
-
- Conclusion
-In this article, we have shown you how to download, install, use, update, and uninstall Hana Studio for Mac. We have also covered some of the benefits of using Hana Studio for Mac, such as:
-
-Managing, modeling, analyzing, and integrating data in SAP HANA databases
-Building smart applications based on real-time data, in-memory computing, and machine learning technology
-Performing advanced analytics and data integration with various libraries and frameworks
-Developing web-based and cloud-based applications with various languages and technologies
-Troubleshooting common issues and errors with log files and support resources
-
-We hope that this article has helped you to understand how to use Hana Studio for Mac effectively. If you have any questions or feedback, please feel free to leave a comment below or contact us through our support channels. We would love to hear from you!
- FAQs
-Here are some of the frequently asked questions about Hana Studio for Mac:
- Q: What is the difference between Hana Studio and SAP HANA Cockpit?
-A: Hana Studio is a desktop-based tool that allows you to manage, model, analyze, and integrate data in SAP HANA databases. SAP HANA Cockpit is a web-based tool that allows you to monitor and administer SAP HANA databases. Both tools have similar functionalities but different user interfaces and features. You can use either tool depending on your preference and needs.
- Q: How can I install multiple versions of Hana Studio on my Mac?
-A: You can install multiple versions of Hana Studio on your Mac by using different Eclipse instances. For each version of Hana Studio you want to install, you need to download a separate Eclipse package and install SAP HANA Tools as an add-on. Then you need to rename the Eclipse folder and move it to a different location on your Mac. For example, you can rename the Eclipse folder to Eclipse_Hana_Studio_2.0 and move it to your Documents folder. Then you can launch the different versions of Hana Studio from the different Eclipse folders.
- Q: How can I migrate my Hana Studio settings and preferences from one Mac to another?
-A: You can migrate your Hana Studio settings and preferences from one Mac to another by exporting and importing them using the Eclipse functionality. To export your settings and preferences, you need to go to File -> Export... -> General -> Preferences in Eclipse. Then you need to choose the preferences you want to export and save them as a .epf file. To import your settings and preferences, you need to go to File -> Import... -> General -> Preferences in Eclipse. Then you need to choose the .epf file you want to import and apply the changes.
- Q: How can I customize the look and feel of Hana Studio on my Mac?
-A: You can customize the look and feel of Hana Studio on your Mac by changing the theme, font, color, and layout of Eclipse. To change the theme, you need to go to Eclipse -> Preferences -> General -> Appearance in Eclipse. Then you need to choose the theme you want to apply from the drop-down menu. To change the font, color, and layout, you need to go to Eclipse -> Preferences -> General -> Appearance -> Colors and Fonts in Eclipse. Then you need to choose the category and element you want to modify and adjust the settings accordingly.
- Q: How can I integrate Hana Studio with other SAP tools and products on my Mac?
-A: You can integrate Hana Studio with other SAP tools and products on your Mac by installing additional plugins or features from the SAP Development Tools repository or other sources. For example, you can install plugins or features for SAP Cloud Platform Tools, SAP Fiori Tools, SAP Business Application Studio, SAP Data Intelligence, or SAP Analytics Cloud. To install additional plugins or features, you need to go to Help -> Install New Software... in Eclipse. Then you need to add the repository URL or select it from the list and choose the plugins or features you want to install.
- Q: How can I learn more about Hana Studio and SAP HANA on my Mac?
-A: You can learn more about Hana Studio and SAP HANA on your Mac by accessing various learning resources and materials from SAP or other sources. For example, you can access courses, curricula, certifications, and learning rooms from SAP Learning Hub; tutorials, samples, guides, tools, trials, and certifications from SAP Developer Center; Q&A, blogs, wiki, events, learning journeys, and badges from SAP Community; and online help, documentation, forums, blogs, feedback, and about information from the Help menu in Eclipse.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Jatra Marathi Movie Free Download Hd.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Jatra Marathi Movie Free Download Hd.md
deleted file mode 100644
index 71ed31ca162adf0d97a6eb06feba3b3f97d36a3d..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Jatra Marathi Movie Free Download Hd.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-Jatra: A Comedy of Errors in a Marathi Village Fair
-Jatra is a 2006 Marathi comedy film directed by Kedar Shinde and starring Bharat Jadhav, Priya Berde, Siddharth Jadhav, Kranti Redkar and others. The film revolves around two rival villages, Hyalagaad and Tyalagaad, who compete for the hosting rights of a religious fair. The film is full of hilarious situations, witty dialogues and slapstick humor.
-The film was a huge hit at the box office and received positive reviews from critics and audiences alike. It also won several awards, including the Best Film award at the Maharashtra State Film Awards. The film's music was composed by Ajay-Atul and featured popular songs like "Kombdi Palali" and "Jatra".
-Jatra Marathi Movie Free Download Hd DOWNLOAD > https://urlcod.com/2uIbHh
-If you are looking for a fun-filled family entertainer, you can watch Jatra online in full HD on ZEE5. ZEE5 is a streaming platform that offers a wide range of Marathi movies, TV shows, web series, news and more. You can also download Jatra for free on ZEE5 and watch it offline anytime, anywhere.
-To watch Jatra online or download it for free, you need to subscribe to ZEE5. You can choose from various plans that suit your budget and preferences. You can also enjoy other benefits like ad-free streaming, unlimited downloads, access to premium content and more.
-So what are you waiting for? Subscribe to ZEE5 today and enjoy Jatra and other Marathi movies online in HD.
-
-Jatra is not just a comedy film, but also a social satire that exposes the hypocrisy and corruption of the rural politicians and religious leaders. The film also showcases the rich culture and traditions of Maharashtra, especially the folk art of Tamasha. The film's title refers to the annual village fair that attracts thousands of devotees and tourists.
-The film's plot is inspired by William Shakespeare's play A Comedy of Errors, where two sets of identical twins are separated at birth and later encounter each other, leading to confusion and chaos. In Jatra, the twins are played by Bharat Jadhav and Siddharth Jadhav, who are both brilliant in their comic timing and expressions. The film also features Priya Berde and Kranti Redkar as their love interests, who add to the romance and humor of the story.
-Jatra is a must-watch for anyone who loves Marathi cinema and comedy. The film will make you laugh out loud and also make you think about the social issues that plague our society. You can watch Jatra online in full HD on ZEE5 or download it for free and enjoy it with your family and friends.
-
-Jatra is not only a film, but also a phenomenon that has created a cult following among the Marathi audience. The film's dialogues, songs and scenes have become iconic and are often quoted and mimicked by the fans. The film's success also spawned a sequel, Jatra: Hyalagaad Re Tyalagaad, which was released in 2016 and featured the same cast and crew.
-Jatra is a film that celebrates the spirit of Maharashtra and its people. The film showcases the beauty and diversity of the state, from its scenic landscapes to its vibrant festivals. The film also pays tribute to the legendary Marathi actors and comedians like Dada Kondke, Nilu Phule, Laxmikant Berde and others, who have inspired generations of filmmakers and performers.
-
-Jatra is a film that you can watch over and over again and still enjoy every moment of it. The film is a perfect blend of comedy, drama, romance and action that will keep you entertained from start to finish. You can watch Jatra online in full HD on ZEE5 or download it for free and have a blast with your loved ones.
-
-
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DeepLab/deeplab/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DeepLab/deeplab/__init__.py
deleted file mode 100644
index dcd88ff0c09d630577e3ac9f8afb5324a80a7be4..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DeepLab/deeplab/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .build_solver import build_lr_scheduler
-from .config import add_deeplab_config
-from .resnet import build_resnet_deeplab_backbone
-from .semantic_seg import DeepLabV3Head, DeepLabV3PlusHead
diff --git a/spaces/nomic-ai/piqa/index.html b/spaces/nomic-ai/piqa/index.html
deleted file mode 100644
index 51345b3ff91861736504612659b6368ebfb394ec..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/piqa/index.html
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-
- piqa
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/compute/gru_gates_arm.h b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/compute/gru_gates_arm.h
deleted file mode 100644
index d95805da4165df71c00b4e82557c647e2f746d1a..0000000000000000000000000000000000000000
--- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/compute/gru_gates_arm.h
+++ /dev/null
@@ -1,288 +0,0 @@
-/*
- * Copyright 2021 Google LLC
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_ARM_H_
-#define LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_ARM_H_
-
-#if defined __ARM_NEON || defined __aarch64__
-#include <arm_neon.h>
-#endif
-#include <cstdint>
-
-#include "sparse_matmul/compute/ar_inputs.h"
-#include "sparse_matmul/numerics/fast_transcendentals.h"
-
-namespace csrblocksparse {
-
-static constexpr int kNeonSIMDWidth = 4;
-
-// ------ Scalar calculation --------
-// See "Efficient Neural Audio Synthesis" for a description of the calculation.
-// https://arxiv.org/abs/1802.08435
-//
-// NOTE:
-// |sample| = (|coarse_at_sminus1|, |fine_at_sminus1|,
-// |coarse_at_sminus1|, |fine_at_sminus1|)
-// |w_sample| = (|coarse_at_s|, |coarse_at_s|, |coarse_at_s|, |coarse_at_s|)
-//
-// CHEATSHEET:
-// vld1q_f32 = load 4 32-bit floats
-// vmulq_f32(a, b) : return a * b;
-// vaddq_f32(a, b) : return a + b;
-// vmlaq_f32(c, a, b) : return c + a * b;
-// vpaddq_f32(a, b) : return (a0 + a1, a2 + a3, b0 + b1, b2 + b3)
-// vsubq_f32(a, b) : return a - b;
-// vst1q_f32 = store 4 32-bit floats
-#if defined __ARM_NEON || defined __aarch64__
-
-#if !defined __aarch64__
-// Backport of vpaddq_f32 to ARM32.
-inline float32x4_t vpaddq_f32(float32x4_t a, float32x4_t b) {
- float32x2_t a10 = vget_low_f32(a);
- float32x2_t a32 = vget_high_f32(a);
- float32x2_t b10 = vget_low_f32(b);
- float32x2_t b32 = vget_high_f32(b);
- return vcombine_f32(vpadd_f32(a10, a32), vpadd_f32(b10, b32));
-}
-#endif
-
-template <ARInputsMode kInputsMode = ARInputsMode::k2ARInputs,
-          bool SplitGates = false>
-void GoThroughGatesFloat(int start, int end, const float* qr_ptr,
- const float* gru_gates_ptr,
- const float* gru_gates_other_ptr,
- const float* conditioning_ptr, float* gru_h_ptr,
- const float* w_hat, int proj_size,
- const float* coarse_at_sminus1,
- const float* fine_at_sminus1,
- const float* coarse_at_s) {
- // Increment all the pointers to save on pointer arithmetic in the loop.
- conditioning_ptr += start;
- gru_h_ptr += start;
- gru_gates_ptr += start;
- if (SplitGates) {
- DCHECK_NE(gru_gates_other_ptr, nullptr);
- gru_gates_other_ptr += start;
- }
- if (kInputsMode != ARInputsMode::k0ARInputs) {
- DCHECK_NE(qr_ptr, nullptr);
- qr_ptr += 2 * start;
- DCHECK_NE(coarse_at_sminus1, nullptr);
- DCHECK_NE(fine_at_sminus1, nullptr);
- if (kInputsMode == ARInputsMode::k3ARInputs) {
- DCHECK_NE(w_hat, nullptr);
- DCHECK_NE(coarse_at_s, nullptr);
- w_hat += start;
- }
- }
- for (int i = start; i < end; i += kNeonSIMDWidth) {
- float32x4_t reset = vld1q_f32(gru_gates_ptr);
- float32x4_t update = vld1q_f32(gru_gates_ptr + proj_size);
- float32x4_t cell = vld1q_f32(gru_gates_ptr + 2 * proj_size);
- float32x4_t qr_cell;
- if (SplitGates) {
- reset = vaddq_f32(reset, vld1q_f32(gru_gates_other_ptr));
- update = vaddq_f32(update, vld1q_f32(gru_gates_other_ptr + proj_size));
- cell = vaddq_f32(cell, vld1q_f32(gru_gates_other_ptr + 2 * proj_size));
- }
- if (kInputsMode != ARInputsMode::k0ARInputs) {
- // Setup the sample vector.
- float32x4_t sample = vdupq_n_f32(*coarse_at_sminus1);
- sample = vsetq_lane_f32(*fine_at_sminus1, sample, 1);
- sample = vsetq_lane_f32(*fine_at_sminus1, sample, 3);
-
- // All auto types are float32x4_t, auto used to fit statements on one line
- // for readability. Do two rows of QR at once.
- auto qr_reset_0 = vmulq_f32(vld1q_f32(qr_ptr), sample);
- auto qr_reset_1 = vmulq_f32(vld1q_f32(qr_ptr + 4), sample);
- auto qr_reset = vpaddq_f32(qr_reset_0, qr_reset_1);
-
- auto qr_update_0 = vmulq_f32(vld1q_f32(qr_ptr + 2 * proj_size), sample);
- auto qr_update_1 =
- vmulq_f32(vld1q_f32(qr_ptr + 4 + 2 * proj_size), sample);
- auto qr_update = vpaddq_f32(qr_update_0, qr_update_1);
-
- auto qr_cell_0 = vmulq_f32(vld1q_f32(qr_ptr + 4 * proj_size), sample);
- auto qr_cell_1 = vmulq_f32(vld1q_f32(qr_ptr + 4 + 4 * proj_size), sample);
- qr_cell = vpaddq_f32(qr_cell_0, qr_cell_1);
-
- if (kInputsMode == ARInputsMode::k3ARInputs) {
- float32x4_t w_sample = vdupq_n_f32(*coarse_at_s);
- qr_reset = vmlaq_f32(qr_reset, vld1q_f32(w_hat), w_sample);
- qr_update =
- vmlaq_f32(qr_update, vld1q_f32(w_hat + proj_size), w_sample);
- qr_cell =
- vmlaq_f32(qr_cell, vld1q_f32(w_hat + 2 * proj_size), w_sample);
- }
- reset = vaddq_f32(reset, qr_reset);
- update = vaddq_f32(update, qr_update);
- }
- auto reset_conditioning = vld1q_f32(conditioning_ptr);
- auto update_conditioning = vld1q_f32(conditioning_ptr + proj_size);
- auto cell_conditioning = vld1q_f32(conditioning_ptr + 2 * proj_size);
-
- reset = fast_sigmoid(vaddq_f32(reset, reset_conditioning));
- update = fast_sigmoid(vaddq_f32(update, update_conditioning));
- if (kInputsMode == ARInputsMode::k0ARInputs) {
- cell = vmulq_f32(reset, cell);
- } else {
- cell = vmlaq_f32(qr_cell, reset, cell);
- }
- auto hbar = fast_tanh(vaddq_f32(cell, cell_conditioning));
-
- auto prev_h = vld1q_f32(gru_h_ptr);
- auto diff = vsubq_f32(prev_h, hbar);
- auto new_h = vmlaq_f32(hbar, diff, update);
-
- vst1q_f32(gru_h_ptr, new_h);
- // Increment all the pointers.
- conditioning_ptr += kNeonSIMDWidth;
- gru_h_ptr += kNeonSIMDWidth;
- gru_gates_ptr += kNeonSIMDWidth;
- if (SplitGates) gru_gates_other_ptr += kNeonSIMDWidth;
- if (kInputsMode != ARInputsMode::k0ARInputs) {
- qr_ptr += 2 * kNeonSIMDWidth;
- if (kInputsMode == ARInputsMode::k3ARInputs) w_hat += kNeonSIMDWidth;
- }
- }
-}
-
-// This version should only be used if all of the 32-bit fixed point
-// representations have the same number of mantissa bits.
-// |ar_at_sminus1| packs sample 0 and 1 into a pair because the QR weights are
-// formatted with the weights interleaved for sample 0 and 1. The two samples
-// represent coarse and fine for WaveRNN.
-template <typename GRUStateType, typename GRUMatMulOutType,
-          ARInputsMode kInputsMode = ARInputsMode::k2ARInputs,
-          bool SplitGates = false>
-void GoThroughGatesFixed(int start, int end, const float* qr_ptr,
- const int32_t* gru_gates_ptr,
- const int32_t* gru_gates_other_ptr,
- const int32_t* conditioning_ptr, int16_t* gru_h_ptr,
- const float* w_hat, int proj_size,
-                         const std::pair<float, float>* ar_at_sminus1,
- const float* coarse_at_s) {
- // Increment all the pointers to save on pointer arithmetic in the loop.
- conditioning_ptr += start;
- gru_h_ptr += start;
- gru_gates_ptr += start;
- if (SplitGates) {
- DCHECK_NE(gru_gates_other_ptr, nullptr);
- gru_gates_other_ptr += start;
- }
- float32x4_t sample01;
- float32x4_t w_sample;
- if (kInputsMode != ARInputsMode::k0ARInputs) {
- DCHECK_NE(qr_ptr, nullptr);
- qr_ptr += 2 * start;
- DCHECK_NE(ar_at_sminus1, nullptr);
- sample01 = vdupq_n_f32(ar_at_sminus1->first);
- sample01 = vsetq_lane_f32(ar_at_sminus1->second, sample01, 1);
- sample01 = vsetq_lane_f32(ar_at_sminus1->second, sample01, 3);
- if (kInputsMode == ARInputsMode::k3ARInputs) {
- DCHECK_NE(w_hat, nullptr);
- DCHECK_NE(coarse_at_s, nullptr);
- w_hat += start;
- w_sample = vdupq_n_f32(*coarse_at_s);
- }
- }
- for (int i = start; i < end; i += kNeonSIMDWidth) {
- auto reset = vld1q_s32(gru_gates_ptr);
- auto update = vld1q_s32(gru_gates_ptr + proj_size);
- // vcvtq_n_f32_s32 = convert 32-bit fixed point to fp32
- auto cell_int = vld1q_s32(gru_gates_ptr + 2 * proj_size);
- if (SplitGates) {
- reset = vaddq_s32(reset, vld1q_s32(gru_gates_other_ptr));
- update = vaddq_s32(update, vld1q_s32(gru_gates_other_ptr + proj_size));
- cell_int =
- vaddq_s32(cell_int, vld1q_s32(gru_gates_other_ptr + 2 * proj_size));
- }
- float32x4_t cell =
- vcvtq_n_f32_s32(cell_int, GRUMatMulOutType::kMantissaBits);
- float32x4_t qr_cell;
- if (kInputsMode != ARInputsMode::k0ARInputs) {
- // Do two rows of QR at once.
- float32x4_t qr_reset_0 = vmulq_f32(vld1q_f32(qr_ptr), sample01);
- float32x4_t qr_reset_1 = vmulq_f32(vld1q_f32(qr_ptr + 4), sample01);
- float32x4_t qr_reset = vpaddq_f32(qr_reset_0, qr_reset_1);
-
- float32x4_t qr_update_0 =
- vmulq_f32(vld1q_f32(qr_ptr + 2 * proj_size), sample01);
- float32x4_t qr_update_1 =
- vmulq_f32(vld1q_f32(qr_ptr + 4 + 2 * proj_size), sample01);
- float32x4_t qr_update = vpaddq_f32(qr_update_0, qr_update_1);
-
- float32x4_t qr_cell_0 =
- vmulq_f32(vld1q_f32(qr_ptr + 4 * proj_size), sample01);
- float32x4_t qr_cell_1 =
- vmulq_f32(vld1q_f32(qr_ptr + 4 + 4 * proj_size), sample01);
- qr_cell = vpaddq_f32(qr_cell_0, qr_cell_1);
- if (kInputsMode == ARInputsMode::k3ARInputs) {
- float32x4_t w_sample = vdupq_n_f32(*coarse_at_s);
- qr_reset = vmlaq_f32(qr_reset, vld1q_f32(w_hat), w_sample);
- qr_update =
- vmlaq_f32(qr_update, vld1q_f32(w_hat + proj_size), w_sample);
- qr_cell =
- vmlaq_f32(qr_cell, vld1q_f32(w_hat + 2 * proj_size), w_sample);
- }
- reset = vaddq_s32(
- reset, vcvtq_n_s32_f32(qr_reset, GRUMatMulOutType::kMantissaBits));
- update = vaddq_s32(
- update, vcvtq_n_s32_f32(qr_update, GRUMatMulOutType::kMantissaBits));
- }
-
- auto reset_conditioning = vld1q_s32(conditioning_ptr);
- auto update_conditioning = vld1q_s32(conditioning_ptr + proj_size);
- float32x4_t cell_conditioning =
- vcvtq_n_f32_s32(vld1q_s32(conditioning_ptr + 2 * proj_size),
- GRUMatMulOutType::kMantissaBits);
-
- float32x4_t reset_f32 = fast_sigmoid(
- vaddq_s32(reset, reset_conditioning));
- float32x4_t update_f32 = fast_sigmoid(
- vaddq_s32(update, update_conditioning));
- if (kInputsMode == ARInputsMode::k0ARInputs) {
- cell = vmulq_f32(reset_f32, cell);
- } else {
- cell = vmlaq_f32(qr_cell, reset_f32, cell);
- }
- float32x4_t hbar = fast_tanh(vaddq_f32(cell, cell_conditioning));
-
- float32x4_t prev_h = vcvtq_n_f32_s32(vmovl_s16(vld1_s16(gru_h_ptr)),
- GRUStateType::kMantissaBits);
- float32x4_t diff = vsubq_f32(prev_h, hbar);
- float32x4_t new_h = vmlaq_f32(hbar, diff, update_f32);
-
- // vcvtq_n_s32_f32 = convert fp32 to signed 32-bit fixed point
- // vqrshrn_n_s32 = saturating, rounding, narrowing right shift - used to
- // convert a 32-bit fixed point value to a 16-bit fixed point value
- vst1_s16(gru_h_ptr,
- vqrshrn_n_s32(
- vcvtq_n_s32_f32(new_h, GRUStateType::kMantissaBits + 16), 16));
- // Increment all the pointers.
- conditioning_ptr += kNeonSIMDWidth;
- gru_h_ptr += kNeonSIMDWidth;
- gru_gates_ptr += kNeonSIMDWidth;
- if (SplitGates) gru_gates_other_ptr += kNeonSIMDWidth;
- if (kInputsMode != ARInputsMode::k0ARInputs) {
- qr_ptr += 2 * kNeonSIMDWidth;
- if (kInputsMode == ARInputsMode::k3ARInputs) w_hat += kNeonSIMDWidth;
- }
- }
-}
-#endif // defined __ARM_NEON || defined __aarch64__
-
-} // namespace csrblocksparse
-
-#endif // LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_ARM_H_
diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/vector/cachealignedvector_benchmark.cc b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/vector/cachealignedvector_benchmark.cc
deleted file mode 100644
index 9141e2d570884101b286e059d3ca358b643cc376..0000000000000000000000000000000000000000
--- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/vector/cachealignedvector_benchmark.cc
+++ /dev/null
@@ -1,60 +0,0 @@
-// Copyright 2021 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-#include <random>
-
-#include "benchmark/benchmark.h"
-#include "sparse_matmul/vector/cache_aligned_vector.h"
-
-// A simple benchmark for CacheAlignedVector.
-//
-// Running on x86:
-// As written, it's not representative of x86 performance since ReducingSample
-// is used on x86 and not Sample.
-//
-// Running on arm64:
-// bazel build -c opt --dynamic_mode=off --copt=-gmlt \
-// --copt=-DUSE_FIXED32 --config=android_arm64 \
-// sparse_matmul/vector:cachealignedvector_benchmark
-namespace csrblocksparse {
-
-#ifdef USE_BFLOAT16
-using ComputeType = csrblocksparse::bfloat16;
-#elif defined USE_FIXED32
-using ComputeType = csrblocksparse::fixed32<11>; // kGruMatMulOutBits
-#else
-using ComputeType = float;
-#endif // USE_BFLOAT16
-
-#if defined(USE_FIXED32) && defined(__aarch64__)
-using ScratchType = int;
-#else
-using ScratchType = float;
-#endif // defined(USE_FIXED32) && defined(__aarch64__)
-
-void BM_Sample(benchmark::State& state) {
- constexpr int kVectorSize = 16384; // A large vector.
- std::minstd_rand generator;
-
-  CacheAlignedVector<ComputeType> values(kVectorSize);
-  CacheAlignedVector<ScratchType> scratch(kVectorSize);
- values.FillRandom();
-
- for (auto _ : state) {
- values.Sample(/*temperature=*/0.98f, &generator, &scratch);
- }
-}
-BENCHMARK(BM_Sample);
-
-} // namespace csrblocksparse
diff --git a/spaces/oliver2023/chatgpt-on-wechat/channel/channel_factory.py b/spaces/oliver2023/chatgpt-on-wechat/channel/channel_factory.py
deleted file mode 100644
index 3303ded3e694e0c493a8226ad1ecac70ea9e4a30..0000000000000000000000000000000000000000
--- a/spaces/oliver2023/chatgpt-on-wechat/channel/channel_factory.py
+++ /dev/null
@@ -1,23 +0,0 @@
-"""
-channel factory
-"""
-
-def create_channel(channel_type):
- """
- create a channel instance
- :param channel_type: channel type code
- :return: channel instance
- """
- if channel_type == 'wx':
- from channel.wechat.wechat_channel import WechatChannel
- return WechatChannel()
- elif channel_type == 'wxy':
- from channel.wechat.wechaty_channel import WechatyChannel
- return WechatyChannel()
- elif channel_type == 'terminal':
- from channel.terminal.terminal_channel import TerminalChannel
- return TerminalChannel()
- elif channel_type == 'wechatmp':
- from channel.wechatmp.wechatmp_channel import WechatMPChannel
- return WechatMPChannel()
-    raise RuntimeError(f"unknown channel_type: {channel_type}")
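-
-
-# Usage sketch (illustrative only; assumes the selected channel's dependencies,
-# e.g. the WeChat client library for 'wx', are installed and configured):
-#
-#   from channel.channel_factory import create_channel
-#
-#   channel = create_channel("wx")  # or "wxy", "terminal", "wechatmp"
-#   channel.startup()               # startup() is the assumed entry point of a Channel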
diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/waymo_open/equiv_transl.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/waymo_open/equiv_transl.py
deleted file mode 100644
index 1f1f59114b0e642d562ade9066b968f7c70ac42d..0000000000000000000000000000000000000000
--- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/waymo_open/equiv_transl.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-r"""Config for unsupervised training on Waymo Open."""
-
-import ml_collections
-
-
-def get_config():
- """Get the default hyperparameter configuration."""
- config = ml_collections.ConfigDict()
-
- config.seed = 42
- config.seed_data = True
-
- config.batch_size = 64
- config.num_train_steps = 500000 # from the original Slot Attention
- config.init_checkpoint = ml_collections.ConfigDict()
- config.init_checkpoint.xid = 0 # Disabled by default.
- config.init_checkpoint.wid = 1
-
- config.optimizer_configs = ml_collections.ConfigDict()
- config.optimizer_configs.optimizer = "adam"
-
- config.optimizer_configs.grad_clip = ml_collections.ConfigDict()
- config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm"
- config.optimizer_configs.grad_clip.clip_value = 0.05
-
- config.lr_configs = ml_collections.ConfigDict()
- config.lr_configs.learning_rate_schedule = "compound"
- config.lr_configs.factors = "constant * cosine_decay * linear_warmup"
- config.lr_configs.warmup_steps = 10000 # from the original Slot Attention
- config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps")
- # from the original Slot Attention
- config.lr_configs.base_learning_rate = 4e-4
-
- config.eval_pad_last_batch = False # True
- config.log_loss_every_steps = 50
- config.eval_every_steps = 5000
- config.checkpoint_every_steps = 5000
-
- config.train_metrics_spec = {
- "loss": "loss",
- "ari": "ari",
- "ari_nobg": "ari_nobg",
- }
- config.eval_metrics_spec = {
- "eval_loss": "loss",
- "eval_ari": "ari",
- "eval_ari_nobg": "ari_nobg",
- }
-
- config.data = ml_collections.ConfigDict({
- "dataset_name": "waymo_open",
- "shuffle_buffer_size": config.batch_size * 8,
- "resolution": (128, 192)
- })
-
- config.max_instances = 11
- config.num_slots = config.max_instances # Only used for metrics.
- config.logging_min_n_colors = config.max_instances
-
- config.preproc_train = [
- "tfds_image_to_tfds_video",
- "video_from_tfds",
- ]
-
- config.preproc_eval = [
- "tfds_image_to_tfds_video",
- "video_from_tfds",
- "delete_small_masks(threshold=0.01, max_instances_after=11)",
- ]
-
- config.eval_slice_size = 1
- config.eval_slice_keys = ["video", "segmentations_video"]
-
- # Dictionary of targets and corresponding channels. Losses need to match.
- targets = {"video": 3}
- config.losses = {"recon": {"targets": list(targets)}}
- config.losses = ml_collections.ConfigDict({
- f"recon_{target}": {"loss_type": "recon", "key": target}
- for target in targets})
-
- config.model = ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SAVi",
-
- # Encoder.
- "encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.FrameEncoder",
- "reduction": "spatial_flatten",
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.ResNet34",
- "num_classes": None,
- "axis_name": "time",
- "norm_type": "group",
- "small_inputs": True
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "concat"
- }),
- }),
-
- # Corrector.
- "corrector": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SlotAttentionTranslEquiv",
- "num_iterations": 3,
- "qkv_size": 64,
- "mlp_size": 128,
- "grid_encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.MLP",
- "hidden_size": 128,
- "layernorm": "pre"
- }),
- "add_rel_pos_to_values": True, # V3
- "zero_position_init": False, # Random positions.
- }),
-
- # Predictor.
- # Removed since we are running a single frame.
- "predictor": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Identity"
- }),
-
- # Initializer.
- "initializer": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.ParamStateInitRandomPositions",
- "shape":
- (11, 64), # (num_slots, slot_size)
- }),
-
- # Decoder.
- "decoder": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder",
- "resolution": (16, 24), # Update if data resolution or strides change
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.CNN",
- "features": [64, 64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1), (1, 1)],
- "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)],
- "layer_transpose": [True, True, True, False, False]
- }),
- "target_readout": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Readout",
- "keys": list(targets),
- "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension
- "module": "invariant_slot_attention.modules.MLP",
- "num_hidden_layers": 0,
- "hidden_size": 0,
- "output_size": targets[k]}) for k in targets],
- }),
- "relative_positions": True,
- "pos_emb": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.RelativePositionEmbedding",
- "embedding_type":
- "linear",
- "update_type":
- "project_add",
- }),
- }),
- "decode_corrected": True,
- "decode_predicted": False,
- })
-
- # Which video-shaped variables to visualize.
- config.debug_var_video_paths = {
- "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long
- }
-
- # Define which attention matrices to log/visualize.
- config.debug_var_attn_paths = {
- "corrector_attn": "corrector/InvertedDotProductAttentionKeyPerQuery_0/attn" # pylint: disable=line-too-long
- }
-
- # Widths of attention matrices (for reshaping to image grid).
- config.debug_var_attn_widths = {
- "corrector_attn": 16,
- }
-
- return config
-
-
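-
-# Usage sketch (illustrative; in the invariant_slot_attention codebase this
-# config is normally passed to the training entry point rather than being run
-# standalone):
-#
-#   config = get_config()
-#   assert config.num_slots == 11
-#   assert config.model.decoder.resolution == (16, 24)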
diff --git a/spaces/openai/openai-detector/README.md b/spaces/openai/openai-detector/README.md
deleted file mode 100644
index e38eb571cb8bb129734a6f6bb7a13bc907a3ed1c..0000000000000000000000000000000000000000
--- a/spaces/openai/openai-detector/README.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: openai-detector
-emoji: 👁️
-colorFrom: blue
-colorTo: red
-sdk: docker
----
-# gpt-2-output-dataset
-
-This dataset contains:
-- 250K documents from the WebText test set
-- For each GPT-2 model (trained on the WebText training set), 250K random samples (temperature 1, no truncation) and 250K samples generated with Top-K 40 truncation
-
-We look forward to the research produced using this data!
-
-### Download
-
-For each model, we have a training split of 250K generated examples, as well as validation and test splits of 5K examples.
-
-All data is located in Google Cloud Storage, under the directory `gs://gpt-2/output-dataset/v1`.
-
-There, you will find files:
-
-- `webtext.${split}.jsonl`
-- `small-117M.${split}.jsonl`
-- `small-117M-k40.${split}.jsonl`
-- `medium-345M.${split}.jsonl`
-- `medium-345M-k40.${split}.jsonl`
-- `large-762M.${split}.jsonl`
-- `large-762M-k40.${split}.jsonl`
-- `xl-1542M.${split}.jsonl`
-- `xl-1542M-k40.${split}.jsonl`
-
-where split is one of `train`, `test`, and `valid`.
-
-We've provided a script to download all of them, in `download_dataset.py`.
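-
-If you only need a single file, a minimal sketch along the following lines should also work (assuming the `gs://gpt-2` bucket is publicly readable over HTTPS via `storage.googleapis.com`; fall back to `download_dataset.py` if that assumption does not hold):
-
-```python
-import requests
-
-base_url = "https://storage.googleapis.com/gpt-2/output-dataset/v1/"
-filename = "webtext.test.jsonl"  # any of the files listed above
-
-with requests.get(base_url + filename, stream=True) as r:
-    r.raise_for_status()
-    with open(filename, "wb") as f:
-        for chunk in r.iter_content(chunk_size=1 << 20):
-            f.write(chunk)
-```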
-
-#### Finetuned model samples
-
-Additionally, we encourage research on detection of finetuned models. We have released data under `gs://gpt-2/output-dataset/v1-amazonfinetune/` with samples from a GPT-2 full model finetuned to output Amazon reviews.
-
-### Detectability baselines
-
-We're interested in seeing research in detectability of GPT-2 model family generations.
-
-We provide some [initial analysis](detection.md) of two baselines, as well as [code](./baseline.py) for the better baseline.
-
-Overall, we are able to achieve accuracies in the mid-90s for Top-K 40 generations, and mid-70s to high-80s (depending on model size) for random generations. We also find some evidence that adversaries can evade detection via finetuning from released models.
-
-### Data removal requests
-
-If you believe your work is included in WebText and would like us to remove it, please let us know at webtextdata@openai.com.
diff --git a/spaces/optimum/optimum-benchmark-ui/README.md b/spaces/optimum/optimum-benchmark-ui/README.md
deleted file mode 100644
index 5ce304baeebc4e1d50dc3a5d29d3af11c30c58cc..0000000000000000000000000000000000000000
--- a/spaces/optimum/optimum-benchmark-ui/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Optimum Benchmark UI
-emoji: 🏋️
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.41.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
deleted file mode 100644
index 9d70ab4f88e61264673287c85bf7a25c66b52507..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/stable_diffusion/ldm3d_diffusion.md
+++ /dev/null
@@ -1,37 +0,0 @@
-
-
-# Text-to-(RGB, depth)
-
-LDM3D was proposed in [LDM3D: Latent Diffusion Model for 3D](https://huggingface.co/papers/2305.10853) by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. Unlike existing text-to-image diffusion models such as [Stable Diffusion](./stable_diffusion/overview), which only generate an image, LDM3D generates both an image and a depth map from a given text prompt. With almost the same number of parameters, LDM3D manages to create a latent space that can compress both the RGB images and the depth maps.
-
-The abstract from the paper is:
-
-*This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at [this url](https://t.ly/tdi2).*
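-
-A minimal usage sketch (assuming a CUDA device and the `Intel/ldm3d-4c` checkpoint, one of the published LDM3D checkpoints; see the pipeline reference below for the full argument list):
-
-```py
-import torch
-from diffusers import StableDiffusionLDM3DPipeline
-
-pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c", torch_dtype=torch.float16)
-pipe = pipe.to("cuda")
-
-output = pipe("A vintage camera on a wooden table")
-output.rgb[0].save("camera_rgb.png")      # generated RGB image
-output.depth[0].save("camera_depth.png")  # matching depth map
-```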
-
-
-
-Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
-
-
-
-## StableDiffusionLDM3DPipeline
-
-[[autodoc]] StableDiffusionLDM3DPipeline
- - all
- - __call__
-
-## LDM3DPipelineOutput
-
-[[autodoc]] pipelines.stable_diffusion.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput
- - all
- - __call__
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/contribute_pipeline.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/contribute_pipeline.md
deleted file mode 100644
index 415d3da1a10d4ed5bd2ad261287c5d761c865a15..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/contribute_pipeline.md
+++ /dev/null
@@ -1,182 +0,0 @@
-
-
-# How to contribute a community pipeline
-
-
-
-💡 Take a look at GitHub issue [#841](https://github.com/huggingface/diffusers/issues/841) for more details about why we're adding community pipelines so everyone can easily share their work without being slowed down.
-
-
-
-Community pipelines let you add any additional functionality you want on top of [`DiffusionPipeline`]. The main benefit of building on top of `DiffusionPipeline` is that anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access.
-
-This guide shows you how to create a community pipeline and explains how it works.
-To keep things simple, you'll build a "one-step" pipeline where the `UNet` does a single forward pass and the scheduler is called once.
-
-## Initialize the pipeline
-
-Start by creating a `one_step_unet.py` file for your community pipeline. In this file, create a pipeline class that inherits from [`DiffusionPipeline`] so the model weights and scheduler configuration can be loaded from the Hub. The one-step pipeline needs a `UNet` and a scheduler, so you'll need to add them as arguments to the `__init__` function:
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-
-class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
-```
-
-To ensure the pipeline and its components (`unet` and `scheduler`) can be saved with [`~DiffusionPipeline.save_pretrained`], add them to the `register_modules` function:
-
-```diff
- from diffusers import DiffusionPipeline
- import torch
-
- class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
-
-+ self.register_modules(unet=unet, scheduler=scheduler)
-```
-
-Now that the initialization step is done, you can move on to the forward pass! 🔥
-
-## Define the forward pass
-
-In the forward pass (which we recommend defining as `__call__`), you have complete creative freedom to add whatever functionality you want. For our amazing one-step pipeline, create a random image and only call the `unet` and `scheduler` once by setting `timestep=1`:
-
-```diff
- from diffusers import DiffusionPipeline
- import torch
-
-
- class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
- def __init__(self, unet, scheduler):
- super().__init__()
-
- self.register_modules(unet=unet, scheduler=scheduler)
-
-+ def __call__(self):
-+ image = torch.randn(
-+ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
-+ )
-+ timestep = 1
-
-+ model_output = self.unet(image, timestep).sample
-+ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
-
-+ return scheduler_output
-```
-
-That's it! 🚀 You can now run this pipeline by passing a `unet` and a `scheduler` to it:
-
-```python
-from diffusers import DDPMScheduler, UNet2DModel
-
-scheduler = DDPMScheduler()
-unet = UNet2DModel()
-
-pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler)
-
-output = pipeline()
-```
-
-Even better, if the pipeline structure is identical, you can load existing weights into the pipeline. For example, you can load the [`google/ddpm-cifar10-32`](https://huggingface.co/google/ddpm-cifar10-32) weights into the one-step pipeline:
-
-```python
-pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32")
-
-output = pipeline()
-```
-
-## Share your pipeline
-
-Open a Pull Request on the 🧨 Diffusers [repository](https://github.com/huggingface/diffusers) to add your awesome pipeline in `one_step_unet.py` to the [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) subfolder.
-
-Once it is merged, anyone with `diffusers >= 0.4.0` installed can use this pipeline magically 🪄 by specifying it in the `custom_pipeline` argument:
-
-```python
-from diffusers import DiffusionPipeline
-
-pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
-pipe()
-```
-
-Another way to share your community pipeline is to upload the `one_step_unet.py` file directly to your preferred [model repository](https://huggingface.co/docs/hub/models-uploading) on the Hub. Instead of specifying the `one_step_unet.py` file, pass the model repository id to the `custom_pipeline` argument:
-
-```python
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet")
-```
-
-Take a look at the following table to compare the two sharing workflows and decide which option works best for you:
-
-|                | GitHub community pipeline                                                                                             | HF Hub community pipeline                                                               |
-|----------------|------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
-| Usage          | Same                                                                                                                    | Same                                                                                        |
-| Review process | Open a Pull Request on GitHub and go through the Diffusers team's review process before merging; this may be slower.   | Upload directly to a Hub repository without any review; this is the fastest workflow.      |
-| Visibility     | Included in the official Diffusers repository and documentation.                                                        | Included on your HF Hub profile and relies on your own usage/promotion to gain visibility. |
-
-
-
-💡 You can use whatever package you want in your community pipeline file; as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from `DiffusionPipeline` because it is detected automatically.
-
-
-
-## How do community pipelines work?
-
-A community pipeline is a class that inherits from [`DiffusionPipeline`]:
-
-- It can be loaded with the [`custom_pipeline`] argument.
-- The model weights and scheduler configuration are loaded from [`pretrained_model_name_or_path`].
-- The code that implements the functionality of the community pipeline is defined in a `pipeline.py` file.
-
-Sometimes you can't load all the pipeline component weights from an official repository. In this case, the other components should be passed directly to the pipeline:
-
-```python
-from diffusers import DiffusionPipeline
-from transformers import CLIPFeatureExtractor, CLIPModel
-
-model_id = "CompVis/stable-diffusion-v1-4"
-clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
-
-feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id)
-clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16)
-
-pipeline = DiffusionPipeline.from_pretrained(
- model_id,
- custom_pipeline="clip_guided_stable_diffusion",
- clip_model=clip_model,
- feature_extractor=feature_extractor,
- scheduler=scheduler,
- torch_dtype=torch.float16,
-)
-```
-
-The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it is available to all 🧨 Diffusers packages.
-
-```python
-# 2. Load the pipeline class; if a custom module is used, load it from the Hub
-# if we load from an explicit class, let's use it
-if custom_pipeline is not None:
- pipeline_class = get_class_from_dynamic_module(
- custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline
- )
-elif cls != DiffusionPipeline:
- pipeline_class = cls
-else:
- diffusers_module = importlib.import_module(cls.__module__.split(".")[0])
- pipeline_class = getattr(diffusers_module, config_dict["_class_name"])
-```
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/lpw_stable_diffusion_xl.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/lpw_stable_diffusion_xl.py
deleted file mode 100644
index b95ec049c22d4222ff73edec354b3747c30c2de0..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/lpw_stable_diffusion_xl.py
+++ /dev/null
@@ -1,1287 +0,0 @@
-## ----------------------------------------------------------
-# A SDXL pipeline can take unlimited weighted prompt
-#
-# Author: Andrew Zhu
-# Github: https://github.com/xhinker
-# Medium: https://medium.com/@xhinker
-## -----------------------------------------------------------
-
-import inspect
-import os
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import torch
-from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
-
-from diffusers import DiffusionPipeline, StableDiffusionXLPipeline
-from diffusers.image_processor import VaeImageProcessor
-from diffusers.loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.models.attention_processor import (
- AttnProcessor2_0,
- LoRAAttnProcessor2_0,
- LoRAXFormersAttnProcessor,
- XFormersAttnProcessor,
-)
-from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput
-from diffusers.schedulers import KarrasDiffusionSchedulers
-from diffusers.utils import (
- is_accelerate_available,
- is_accelerate_version,
- is_invisible_watermark_available,
- logging,
- replace_example_docstring,
-)
-from diffusers.utils.torch_utils import randn_tensor
-
-
-if is_invisible_watermark_available():
- from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker
-
-
-def parse_prompt_attention(text):
- """
- Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
- Accepted tokens are:
- (abc) - increases attention to abc by a multiplier of 1.1
- (abc:3.12) - increases attention to abc by a multiplier of 3.12
- [abc] - decreases attention to abc by a multiplier of 1.1
- \( - literal character '('
- \[ - literal character '['
- \) - literal character ')'
- \] - literal character ']'
- \\ - literal character '\'
- anything else - just text
-
- >>> parse_prompt_attention('normal text')
- [['normal text', 1.0]]
- >>> parse_prompt_attention('an (important) word')
- [['an ', 1.0], ['important', 1.1], [' word', 1.0]]
- >>> parse_prompt_attention('(unbalanced')
- [['unbalanced', 1.1]]
- >>> parse_prompt_attention('\(literal\]')
- [['(literal]', 1.0]]
- >>> parse_prompt_attention('(unnecessary)(parens)')
- [['unnecessaryparens', 1.1]]
- >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
- [['a ', 1.0],
- ['house', 1.5730000000000004],
- [' ', 1.1],
- ['on', 1.0],
- [' a ', 1.1],
- ['hill', 0.55],
- [', sun, ', 1.1],
- ['sky', 1.4641000000000006],
- ['.', 1.1]]
- """
- import re
-
- re_attention = re.compile(
- r"""
- \\\(|\\\)|\\\[|\\]|\\\\|\\|\(|\[|:([+-]?[.\d]+)\)|
- \)|]|[^\\()\[\]:]+|:
- """,
- re.X,
- )
-
- re_break = re.compile(r"\s*\bBREAK\b\s*", re.S)
-
- res = []
- round_brackets = []
- square_brackets = []
-
- round_bracket_multiplier = 1.1
- square_bracket_multiplier = 1 / 1.1
-
- def multiply_range(start_position, multiplier):
- for p in range(start_position, len(res)):
- res[p][1] *= multiplier
-
- for m in re_attention.finditer(text):
- text = m.group(0)
- weight = m.group(1)
-
- if text.startswith("\\"):
- res.append([text[1:], 1.0])
- elif text == "(":
- round_brackets.append(len(res))
- elif text == "[":
- square_brackets.append(len(res))
- elif weight is not None and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), float(weight))
- elif text == ")" and len(round_brackets) > 0:
- multiply_range(round_brackets.pop(), round_bracket_multiplier)
- elif text == "]" and len(square_brackets) > 0:
- multiply_range(square_brackets.pop(), square_bracket_multiplier)
- else:
- parts = re.split(re_break, text)
- for i, part in enumerate(parts):
- if i > 0:
- res.append(["BREAK", -1])
- res.append([part, 1.0])
-
- for pos in round_brackets:
- multiply_range(pos, round_bracket_multiplier)
-
- for pos in square_brackets:
- multiply_range(pos, square_bracket_multiplier)
-
- if len(res) == 0:
- res = [["", 1.0]]
-
- # merge runs of identical weights
- i = 0
- while i + 1 < len(res):
- if res[i][1] == res[i + 1][1]:
- res[i][0] += res[i + 1][0]
- res.pop(i + 1)
- else:
- i += 1
-
- return res
-
-
-def get_prompts_tokens_with_weights(clip_tokenizer: CLIPTokenizer, prompt: str):
- """
- Get prompt token ids and weights, this function works for both prompt and negative prompt
-
- Args:
- pipe (CLIPTokenizer)
- A CLIPTokenizer
- prompt (str)
- A prompt string with weights
-
- Returns:
- text_tokens (list)
- A list contains token ids
- text_weight (list)
- A list contains the correspodent weight of token ids
-
- Example:
- import torch
- from transformers import CLIPTokenizer
-
- clip_tokenizer = CLIPTokenizer.from_pretrained(
- "stablediffusionapi/deliberate-v2"
- , subfolder = "tokenizer"
- , dtype = torch.float16
- )
-
- token_id_list, token_weight_list = get_prompts_tokens_with_weights(
- clip_tokenizer = clip_tokenizer
- ,prompt = "a (red:1.5) cat"*70
- )
- """
- texts_and_weights = parse_prompt_attention(prompt)
- text_tokens, text_weights = [], []
- for word, weight in texts_and_weights:
- # tokenize and discard the starting and the ending token
-        token = clip_tokenizer(word, truncation=False).input_ids[1:-1]  # no truncation, so prompts of any length can be tokenized
- # the returned token is a 1d list: [320, 1125, 539, 320]
-
- # merge the new tokens to the all tokens holder: text_tokens
- text_tokens = [*text_tokens, *token]
-
- # each token chunk will come with one weight, like ['red cat', 2.0]
- # need to expand weight for each token.
- chunk_weights = [weight] * len(token)
-
- # append the weight back to the weight holder: text_weights
- text_weights = [*text_weights, *chunk_weights]
- return text_tokens, text_weights
-
-
-def group_tokens_and_weights(token_ids: list, weights: list, pad_last_block=False):
- """
- Produce tokens and weights in groups and pad the missing tokens
-
- Args:
- token_ids (list)
- The token ids from tokenizer
- weights (list)
- The weights list from function get_prompts_tokens_with_weights
- pad_last_block (bool)
- Control if fill the last token list to 75 tokens with eos
- Returns:
- new_token_ids (2d list)
- new_weights (2d list)
-
- Example:
- token_groups,weight_groups = group_tokens_and_weights(
- token_ids = token_id_list
- , weights = token_weight_list
- )
- """
- bos, eos = 49406, 49407
-
- # this will be a 2d list
- new_token_ids = []
- new_weights = []
- while len(token_ids) >= 75:
- # get the first 75 tokens
- head_75_tokens = [token_ids.pop(0) for _ in range(75)]
- head_75_weights = [weights.pop(0) for _ in range(75)]
-
- # extract token ids and weights
- temp_77_token_ids = [bos] + head_75_tokens + [eos]
- temp_77_weights = [1.0] + head_75_weights + [1.0]
-
- # add 77 token and weights chunk to the holder list
- new_token_ids.append(temp_77_token_ids)
- new_weights.append(temp_77_weights)
-
- # padding the left
- if len(token_ids) > 0:
- padding_len = 75 - len(token_ids) if pad_last_block else 0
-
- temp_77_token_ids = [bos] + token_ids + [eos] * padding_len + [eos]
- new_token_ids.append(temp_77_token_ids)
-
- temp_77_weights = [1.0] + weights + [1.0] * padding_len + [1.0]
- new_weights.append(temp_77_weights)
-
- return new_token_ids, new_weights
-
-
-def get_weighted_text_embeddings_sdxl(
- pipe: StableDiffusionXLPipeline,
- prompt: str = "",
- prompt_2: str = None,
- neg_prompt: str = "",
- neg_prompt_2: str = None,
-):
- """
- This function can process long prompt with weights, no length limitation
- for Stable Diffusion XL
-
- Args:
- pipe (StableDiffusionPipeline)
- prompt (str)
- prompt_2 (str)
- neg_prompt (str)
- neg_prompt_2 (str)
- Returns:
- prompt_embeds (torch.Tensor)
- neg_prompt_embeds (torch.Tensor)
- """
- if prompt_2:
- prompt = f"{prompt} {prompt_2}"
-
- if neg_prompt_2:
- neg_prompt = f"{neg_prompt} {neg_prompt_2}"
-
- eos = pipe.tokenizer.eos_token_id
-
- # tokenizer 1
- prompt_tokens, prompt_weights = get_prompts_tokens_with_weights(pipe.tokenizer, prompt)
-
- neg_prompt_tokens, neg_prompt_weights = get_prompts_tokens_with_weights(pipe.tokenizer, neg_prompt)
-
- # tokenizer 2
- prompt_tokens_2, prompt_weights_2 = get_prompts_tokens_with_weights(pipe.tokenizer_2, prompt)
-
- neg_prompt_tokens_2, neg_prompt_weights_2 = get_prompts_tokens_with_weights(pipe.tokenizer_2, neg_prompt)
-
- # padding the shorter one for prompt set 1
- prompt_token_len = len(prompt_tokens)
- neg_prompt_token_len = len(neg_prompt_tokens)
-
- if prompt_token_len > neg_prompt_token_len:
- # padding the neg_prompt with eos token
- neg_prompt_tokens = neg_prompt_tokens + [eos] * abs(prompt_token_len - neg_prompt_token_len)
- neg_prompt_weights = neg_prompt_weights + [1.0] * abs(prompt_token_len - neg_prompt_token_len)
- else:
- # padding the prompt
- prompt_tokens = prompt_tokens + [eos] * abs(prompt_token_len - neg_prompt_token_len)
- prompt_weights = prompt_weights + [1.0] * abs(prompt_token_len - neg_prompt_token_len)
-
- # padding the shorter one for token set 2
- prompt_token_len_2 = len(prompt_tokens_2)
- neg_prompt_token_len_2 = len(neg_prompt_tokens_2)
-
- if prompt_token_len_2 > neg_prompt_token_len_2:
- # padding the neg_prompt with eos token
- neg_prompt_tokens_2 = neg_prompt_tokens_2 + [eos] * abs(prompt_token_len_2 - neg_prompt_token_len_2)
- neg_prompt_weights_2 = neg_prompt_weights_2 + [1.0] * abs(prompt_token_len_2 - neg_prompt_token_len_2)
- else:
- # padding the prompt
- prompt_tokens_2 = prompt_tokens_2 + [eos] * abs(prompt_token_len_2 - neg_prompt_token_len_2)
-        prompt_weights_2 = prompt_weights_2 + [1.0] * abs(prompt_token_len_2 - neg_prompt_token_len_2)
-
- embeds = []
- neg_embeds = []
-
- prompt_token_groups, prompt_weight_groups = group_tokens_and_weights(prompt_tokens.copy(), prompt_weights.copy())
-
- neg_prompt_token_groups, neg_prompt_weight_groups = group_tokens_and_weights(
- neg_prompt_tokens.copy(), neg_prompt_weights.copy()
- )
-
- prompt_token_groups_2, prompt_weight_groups_2 = group_tokens_and_weights(
- prompt_tokens_2.copy(), prompt_weights_2.copy()
- )
-
- neg_prompt_token_groups_2, neg_prompt_weight_groups_2 = group_tokens_and_weights(
- neg_prompt_tokens_2.copy(), neg_prompt_weights_2.copy()
- )
-
-    # Build the embeddings per 77-token group; getting prompt embeddings one token at a time does not work.
- for i in range(len(prompt_token_groups)):
- # get positive prompt embeddings with weights
- token_tensor = torch.tensor([prompt_token_groups[i]], dtype=torch.long, device=pipe.device)
- weight_tensor = torch.tensor(prompt_weight_groups[i], dtype=torch.float16, device=pipe.device)
-
- token_tensor_2 = torch.tensor([prompt_token_groups_2[i]], dtype=torch.long, device=pipe.device)
-
- # use first text encoder
- prompt_embeds_1 = pipe.text_encoder(token_tensor.to(pipe.device), output_hidden_states=True)
- prompt_embeds_1_hidden_states = prompt_embeds_1.hidden_states[-2]
-
- # use second text encoder
- prompt_embeds_2 = pipe.text_encoder_2(token_tensor_2.to(pipe.device), output_hidden_states=True)
- prompt_embeds_2_hidden_states = prompt_embeds_2.hidden_states[-2]
- pooled_prompt_embeds = prompt_embeds_2[0]
-
- prompt_embeds_list = [prompt_embeds_1_hidden_states, prompt_embeds_2_hidden_states]
- token_embedding = torch.concat(prompt_embeds_list, dim=-1).squeeze(0)
-
- for j in range(len(weight_tensor)):
- if weight_tensor[j] != 1.0:
- token_embedding[j] = (
- token_embedding[-1] + (token_embedding[j] - token_embedding[-1]) * weight_tensor[j]
- )
-
- token_embedding = token_embedding.unsqueeze(0)
- embeds.append(token_embedding)
-
- # get negative prompt embeddings with weights
- neg_token_tensor = torch.tensor([neg_prompt_token_groups[i]], dtype=torch.long, device=pipe.device)
- neg_token_tensor_2 = torch.tensor([neg_prompt_token_groups_2[i]], dtype=torch.long, device=pipe.device)
- neg_weight_tensor = torch.tensor(neg_prompt_weight_groups[i], dtype=torch.float16, device=pipe.device)
-
- # use first text encoder
- neg_prompt_embeds_1 = pipe.text_encoder(neg_token_tensor.to(pipe.device), output_hidden_states=True)
- neg_prompt_embeds_1_hidden_states = neg_prompt_embeds_1.hidden_states[-2]
-
- # use second text encoder
- neg_prompt_embeds_2 = pipe.text_encoder_2(neg_token_tensor_2.to(pipe.device), output_hidden_states=True)
- neg_prompt_embeds_2_hidden_states = neg_prompt_embeds_2.hidden_states[-2]
- negative_pooled_prompt_embeds = neg_prompt_embeds_2[0]
-
- neg_prompt_embeds_list = [neg_prompt_embeds_1_hidden_states, neg_prompt_embeds_2_hidden_states]
- neg_token_embedding = torch.concat(neg_prompt_embeds_list, dim=-1).squeeze(0)
-
- for z in range(len(neg_weight_tensor)):
- if neg_weight_tensor[z] != 1.0:
- neg_token_embedding[z] = (
- neg_token_embedding[-1] + (neg_token_embedding[z] - neg_token_embedding[-1]) * neg_weight_tensor[z]
- )
-
- neg_token_embedding = neg_token_embedding.unsqueeze(0)
- neg_embeds.append(neg_token_embedding)
-
- prompt_embeds = torch.cat(embeds, dim=1)
- negative_prompt_embeds = torch.cat(neg_embeds, dim=1)
-
- return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
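-
-# Usage sketch (illustrative; assumes `pipe` is a loaded SDXL pipeline whose
-# __call__ accepts precomputed embeddings, as the pipeline below does):
-#
-#   (prompt_embeds, negative_prompt_embeds,
-#    pooled_prompt_embeds, negative_pooled_prompt_embeds) = get_weighted_text_embeddings_sdxl(
-#       pipe, prompt="a (white:1.3) cat " * 40, neg_prompt="blur, low quality")
-#   image = pipe(
-#       prompt_embeds=prompt_embeds,
-#       negative_prompt_embeds=negative_prompt_embeds,
-#       pooled_prompt_embeds=pooled_prompt_embeds,
-#       negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
-#   ).images[0]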
-
-
-# -------------------------------------------------------------------------------------------------------------------------------
-# reuse the backbone code from StableDiffusionXLPipeline
-# -------------------------------------------------------------------------------------------------------------------------------
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- from diffusers import DiffusionPipeline
- import torch
-
- pipe = DiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-xl-base-1.0"
- , torch_dtype = torch.float16
- , use_safetensors = True
- , variant = "fp16"
- , custom_pipeline = "lpw_stable_diffusion_xl",
- )
-
- prompt = "a white cat running on the grass"*20
- prompt2 = "play a football"*20
- prompt = f"{prompt},{prompt2}"
- neg_prompt = "blur, low quality"
-
- pipe.to("cuda")
- images = pipe(
- prompt = prompt
- , negative_prompt = neg_prompt
- ).images[0]
-
- pipe.to("cpu")
- torch.cuda.empty_cache()
- images
- ```
-"""
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
-def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
- """
- Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
- Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
- """
- std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
- std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
- # rescale the results from guidance (fixes overexposure)
- noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
- # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
- noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
- return noise_cfg
-
-
-class SDXLLongPromptWeightingPipeline(DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion XL.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- In addition the pipeline inherits the following loading methods:
- - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`]
- - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`]
-
- as well as the following saving methods:
- - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`]
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion XL uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- text_encoder_2 ([` CLIPTextModelWithProjection`]):
- Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
- specifically the
- [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
- variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- tokenizer_2 (`CLIPTokenizer`):
- Second Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- text_encoder_2: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- tokenizer_2: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- force_zeros_for_empty_prompt: bool = True,
- add_watermarker: Optional[bool] = None,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- text_encoder_2=text_encoder_2,
- tokenizer=tokenizer,
- tokenizer_2=tokenizer_2,
- unet=unet,
- scheduler=scheduler,
- )
- self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
- self.default_sample_size = self.unet.config.sample_size
-
- add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available()
-
- if add_watermarker:
- self.watermark = StableDiffusionXLWatermarker()
- else:
- self.watermark = None
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
- compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
- processing larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- model_sequence = (
- [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
- )
- model_sequence.extend([self.unet, self.vae])
-
- hook = None
- for cpu_offloaded_model in model_sequence:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- def encode_prompt(
- self,
- prompt: str,
- prompt_2: Optional[str] = None,
- device: Optional[torch.device] = None,
- num_images_per_prompt: int = 1,
- do_classifier_free_guidance: bool = True,
- negative_prompt: Optional[str] = None,
- negative_prompt_2: Optional[str] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- device = device or self._execution_device
-
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- # Define tokenizers and text encoders
- tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
- text_encoders = (
- [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
- )
-
- if prompt_embeds is None:
- prompt_2 = prompt_2 or prompt
- # textual inversion: procecss multi-vector tokens if necessary
- prompt_embeds_list = []
- prompts = [prompt, prompt_2]
- for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, tokenizer)
-
- text_inputs = tokenizer(
- prompt,
- padding="max_length",
- max_length=tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- text_input_ids = text_inputs.input_ids
- untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = text_encoder(
- text_input_ids.to(device),
- output_hidden_states=True,
- )
-
- # We always keep only the pooled output of the final text encoder
- pooled_prompt_embeds = prompt_embeds[0]
- prompt_embeds = prompt_embeds.hidden_states[-2]
-
- prompt_embeds_list.append(prompt_embeds)
-
- prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
-
- # get unconditional embeddings for classifier free guidance
- zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
- if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
- negative_prompt_embeds = torch.zeros_like(prompt_embeds)
- negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
- elif do_classifier_free_guidance and negative_prompt_embeds is None:
- negative_prompt = negative_prompt or ""
- negative_prompt_2 = negative_prompt_2 or negative_prompt
-
- uncond_tokens: List[str]
- if prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt, negative_prompt_2]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = [negative_prompt, negative_prompt_2]
-
- negative_prompt_embeds_list = []
- for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
- if isinstance(self, TextualInversionLoaderMixin):
- negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = tokenizer(
- negative_prompt,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- negative_prompt_embeds = text_encoder(
- uncond_input.input_ids.to(device),
- output_hidden_states=True,
- )
- # We always keep only the pooled output of the final text encoder
- negative_pooled_prompt_embeds = negative_prompt_embeds[0]
- negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
-
- negative_prompt_embeds_list.append(negative_prompt_embeds)
-
- negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
- bs_embed * num_images_per_prompt, -1
- )
- if do_classifier_free_guidance:
- negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
- bs_embed * num_images_per_prompt, -1
- )
-
- return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
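- # Note (illustrative, assuming the standard SDXL base text encoders): the per-encoder hidden
- # states (dims 768 and 1280) are concatenated along the last axis, so `prompt_embeds` has shape
- # (batch_size * num_images_per_prompt, 77, 2048) and `pooled_prompt_embeds` has shape
- # (batch_size * num_images_per_prompt, 1280); the negative variants mirror these shapes.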
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
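- # Illustrative example (scheduler names from diffusers, shown only as an assumption): with
- # DDIMScheduler the returned dict is typically {"eta": 0.0, "generator": generator}, while a
- # scheduler whose `step()` takes no `eta` (e.g. EulerDiscreteScheduler) gets only
- # {"generator": generator}.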
-
- def check_inputs(
- self,
- prompt,
- prompt_2,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- negative_prompt_2=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- pooled_prompt_embeds=None,
- negative_pooled_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt_2 is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
- elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
- raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
- elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- if prompt_embeds is not None and pooled_prompt_embeds is None:
- raise ValueError(
- "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
- )
-
- if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
- raise ValueError(
- "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype):
- add_time_ids = list(original_size + crops_coords_top_left + target_size)
-
- passed_add_embed_dim = (
- self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim
- )
- expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
-
- if expected_add_embed_dim != passed_add_embed_dim:
- raise ValueError(
- f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
- )
-
- add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
- return add_time_ids
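- # Worked example (illustrative, assuming the embedding-dimension check passes):
- #   _get_add_time_ids((1024, 1024), (0, 0), (1024, 1024), torch.float32)
- #   -> tensor([[1024., 1024., 0., 0., 1024., 1024.]]), i.e. the six micro-conditioning values
- #   that are later embedded alongside the time embedding.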
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
- def upcast_vae(self):
- dtype = self.vae.dtype
- self.vae.to(dtype=torch.float32)
- use_torch_2_0_or_xformers = isinstance(
- self.vae.decoder.mid_block.attentions[0].processor,
- (
- AttnProcessor2_0,
- XFormersAttnProcessor,
- LoRAXFormersAttnProcessor,
- LoRAAttnProcessor2_0,
- ),
- )
- # if xformers or torch_2_0 is used attention block does not need
- # to be in float32 which can save lots of memory
- if use_torch_2_0_or_xformers:
- self.vae.post_quant_conv.to(dtype)
- self.vae.decoder.conv_in.to(dtype)
- self.vae.decoder.mid_block.to(dtype)
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: str = None,
- prompt_2: Optional[str] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- denoising_end: Optional[float] = None,
- guidance_scale: float = 5.0,
- negative_prompt: Optional[str] = None,
- negative_prompt_2: Optional[str] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- guidance_rescale: float = 0.0,
- original_size: Optional[Tuple[int, int]] = None,
- crops_coords_top_left: Tuple[int, int] = (0, 0),
- target_size: Optional[Tuple[int, int]] = None,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str`):
- The prompt to guide the image generation. If not defined, one has to pass `prompt_embeds`
- instead.
- prompt_2 (`str`):
- The prompt to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- denoising_end (`float`, *optional*):
- When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
- completed before it is intentionally prematurely terminated. As a result, the returned sample will
- still retain a substantial amount of noise as determined by the discrete timesteps selected by the
- scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
- "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
- Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
- guidance_scale (`float`, *optional*, defaults to 5.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str`):
- The prompt not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str`):
- The prompt not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
- of a plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- guidance_rescale (`float`, *optional*, defaults to 0.0):
- Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
- Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16 of
- [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
- Guidance rescale factor should fix overexposure when using zero terminal SNR.
- original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
- If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
- `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
- explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
- `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
- `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
- `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
- For most cases, `target_size` should be set to the desired height and width of the generated image. If
- not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
- section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is a list with the generated images.
- """
- # 0. Default height and width to unet
- height = height or self.default_sample_size * self.vae_scale_factor
- width = width or self.default_sample_size * self.vae_scale_factor
-
- original_size = original_size or (height, width)
- target_size = target_size or (height, width)
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- prompt_2,
- height,
- width,
- callback_steps,
- negative_prompt,
- negative_prompt_2,
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- (cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None)
-
- negative_prompt = negative_prompt if negative_prompt is not None else ""
-
- (
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- ) = get_weighted_text_embeddings_sdxl(pipe=self, prompt=prompt, neg_prompt=negative_prompt)
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
-
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7. Prepare added time ids & embeddings
- add_text_embeds = pooled_prompt_embeds
- add_time_ids = self._get_add_time_ids(
- original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype
- )
-
- if do_classifier_free_guidance:
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
- add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
- add_time_ids = torch.cat([add_time_ids, add_time_ids], dim=0)
-
- prompt_embeds = prompt_embeds.to(device)
- add_text_embeds = add_text_embeds.to(device)
- add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
-
- # 8. Denoising loop
- num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
-
- # 8.1 Apply denoising_end
- if denoising_end is not None and isinstance(denoising_end, float) and denoising_end > 0 and denoising_end < 1:
- discrete_timestep_cutoff = int(
- round(
- self.scheduler.config.num_train_timesteps
- - (denoising_end * self.scheduler.config.num_train_timesteps)
- )
- )
- num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
- timesteps = timesteps[:num_inference_steps]
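- # Worked example (illustrative): with num_train_timesteps=1000 and denoising_end=0.8, the cutoff
- # is int(round(1000 - 0.8 * 1000)) = 200, so only timesteps >= 200 are kept and the final ~20%
- # of denoising can be handed off to a refiner pipeline.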
-
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- added_cond_kwargs=added_cond_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- if do_classifier_free_guidance and guidance_rescale > 0.0:
- # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
- noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- if not output_type == "latent":
- # make sure the VAE is in float32 mode, as it overflows in float16
- needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
-
- if needs_upcasting:
- self.upcast_vae()
- latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
-
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
-
- # cast back to fp16 if needed
- if needs_upcasting:
- self.vae.to(dtype=torch.float16)
- else:
- image = latents
- return StableDiffusionXLPipelineOutput(images=image)
-
- # apply watermark if available
- if self.watermark is not None:
- image = self.watermark.apply_watermark(image)
-
- image = self.image_processor.postprocess(image, output_type=output_type)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image,)
-
- return StableDiffusionXLPipelineOutput(images=image)
-
- # Override to properly handle the loading and unloading of the additional text encoder.
- def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
- # We could have accessed the unet config from `lora_state_dict()` too. We pass
- # it here explicitly to be able to tell that it's coming from an SDXL
- # pipeline.
- state_dict, network_alphas = self.lora_state_dict(
- pretrained_model_name_or_path_or_dict,
- unet_config=self.unet.config,
- **kwargs,
- )
- self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet)
-
- text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k}
- if len(text_encoder_state_dict) > 0:
- self.load_lora_into_text_encoder(
- text_encoder_state_dict,
- network_alphas=network_alphas,
- text_encoder=self.text_encoder,
- prefix="text_encoder",
- lora_scale=self.lora_scale,
- )
-
- text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k}
- if len(text_encoder_2_state_dict) > 0:
- self.load_lora_into_text_encoder(
- text_encoder_2_state_dict,
- network_alphas=network_alphas,
- text_encoder=self.text_encoder_2,
- prefix="text_encoder_2",
- lora_scale=self.lora_scale,
- )
-
- @classmethod
- def save_lora_weights(
- self,
- save_directory: Union[str, os.PathLike],
- unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- is_main_process: bool = True,
- weight_name: str = None,
- save_function: Callable = None,
- safe_serialization: bool = False,
- ):
- state_dict = {}
-
- def pack_weights(layers, prefix):
- layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers
- layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()}
- return layers_state_dict
-
- state_dict.update(pack_weights(unet_lora_layers, "unet"))
-
- if text_encoder_lora_layers and text_encoder_2_lora_layers:
- state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))
- state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))
-
- self.write_lora_layers(
- state_dict=state_dict,
- save_directory=save_directory,
- is_main_process=is_main_process,
- weight_name=weight_name,
- save_function=save_function,
- safe_serialization=safe_serialization,
- )
-
- def _remove_text_encoder_monkey_patch(self):
- self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder)
- self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
diff --git a/spaces/paulbricman/velma/tests/test_negation.py b/spaces/paulbricman/velma/tests/test_negation.py
deleted file mode 100644
index 7aec45b7e9a3b1d6707d1ad87e1334724cabd28f..0000000000000000000000000000000000000000
--- a/spaces/paulbricman/velma/tests/test_negation.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from src.negation import gen_negations, filter_negations
-
-
-def test_gen_negations():
- assert 'Cooking is easy.' in gen_negations(
- 'Cooking is hard.')
-
-
-def test_filter_negations():
- statement = 'Cooking is hard.'
- candidates = ['Cooking is easy.', 'Cooking is difficult.',
- 'Given this, the practice of cooking is absolutely trivial.']
- filtered = filter_negations(statement, candidates)
-
- assert 'Cooking is easy.' in filtered
- assert 'Cooking is difficult.' not in filtered
- assert 'Given this, the practice of cooking is absolutely trivial.' not in filtered
diff --git a/spaces/paulokewunmi/jumia_product_search/image_search_engine/models/__init__.py b/spaces/paulokewunmi/jumia_product_search/image_search_engine/models/__init__.py
deleted file mode 100644
index 0bac3db04b3c88f1169eb2c7cfccd5e814094291..0000000000000000000000000000000000000000
--- a/spaces/paulokewunmi/jumia_product_search/image_search_engine/models/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .arc_margin_product import ArcMarginProduct
-from .efficientnet_ns import EfficientNet_b0_ns
-from .gem_pooling import GeM
diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/general_utils.py b/spaces/phyloforfun/VoucherVision/vouchervision/general_utils.py
deleted file mode 100644
index 9cbe0258fa6b47853f9040380c953f6e3aac87f8..0000000000000000000000000000000000000000
--- a/spaces/phyloforfun/VoucherVision/vouchervision/general_utils.py
+++ /dev/null
@@ -1,1385 +0,0 @@
-import os, yaml, datetime, argparse, re, cv2, random, shutil, tiktoken, json, csv
-from collections import Counter
-import pandas as pd
-from pathlib import Path
-from dataclasses import dataclass
-from tqdm import tqdm
-import numpy as np
-import concurrent.futures
-from time import perf_counter
-import torch
-
-'''
-TIFF --> DNG
-Install
-https://helpx.adobe.com/camera-raw/using/adobe-dng-converter.html
-Read
-https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/dng_commandline.pdf
-
-'''
-
-
-# https://stackoverflow.com/questions/287871/how-do-i-print-colored-text-to-the-terminal
-
-def make_zipfile(source_dir, output_filename):
- shutil.make_archive(output_filename, 'zip', source_dir)
- return output_filename + '.zip'
-
-def validate_dir(dir):
- if not os.path.exists(dir):
- os.makedirs(dir, exist_ok=True)
-
-def get_cfg_from_full_path(path_cfg):
- with open(path_cfg, "r") as ymlfile:
- cfg = yaml.full_load(ymlfile)
- return cfg
-
-def num_tokens_from_string(string, encoding_name):
- try:
- # Ensure the encoding is obtained correctly.
- encoding = tiktoken.get_encoding(encoding_name)
-
- # Convert dictionary to string if it is not already a string
- if isinstance(string, dict):
- string = json.dumps(string, ensure_ascii=False)
-
- # Encode the string and return the number of tokens.
- num_tokens = len(encoding.encode(string))
- except Exception as e:
- # If there's any error, log it and return 0.
- print(f"An error occurred: {e}")
- num_tokens = 0
-
- return num_tokens
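-# Illustrative usage (assumes the tiktoken "cl100k_base" encoding is available locally):
-#   num_tokens_from_string("A short herbarium label transcription.", "cl100k_base")  # -> small token count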
-
-def add_to_expense_report(dir_home, data):
- path_expense_report = os.path.join(dir_home, 'expense_report','expense_report.csv')
-
- # Check if the file exists
- file_exists = os.path.isfile(path_expense_report)
-
- # Open the file in append mode if it exists, or write mode if it doesn't
- mode = 'a' if file_exists else 'w'
-
- with open(path_expense_report, mode=mode, newline='') as file:
- writer = csv.writer(file)
-
- # If the file does not exist, write the header first
- if not file_exists:
- writer.writerow(['run','date','api_version','total_cost', 'n_images', 'tokens_in', 'tokens_out', 'rate_in', 'rate_out', 'cost_in', 'cost_out',])
-
- # Write the data row
- writer.writerow(data)
-
-def save_token_info_as_csv(Dirs, LLM_version0, path_api_cost, total_tokens_in, total_tokens_out, n_images):
- version_mapping = {
- 'GPT 4': 'GPT_4',
- 'GPT 3.5': 'GPT_3_5',
- 'Azure GPT 3.5': 'GPT_3_5',
- 'Azure GPT 4': 'GPT_4',
- 'PaLM 2': 'PALM2'
- }
- LLM_version = version_mapping[LLM_version0]
- # Define the CSV file path
- csv_file_path = os.path.join(Dirs.path_cost, Dirs.run_name + '.csv')
-
- cost_in, cost_out, total_cost, rate_in, rate_out = calculate_cost(LLM_version, path_api_cost, total_tokens_in, total_tokens_out)
-
- # The data to be written to the CSV file
- data = [Dirs.run_name, get_datetime(),LLM_version, total_cost, n_images, total_tokens_in, total_tokens_out, rate_in, rate_out, cost_in, cost_out,]
-
- # Open the file in write mode
- with open(csv_file_path, mode='w', newline='') as file:
- writer = csv.writer(file)
-
- # Write the header
- writer.writerow(['run','date','api_version','total_cost', 'n_images', 'tokens_in', 'tokens_out', 'rate_in', 'rate_out', 'cost_in', 'cost_out',])
-
- # Write the data
- writer.writerow(data)
- # Create a summary string
- cost_summary = (f"Cost Summary for {Dirs.run_name}:\n"
- f" API Cost In: ${rate_in} per 1000 Tokens\n"
- f" API Cost Out: ${rate_out} per 1000 Tokens\n"
- f" Tokens In: {total_tokens_in} - Cost: ${cost_in:.4f}\n"
- f" Tokens In: {total_tokens_in} - Cost: ${cost_in:.4f}\n"
- f" Tokens Out: {total_tokens_out} - Cost: ${cost_out:.4f}\n"
- f" Images Processed: {n_images}\n"
- f" Total Cost: ${total_cost:.4f}")
- return cost_summary, data, total_cost
-
-def summarize_expense_report(path_expense_report):
- # Initialize counters and sums
- run_count = 0
- total_cost_sum = 0
- tokens_in_sum = 0
- tokens_out_sum = 0
- rate_in_sum = 0
- rate_out_sum = 0
- cost_in_sum = 0
- cost_out_sum = 0
- n_images_sum = 0
- api_version_counts = Counter()
-
- # Try to read the CSV file into a DataFrame
- try:
- df = pd.read_csv(path_expense_report)
-
- # Process each row in the DataFrame
- for index, row in df.iterrows():
- run_count += 1
- total_cost_sum += row['total_cost']
- tokens_in_sum += row['tokens_in']
- tokens_out_sum += row['tokens_out']
- rate_in_sum += row['rate_in']
- rate_out_sum += row['rate_out']
- cost_in_sum += row['cost_in']
- cost_out_sum += row['cost_out']
- n_images_sum += row['n_images']
- api_version_counts[row['api_version']] += 1
-
- except FileNotFoundError:
- print(f"The file {path_expense_report} does not exist.")
- return None
-
- # Calculate API version percentages
- api_version_percentages = {version: (count / run_count) * 100 for version, count in api_version_counts.items()}
-
- # Calculate cost per image for each API version
- cost_per_image_dict = {}
- for version, count in api_version_counts.items():
- total_cost = df[df['api_version'] == version]['total_cost'].sum()
- n_images = df[df['api_version'] == version]['n_images'].sum()
- cost_per_image = total_cost / n_images if n_images > 0 else 0
- cost_per_image_dict[version] = cost_per_image
-
- # Return the DataFrame and all summaries
- return {
- 'run_count': run_count,
- 'total_cost_sum': total_cost_sum,
- 'tokens_in_sum': tokens_in_sum,
- 'tokens_out_sum': tokens_out_sum,
- 'rate_in_sum': rate_in_sum,
- 'rate_out_sum': rate_out_sum,
- 'cost_in_sum': cost_in_sum,
- 'cost_out_sum': cost_out_sum,
- 'n_images_sum':n_images_sum,
- 'api_version_percentages': api_version_percentages,
- 'cost_per_image': cost_per_image_dict
- }, df
-
-def calculate_cost(LLM_version, path_api_cost, total_tokens_in, total_tokens_out):
- # Load the rates from the YAML file
- with open(path_api_cost, 'r') as file:
- cost_data = yaml.safe_load(file)
-
- # Get the rates for the specified LLM version
- if LLM_version in cost_data:
- rates = cost_data[LLM_version]
- cost_in = rates['in'] * (total_tokens_in/1000)
- cost_out = rates['out'] * (total_tokens_out/1000)
- total_cost = cost_in + cost_out
- else:
- raise ValueError(f"LLM version {LLM_version} not found in the cost data")
-
- return cost_in, cost_out, total_cost, rates['in'], rates['out']
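-# Worked example (hypothetical rates, not real pricing): if the YAML lists
-# {'GPT_4': {'in': 0.03, 'out': 0.06}} and a run used 10,000 input / 2,000 output tokens, then
-#   cost_in  = 0.03 * (10000 / 1000) = $0.30
-#   cost_out = 0.06 * (2000 / 1000)  = $0.12   -> total_cost = $0.42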
-
-def create_google_ocr_yaml_config(output_file, dir_images_local, dir_output):
- # Define the configuration dictionary
- config = {
- 'leafmachine': {
- 'LLM_version': 'PaLM 2',
- 'archival_component_detector': {
- 'detector_iteration': 'PREP_final',
- 'detector_type': 'Archival_Detector',
- 'detector_version': 'PREP_final',
- 'detector_weights': 'best.pt',
- 'do_save_prediction_overlay_images': True,
- 'ignore_objects_for_overlay': [],
- 'minimum_confidence_threshold': 0.5
- },
- 'cropped_components': {
- 'binarize_labels': False,
- 'binarize_labels_skeletonize': False,
- 'do_save_cropped_annotations': True,
- 'save_cropped_annotations': ['label', 'barcode'],
- 'save_per_annotation_class': True,
- 'save_per_image': False
- },
- 'data': {
- 'do_apply_conversion_factor': False,
- 'include_darwin_core_data_from_combined_file': False,
- 'save_individual_csv_files_landmarks': False,
- 'save_individual_csv_files_measurements': False,
- 'save_individual_csv_files_rulers': False,
- 'save_individual_efd_files': False,
- 'save_json_measurements': False,
- 'save_json_rulers': False
- },
- 'do': {
- 'check_for_corrupt_images_make_vertical': True,
- 'check_for_illegal_filenames': False
- },
- 'logging': {
- 'log_level': None
- },
- 'modules': {
- 'specimen_crop': True
- },
- 'overlay': {
- 'alpha_transparency_archival': 0.3,
- 'alpha_transparency_plant': 0,
- 'alpha_transparency_seg_partial_leaf': 0.3,
- 'alpha_transparency_seg_whole_leaf': 0.4,
- 'ignore_archival_detections_classes': [],
- 'ignore_landmark_classes': [],
- 'ignore_plant_detections_classes': ['leaf_whole', 'specimen'],
- 'line_width_archival': 12,
- 'line_width_efd': 12,
- 'line_width_plant': 12,
- 'line_width_seg': 12,
- 'overlay_background_color': 'black',
- 'overlay_dpi': 300,
- 'save_overlay_to_jpgs': True,
- 'save_overlay_to_pdf': False,
- 'show_archival_detections': True,
- 'show_landmarks': True,
- 'show_plant_detections': True,
- 'show_segmentations': True
- },
- 'print': {
- 'optional_warnings': True,
- 'verbose': True
- },
- 'project': {
- 'batch_size': 500,
- 'build_new_embeddings_database': False,
- 'catalog_numerical_only': False,
- 'continue_run_from_partial_xlsx': '',
- 'delete_all_temps': False,
- 'delete_temps_keep_VVE': False,
- 'dir_images_local': dir_images_local,
- 'dir_output': dir_output,
- 'embeddings_database_name': 'SLTP_UM_AllAsiaMinimalInRegion',
- 'image_location': 'local',
- 'num_workers': 1,
- 'path_to_domain_knowledge_xlsx': '',
- 'prefix_removal': '',
- 'prompt_version': 'Version 2 PaLM 2',
- 'run_name': 'google_vision_ocr_test',
- 'suffix_removal': '',
- 'use_domain_knowledge': False
- },
- 'use_RGB_label_images': False
- }
- }
- # Generate the YAML string from the data structure
- validate_dir(os.path.dirname(output_file))
- yaml_str = yaml.dump(config)
-
- # Write the YAML string to a file
- with open(output_file, 'w') as file:
- file.write(yaml_str)
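-# Illustrative usage (paths are placeholders):
-#   create_google_ocr_yaml_config('configs/google_ocr_test.yaml', '/data/images', '/data/output')
-# writes a PaLM 2 / Google Vision OCR test config pointing at the given image and output folders.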
-
-def test_GPU():
- info = []
- success = False
-
- if torch.cuda.is_available():
- num_gpus = torch.cuda.device_count()
- info.append(f"Number of GPUs: {num_gpus}")
-
- for i in range(num_gpus):
- gpu = torch.cuda.get_device_properties(i)
- info.append(f"GPU {i}: {gpu.name}")
-
- success = True
- else:
- info.append("No GPU found!")
- info.append("LeafMachine2 image cropping and embedding search will be slow or not possible.")
-
- return success, info
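-# Illustrative usage:
-#   success, info = test_GPU()
-#   for line in info:
-#       print(line)  # e.g. "Number of GPUs: 1", "GPU 0: <device name>"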
-
-
-# def load_cfg(pathToCfg):
-# try:
-# with open(os.path.join(pathToCfg,"LeafMachine2.yaml"), "r") as ymlfile:
-# cfg = yaml.full_load(ymlfile)
-# except:
-# with open(os.path.join(os.path.dirname(os.path.dirname(pathToCfg)),"LeafMachine2.yaml"), "r") as ymlfile:
-# cfg = yaml.full_load(ymlfile)
-# return cfg
-
-# def load_cfg_VV(pathToCfg):
-# try:
-# with open(os.path.join(pathToCfg,"VoucherVision.yaml"), "r") as ymlfile:
-# cfg = yaml.full_load(ymlfile)
-# except:
-# with open(os.path.join(os.path.dirname(os.path.dirname(pathToCfg)),"VoucherVision.yaml"), "r") as ymlfile:
-# cfg = yaml.full_load(ymlfile)
-# return cfg
-
-def load_cfg(pathToCfg, system='LeafMachine2'):
- if system not in ['LeafMachine2', 'VoucherVision', 'SpecimenCrop']:
- raise ValueError("Invalid system. Expected 'LeafMachine2', 'VoucherVision' or 'SpecimenCrop'.")
-
- try:
- with open(os.path.join(pathToCfg, f"{system}.yaml"), "r") as ymlfile:
- cfg = yaml.full_load(ymlfile)
- except:
- with open(os.path.join(os.path.dirname(os.path.dirname(pathToCfg)), f"{system}.yaml"), "r") as ymlfile:
- cfg = yaml.full_load(ymlfile)
- return cfg
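-# Illustrative usage (assumes a VoucherVision.yaml exists in `dir_home` with the keys below):
-#   cfg = load_cfg(dir_home, system='VoucherVision')
-#   batch_size = cfg['leafmachine']['project']['batch_size']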
-
-
-def import_csv(full_path):
- csv_data = pd.read_csv(full_path,sep=',',header=0, low_memory=False, dtype=str)
- return csv_data
-
-def import_tsv(full_path):
- csv_data = pd.read_csv(full_path,sep='\t',header=0, low_memory=False, dtype=str)
- return csv_data
-
-def parse_cfg():
- parser = argparse.ArgumentParser(
- description='Parse inputs to read config file',
- formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-
- optional_args = parser._action_groups.pop()
- required_args = parser.add_argument_group('MANDATORY arguments')
- required_args.add_argument('--path-to-cfg',
- type=str,
- required=True,
- help='Path to config file - LeafMachine.yaml. Do not include the file name, just the parent dir.')
-
- parser._action_groups.append(optional_args)
- args = parser.parse_args()
- return args
-
-def check_for_subdirs(cfg):
- original_in = cfg['leafmachine']['project']['dir_images_local']
- dirs_list = []
- run_name = []
- has_subdirs = False
- if os.path.isdir(original_in):
- # list contents of the directory
- contents = os.listdir(original_in)
-
- # check if any of the contents is a directory
- subdirs = [f for f in contents if os.path.isdir(os.path.join(original_in, f))]
-
- if len(subdirs) > 0:
- print("The directory contains subdirectories:")
- for subdir in subdirs:
- has_subdirs = True
- print(os.path.join(original_in, subdir))
- dirs_list.append(os.path.join(original_in, subdir))
- run_name.append(subdir)
- else:
- print("The directory does not contain any subdirectories.")
- dirs_list.append(original_in)
- run_name.append(cfg['leafmachine']['project']['run_name'])
-
- else:
- print("The specified path is not a directory.")
-
- return run_name, dirs_list, has_subdirs
-
-def check_for_subdirs_VV(cfg):
- original_in = cfg['leafmachine']['project']['dir_images_local']
- dirs_list = []
- run_name = []
- has_subdirs = False
- if os.path.isdir(original_in):
- dirs_list.append(original_in)
- run_name.append(os.path.basename(os.path.normpath(original_in)))
- # list contents of the directory
- contents = os.listdir(original_in)
-
- # check if any of the contents is a directory
- subdirs = [f for f in contents if os.path.isdir(os.path.join(original_in, f))]
-
- if len(subdirs) > 0:
- print("The directory contains subdirectories:")
- for subdir in subdirs:
- has_subdirs = True
- print(os.path.join(original_in, subdir))
- dirs_list.append(os.path.join(original_in, subdir))
- run_name.append(subdir)
- else:
- print("The directory does not contain any subdirectories.")
- dirs_list.append(original_in)
- run_name.append(cfg['leafmachine']['project']['run_name'])
-
- else:
- print("The specified path is not a directory.")
-
- return run_name, dirs_list, has_subdirs
-
-def get_datetime():
- day = "_".join([str(datetime.datetime.now().strftime("%Y")),str(datetime.datetime.now().strftime("%m")),str(datetime.datetime.now().strftime("%d"))])
- time = "-".join([str(datetime.datetime.now().strftime("%H")),str(datetime.datetime.now().strftime("%M")),str(datetime.datetime.now().strftime("%S"))])
- new_time = "__".join([day,time])
- return new_time
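-# Illustrative output: get_datetime() -> '2023_11_05__14-30-59' (YYYY_MM_DD__HH-MM-SS)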
-
-def save_config_file(cfg, logger, Dirs):
- logger.info("Save config file")
- name_yaml = ''.join([Dirs.run_name,'.yaml'])
- write_yaml(cfg, os.path.join(Dirs.path_config_file, name_yaml))
-
-def write_yaml(cfg, path_cfg):
- with open(path_cfg, 'w') as file:
- yaml.dump(cfg, file)
-
-def split_into_batches(Project, logger, cfg):
- logger.name = 'Creating Batches'
- n_batches, n_images = Project.process_in_batches(cfg)
- m = f'Created {n_batches} Batches to Process {n_images} Images'
- logger.info(m)
- return Project, n_batches, m
-
-def make_images_in_dir_vertical(dir_images_unprocessed, cfg):
- if cfg['leafmachine']['do']['check_for_corrupt_images_make_vertical']:
- n_rotate = 0
- n_corrupt = 0
- n_total = len(os.listdir(dir_images_unprocessed))
- for image_name_jpg in tqdm(os.listdir(dir_images_unprocessed), desc=f'{bcolors.BOLD} Checking Image Dimensions{bcolors.ENDC}',colour="cyan",position=0,total = n_total):
- if image_name_jpg.endswith((".jpg",".JPG",".jpeg",".JPEG")):
- try:
- image = cv2.imread(os.path.join(dir_images_unprocessed, image_name_jpg))
- h, w, img_c = image.shape
- image, img_h, img_w, did_rotate = make_image_vertical(image, h, w, do_rotate_180=False)
- if did_rotate:
- n_rotate += 1
- cv2.imwrite(os.path.join(dir_images_unprocessed,image_name_jpg), image)
- except:
- n_corrupt +=1
- os.remove(os.path.join(dir_images_unprocessed, image_name_jpg))
- # TODO check that below works as intended
- elif image_name_jpg.endswith((".tiff",".tif",".png",".PNG",".TIFF",".TIF",".jp2",".JP2",".bmp",".BMP",".dib",".DIB")):
- try:
- image = cv2.imread(os.path.join(dir_images_unprocessed, image_name_jpg))
- h, w, img_c = image.shape
- image, img_h, img_w, did_rotate = make_image_vertical(image, h, w, do_rotate_180=False)
- if did_rotate:
- n_rotate += 1
- image_name_jpg = '.'.join([image_name_jpg.split('.')[0], 'jpg'])
- cv2.imwrite(os.path.join(dir_images_unprocessed,image_name_jpg), image)
- except:
- n_corrupt +=1
- os.remove(os.path.join(dir_images_unprocessed, image_name_jpg))
- m = ''.join(['Number of Images Rotated: ', str(n_rotate)])
- Print_Verbose(cfg, 2, m).bold()
- m2 = ''.join(['Number of Images Corrupted: ', str(n_corrupt)])
- if n_corrupt > 0:
- Print_Verbose(cfg, 2, m2).warning()
- else:
- Print_Verbose(cfg, 2, m2).bold()
-
-def make_image_vertical(image, h, w, do_rotate_180):
- did_rotate = False
- if do_rotate_180:
- # try:
- image = cv2.rotate(image, cv2.ROTATE_180)
- img_h, img_w, img_c = image.shape
- did_rotate = True
- # print(" Rotated 180")
- else:
- if h < w:
- # try:
- image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
- img_h, img_w, img_c = image.shape
- did_rotate = True
- # print(" Rotated 90 CW")
- elif h >= w:
- image = image
- img_h = h
- img_w = w
- # print(" Not Rotated")
- return image, img_h, img_w, did_rotate
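-# Illustrative behavior: a 4000x3000 (h x w) portrait image is returned unchanged, while a
-# 3000x4000 landscape image is rotated 90 degrees clockwise so that height >= width afterwards.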
-
-
-def make_image_horizontal(image, h, w, do_rotate_180):
- if h > w:
- if do_rotate_180:
- image = cv2.rotate(image, cv2.ROTATE_180)
- return cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE), w, h, True
- return image, w, h, False
-
-def make_images_in_dir_horizontal(dir_images_unprocessed, cfg):
- # if cfg['leafmachine']['do']['check_for_corrupt_images_make_horizontal']:
- n_rotate = 0
- n_corrupt = 0
- n_total = len(os.listdir(dir_images_unprocessed))
- for image_name_jpg in tqdm(os.listdir(dir_images_unprocessed), desc=f'{bcolors.BOLD} Checking Image Dimensions{bcolors.ENDC}', colour="cyan", position=0, total=n_total):
- if image_name_jpg.endswith((".jpg",".JPG",".jpeg",".JPEG")):
- try:
- image = cv2.imread(os.path.join(dir_images_unprocessed, image_name_jpg))
- h, w, img_c = image.shape
- image, img_h, img_w, did_rotate = make_image_horizontal(image, h, w, do_rotate_180=False)
- if did_rotate:
- n_rotate += 1
- cv2.imwrite(os.path.join(dir_images_unprocessed,image_name_jpg), image)
- except:
- n_corrupt +=1
- os.remove(os.path.join(dir_images_unprocessed, image_name_jpg))
- # TODO check that below works as intended
- elif image_name_jpg.endswith((".tiff",".tif",".png",".PNG",".TIFF",".TIF",".jp2",".JP2",".bmp",".BMP",".dib",".DIB")):
- try:
- image = cv2.imread(os.path.join(dir_images_unprocessed, image_name_jpg))
- h, w, img_c = image.shape
- image, img_h, img_w, did_rotate = make_image_horizontal(image, h, w, do_rotate_180=False)
- if did_rotate:
- n_rotate += 1
- image_name_jpg = '.'.join([image_name_jpg.split('.')[0], 'jpg'])
- cv2.imwrite(os.path.join(dir_images_unprocessed,image_name_jpg), image)
- except:
- n_corrupt +=1
- os.remove(os.path.join(dir_images_unprocessed, image_name_jpg))
- m = ''.join(['Number of Images Rotated: ', str(n_rotate)])
- print(m)
- # Print_Verbose(cfg, 2, m).bold()
- m2 = ''.join(['Number of Images Corrupted: ', str(n_corrupt)])
- print(m2)
-
-
-@dataclass
-class Print_Verbose_Error():
- cfg: str = ''
- indent_level: int = 0
- message: str = ''
- error: str = ''
-
- def __init__(self, cfg,indent_level,message,error) -> None:
- self.cfg = cfg
- self.indent_level = indent_level
- self.message = message
- self.error = error
-
- def print_error_to_console(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['optional_warnings']:
- print(f"{bcolors.FAIL}{white_space}{self.message} ERROR: {self.error}{bcolors.ENDC}")
-
- def print_warning_to_console(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['optional_warnings']:
- print(f"{bcolors.WARNING}{white_space}{self.message} ERROR: {self.error}{bcolors.ENDC}")
-
-@dataclass
-class Print_Verbose():
- cfg: str = ''
- indent_level: int = 0
- message: str = ''
-
- def __init__(self, cfg, indent_level, message) -> None:
- self.cfg = cfg
- self.indent_level = indent_level
- self.message = message
-
- def bold(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['verbose']:
- print(f"{bcolors.BOLD}{white_space}{self.message}{bcolors.ENDC}")
-
- def green(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['verbose']:
- print(f"{bcolors.OKGREEN}{white_space}{self.message}{bcolors.ENDC}")
-
- def cyan(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['verbose']:
- print(f"{bcolors.OKCYAN}{white_space}{self.message}{bcolors.ENDC}")
-
- def blue(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['verbose']:
- print(f"{bcolors.OKBLUE}{white_space}{self.message}{bcolors.ENDC}")
-
- def warning(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['verbose']:
- print(f"{bcolors.WARNING}{white_space}{self.message}{bcolors.ENDC}")
-
- def plain(self):
- white_space = " " * 5 * self.indent_level
- if self.cfg['leafmachine']['print']['verbose']:
- print(f"{white_space}{self.message}")
-
-def print_main_start(message):
- indent_level = 1
- white_space = " " * 5 * indent_level
- end = " " * int(80 - len(message) - len(white_space))
- # end_white_space = " " * end
- blank = " " * 80
- print(f"{bcolors.CBLUEBG2}{blank}{bcolors.ENDC}")
- print(f"{bcolors.CBLUEBG2}{white_space}{message}{end}{bcolors.ENDC}")
- print(f"{bcolors.CBLUEBG2}{blank}{bcolors.ENDC}")
-
-def print_main_success(message):
- indent_level = 1
- white_space = " " * 5 * indent_level
- end = " " * int(80 - len(message) - len(white_space))
- blank = " " * 80
- # end_white_space = " " * end
- print(f"{bcolors.CGREENBG2}{blank}{bcolors.ENDC}")
- print(f"{bcolors.CGREENBG2}{white_space}{message}{end}{bcolors.ENDC}")
- print(f"{bcolors.CGREENBG2}{blank}{bcolors.ENDC}")
-
-def print_main_warn(message):
- indent_level = 1
- white_space = " " * 5 * indent_level
- end = " " * int(80 - len(message) - len(white_space))
- # end_white_space = " " * end
- blank = " " * 80
- print(f"{bcolors.CYELLOWBG2}{blank}{bcolors.ENDC}")
- print(f"{bcolors.CYELLOWBG2}{white_space}{message}{end}{bcolors.ENDC}")
- print(f"{bcolors.CYELLOWBG2}{blank}{bcolors.ENDC}")
-
-def print_main_fail(message):
- indent_level = 1
- white_space = " " * 5 * indent_level
- end = " " * int(80 - len(message) - len(white_space))
- # end_white_space = " " * end
- blank = " " * 80
- print(f"{bcolors.CREDBG2}{blank}{bcolors.ENDC}")
- print(f"{bcolors.CREDBG2}{white_space}{message}{end}{bcolors.ENDC}")
- print(f"{bcolors.CREDBG2}{blank}{bcolors.ENDC}")
-
-def print_main_info(message):
- indent_level = 2
- white_space = " " * 5 * indent_level
- end = " " * int(80 - len(message) - len(white_space))
- # end_white_space = " " * end
- print(f"{bcolors.CGREYBG}{white_space}{message}{end}{bcolors.ENDC}")
-
-# def report_config(dir_home, cfg_file_path):
-# print_main_start("Loading Configuration File")
-# if cfg_file_path == None:
-# print_main_info(''.join([os.path.join(dir_home, 'LeafMachine2.yaml')]))
-# elif cfg_file_path == 'test_installation':
-# print_main_info(''.join([os.path.join(dir_home, 'demo','LeafMachine2_demo.yaml')]))
-# else:
-# print_main_info(cfg_file_path)
-
-# def report_config_VV(dir_home, cfg_file_path):
-# print_main_start("Loading Configuration File")
-# if cfg_file_path == None:
-# print_main_info(''.join([os.path.join(dir_home, 'VoucherVision.yaml')]))
-# elif cfg_file_path == 'test_installation':
-# print_main_info(''.join([os.path.join(dir_home, 'demo','VoucherVision_demo.yaml')]))
-# else:
-# print_main_info(cfg_file_path)
-
-def report_config(dir_home, cfg_file_path, system='VoucherVision'):
- print_main_start("Loading Configuration File")
-
- if system not in ['LeafMachine2', 'VoucherVision', 'SpecimenCrop']:
- raise ValueError("Invalid system. Expected 'LeafMachine2' or 'VoucherVision' or 'SpecimenCrop'.")
-
- if cfg_file_path == None:
- print_main_info(''.join([os.path.join(dir_home, f'{system}.yaml')]))
- elif cfg_file_path == 'test_installation':
- print_main_info(''.join([os.path.join(dir_home, 'demo', f'{system}_demo.yaml')]))
- else:
- print_main_info(cfg_file_path)
-
-
-def make_file_names_valid(dir, cfg):
- if cfg['leafmachine']['do']['check_for_illegal_filenames']:
- n_total = len(os.listdir(dir))
- for file in tqdm(os.listdir(dir), desc=f'{bcolors.HEADER} Removing illegal characters from file names{bcolors.ENDC}',colour="cyan",position=0,total = n_total):
- name = Path(file).stem
- ext = Path(file).suffix
- name_cleaned = re.sub(r"[^a-zA-Z0-9_-]","-",name)
- name_new = ''.join([name_cleaned,ext])
- i = 0
- try:
- os.rename(os.path.join(dir,file), os.path.join(dir,name_new))
- except:
- while os.path.exists(os.path.join(dir,name_new)):
- i += 1
- name_new = '_'.join([name_cleaned, str(i), ext])
- os.rename(os.path.join(dir,file), os.path.join(dir,name_new))
-
-# def load_config_file(dir_home, cfg_file_path):
-# if cfg_file_path == None: # Default path
-# return load_cfg(dir_home)
-# else:
-# if cfg_file_path == 'test_installation':
-# path_cfg = os.path.join(dir_home,'demo','LeafMachine2_demo.yaml')
-# return get_cfg_from_full_path(path_cfg)
-# else: # Custom path
-# return get_cfg_from_full_path(cfg_file_path)
-
-# def load_config_file_VV(dir_home, cfg_file_path):
-# if cfg_file_path == None: # Default path
-# return load_cfg_VV(dir_home)
-# else:
-# if cfg_file_path == 'test_installation':
-# path_cfg = os.path.join(dir_home,'demo','VoucherVision_demo.yaml')
-# return get_cfg_from_full_path(path_cfg)
-# else: # Custom path
-# return get_cfg_from_full_path(cfg_file_path)
-
-def load_config_file(dir_home, cfg_file_path, system='LeafMachine2'):
- if system not in ['LeafMachine2', 'VoucherVision', 'SpecimenCrop']:
- raise ValueError("Invalid system. Expected 'LeafMachine2' or 'VoucherVision' or 'SpecimenCrop'.")
-
- if cfg_file_path is None: # Default path
- if system == 'LeafMachine2':
- return load_cfg(dir_home, system='LeafMachine2') # For LeafMachine2
-
- elif system == 'VoucherVision': # VoucherVision
- return load_cfg(dir_home, system='VoucherVision') # For VoucherVision
-
- elif system == 'SpecimenCrop': # SpecimenCrop
- return load_cfg(dir_home, system='SpecimenCrop') # For SpecimenCrop
-
- else:
- if cfg_file_path == 'test_installation':
- path_cfg = os.path.join(dir_home, 'demo', f'{system}_demo.yaml')
- return get_cfg_from_full_path(path_cfg)
- else: # Custom path
- return get_cfg_from_full_path(cfg_file_path)
-
-
-def load_config_file_testing(dir_home, cfg_file_path):
- if cfg_file_path == None: # Default path
- return load_cfg(dir_home)
- else:
- if cfg_file_path == 'test_installation':
- path_cfg = os.path.join(dir_home,'demo','demo.yaml')
- return get_cfg_from_full_path(path_cfg)
- else: # Custom path
- return get_cfg_from_full_path(cfg_file_path)
-
-def subset_dir_images(cfg, Project, Dirs):
- if cfg['leafmachine']['project']['process_subset_of_images']:
- dir_images_subset = cfg['leafmachine']['project']['dir_images_subset']
- num_images_per_species = cfg['leafmachine']['project']['n_images_per_species']
- if cfg['leafmachine']['project']['species_list'] is not None:
- species_list = import_csv(cfg['leafmachine']['project']['species_list'])
- species_list = species_list.iloc[:, 0].tolist()
- else:
- species_list = None
-
- validate_dir(dir_images_subset)
-
- species_counts = {}
- filenames = os.listdir(Project.dir_images)
- random.shuffle(filenames)
- for filename in filenames:
- species_name = filename.split('.')[0]
- species_name = species_name.split('_')[2:]
- species_name = '_'.join([species_name[0], species_name[1], species_name[2]])
-
- if (species_list is None) or ((species_name in species_list) and (species_list is not None)):
-
- if species_name not in species_counts:
- species_counts[species_name] = 0
-
- if species_counts[species_name] < num_images_per_species:
- species_counts[species_name] += 1
- src_path = os.path.join(Project.dir_images, filename)
- dest_path = os.path.join(dir_images_subset, filename)
- shutil.copy(src_path, dest_path)
-
- Project.dir_images = dir_images_subset
-
- subset_csv_name = os.path.join(Dirs.dir_images_subset, '.'.join([Dirs.run_name, 'csv']))
- df = pd.DataFrame({'species_name': list(species_counts.keys()), 'count': list(species_counts.values())})
- df.to_csv(subset_csv_name, index=False)
- return Project
- else:
- return Project
-
-'''# Define function to be executed by each worker
-def worker_crop(rank, cfg, dir_home, Project, Dirs):
- # Set worker seed based on rank
- np.random.seed(rank)
- # Call function for this worker
- crop_detections_from_images(cfg, dir_home, Project, Dirs)
-
-def crop_detections_from_images(cfg, dir_home, Project, Dirs):
- num_workers = 6
-
- # Initialize and start worker processes
- processes = []
- for rank in range(num_workers):
- p = mp.Process(target=worker_crop, args=(rank, cfg, dir_home, Project, Dirs))
- p.start()
- processes.append(p)
-
- # Wait for all worker processes to finish
- for p in processes:
- p.join()'''
-
-def crop_detections_from_images_worker_VV(filename, analysis, Project, Dirs, save_per_image, save_per_class, save_list, binarize_labels):
- # cv2.imread() returns None rather than raising when a file is missing, so fall back to .jpeg explicitly
- full_image = cv2.imread(os.path.join(Project.dir_images, '.'.join([filename, 'jpg'])))
- if full_image is None:
- full_image = cv2.imread(os.path.join(Project.dir_images, '.'.join([filename, 'jpeg'])))
-
- archival = analysis.get('Detections_Archival_Components')
- has_archival = archival is not None
-
- plant = analysis.get('Detections_Plant_Components')
- has_plant = plant is not None
-
- if has_archival and (save_per_image or save_per_class):
- crop_component_from_yolo_coords_VV('ARCHIVAL', Dirs, analysis, archival, full_image, filename, save_per_image, save_per_class, save_list)
-
-def crop_detections_from_images_worker(filename, analysis, Project, Dirs, save_per_image, save_per_class, save_list, binarize_labels):
- # cv2.imread() returns None rather than raising when a file is missing, so fall back to .jpeg explicitly
- full_image = cv2.imread(os.path.join(Project.dir_images, '.'.join([filename, 'jpg'])))
- if full_image is None:
- full_image = cv2.imread(os.path.join(Project.dir_images, '.'.join([filename, 'jpeg'])))
-
- archival = analysis.get('Detections_Archival_Components')
- has_archival = archival is not None
-
- plant = analysis.get('Detections_Plant_Components')
- has_plant = plant is not None
-
- if has_archival and (save_per_image or save_per_class):
- crop_component_from_yolo_coords('ARCHIVAL', Dirs, analysis, archival, full_image, filename, save_per_image, save_per_class, save_list)
- if has_plant and (save_per_image or save_per_class):
- crop_component_from_yolo_coords('PLANT', Dirs, analysis, plant, full_image, filename, save_per_image, save_per_class, save_list)
-
-
-def crop_detections_from_images(cfg, logger, dir_home, Project, Dirs, batch_size=50):
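- # Walk Project.project_data in batches and submit one thread-pool task per image; each worker
- # crops the configured archival/plant detections out of the full-resolution image.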
- t2_start = perf_counter()
- logger.name = 'Crop Components'
-
- if cfg['leafmachine']['cropped_components']['do_save_cropped_annotations']:
- detections = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
- logger.info(f"Cropping {detections} components from images")
-
- save_per_image = cfg['leafmachine']['cropped_components']['save_per_image']
- save_per_class = cfg['leafmachine']['cropped_components']['save_per_annotation_class']
- save_list = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
- binarize_labels = cfg['leafmachine']['cropped_components'].get('binarize_labels', False)
- if cfg['leafmachine']['project']['batch_size'] is None:
- batch_size = 50
- else:
- batch_size = int(cfg['leafmachine']['project']['batch_size'])
- if cfg['leafmachine']['project']['num_workers'] is None:
- num_workers = 4
- else:
- num_workers = int(cfg['leafmachine']['project']['num_workers'])
-
- if binarize_labels:
- save_per_class = True
-
- with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
- futures = []
- for i in range(0, len(Project.project_data), batch_size):
- batch = list(Project.project_data.items())[i:i+batch_size]
- # print(f'Cropping Detections from Images {i} to {i+batch_size}')
- logger.info(f'Cropping {detections} from images {i} to {i+batch_size} [{len(Project.project_data)}]')
- for filename, analysis in batch:
- if len(analysis) != 0:
- futures.append(executor.submit(crop_detections_from_images_worker, filename, analysis, Project, Dirs, save_per_image, save_per_class, save_list, binarize_labels))
-
- for future in concurrent.futures.as_completed(futures):
- pass
- futures.clear()
-
- t2_stop = perf_counter()
- logger.info(f"Save cropped components --- elapsed time: {round(t2_stop - t2_start)} seconds")
-
-def crop_detections_from_images_VV(cfg, logger, dir_home, Project, Dirs, batch_size=50):
- t2_start = perf_counter()
- logger.name = 'Crop Components'
-
- if cfg['leafmachine']['cropped_components']['do_save_cropped_annotations']:
- detections = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
- logger.info(f"Cropping {detections} components from images")
-
- save_per_image = cfg['leafmachine']['cropped_components']['save_per_image']
- save_per_class = cfg['leafmachine']['cropped_components']['save_per_annotation_class']
- save_list = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
- binarize_labels = cfg['leafmachine']['cropped_components']['binarize_labels']
- if cfg['leafmachine']['project']['batch_size'] is None:
- batch_size = 50
- else:
- batch_size = int(cfg['leafmachine']['project']['batch_size'])
- if cfg['leafmachine']['project']['num_workers'] is None:
- num_workers = 4
- else:
- num_workers = int(cfg['leafmachine']['project']['num_workers'])
-
- if binarize_labels:
- save_per_class = True
-
- with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
- futures = []
- for i in range(0, len(Project.project_data), batch_size):
- batch = list(Project.project_data.items())[i:i+batch_size]
- # print(f'Cropping Detections from Images {i} to {i+batch_size}')
- logger.info(f'Cropping {detections} from images {i} to {i+batch_size} [{len(Project.project_data)}]')
- for filename, analysis in batch:
- if len(analysis) != 0:
- futures.append(executor.submit(crop_detections_from_images_worker_VV, filename, analysis, Project, Dirs, save_per_image, save_per_class, save_list, binarize_labels))
-
- for future in concurrent.futures.as_completed(futures):
- pass
- futures.clear()
-
- t2_stop = perf_counter()
- logger.info(f"Save cropped components --- elapsed time: {round(t2_stop - t2_start)} seconds")
-# def crop_detections_from_images_VV(cfg, logger, dir_home, Project, Dirs, batch_size=50):
-# t2_start = perf_counter()
-# logger.name = 'Crop Components'
-
-# if cfg['leafmachine']['cropped_components']['do_save_cropped_annotations']:
-# detections = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
-# logger.info(f"Cropping {detections} components from images")
-
-# save_per_image = cfg['leafmachine']['cropped_components']['save_per_image']
-# save_per_class = cfg['leafmachine']['cropped_components']['save_per_annotation_class']
-# save_list = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
-# binarize_labels = cfg['leafmachine']['cropped_components']['binarize_labels']
-# if cfg['leafmachine']['project']['batch_size'] is None:
-# batch_size = 50
-# else:
-# batch_size = int(cfg['leafmachine']['project']['batch_size'])
-
-# if binarize_labels:
-# save_per_class = True
-
-# for i in range(0, len(Project.project_data), batch_size):
-# batch = list(Project.project_data.items())[i:i+batch_size]
-# logger.info(f"Cropping {detections} from images {i} to {i+batch_size} [{len(Project.project_data)}]")
-# for filename, analysis in batch:
-# if len(analysis) != 0:
-# crop_detections_from_images_worker_VV(filename, analysis, Project, Dirs, save_per_image, save_per_class, save_list, binarize_labels)
-
-# t2_stop = perf_counter()
-# logger.info(f"Save cropped components --- elapsed time: {round(t2_stop - t2_start)} seconds")
-
-
-# def crop_detections_from_images_SpecimenCrop(cfg, logger, dir_home, Project, Dirs, original_img_dir=None, batch_size=50):
-# t2_start = perf_counter()
-# logger.name = 'Crop Components --- Specimen Crop'
-
-# if cfg['leafmachine']['modules']['specimen_crop']:
-# # save_list = ['ruler', 'barcode', 'colorcard', 'label', 'map', 'envelope', 'photo', 'attached_item', 'weights',
-# # 'leaf_whole', 'leaf_partial', 'leaflet', 'seed_fruit_one', 'seed_fruit_many', 'flower_one', 'flower_many', 'bud', 'specimen', 'roots', 'wood']
-# save_list = cfg['leafmachine']['cropped_components']['include_these_objects_in_specimen_crop']
-
-# logger.info(f"Cropping to include {save_list} components from images")
-
-# if cfg['leafmachine']['project']['batch_size'] is None:
-# batch_size = 50
-# else:
-# batch_size = int(cfg['leafmachine']['project']['batch_size'])
-# if cfg['leafmachine']['project']['num_workers'] is None:
-# num_workers = 4
-# else:
-# num_workers = int(cfg['leafmachine']['project']['num_workers'])
-
-# with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
-# futures = []
-# for i in range(0, len(Project.project_data), batch_size):
-# batch = list(Project.project_data.items())[i:i+batch_size]
-# # print(f'Cropping Detections from Images {i} to {i+batch_size}')
-# logger.info(f'Cropping {save_list} from images {i} to {i+batch_size} [{len(Project.project_data)}]')
-# for filename, analysis in batch:
-# if len(analysis) != 0:
-# futures.append(executor.submit(crop_detections_from_images_worker_SpecimenCrop, filename, analysis, Project, Dirs, save_list, original_img_dir))
-
-# for future in concurrent.futures.as_completed(futures):
-# pass
-# futures.clear()
-
-# t2_stop = perf_counter()
-# logger.info(f"Save cropped components --- elapsed time: {round(t2_stop - t2_start)} seconds")
-
-'''
-# Single threaded
-def crop_detections_from_images(cfg, dir_home, Project, Dirs):
- if cfg['leafmachine']['cropped_components']['do_save_cropped_annotations']:
- save_per_image = cfg['leafmachine']['cropped_components']['save_per_image']
- save_per_class = cfg['leafmachine']['cropped_components']['save_per_annotation_class']
- save_list = cfg['leafmachine']['cropped_components']['save_cropped_annotations']
- binarize_labels = cfg['leafmachine']['cropped_components']['binarize_labels']
- if binarize_labels:
- save_per_class = True
-
- for filename, analysis in tqdm(Project.project_data.items(), desc=f'{bcolors.BOLD} Cropping Detections from Images{bcolors.ENDC}',colour="cyan",position=0,total = len(Project.project_data.items())):
- if len(analysis) != 0:
- try:
- full_image = cv2.imread(os.path.join(Project.dir_images, '.'.join([filename, 'jpg'])))
- except:
- full_image = cv2.imread(os.path.join(Project.dir_images, '.'.join([filename, 'jpeg'])))
-
- try:
- archival = analysis['Detections_Archival_Components']
- has_archival = True
- except:
- has_archival = False
-
- try:
- plant = analysis['Detections_Plant_Components']
- has_plant = True
- except:
- has_plant = False
-
- if has_archival and (save_per_image or save_per_class):
- crop_component_from_yolo_coords('ARCHIVAL', Dirs, analysis, archival, full_image, filename, save_per_image, save_per_class, save_list)
- if has_plant and (save_per_image or save_per_class):
- crop_component_from_yolo_coords('PLANT', Dirs, analysis, plant, full_image, filename, save_per_image, save_per_class, save_list)
-'''
-
-
-def process_detections(success, save_list, detections, detection_type, height, width, min_x, min_y, max_x, max_y):
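- # Expand the running bounding box (min_x, min_y, max_x, max_y) so that it covers every detection
- # whose class is in save_list (or all detections when 'save_all' is requested); success flips to
- # True as soon as at least one detection contributes.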
- for detection in detections:
- detection_class = detection[0]
- detection_class = set_index_for_annotation(detection_class, detection_type)
-
- if (detection_class in save_list) or ('save_all' in save_list):
- location = yolo_to_position_ruler(detection, height, width)
- ruler_polygon = [
- (location[1], location[2]),
- (location[3], location[2]),
- (location[3], location[4]),
- (location[1], location[4])
- ]
-
- x_coords = [x for x, y in ruler_polygon]
- y_coords = [y for x, y in ruler_polygon]
-
- min_x = min(min_x, *x_coords)
- min_y = min(min_y, *y_coords)
- max_x = max(max_x, *x_coords)
- max_y = max(max_y, *y_coords)
- success = True
-
- return min_x, min_y, max_x, max_y, success
-
-
-def crop_component_from_yolo_coords_VV(anno_type, Dirs, analysis, all_detections, full_image, filename, save_per_image, save_per_class, save_list):
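- # VoucherVision variant: crop each requested detection from the full image, create the
- # per-image/per-class output folders, then stitch the reasonably shaped crops (aspect ratio no
- # more extreme than 1:8) into one combined image saved under 'label', alongside a copy of the
- # original image.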
- height = analysis['height']
- width = analysis['width']
-
- # Initialize a list to hold all the cropped images
- cropped_images = []
-
- if len(all_detections) < 1:
- print(' MAKE THIS HAVE AN EMPTY PLACEHOLDER') # TODO ###################################################################################
- else:
- for detection in all_detections:
- detection_class = detection[0]
- detection_class = set_index_for_annotation(detection_class, anno_type)
-
- if (detection_class in save_list) or ('save_all' in save_list):
-
- location = yolo_to_position_ruler(detection, height, width)
- ruler_polygon = [(location[1], location[2]), (location[3], location[2]), (location[3], location[4]), (location[1], location[4])]
-
- x_coords = [x for x, y in ruler_polygon]
- y_coords = [y for x, y in ruler_polygon]
-
- min_x, min_y = min(x_coords), min(y_coords)
- max_x, max_y = max(x_coords), max(y_coords)
-
- detection_cropped = full_image[min_y:max_y, min_x:max_x]
- cropped_images.append(detection_cropped)
- loc = '-'.join([str(min_x), str(min_y), str(max_x), str(max_y)])
- detection_cropped_name = '.'.join(['__'.join([filename, detection_class, loc]), 'jpg'])
- # detection_cropped_name = '.'.join([filename,'jpg'])
-
- # save_per_image
- if (detection_class in save_list) and save_per_image:
- if detection_class == 'label':
- detection_class2 = 'label_ind'
- else:
- detection_class2 = detection_class
- dir_destination = os.path.join(Dirs.save_per_image, filename, detection_class2)
- # print(os.path.join(dir_destination,detection_cropped_name))
- validate_dir(dir_destination)
- # cv2.imwrite(os.path.join(dir_destination,detection_cropped_name), detection_cropped)
-
- # save_per_class
- if (detection_class in save_list) and save_per_class:
- if detection_class == 'label':
- detection_class2 = 'label_ind'
- else:
- detection_class2 = detection_class
- dir_destination = os.path.join(Dirs.save_per_annotation_class, detection_class2)
- # print(os.path.join(dir_destination,detection_cropped_name))
- validate_dir(dir_destination)
- # cv2.imwrite(os.path.join(dir_destination,detection_cropped_name), detection_cropped)
- else:
- # print(f'detection_class: {detection_class} not in save_list: {save_list}')
- pass
-
- # Initialize a list to hold all the acceptable cropped images
- acceptable_cropped_images = []
-
- for img in cropped_images:
- # Calculate the aspect ratio of the image
- aspect_ratio = min(img.shape[0], img.shape[1]) / max(img.shape[0], img.shape[1])
- # Only add the image to the acceptable list if the aspect ratio is more square than 1:8
- if aspect_ratio >= 1/8:
- acceptable_cropped_images.append(img)
-
- # Sort acceptable_cropped_images by area (largest first)
- acceptable_cropped_images.sort(key=lambda img: img.shape[0] * img.shape[1], reverse=True)
-
-
- # If there are no acceptable cropped images, set combined_image to None or to a placeholder image
- if not acceptable_cropped_images:
- combined_image = None # Or a placeholder image here
- else:
- # # Recalculate max_width and total_height for acceptable images
- # max_width = max(img.shape[1] for img in acceptable_cropped_images)
- # total_height = sum(img.shape[0] for img in acceptable_cropped_images)
-
- # # Now, combine all the acceptable cropped images into a single image
- # combined_image = np.zeros((total_height, max_width, 3), dtype=np.uint8)
-
- # y_offset = 0
- # for img in acceptable_cropped_images:
- # combined_image[y_offset:y_offset+img.shape[0], :img.shape[1]] = img
- # y_offset += img.shape[0]
- # Start with the first image
- # Recalculate max_width and total_height for acceptable images
- max_width = max(img.shape[1] for img in acceptable_cropped_images)
- total_height = sum(img.shape[0] for img in acceptable_cropped_images)
- combined_image = np.zeros((total_height, max_width, 3), dtype=np.uint8)
-
- y_offset = 0
- y_offset_next_row = 0
-
- # Start with the first image
- combined_image[y_offset:y_offset+acceptable_cropped_images[0].shape[0], :acceptable_cropped_images[0].shape[1]] = acceptable_cropped_images[0]
- y_offset_next_row += acceptable_cropped_images[0].shape[0]
-
- # Add the second image below the first one (guard against the single-image case)
- current_width = acceptable_cropped_images[0].shape[1]
- if len(acceptable_cropped_images) > 1:
- y_offset = y_offset_next_row
- combined_image[y_offset:y_offset+acceptable_cropped_images[1].shape[0], :acceptable_cropped_images[1].shape[1]] = acceptable_cropped_images[1]
- y_offset_next_row += acceptable_cropped_images[1].shape[0]
- current_width = acceptable_cropped_images[1].shape[1]
-
- # Create a list to store the images that are too tall for the current row
- too_tall_images = []
-
- # Now try to fill in to the right with the remaining images
- for img in acceptable_cropped_images[2:]:
- if current_width + img.shape[1] > max_width:
- # If this image doesn't fit, start a new row
- y_offset = y_offset_next_row
- combined_image[y_offset:y_offset+img.shape[0], :img.shape[1]] = img
- current_width = img.shape[1]
- y_offset_next_row = y_offset + img.shape[0]
- else:
- # If this image fits, add it to the right
- max_height = y_offset_next_row - y_offset
- if img.shape[0] > max_height:
- too_tall_images.append(img)
- else:
- combined_image[y_offset:y_offset+img.shape[0], current_width:current_width+img.shape[1]] = img
- current_width += img.shape[1]
-
- # Process the images that were too tall for their rows
- for img in too_tall_images:
- y_offset = y_offset_next_row
- combined_image[y_offset:y_offset+img.shape[0], :img.shape[1]] = img
- y_offset_next_row += img.shape[0]
-
- # Trim the combined_image to remove extra black space
- combined_image = combined_image[:y_offset_next_row]
-
-
- # save the combined image (skip when no acceptable crops were found)
- if combined_image is not None:
- dir_destination = os.path.join(Dirs.save_per_annotation_class, 'label')
- validate_dir(dir_destination)
- combined_image_name = '.'.join([filename,'jpg'])
- cv2.imwrite(os.path.join(dir_destination, combined_image_name), combined_image)
-
- original_image_name = '.'.join([filename,'jpg'])
- cv2.imwrite(os.path.join(Dirs.save_original, original_image_name), full_image)
-
-
-
-def crop_component_from_yolo_coords(anno_type, Dirs, analysis, all_detections, full_image, filename, save_per_image, save_per_class, save_list):
- height = analysis['height']
- width = analysis['width']
- if len(all_detections) < 1:
- print(' MAKE THIS HAVE AN EMPTY PLACEHOLDER') # TODO ###################################################################################
- else:
- for detection in all_detections:
- detection_class = detection[0]
- detection_class = set_index_for_annotation(detection_class, anno_type)
-
- if (detection_class in save_list) or ('save_all' in save_list):
-
- location = yolo_to_position_ruler(detection, height, width)
- ruler_polygon = [(location[1], location[2]), (location[3], location[2]), (location[3], location[4]), (location[1], location[4])]
-
- x_coords = [x for x, y in ruler_polygon]
- y_coords = [y for x, y in ruler_polygon]
-
- min_x, min_y = min(x_coords), min(y_coords)
- max_x, max_y = max(x_coords), max(y_coords)
-
- detection_cropped = full_image[min_y:max_y, min_x:max_x]
- loc = '-'.join([str(min_x), str(min_y), str(max_x), str(max_y)])
- detection_cropped_name = '.'.join(['__'.join([filename, detection_class, loc]), 'jpg'])
-
- # save_per_image
- if (detection_class in save_list) and save_per_image:
- dir_destination = os.path.join(Dirs.save_per_image, filename, detection_class)
- # print(os.path.join(dir_destination,detection_cropped_name))
- validate_dir(dir_destination)
- cv2.imwrite(os.path.join(dir_destination,detection_cropped_name), detection_cropped)
-
- # save_per_class
- if (detection_class in save_list) and save_per_class:
- dir_destination = os.path.join(Dirs.save_per_annotation_class, detection_class)
- # print(os.path.join(dir_destination,detection_cropped_name))
- validate_dir(dir_destination)
- cv2.imwrite(os.path.join(dir_destination,detection_cropped_name), detection_cropped)
- else:
- # print(f'detection_class: {detection_class} not in save_list: {save_list}')
- pass
-
-def yolo_to_position_ruler(annotation, height, width):
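- # annotation is a YOLO-style row [class, x_center, y_center, w, h] with values normalized to
- # [0, 1]; convert it to pixel corner coordinates ['ruler', x1, y1, x2, y2].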
- return ['ruler',
- int((annotation[1] * width) - ((annotation[3] * width) / 2)),
- int((annotation[2] * height) - ((annotation[4] * height) / 2)),
- int(annotation[3] * width) + int((annotation[1] * width) - ((annotation[3] * width) / 2)),
- int(annotation[4] * height) + int((annotation[2] * height) - ((annotation[4] * height) / 2))]
-
-
-class bcolors:
- HEADER = '\033[95m'
- OKBLUE = '\033[94m'
- OKCYAN = '\033[96m'
- OKGREEN = '\033[92m'
- WARNING = '\033[93m'
- FAIL = '\033[91m'
- ENDC = '\033[0m'
- BOLD = '\033[1m'
- UNDERLINE = '\033[4m'
- CEND = '\33[0m'
- CBOLD = '\33[1m'
- CITALIC = '\33[3m'
- CURL = '\33[4m'
- CBLINK = '\33[5m'
- CBLINK2 = '\33[6m'
- CSELECTED = '\33[7m'
-
- CBLACK = '\33[30m'
- CRED = '\33[31m'
- CGREEN = '\33[32m'
- CYELLOW = '\33[33m'
- CBLUE = '\33[34m'
- CVIOLET = '\33[35m'
- CBEIGE = '\33[36m'
- CWHITE = '\33[37m'
-
- CBLACKBG = '\33[40m'
- CREDBG = '\33[41m'
- CGREENBG = '\33[42m'
- CYELLOWBG = '\33[43m'
- CBLUEBG = '\33[44m'
- CVIOLETBG = '\33[45m'
- CBEIGEBG = '\33[46m'
- CWHITEBG = '\33[47m'
-
- CGREY = '\33[90m'
- CRED2 = '\33[91m'
- CGREEN2 = '\33[92m'
- CYELLOW2 = '\33[93m'
- CBLUE2 = '\33[94m'
- CVIOLET2 = '\33[95m'
- CBEIGE2 = '\33[96m'
- CWHITE2 = '\33[97m'
-
- CGREYBG = '\33[100m'
- CREDBG2 = '\33[101m'
- CGREENBG2 = '\33[102m'
- CYELLOWBG2 = '\33[103m'
- CBLUEBG2 = '\33[104m'
- CVIOLETBG2 = '\33[105m'
- CBEIGEBG2 = '\33[106m'
- CWHITEBG2 = '\33[107m'
- CBLUEBG3 = '\33[112m'
-
-
-def set_index_for_annotation(cls, annoType):
- # Map a numeric YOLO class id to its annotation name for the given detector type.
- plant_classes = ['Leaf_WHOLE', 'Leaf_PARTIAL', 'Leaflet', 'Seed_Fruit_ONE', 'Seed_Fruit_MANY', 'Flower_ONE', 'Flower_MANY', 'Bud', 'Specimen', 'Roots', 'Wood']
- archival_classes = ['Ruler', 'Barcode', 'Colorcard', 'Label', 'Map', 'Envelope', 'Photo', 'Attached_item', 'Weights']
- if annoType == 'PLANT':
- annoInd = plant_classes[cls]
- elif annoType == 'ARCHIVAL':
- annoInd = archival_classes[cls]
- return annoInd.lower()
-# def set_yaml(path_to_yaml, value):
-# with open('file_to_edit.yaml') as f:
-# doc = yaml.load(f)
-
-# doc['state'] = state
-
-# with open('file_to_edit.yaml', 'w') as f:
-# yaml.dump(doc, f)
\ No newline at end of file
diff --git a/spaces/plzdontcry/dakubettergpt/src/components/SettingsMenu/SettingsMenu.tsx b/spaces/plzdontcry/dakubettergpt/src/components/SettingsMenu/SettingsMenu.tsx
deleted file mode 100644
index de682cddf9ea87fc324fd6acc9ac2ddc1d33fc83..0000000000000000000000000000000000000000
--- a/spaces/plzdontcry/dakubettergpt/src/components/SettingsMenu/SettingsMenu.tsx
+++ /dev/null
@@ -1,64 +0,0 @@
-import React, { useEffect, useState } from 'react';
-import { useTranslation } from 'react-i18next';
-import useStore from '@store/store';
-
-import PopupModal from '@components/PopupModal';
-import SettingIcon from '@icon/SettingIcon';
-import ThemeSwitcher from '@components/Menu/MenuOptions/ThemeSwitcher';
-import LanguageSelector from '@components/LanguageSelector';
-import AutoTitleToggle from './AutoTitleToggle';
-import AdvancedModeToggle from './AdvencedModeToggle';
-import InlineLatexToggle from './InlineLatexToggle';
-
-import PromptLibraryMenu from '@components/PromptLibraryMenu';
-import ChatConfigMenu from '@components/ChatConfigMenu';
-import EnterToSubmitToggle from './EnterToSubmitToggle';
-import TotalTokenCost, { TotalTokenCostToggle } from './TotalTokenCost';
-import ClearConversation from '@components/Menu/MenuOptions/ClearConversation';
-
-const SettingsMenu = () => {
- const { t } = useTranslation();
-
- const theme = useStore.getState().theme;
- const [isModalOpen, setIsModalOpen] = useState(false);
-
- useEffect(() => {
- document.documentElement.className = theme;
- }, [theme]);
- return (
- <>
- {
- setIsModalOpen(true);
- }}
- >
- {t('setting') as string}
-
- {isModalOpen && (
-
-
-
- )}
- >
- );
-};
-
-export default SettingsMenu;
diff --git a/spaces/posak/Tune-A-Video-Training-UI/trainer.py b/spaces/posak/Tune-A-Video-Training-UI/trainer.py
deleted file mode 100644
index 8aa11436245e9b3f581871d6b6358c07cee75aab..0000000000000000000000000000000000000000
--- a/spaces/posak/Tune-A-Video-Training-UI/trainer.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from __future__ import annotations
-
-import datetime
-import os
-import pathlib
-import shlex
-import shutil
-import subprocess
-import sys
-
-import gradio as gr
-import slugify
-import torch
-from huggingface_hub import HfApi
-from omegaconf import OmegaConf
-
-from app_upload import ModelUploader
-from utils import save_model_card
-
-sys.path.append('Tune-A-Video')
-
-URL_TO_JOIN_MODEL_LIBRARY_ORG = 'https://huggingface.co/organizations/Tune-A-Video-library/share/YjTcaNJmKyeHFpMBioHhzBcTzCYddVErEk'
-ORIGINAL_SPACE_ID = 'Tune-A-Video-library/Tune-A-Video-Training-UI'
-SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
-
-
-class Trainer:
- def __init__(self, hf_token: str | None = None):
- self.hf_token = hf_token
- self.model_uploader = ModelUploader(hf_token)
-
- self.checkpoint_dir = pathlib.Path('checkpoints')
- self.checkpoint_dir.mkdir(exist_ok=True)
-
- def download_base_model(self, base_model_id: str) -> str:
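- # Clone the base model repository from the Hugging Face Hub into checkpoints/<org>/<model>
- # the first time it is requested; later calls reuse the cached clone.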
- model_dir = self.checkpoint_dir / base_model_id
- if not model_dir.exists():
- org_name = base_model_id.split('/')[0]
- org_dir = self.checkpoint_dir / org_name
- org_dir.mkdir(exist_ok=True)
- subprocess.run(shlex.split(
- f'git clone https://huggingface.co/{base_model_id}'),
- cwd=org_dir)
- return model_dir.as_posix()
-
- def join_model_library_org(self, token: str) -> None:
- subprocess.run(
- shlex.split(
- f'curl -X POST -H "Authorization: Bearer {token}" -H "Content-Type: application/json" {URL_TO_JOIN_MODEL_LIBRARY_ORG}'
- ))
-
- def run(
- self,
- training_video: str,
- training_prompt: str,
- output_model_name: str,
- overwrite_existing_model: bool,
- validation_prompt: str,
- base_model: str,
- resolution_s: str,
- n_steps: int,
- learning_rate: float,
- gradient_accumulation: int,
- seed: int,
- fp16: bool,
- use_8bit_adam: bool,
- checkpointing_steps: int,
- validation_epochs: int,
- upload_to_hub: bool,
- use_private_repo: bool,
- delete_existing_repo: bool,
- upload_to: str,
- remove_gpu_after_training: bool,
- input_token: str,
- ) -> str:
- if SPACE_ID == ORIGINAL_SPACE_ID:
- raise gr.Error(
- 'This Space does not work on this Shared UI. Duplicate the Space and attribute a GPU'
- )
- if not torch.cuda.is_available():
- raise gr.Error('CUDA is not available.')
- if training_video is None:
- raise gr.Error('You need to upload a video.')
- if not training_prompt:
- raise gr.Error('The training prompt is missing.')
- if not validation_prompt:
- raise gr.Error('The validation prompt is missing.')
-
- resolution = int(resolution_s)
-
- if not output_model_name:
- timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
- output_model_name = f'tune-a-video-{timestamp}'
- output_model_name = slugify.slugify(output_model_name)
-
- repo_dir = pathlib.Path(__file__).parent
- output_dir = repo_dir / 'experiments' / output_model_name
- if overwrite_existing_model or upload_to_hub:
- shutil.rmtree(output_dir, ignore_errors=True)
- output_dir.mkdir(parents=True)
-
- if upload_to_hub:
- self.join_model_library_org(
- self.hf_token if self.hf_token else input_token)
-
- config = OmegaConf.load('Tune-A-Video/configs/man-surfing.yaml')
- config.pretrained_model_path = self.download_base_model(base_model)
- config.output_dir = output_dir.as_posix()
- config.train_data.video_path = training_video.name # type: ignore
- config.train_data.prompt = training_prompt
- config.train_data.n_sample_frames = 8
- config.train_data.width = resolution
- config.train_data.height = resolution
- config.train_data.sample_start_idx = 0
- config.train_data.sample_frame_rate = 1
- config.validation_data.prompts = [validation_prompt]
- config.validation_data.video_length = 8
- config.validation_data.width = resolution
- config.validation_data.height = resolution
- config.validation_data.num_inference_steps = 50
- config.validation_data.guidance_scale = 7.5
- config.learning_rate = learning_rate
- config.gradient_accumulation_steps = gradient_accumulation
- config.train_batch_size = 1
- config.max_train_steps = n_steps
- config.checkpointing_steps = checkpointing_steps
- config.validation_steps = validation_epochs
- config.seed = seed
- config.mixed_precision = 'fp16' if fp16 else ''
- config.use_8bit_adam = use_8bit_adam
-
- config_path = output_dir / 'config.yaml'
- with open(config_path, 'w') as f:
- OmegaConf.save(config, f)
-
- command = f'accelerate launch Tune-A-Video/train_tuneavideo.py --config {config_path}'
- subprocess.run(shlex.split(command))
- save_model_card(save_dir=output_dir,
- base_model=base_model,
- training_prompt=training_prompt,
- test_prompt=validation_prompt,
- test_image_dir='samples')
-
- message = 'Training completed!'
- print(message)
-
- if upload_to_hub:
- upload_message = self.model_uploader.upload_model(
- folder_path=output_dir.as_posix(),
- repo_name=output_model_name,
- upload_to=upload_to,
- private=use_private_repo,
- delete_existing_repo=delete_existing_repo,
- input_token=input_token)
- print(upload_message)
- message = message + '\n' + upload_message
-
- if remove_gpu_after_training:
- space_id = os.getenv('SPACE_ID')
- if space_id:
- api = HfApi(
- token=self.hf_token if self.hf_token else input_token)
- api.request_space_hardware(repo_id=space_id,
- hardware='cpu-basic')
-
- return message
diff --git a/spaces/priyam314/Neural_Style_Texture/src/models/definitions/vgg_nets.py b/spaces/priyam314/Neural_Style_Texture/src/models/definitions/vgg_nets.py
deleted file mode 100644
index 479331cbb71b251f7ad7dedadf3f56fa78b54141..0000000000000000000000000000000000000000
--- a/spaces/priyam314/Neural_Style_Texture/src/models/definitions/vgg_nets.py
+++ /dev/null
@@ -1,241 +0,0 @@
-from collections import namedtuple
-import torch
-from torchvision import models
-from src.utils import utils
-"""
- More detail about the VGG architecture (if you want to understand magic/hardcoded numbers) can be found here:
-
- https://github.com/pytorch/vision/blob/3c254fb7af5f8af252c24e89949c54a3461ff0be/torchvision/models/vgg.py
-"""
-
-
-class Vgg16(torch.nn.Module):
- """Only those layers are exposed which have already proven to work nicely."""
-
- def __init__(self, requires_grad=False, show_progress=False):
- super().__init__()
- vgg_pretrained_features = models.vgg16(pretrained=True,
- progress=show_progress).features
- self.layer_names = {'relu1_2': 1, 'relu2_2': 2,
- 'relu3_3': 3, 'relu4_3': 4}
- self.content_feature_maps_index = self.layer_names[
- utils.yamlGet('contentLayer')]-1 # relu2_2
- self.style_feature_maps_indices = list(range(len(
- self.layer_names))) # all layers used for style representation
-
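- # Slice boundaries below follow torchvision's vgg16().features indexing:
- # [0:4) ends at relu1_2, [4:9) at relu2_2, [9:16) at relu3_3, [16:23) at relu4_3.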
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- for x in range(4):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(4, 9):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(9, 16):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(16, 23):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, x):
- x = self.slice1(x)
- relu1_2 = x
- x = self.slice2(x)
- relu2_2 = x
- x = self.slice3(x)
- relu3_3 = x
- x = self.slice4(x)
- relu4_3 = x
- vgg_outputs = namedtuple("VggOutputs", self.layer_names.keys())
- out = vgg_outputs(relu1_2, relu2_2, relu3_3, relu4_3)
- return out
-
-
-class Vgg16Experimental(torch.nn.Module):
- """Everything exposed so you can play with different combinations for style and content representation"""
-
- def __init__(self, requires_grad=False, show_progress=False):
- super().__init__()
- vgg_pretrained_features = models.vgg16(pretrained=True,
- progress=show_progress).features
- self.layer_names = [
- 'relu1_1', 'relu2_1', 'relu2_2', 'relu3_1', 'relu3_2', 'relu4_1',
- 'relu4_3', 'relu5_1'
- ]
- self.content_feature_maps_index = 4
- self.style_feature_maps_indices = list(range(len(
- self.layer_names))) # all layers used for style representation
-
- self.conv1_1 = vgg_pretrained_features[0]
- self.relu1_1 = vgg_pretrained_features[1]
- self.conv1_2 = vgg_pretrained_features[2]
- self.relu1_2 = vgg_pretrained_features[3]
- self.max_pooling1 = vgg_pretrained_features[4]
- self.conv2_1 = vgg_pretrained_features[5]
- self.relu2_1 = vgg_pretrained_features[6]
- self.conv2_2 = vgg_pretrained_features[7]
- self.relu2_2 = vgg_pretrained_features[8]
- self.max_pooling2 = vgg_pretrained_features[9]
- self.conv3_1 = vgg_pretrained_features[10]
- self.relu3_1 = vgg_pretrained_features[11]
- self.conv3_2 = vgg_pretrained_features[12]
- self.relu3_2 = vgg_pretrained_features[13]
- self.conv3_3 = vgg_pretrained_features[14]
- self.relu3_3 = vgg_pretrained_features[15]
- self.max_pooling3 = vgg_pretrained_features[16]
- self.conv4_1 = vgg_pretrained_features[17]
- self.relu4_1 = vgg_pretrained_features[18]
- self.conv4_2 = vgg_pretrained_features[19]
- self.relu4_2 = vgg_pretrained_features[20]
- self.conv4_3 = vgg_pretrained_features[21]
- self.relu4_3 = vgg_pretrained_features[22]
- self.max_pooling4 = vgg_pretrained_features[23]
- self.conv5_1 = vgg_pretrained_features[24]
- self.relu5_1 = vgg_pretrained_features[25]
- self.conv5_2 = vgg_pretrained_features[26]
- self.relu5_2 = vgg_pretrained_features[27]
- self.conv5_3 = vgg_pretrained_features[28]
- self.relu5_3 = vgg_pretrained_features[29]
- self.max_pooling5 = vgg_pretrained_features[30]
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, x):
- x = self.conv1_1(x)
- conv1_1 = x
- x = self.relu1_1(x)
- relu1_1 = x
- x = self.conv1_2(x)
- conv1_2 = x
- x = self.relu1_2(x)
- relu1_2 = x
- x = self.max_pooling1(x)
- x = self.conv2_1(x)
- conv2_1 = x
- x = self.relu2_1(x)
- relu2_1 = x
- x = self.conv2_2(x)
- conv2_2 = x
- x = self.relu2_2(x)
- relu2_2 = x
- x = self.max_pooling2(x)
- x = self.conv3_1(x)
- conv3_1 = x
- x = self.relu3_1(x)
- relu3_1 = x
- x = self.conv3_2(x)
- conv3_2 = x
- x = self.relu3_2(x)
- relu3_2 = x
- x = self.conv3_3(x)
- conv3_3 = x
- x = self.relu3_3(x)
- relu3_3 = x
- x = self.max_pooling3(x)
- x = self.conv4_1(x)
- conv4_1 = x
- x = self.relu4_1(x)
- relu4_1 = x
- x = self.conv4_2(x)
- conv4_2 = x
- x = self.relu4_2(x)
- relu4_2 = x
- x = self.conv4_3(x)
- conv4_3 = x
- x = self.relu4_3(x)
- relu4_3 = x
- x = self.max_pooling4(x)
- x = self.conv5_1(x)
- conv5_1 = x
- x = self.relu5_1(x)
- relu5_1 = x
- x = self.conv5_2(x)
- conv5_2 = x
- x = self.relu5_2(x)
- relu5_2 = x
- x = self.conv5_3(x)
- conv5_3 = x
- x = self.relu5_3(x)
- relu5_3 = x
- x = self.max_pooling5(x)
- # expose only the layers that you want to experiment with here
- vgg_outputs = namedtuple("VggOutputs", self.layer_names)
- out = vgg_outputs(relu1_1, relu2_1, relu2_2, relu3_1, relu3_2, relu4_1,
- relu4_3, relu5_1)
-
- return out
-
-
-class Vgg19(torch.nn.Module):
- """
- Used in the original NST paper, only those layers are exposed which were used in the original paper
-
- 'conv1_1', 'conv2_1', 'conv3_1', 'conv4_1', 'conv5_1' were used for style representation
- 'conv4_2' was used for content representation (although they did some experiments with conv2_2 and conv5_2)
- """
-
- def __init__(self,
- requires_grad=False,
- show_progress=False,
- use_relu=True):
- super().__init__()
- vgg_pretrained_features = models.vgg19(pretrained=True,
- progress=show_progress).features
- if use_relu: # use relu or as in original paper conv layers
- self.layer_names = [
- 'relu1_1', 'relu2_1', 'relu3_1', 'relu4_1', 'conv4_2', 'relu5_1'
- ]
- self.offset = 1
- else:
- self.layer_names = [
- 'conv1_1', 'conv2_1', 'conv3_1', 'conv4_1', 'conv4_2',
- 'conv5_1'
- ]
- self.offset = 0
- self.content_feature_maps_index = 4 # conv4_2
- # all layers used for style representation except conv4_2
- self.style_feature_maps_indices = list(range(len(self.layer_names)))
- self.style_feature_maps_indices.remove(4) # conv4_2
-
- self.slice1 = torch.nn.Sequential()
- self.slice2 = torch.nn.Sequential()
- self.slice3 = torch.nn.Sequential()
- self.slice4 = torch.nn.Sequential()
- self.slice5 = torch.nn.Sequential()
- self.slice6 = torch.nn.Sequential()
- for x in range(1 + self.offset):
- self.slice1.add_module(str(x), vgg_pretrained_features[x])
- for x in range(1 + self.offset, 6 + self.offset):
- self.slice2.add_module(str(x), vgg_pretrained_features[x])
- for x in range(6 + self.offset, 11 + self.offset):
- self.slice3.add_module(str(x), vgg_pretrained_features[x])
- for x in range(11 + self.offset, 20 + self.offset):
- self.slice4.add_module(str(x), vgg_pretrained_features[x])
- for x in range(20 + self.offset, 22):
- self.slice5.add_module(str(x), vgg_pretrained_features[x])
- for x in range(22, 29 + self.offset):
- self.slice6.add_module(str(x), vgg_pretrained_features[x])
- if not requires_grad:
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, x):
- x = self.slice1(x)
- layer1_1 = x
- x = self.slice2(x)
- layer2_1 = x
- x = self.slice3(x)
- layer3_1 = x
- x = self.slice4(x)
- layer4_1 = x
- x = self.slice5(x)
- conv4_2 = x
- x = self.slice6(x)
- layer5_1 = x
- vgg_outputs = namedtuple("VggOutputs", self.layer_names)
- out = vgg_outputs(layer1_1, layer2_1, layer3_1, layer4_1, conv4_2,
- layer5_1)
- return out
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/texmanager.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/texmanager.py
deleted file mode 100644
index 812eab58b8776257406ff34dfd33e0546aad7e64..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/texmanager.py
+++ /dev/null
@@ -1,368 +0,0 @@
-r"""
-Support for embedded TeX expressions in Matplotlib.
-
-Requirements:
-
-* LaTeX.
-* \*Agg backends: dvipng>=1.6.
-* PS backend: PSfrag, dvips, and Ghostscript>=9.0.
-* PDF and SVG backends: if LuaTeX is present, it will be used to speed up some
- post-processing steps, but note that it is not used to parse the TeX string
- itself (only LaTeX is supported).
-
-To enable TeX rendering of all text in your Matplotlib figure, set
-:rc:`text.usetex` to True.
-
-TeX and dvipng/dvips processing results are cached
-in ~/.matplotlib/tex.cache for reuse between sessions.
-
-`TexManager.get_rgba` can also be used to directly obtain raster output as RGBA
-NumPy arrays.
-"""
-
-import functools
-import hashlib
-import logging
-import os
-from pathlib import Path
-import subprocess
-from tempfile import TemporaryDirectory
-
-import numpy as np
-
-import matplotlib as mpl
-from matplotlib import _api, cbook, dviread
-
-_log = logging.getLogger(__name__)
-
-
-def _usepackage_if_not_loaded(package, *, option=None):
- """
- Output LaTeX code that loads a package (possibly with an option) if it
- hasn't been loaded yet.
-
- LaTeX cannot load twice a package with different options, so this helper
- can be used to protect against users loading arbitrary packages/options in
- their custom preamble.
- """
- option = f"[{option}]" if option is not None else ""
- return (
- r"\makeatletter"
- r"\@ifpackageloaded{%(package)s}{}{\usepackage%(option)s{%(package)s}}"
- r"\makeatother"
- ) % {"package": package, "option": option}
-
-
-class TexManager:
- """
- Convert strings to dvi files using TeX, caching the results to a directory.
-
- The cache directory is called ``tex.cache`` and is located in the directory
- returned by `.get_cachedir`.
-
- Repeated calls to this constructor always return the same instance.
- """
-
- texcache = _api.deprecate_privatize_attribute("3.8")
- _texcache = os.path.join(mpl.get_cachedir(), 'tex.cache')
- _grey_arrayd = {}
-
- _font_families = ('serif', 'sans-serif', 'cursive', 'monospace')
- _font_preambles = {
- 'new century schoolbook': r'\renewcommand{\rmdefault}{pnc}',
- 'bookman': r'\renewcommand{\rmdefault}{pbk}',
- 'times': r'\usepackage{mathptmx}',
- 'palatino': r'\usepackage{mathpazo}',
- 'zapf chancery': r'\usepackage{chancery}',
- 'cursive': r'\usepackage{chancery}',
- 'charter': r'\usepackage{charter}',
- 'serif': '',
- 'sans-serif': '',
- 'helvetica': r'\usepackage{helvet}',
- 'avant garde': r'\usepackage{avant}',
- 'courier': r'\usepackage{courier}',
- # Loading the type1ec package ensures that cm-super is installed, which
- # is necessary for Unicode computer modern. (It also allows the use of
- # computer modern at arbitrary sizes, but that's just a side effect.)
- 'monospace': r'\usepackage{type1ec}',
- 'computer modern roman': r'\usepackage{type1ec}',
- 'computer modern sans serif': r'\usepackage{type1ec}',
- 'computer modern typewriter': r'\usepackage{type1ec}',
- }
- _font_types = {
- 'new century schoolbook': 'serif',
- 'bookman': 'serif',
- 'times': 'serif',
- 'palatino': 'serif',
- 'zapf chancery': 'cursive',
- 'charter': 'serif',
- 'helvetica': 'sans-serif',
- 'avant garde': 'sans-serif',
- 'courier': 'monospace',
- 'computer modern roman': 'serif',
- 'computer modern sans serif': 'sans-serif',
- 'computer modern typewriter': 'monospace',
- }
-
- @functools.lru_cache # Always return the same instance.
- def __new__(cls):
- Path(cls._texcache).mkdir(parents=True, exist_ok=True)
- return object.__new__(cls)
-
- @classmethod
- def _get_font_family_and_reduced(cls):
- """Return the font family name and whether the font is reduced."""
- ff = mpl.rcParams['font.family']
- ff_val = ff[0].lower() if len(ff) == 1 else None
- if len(ff) == 1 and ff_val in cls._font_families:
- return ff_val, False
- elif len(ff) == 1 and ff_val in cls._font_preambles:
- return cls._font_types[ff_val], True
- else:
- _log.info('font.family must be one of (%s) when text.usetex is '
- 'True. serif will be used by default.',
- ', '.join(cls._font_families))
- return 'serif', False
-
- @classmethod
- def _get_font_preamble_and_command(cls):
- requested_family, is_reduced_font = cls._get_font_family_and_reduced()
-
- preambles = {}
- for font_family in cls._font_families:
- if is_reduced_font and font_family == requested_family:
- preambles[font_family] = cls._font_preambles[
- mpl.rcParams['font.family'][0].lower()]
- else:
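- # Scan the fonts configured for this family; the for/else below falls back to the
- # family's default preamble when none of them has a known LaTeX preamble.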
- for font in mpl.rcParams['font.' + font_family]:
- if font.lower() in cls._font_preambles:
- preambles[font_family] = \
- cls._font_preambles[font.lower()]
- _log.debug(
- 'family: %s, font: %s, info: %s',
- font_family, font,
- cls._font_preambles[font.lower()])
- break
- else:
- _log.debug('%s font is not compatible with usetex.',
- font)
- else:
- _log.info('No LaTeX-compatible font found for the %s font '
- 'family in rcParams. Using default.',
- font_family)
- preambles[font_family] = cls._font_preambles[font_family]
-
- # The following packages and commands need to be included in the latex
- # file's preamble:
- cmd = {preambles[family]
- for family in ['serif', 'sans-serif', 'monospace']}
- if requested_family == 'cursive':
- cmd.add(preambles['cursive'])
- cmd.add(r'\usepackage{type1cm}')
- preamble = '\n'.join(sorted(cmd))
- fontcmd = (r'\sffamily' if requested_family == 'sans-serif' else
- r'\ttfamily' if requested_family == 'monospace' else
- r'\rmfamily')
- return preamble, fontcmd
-
- @classmethod
- def get_basefile(cls, tex, fontsize, dpi=None):
- """
- Return a filename based on a hash of the string, fontsize, and dpi.
- """
- src = cls._get_tex_source(tex, fontsize) + str(dpi)
- filehash = hashlib.md5(src.encode('utf-8')).hexdigest()
- filepath = Path(cls._texcache)
-
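- # Shard the cache two directory levels deep by the leading hash characters, e.g. a hash
- # 'd41d8cd9...' ends up at <tex.cache>/d4/1d/d41d8cd9...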
- num_letters, num_levels = 2, 2
- for i in range(0, num_letters*num_levels, num_letters):
- filepath = filepath / Path(filehash[i:i+2])
-
- filepath.mkdir(parents=True, exist_ok=True)
- return os.path.join(filepath, filehash)
-
- @classmethod
- def get_font_preamble(cls):
- """
- Return a string containing font configuration for the tex preamble.
- """
- font_preamble, command = cls._get_font_preamble_and_command()
- return font_preamble
-
- @classmethod
- def get_custom_preamble(cls):
- """Return a string containing user additions to the tex preamble."""
- return mpl.rcParams['text.latex.preamble']
-
- @classmethod
- def _get_tex_source(cls, tex, fontsize):
- """Return the complete TeX source for processing a TeX string."""
- font_preamble, fontcmd = cls._get_font_preamble_and_command()
- baselineskip = 1.25 * fontsize
- return "\n".join([
- r"\documentclass{article}",
- r"% Pass-through \mathdefault, which is used in non-usetex mode",
- r"% to use the default text font but was historically suppressed",
- r"% in usetex mode.",
- r"\newcommand{\mathdefault}[1]{#1}",
- font_preamble,
- r"\usepackage[utf8]{inputenc}",
- r"\DeclareUnicodeCharacter{2212}{\ensuremath{-}}",
- r"% geometry is loaded before the custom preamble as ",
- r"% convert_psfrags relies on a custom preamble to change the ",
- r"% geometry.",
- r"\usepackage[papersize=72in, margin=1in]{geometry}",
- cls.get_custom_preamble(),
- r"% Use `underscore` package to take care of underscores in text.",
- r"% The [strings] option allows to use underscores in file names.",
- _usepackage_if_not_loaded("underscore", option="strings"),
- r"% Custom packages (e.g. newtxtext) may already have loaded ",
- r"% textcomp with different options.",
- _usepackage_if_not_loaded("textcomp"),
- r"\pagestyle{empty}",
- r"\begin{document}",
- r"% The empty hbox ensures that a page is printed even for empty",
- r"% inputs, except when using psfrag which gets confused by it.",
- r"% matplotlibbaselinemarker is used by dviread to detect the",
- r"% last line's baseline.",
- rf"\fontsize{{{fontsize}}}{{{baselineskip}}}%",
- r"\ifdefined\psfrag\else\hbox{}\fi%",
- rf"{{{fontcmd} {tex}}}%",
- r"\end{document}",
- ])
-
- @classmethod
- def make_tex(cls, tex, fontsize):
- """
- Generate a tex file to render the tex string at a specific font size.
-
- Return the file name.
- """
- texfile = cls.get_basefile(tex, fontsize) + ".tex"
- Path(texfile).write_text(cls._get_tex_source(tex, fontsize),
- encoding='utf-8')
- return texfile
-
- @classmethod
- def _run_checked_subprocess(cls, command, tex, *, cwd=None):
- _log.debug(cbook._pformat_subprocess(command))
- try:
- report = subprocess.check_output(
- command, cwd=cwd if cwd is not None else cls._texcache,
- stderr=subprocess.STDOUT)
- except FileNotFoundError as exc:
- raise RuntimeError(
- f'Failed to process string with tex because {command[0]} '
- 'could not be found') from exc
- except subprocess.CalledProcessError as exc:
- raise RuntimeError(
- '{prog} was not able to process the following string:\n'
- '{tex!r}\n\n'
- 'Here is the full command invocation and its output:\n\n'
- '{format_command}\n\n'
- '{exc}\n\n'.format(
- prog=command[0],
- format_command=cbook._pformat_subprocess(command),
- tex=tex.encode('unicode_escape'),
- exc=exc.output.decode('utf-8', 'backslashreplace'))
- ) from None
- _log.debug(report)
- return report
-
- @classmethod
- def make_dvi(cls, tex, fontsize):
- """
- Generate a dvi file containing latex's layout of tex string.
-
- Return the file name.
- """
- basefile = cls.get_basefile(tex, fontsize)
- dvifile = '%s.dvi' % basefile
- if not os.path.exists(dvifile):
- texfile = Path(cls.make_tex(tex, fontsize))
- # Generate the dvi in a temporary directory to avoid race
- # conditions e.g. if multiple processes try to process the same tex
- # string at the same time. Having tmpdir be a subdirectory of the
- # final output dir ensures that they are on the same filesystem,
- # and thus replace() works atomically. It also allows referring to
- # the texfile with a relative path (for pathological MPLCONFIGDIRs,
- # the absolute path may contain characters (e.g. ~) that TeX does
- # not support; n.b. relative paths cannot traverse parents, or it
- # will be blocked when `openin_any = p` in texmf.cnf).
- cwd = Path(dvifile).parent
- with TemporaryDirectory(dir=cwd) as tmpdir:
- tmppath = Path(tmpdir)
- cls._run_checked_subprocess(
- ["latex", "-interaction=nonstopmode", "--halt-on-error",
- f"--output-directory={tmppath.name}",
- f"{texfile.name}"], tex, cwd=cwd)
- (tmppath / Path(dvifile).name).replace(dvifile)
- return dvifile
-
- @classmethod
- def make_png(cls, tex, fontsize, dpi):
- """
- Generate a png file containing latex's rendering of tex string.
-
- Return the file name.
- """
- basefile = cls.get_basefile(tex, fontsize, dpi)
- pngfile = '%s.png' % basefile
- # see get_rgba for a discussion of the background
- if not os.path.exists(pngfile):
- dvifile = cls.make_dvi(tex, fontsize)
- cmd = ["dvipng", "-bg", "Transparent", "-D", str(dpi),
- "-T", "tight", "-o", pngfile, dvifile]
- # When testing, disable FreeType rendering for reproducibility; but
- # dvipng 1.16 has a bug (fixed in f3ff241) that breaks --freetype0
- # mode, so for it we keep FreeType enabled; the image will be
- # slightly off.
- if (getattr(mpl, "_called_from_pytest", False) and
- mpl._get_executable_info("dvipng").raw_version != "1.16"):
- cmd.insert(1, "--freetype0")
- cls._run_checked_subprocess(cmd, tex)
- return pngfile
-
- @classmethod
- def get_grey(cls, tex, fontsize=None, dpi=None):
- """Return the alpha channel."""
- if not fontsize:
- fontsize = mpl.rcParams['font.size']
- if not dpi:
- dpi = mpl.rcParams['savefig.dpi']
- key = cls._get_tex_source(tex, fontsize), dpi
- alpha = cls._grey_arrayd.get(key)
- if alpha is None:
- pngfile = cls.make_png(tex, fontsize, dpi)
- rgba = mpl.image.imread(os.path.join(cls._texcache, pngfile))
- cls._grey_arrayd[key] = alpha = rgba[:, :, -1]
- return alpha
-
- @classmethod
- def get_rgba(cls, tex, fontsize=None, dpi=None, rgb=(0, 0, 0)):
- r"""
- Return latex's rendering of the tex string as an RGBA array.
-
- Examples
- --------
- >>> texmanager = TexManager()
- >>> s = r"\TeX\ is $\displaystyle\sum_n\frac{-e^{i\pi}}{2^n}$!"
- >>> Z = texmanager.get_rgba(s, fontsize=12, dpi=80, rgb=(1, 0, 0))
- """
- alpha = cls.get_grey(tex, fontsize, dpi)
- rgba = np.empty((*alpha.shape, 4))
- rgba[..., :3] = mpl.colors.to_rgb(rgb)
- rgba[..., -1] = alpha
- return rgba
-
- @classmethod
- def get_text_width_height_descent(cls, tex, fontsize, renderer=None):
- """Return width, height and descent of the text."""
- if tex.strip() == '':
- return 0, 0, 0
- dvifile = cls.make_dvi(tex, fontsize)
- dpi_fraction = renderer.points_to_pixels(1.) if renderer else 1
- with dviread.Dvi(dvifile, 72 * dpi_fraction) as dvi:
- page, = dvi
- # A total height (including the descent) needs to be returned.
- return page.width, page.height + page.descent, page.descent
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/transforms.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/transforms.py
deleted file mode 100644
index d04b59afa9d7bc574b880fbaa2071d7799fd4e95..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/transforms.py
+++ /dev/null
@@ -1,2975 +0,0 @@
-"""
-Matplotlib includes a framework for arbitrary geometric
-transformations that is used to determine the final position of all
-elements drawn on the canvas.
-
-Transforms are composed into trees of `TransformNode` objects
-whose actual value depends on their children. When the contents of
-children change, their parents are automatically invalidated. The
-next time an invalidated transform is accessed, it is recomputed to
-reflect those changes. This invalidation/caching approach prevents
-unnecessary recomputations of transforms, and contributes to better
-interactive performance.
-
-For example, here is a graph of the transform tree used to plot data
-to the graph:
-
-.. image:: ../_static/transforms.png
-
-The framework can be used for both affine and non-affine
-transformations. However, for speed, we want to use the backend
-renderers to perform affine transformations whenever possible.
-Therefore, it is possible to perform just the affine or non-affine
-part of a transformation on a set of data. The affine is always
-assumed to occur after the non-affine. For any transform::
-
- full transform == non-affine part + affine part
-
-The backends are not expected to handle non-affine transformations
-themselves.
-
-See the tutorial :ref:`transforms_tutorial` for examples
-of how to use transforms.
-"""
-
-# Note: There are a number of places in the code where we use `np.min` or
-# `np.minimum` instead of the builtin `min`, and likewise for `max`. This is
-# done so that `nan`s are propagated, instead of being silently dropped.
-
-import copy
-import functools
-import textwrap
-import weakref
-import math
-
-import numpy as np
-from numpy.linalg import inv
-
-from matplotlib import _api
-from matplotlib._path import (
- affine_transform, count_bboxes_overlapping_bbox, update_path_extents)
-from .path import Path
-
-DEBUG = False
-
-
-def _make_str_method(*args, **kwargs):
- """
- Generate a ``__str__`` method for a `.Transform` subclass.
-
- After ::
-
- class T:
- __str__ = _make_str_method("attr", key="other")
-
- ``str(T(...))`` will be
-
- .. code-block:: text
-
- {type(T).__name__}(
- {self.attr},
- key={self.other})
- """
- indent = functools.partial(textwrap.indent, prefix=" " * 4)
- def strrepr(x): return repr(x) if isinstance(x, str) else str(x)
- return lambda self: (
- type(self).__name__ + "("
- + ",".join([*(indent("\n" + strrepr(getattr(self, arg)))
- for arg in args),
- *(indent("\n" + k + "=" + strrepr(getattr(self, arg)))
- for k, arg in kwargs.items())])
- + ")")
-
-
-class TransformNode:
- """
- The base class for anything that participates in the transform tree
- and needs to invalidate its parents or be invalidated. This includes
- classes that are not really transforms, such as bounding boxes, since some
- transforms depend on bounding boxes to compute their values.
- """
-
- # Invalidation may affect only the affine part. If the
- # invalidation was "affine-only", the _invalid member is set to
- # INVALID_AFFINE_ONLY
- INVALID_NON_AFFINE = _api.deprecated("3.8")(_api.classproperty(lambda cls: 1))
- INVALID_AFFINE = _api.deprecated("3.8")(_api.classproperty(lambda cls: 2))
- INVALID = _api.deprecated("3.8")(_api.classproperty(lambda cls: 3))
-
- # Possible values for the _invalid attribute.
- _VALID, _INVALID_AFFINE_ONLY, _INVALID_FULL = range(3)
-
- # Some metadata about the transform, used to determine whether an
- # invalidation is affine-only
- is_affine = False
- is_bbox = False
-
- pass_through = False
- """
- If pass_through is True, all ancestors will always be
- invalidated, even if 'self' is already invalid.
- """
-
- def __init__(self, shorthand_name=None):
- """
- Parameters
- ----------
- shorthand_name : str
- A string representing the "name" of the transform. The name carries
- no significance other than to improve the readability of
- ``str(transform)`` when DEBUG=True.
- """
- self._parents = {}
- # Initially invalid, until first computation.
- self._invalid = self._INVALID_FULL
- self._shorthand_name = shorthand_name or ''
-
- if DEBUG:
- def __str__(self):
- # either just return the name of this TransformNode, or its repr
- return self._shorthand_name or repr(self)
-
- def __getstate__(self):
- # turn the dictionary with weak values into a normal dictionary
- return {**self.__dict__,
- '_parents': {k: v() for k, v in self._parents.items()}}
-
- def __setstate__(self, data_dict):
- self.__dict__ = data_dict
- # turn the normal dictionary back into a dictionary with weak values
- # The extra lambda is to provide a callback to remove dead
- # weakrefs from the dictionary when garbage collection is done.
- self._parents = {
- k: weakref.ref(v, lambda _, pop=self._parents.pop, k=k: pop(k))
- for k, v in self._parents.items() if v is not None}
-
- def __copy__(self):
- other = copy.copy(super())
- # If `c = a + b; a1 = copy(a)`, then modifications to `a1` do not
- # propagate back to `c`, i.e. we need to clear the parents of `a1`.
- other._parents = {}
- # If `c = a + b; c1 = copy(c)`, then modifications to `a` also need to
- # be propagated to `c1`.
- for key, val in vars(self).items():
- if isinstance(val, TransformNode) and id(self) in val._parents:
- other.set_children(val) # val == getattr(other, key)
- return other
-
- def invalidate(self):
- """
-        Invalidate this `TransformNode` and trigger an invalidation of its
- ancestors. Should be called any time the transform changes.
- """
- return self._invalidate_internal(
- level=self._INVALID_AFFINE_ONLY if self.is_affine else self._INVALID_FULL,
- invalidating_node=self)
-
- def _invalidate_internal(self, level, invalidating_node):
- """
- Called by :meth:`invalidate` and subsequently ascends the transform
- stack calling each TransformNode's _invalidate_internal method.
- """
- # If we are already more invalid than the currently propagated invalidation,
- # then we don't need to do anything.
- if level <= self._invalid and not self.pass_through:
- return
- self._invalid = level
- for parent in list(self._parents.values()):
- parent = parent() # Dereference the weak reference.
- if parent is not None:
- parent._invalidate_internal(level=level, invalidating_node=self)
-
- def set_children(self, *children):
- """
- Set the children of the transform, to let the invalidation
- system know which transforms can invalidate this transform.
- Should be called from the constructor of any transforms that
- depend on other transforms.
- """
- # Parents are stored as weak references, so that if the
- # parents are destroyed, references from the children won't
- # keep them alive.
- id_self = id(self)
- for child in children:
- # Use weak references so this dictionary won't keep obsolete nodes
- # alive; the callback deletes the dictionary entry. This is a
- # performance improvement over using WeakValueDictionary.
- ref = weakref.ref(
- self, lambda _, pop=child._parents.pop, k=id_self: pop(k))
- child._parents[id_self] = ref
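-
-    # A minimal sketch of the intended pattern (hypothetical subclass, for
-    # illustration only): a node that depends on another transform registers
-    # itself as a parent, so invalidating the child also invalidates it.
-    #
-    #     class _Dependent(TransformNode):
-    #         def __init__(self, other):
-    #             super().__init__()
-    #             self._other = other
-    #             self.set_children(other)   # other.invalidate() now also
-    #                                        # invalidates this node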
-
- def frozen(self):
- """
- Return a frozen copy of this transform node. The frozen copy will not
- be updated when its children change. Useful for storing a previously
- known state of a transform where ``copy.deepcopy()`` might normally be
- used.
- """
- return self
-
-
-class BboxBase(TransformNode):
- """
- The base class of all bounding boxes.
-
- This class is immutable; `Bbox` is a mutable subclass.
-
- The canonical representation is as two points, with no
- restrictions on their ordering. Convenience properties are
- provided to get the left, bottom, right and top edges and width
- and height, but these are not stored explicitly.
- """
-
- is_bbox = True
- is_affine = True
-
- if DEBUG:
- @staticmethod
- def _check(points):
- if isinstance(points, np.ma.MaskedArray):
- _api.warn_external("Bbox bounds are a masked array.")
- points = np.asarray(points)
- if any((points[1, :] - points[0, :]) == 0):
- _api.warn_external("Singular Bbox.")
-
- def frozen(self):
- return Bbox(self.get_points().copy())
- frozen.__doc__ = TransformNode.__doc__
-
- def __array__(self, *args, **kwargs):
- return self.get_points()
-
- @property
- def x0(self):
- """
- The first of the pair of *x* coordinates that define the bounding box.
-
- This is not guaranteed to be less than :attr:`x1` (for that, use
- :attr:`xmin`).
- """
- return self.get_points()[0, 0]
-
- @property
- def y0(self):
- """
- The first of the pair of *y* coordinates that define the bounding box.
-
- This is not guaranteed to be less than :attr:`y1` (for that, use
- :attr:`ymin`).
- """
- return self.get_points()[0, 1]
-
- @property
- def x1(self):
- """
- The second of the pair of *x* coordinates that define the bounding box.
-
- This is not guaranteed to be greater than :attr:`x0` (for that, use
- :attr:`xmax`).
- """
- return self.get_points()[1, 0]
-
- @property
- def y1(self):
- """
- The second of the pair of *y* coordinates that define the bounding box.
-
- This is not guaranteed to be greater than :attr:`y0` (for that, use
- :attr:`ymax`).
- """
- return self.get_points()[1, 1]
-
- @property
- def p0(self):
- """
- The first pair of (*x*, *y*) coordinates that define the bounding box.
-
- This is not guaranteed to be the bottom-left corner (for that, use
- :attr:`min`).
- """
- return self.get_points()[0]
-
- @property
- def p1(self):
- """
- The second pair of (*x*, *y*) coordinates that define the bounding box.
-
- This is not guaranteed to be the top-right corner (for that, use
- :attr:`max`).
- """
- return self.get_points()[1]
-
- @property
- def xmin(self):
- """The left edge of the bounding box."""
- return np.min(self.get_points()[:, 0])
-
- @property
- def ymin(self):
- """The bottom edge of the bounding box."""
- return np.min(self.get_points()[:, 1])
-
- @property
- def xmax(self):
- """The right edge of the bounding box."""
- return np.max(self.get_points()[:, 0])
-
- @property
- def ymax(self):
- """The top edge of the bounding box."""
- return np.max(self.get_points()[:, 1])
-
- @property
- def min(self):
- """The bottom-left corner of the bounding box."""
- return np.min(self.get_points(), axis=0)
-
- @property
- def max(self):
- """The top-right corner of the bounding box."""
- return np.max(self.get_points(), axis=0)
-
- @property
- def intervalx(self):
- """
- The pair of *x* coordinates that define the bounding box.
-
- This is not guaranteed to be sorted from left to right.
- """
- return self.get_points()[:, 0]
-
- @property
- def intervaly(self):
- """
- The pair of *y* coordinates that define the bounding box.
-
- This is not guaranteed to be sorted from bottom to top.
- """
- return self.get_points()[:, 1]
-
- @property
- def width(self):
- """The (signed) width of the bounding box."""
- points = self.get_points()
- return points[1, 0] - points[0, 0]
-
- @property
- def height(self):
- """The (signed) height of the bounding box."""
- points = self.get_points()
- return points[1, 1] - points[0, 1]
-
- @property
- def size(self):
- """The (signed) width and height of the bounding box."""
- points = self.get_points()
- return points[1] - points[0]
-
- @property
- def bounds(self):
- """Return (:attr:`x0`, :attr:`y0`, :attr:`width`, :attr:`height`)."""
- (x0, y0), (x1, y1) = self.get_points()
- return (x0, y0, x1 - x0, y1 - y0)
-
- @property
- def extents(self):
- """Return (:attr:`x0`, :attr:`y0`, :attr:`x1`, :attr:`y1`)."""
- return self.get_points().flatten() # flatten returns a copy.
-
- def get_points(self):
- raise NotImplementedError
-
- def containsx(self, x):
- """
- Return whether *x* is in the closed (:attr:`x0`, :attr:`x1`) interval.
- """
- x0, x1 = self.intervalx
- return x0 <= x <= x1 or x0 >= x >= x1
-
- def containsy(self, y):
- """
- Return whether *y* is in the closed (:attr:`y0`, :attr:`y1`) interval.
- """
- y0, y1 = self.intervaly
- return y0 <= y <= y1 or y0 >= y >= y1
-
- def contains(self, x, y):
- """
- Return whether ``(x, y)`` is in the bounding box or on its edge.
- """
- return self.containsx(x) and self.containsy(y)
-
- def overlaps(self, other):
- """
- Return whether this bounding box overlaps with the other bounding box.
-
- Parameters
- ----------
- other : `.BboxBase`
- """
- ax1, ay1, ax2, ay2 = self.extents
- bx1, by1, bx2, by2 = other.extents
- if ax2 < ax1:
- ax2, ax1 = ax1, ax2
- if ay2 < ay1:
- ay2, ay1 = ay1, ay2
- if bx2 < bx1:
- bx2, bx1 = bx1, bx2
- if by2 < by1:
- by2, by1 = by1, by2
- return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2
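-
-    # A brief usage sketch (illustrative values): both checks are inclusive
-    # of the bbox edges.
-    #
-    #     bb = Bbox([[0, 0], [2, 2]])
-    #     bb.contains(1, 1)                      # -> True
-    #     bb.overlaps(Bbox([[1, 1], [3, 3]]))    # -> True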
-
- def fully_containsx(self, x):
- """
- Return whether *x* is in the open (:attr:`x0`, :attr:`x1`) interval.
- """
- x0, x1 = self.intervalx
- return x0 < x < x1 or x0 > x > x1
-
- def fully_containsy(self, y):
- """
- Return whether *y* is in the open (:attr:`y0`, :attr:`y1`) interval.
- """
- y0, y1 = self.intervaly
- return y0 < y < y1 or y0 > y > y1
-
- def fully_contains(self, x, y):
- """
-        Return whether ``(x, y)`` is in the bounding box, but not on its edge.
- """
- return self.fully_containsx(x) and self.fully_containsy(y)
-
- def fully_overlaps(self, other):
- """
- Return whether this bounding box overlaps with the other bounding box,
- not including the edges.
-
- Parameters
- ----------
- other : `.BboxBase`
- """
- ax1, ay1, ax2, ay2 = self.extents
- bx1, by1, bx2, by2 = other.extents
- if ax2 < ax1:
- ax2, ax1 = ax1, ax2
- if ay2 < ay1:
- ay2, ay1 = ay1, ay2
- if bx2 < bx1:
- bx2, bx1 = bx1, bx2
- if by2 < by1:
- by2, by1 = by1, by2
- return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2
-
- def transformed(self, transform):
- """
- Construct a `Bbox` by statically transforming this one by *transform*.
- """
- pts = self.get_points()
- ll, ul, lr = transform.transform(np.array(
- [pts[0], [pts[0, 0], pts[1, 1]], [pts[1, 0], pts[0, 1]]]))
- return Bbox([ll, [lr[0], ul[1]]])
-
- coefs = {'C': (0.5, 0.5),
- 'SW': (0, 0),
- 'S': (0.5, 0),
- 'SE': (1.0, 0),
- 'E': (1.0, 0.5),
- 'NE': (1.0, 1.0),
- 'N': (0.5, 1.0),
- 'NW': (0, 1.0),
- 'W': (0, 0.5)}
-
- def anchored(self, c, container=None):
- """
- Return a copy of the `Bbox` anchored to *c* within *container*.
-
- Parameters
- ----------
- c : (float, float) or {'C', 'SW', 'S', 'SE', 'E', 'NE', ...}
- Either an (*x*, *y*) pair of relative coordinates (0 is left or
- bottom, 1 is right or top), 'C' (center), or a cardinal direction
- ('SW', southwest, is bottom left, etc.).
- container : `Bbox`, optional
- The box within which the `Bbox` is positioned.
-
- See Also
- --------
- .Axes.set_anchor
- """
- if container is None:
- _api.warn_deprecated(
- "3.8", message="Calling anchored() with no container bbox "
- "returns a frozen copy of the original bbox and is deprecated "
- "since %(since)s.")
- container = self
- l, b, w, h = container.bounds
- L, B, W, H = self.bounds
- cx, cy = self.coefs[c] if isinstance(c, str) else c
- return Bbox(self._points +
- [(l + cx * (w - W)) - L,
- (b + cy * (h - H)) - B])
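-
-    # A minimal usage sketch (illustrative values): anchor a 1x1 box at the
-    # center of a 10x10 container; the anchored copy keeps its own size.
-    #
-    #     container = Bbox([[0, 0], [10, 10]])
-    #     Bbox([[0, 0], [1, 1]]).anchored('C', container=container)
-    #     # -> Bbox([[4.5, 4.5], [5.5, 5.5]])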
-
- def shrunk(self, mx, my):
- """
- Return a copy of the `Bbox`, shrunk by the factor *mx*
- in the *x* direction and the factor *my* in the *y* direction.
- The lower left corner of the box remains unchanged. Normally
- *mx* and *my* will be less than 1, but this is not enforced.
- """
- w, h = self.size
- return Bbox([self._points[0],
- self._points[0] + [mx * w, my * h]])
-
- def shrunk_to_aspect(self, box_aspect, container=None, fig_aspect=1.0):
- """
- Return a copy of the `Bbox`, shrunk so that it is as
- large as it can be while having the desired aspect ratio,
- *box_aspect*. If the box coordinates are relative (i.e.
- fractions of a larger box such as a figure) then the
- physical aspect ratio of that figure is specified with
- *fig_aspect*, so that *box_aspect* can also be given as a
- ratio of the absolute dimensions, not the relative dimensions.
- """
- if box_aspect <= 0 or fig_aspect <= 0:
- raise ValueError("'box_aspect' and 'fig_aspect' must be positive")
- if container is None:
- container = self
- w, h = container.size
- H = w * box_aspect / fig_aspect
- if H <= h:
- W = w
- else:
- W = h * fig_aspect / box_aspect
- H = h
- return Bbox([self._points[0],
- self._points[0] + (W, H)])
-
- def splitx(self, *args):
- """
- Return a list of new `Bbox` objects formed by splitting the original
- one with vertical lines at fractional positions given by *args*.
- """
- xf = [0, *args, 1]
- x0, y0, x1, y1 = self.extents
- w = x1 - x0
- return [Bbox([[x0 + xf0 * w, y0], [x0 + xf1 * w, y1]])
- for xf0, xf1 in zip(xf[:-1], xf[1:])]
-
- def splity(self, *args):
- """
- Return a list of new `Bbox` objects formed by splitting the original
- one with horizontal lines at fractional positions given by *args*.
- """
- yf = [0, *args, 1]
- x0, y0, x1, y1 = self.extents
- h = y1 - y0
- return [Bbox([[x0, y0 + yf0 * h], [x1, y0 + yf1 * h]])
- for yf0, yf1 in zip(yf[:-1], yf[1:])]
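-
-    # A short sketch (illustrative values): fractional split positions are
-    # measured along the width of the original box.
-    #
-    #     Bbox([[0, 0], [4, 2]]).splitx(0.25, 0.75)
-    #     # -> [Bbox([[0.0, 0.0], [1.0, 2.0]]),
-    #     #     Bbox([[1.0, 0.0], [3.0, 2.0]]),
-    #     #     Bbox([[3.0, 0.0], [4.0, 2.0]])]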
-
- def count_contains(self, vertices):
- """
- Count the number of vertices contained in the `Bbox`.
- Any vertices with a non-finite x or y value are ignored.
-
- Parameters
- ----------
- vertices : (N, 2) array
- """
- if len(vertices) == 0:
- return 0
- vertices = np.asarray(vertices)
- with np.errstate(invalid='ignore'):
- return (((self.min < vertices) &
- (vertices < self.max)).all(axis=1).sum())
-
- def count_overlaps(self, bboxes):
- """
- Count the number of bounding boxes that overlap this one.
-
- Parameters
- ----------
- bboxes : sequence of `.BboxBase`
- """
- return count_bboxes_overlapping_bbox(
- self, np.atleast_3d([np.array(x) for x in bboxes]))
-
- def expanded(self, sw, sh):
- """
- Construct a `Bbox` by expanding this one around its center by the
- factors *sw* and *sh*.
- """
- width = self.width
- height = self.height
- deltaw = (sw * width - width) / 2.0
- deltah = (sh * height - height) / 2.0
- a = np.array([[-deltaw, -deltah], [deltaw, deltah]])
- return Bbox(self._points + a)
-
- @_api.rename_parameter("3.8", "p", "w_pad")
- def padded(self, w_pad, h_pad=None):
- """
- Construct a `Bbox` by padding this one on all four sides.
-
- Parameters
- ----------
- w_pad : float
- Width pad
- h_pad : float, optional
- Height pad. Defaults to *w_pad*.
-
- """
- points = self.get_points()
- if h_pad is None:
- h_pad = w_pad
- return Bbox(points + [[-w_pad, -h_pad], [w_pad, h_pad]])
-
- def translated(self, tx, ty):
- """Construct a `Bbox` by translating this one by *tx* and *ty*."""
- return Bbox(self._points + (tx, ty))
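-
-    # A few illustrative one-liners (values chosen for readability):
-    #
-    #     bb = Bbox([[0, 0], [2, 2]])
-    #     bb.expanded(2, 2)      # -> Bbox([[-1.0, -1.0], [3.0, 3.0]])
-    #     bb.padded(0.5)         # -> Bbox([[-0.5, -0.5], [2.5, 2.5]])
-    #     bb.translated(1, 1)    # -> Bbox([[1.0, 1.0], [3.0, 3.0]])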
-
- def corners(self):
- """
- Return the corners of this rectangle as an array of points.
-
- Specifically, this returns the array
- ``[[x0, y0], [x0, y1], [x1, y0], [x1, y1]]``.
- """
- (x0, y0), (x1, y1) = self.get_points()
- return np.array([[x0, y0], [x0, y1], [x1, y0], [x1, y1]])
-
- def rotated(self, radians):
- """
- Return the axes-aligned bounding box that bounds the result of rotating
- this `Bbox` by an angle of *radians*.
- """
- corners = self.corners()
- corners_rotated = Affine2D().rotate(radians).transform(corners)
- bbox = Bbox.unit()
- bbox.update_from_data_xy(corners_rotated, ignore=True)
- return bbox
-
- @staticmethod
- def union(bboxes):
- """Return a `Bbox` that contains all of the given *bboxes*."""
- if not len(bboxes):
- raise ValueError("'bboxes' cannot be empty")
- x0 = np.min([bbox.xmin for bbox in bboxes])
- x1 = np.max([bbox.xmax for bbox in bboxes])
- y0 = np.min([bbox.ymin for bbox in bboxes])
- y1 = np.max([bbox.ymax for bbox in bboxes])
- return Bbox([[x0, y0], [x1, y1]])
-
- @staticmethod
- def intersection(bbox1, bbox2):
- """
- Return the intersection of *bbox1* and *bbox2* if they intersect, or
- None if they don't.
- """
- x0 = np.maximum(bbox1.xmin, bbox2.xmin)
- x1 = np.minimum(bbox1.xmax, bbox2.xmax)
- y0 = np.maximum(bbox1.ymin, bbox2.ymin)
- y1 = np.minimum(bbox1.ymax, bbox2.ymax)
- return Bbox([[x0, y0], [x1, y1]]) if x0 <= x1 and y0 <= y1 else None
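-
-    # A quick sketch (illustrative values) of the two static constructors:
-    #
-    #     a, b = Bbox([[0, 0], [2, 2]]), Bbox([[1, 1], [3, 3]])
-    #     Bbox.union([a, b])         # -> Bbox([[0.0, 0.0], [3.0, 3.0]])
-    #     Bbox.intersection(a, b)    # -> Bbox([[1.0, 1.0], [2.0, 2.0]])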
-
-_default_minpos = np.array([np.inf, np.inf])
-
-
-class Bbox(BboxBase):
- """
- A mutable bounding box.
-
- Examples
- --------
- **Create from known bounds**
-
- The default constructor takes the boundary "points" ``[[xmin, ymin],
- [xmax, ymax]]``.
-
- >>> Bbox([[1, 1], [3, 7]])
- Bbox([[1.0, 1.0], [3.0, 7.0]])
-
- Alternatively, a Bbox can be created from the flattened points array, the
- so-called "extents" ``(xmin, ymin, xmax, ymax)``
-
- >>> Bbox.from_extents(1, 1, 3, 7)
- Bbox([[1.0, 1.0], [3.0, 7.0]])
-
- or from the "bounds" ``(xmin, ymin, width, height)``.
-
- >>> Bbox.from_bounds(1, 1, 2, 6)
- Bbox([[1.0, 1.0], [3.0, 7.0]])
-
- **Create from collections of points**
-
- The "empty" object for accumulating Bboxs is the null bbox, which is a
- stand-in for the empty set.
-
- >>> Bbox.null()
- Bbox([[inf, inf], [-inf, -inf]])
-
- Adding points to the null bbox will give you the bbox of those points.
-
- >>> box = Bbox.null()
- >>> box.update_from_data_xy([[1, 1]])
- >>> box
- Bbox([[1.0, 1.0], [1.0, 1.0]])
- >>> box.update_from_data_xy([[2, 3], [3, 2]], ignore=False)
- >>> box
- Bbox([[1.0, 1.0], [3.0, 3.0]])
-
- Setting ``ignore=True`` is equivalent to starting over from a null bbox.
-
- >>> box.update_from_data_xy([[1, 1]], ignore=True)
- >>> box
- Bbox([[1.0, 1.0], [1.0, 1.0]])
-
- .. warning::
-
- It is recommended to always specify ``ignore`` explicitly. If not, the
- default value of ``ignore`` can be changed at any time by code with
- access to your Bbox, for example using the method `~.Bbox.ignore`.
-
- **Properties of the ``null`` bbox**
-
- .. note::
-
- The current behavior of `Bbox.null()` may be surprising as it does
- not have all of the properties of the "empty set", and as such does
- not behave like a "zero" object in the mathematical sense. We may
- change that in the future (with a deprecation period).
-
- The null bbox is the identity for intersections
-
- >>> Bbox.intersection(Bbox([[1, 1], [3, 7]]), Bbox.null())
- Bbox([[1.0, 1.0], [3.0, 7.0]])
-
- except with itself, where it returns the full space.
-
- >>> Bbox.intersection(Bbox.null(), Bbox.null())
- Bbox([[-inf, -inf], [inf, inf]])
-
- A union containing null will always return the full space (not the other
- set!)
-
- >>> Bbox.union([Bbox([[0, 0], [0, 0]]), Bbox.null()])
- Bbox([[-inf, -inf], [inf, inf]])
- """
-
- def __init__(self, points, **kwargs):
- """
- Parameters
- ----------
- points : `~numpy.ndarray`
- A (2, 2) array of the form ``[[x0, y0], [x1, y1]]``.
- """
- super().__init__(**kwargs)
- points = np.asarray(points, float)
- if points.shape != (2, 2):
- raise ValueError('Bbox points must be of the form '
- '"[[x0, y0], [x1, y1]]".')
- self._points = points
- self._minpos = _default_minpos.copy()
- self._ignore = True
- # it is helpful in some contexts to know if the bbox is a
- # default or has been mutated; we store the orig points to
- # support the mutated methods
- self._points_orig = self._points.copy()
- if DEBUG:
- ___init__ = __init__
-
- def __init__(self, points, **kwargs):
- self._check(points)
- self.___init__(points, **kwargs)
-
- def invalidate(self):
- self._check(self._points)
- super().invalidate()
-
- def frozen(self):
- # docstring inherited
- frozen_bbox = super().frozen()
- frozen_bbox._minpos = self.minpos.copy()
- return frozen_bbox
-
- @staticmethod
- def unit():
- """Create a new unit `Bbox` from (0, 0) to (1, 1)."""
- return Bbox([[0, 0], [1, 1]])
-
- @staticmethod
- def null():
- """Create a new null `Bbox` from (inf, inf) to (-inf, -inf)."""
- return Bbox([[np.inf, np.inf], [-np.inf, -np.inf]])
-
- @staticmethod
- def from_bounds(x0, y0, width, height):
- """
- Create a new `Bbox` from *x0*, *y0*, *width* and *height*.
-
- *width* and *height* may be negative.
- """
- return Bbox.from_extents(x0, y0, x0 + width, y0 + height)
-
- @staticmethod
- def from_extents(*args, minpos=None):
- """
- Create a new Bbox from *left*, *bottom*, *right* and *top*.
-
- The *y*-axis increases upwards.
-
- Parameters
- ----------
- left, bottom, right, top : float
- The four extents of the bounding box.
- minpos : float or None
- If this is supplied, the Bbox will have a minimum positive value
- set. This is useful when dealing with logarithmic scales and other
- scales where negative bounds result in floating point errors.
- """
- bbox = Bbox(np.reshape(args, (2, 2)))
- if minpos is not None:
- bbox._minpos[:] = minpos
- return bbox
-
- def __format__(self, fmt):
- return (
- 'Bbox(x0={0.x0:{1}}, y0={0.y0:{1}}, x1={0.x1:{1}}, y1={0.y1:{1}})'.
- format(self, fmt))
-
- def __str__(self):
- return format(self, '')
-
- def __repr__(self):
- return 'Bbox([[{0.x0}, {0.y0}], [{0.x1}, {0.y1}]])'.format(self)
-
- def ignore(self, value):
- """
- Set whether the existing bounds of the box should be ignored
- by subsequent calls to :meth:`update_from_data_xy`.
-
-        Parameters
-        ----------
-        value : bool
- - When ``True``, subsequent calls to `update_from_data_xy` will
- ignore the existing bounds of the `Bbox`.
- - When ``False``, subsequent calls to `update_from_data_xy` will
- include the existing bounds of the `Bbox`.
- """
- self._ignore = value
-
- def update_from_path(self, path, ignore=None, updatex=True, updatey=True):
- """
- Update the bounds of the `Bbox` to contain the vertices of the
- provided path. After updating, the bounds will have positive *width*
- and *height*; *x0* and *y0* will be the minimal values.
-
- Parameters
- ----------
- path : `~matplotlib.path.Path`
- ignore : bool, optional
- - When ``True``, ignore the existing bounds of the `Bbox`.
- - When ``False``, include the existing bounds of the `Bbox`.
- - When ``None``, use the last value passed to :meth:`ignore`.
- updatex, updatey : bool, default: True
- When ``True``, update the x/y values.
- """
- if ignore is None:
- ignore = self._ignore
-
- if path.vertices.size == 0:
- return
-
- points, minpos, changed = update_path_extents(
- path, None, self._points, self._minpos, ignore)
-
- if changed:
- self.invalidate()
- if updatex:
- self._points[:, 0] = points[:, 0]
- self._minpos[0] = minpos[0]
- if updatey:
- self._points[:, 1] = points[:, 1]
- self._minpos[1] = minpos[1]
-
- def update_from_data_x(self, x, ignore=None):
- """
- Update the x-bounds of the `Bbox` based on the passed in data. After
- updating, the bounds will have positive *width*, and *x0* will be the
- minimal value.
-
- Parameters
- ----------
- x : `~numpy.ndarray`
- Array of x-values.
- ignore : bool, optional
- - When ``True``, ignore the existing bounds of the `Bbox`.
- - When ``False``, include the existing bounds of the `Bbox`.
- - When ``None``, use the last value passed to :meth:`ignore`.
- """
- x = np.ravel(x)
- self.update_from_data_xy(np.column_stack([x, np.ones(x.size)]),
- ignore=ignore, updatey=False)
-
- def update_from_data_y(self, y, ignore=None):
- """
- Update the y-bounds of the `Bbox` based on the passed in data. After
- updating, the bounds will have positive *height*, and *y0* will be the
- minimal value.
-
- Parameters
- ----------
- y : `~numpy.ndarray`
- Array of y-values.
- ignore : bool, optional
- - When ``True``, ignore the existing bounds of the `Bbox`.
- - When ``False``, include the existing bounds of the `Bbox`.
- - When ``None``, use the last value passed to :meth:`ignore`.
- """
- y = np.ravel(y)
- self.update_from_data_xy(np.column_stack([np.ones(y.size), y]),
- ignore=ignore, updatex=False)
-
- def update_from_data_xy(self, xy, ignore=None, updatex=True, updatey=True):
- """
- Update the `Bbox` bounds based on the passed in *xy* coordinates.
-
- After updating, the bounds will have positive *width* and *height*;
- *x0* and *y0* will be the minimal values.
-
- Parameters
- ----------
- xy : (N, 2) array-like
- The (x, y) coordinates.
- ignore : bool, optional
- - When ``True``, ignore the existing bounds of the `Bbox`.
- - When ``False``, include the existing bounds of the `Bbox`.
- - When ``None``, use the last value passed to :meth:`ignore`.
- updatex, updatey : bool, default: True
- When ``True``, update the x/y values.
- """
- if len(xy) == 0:
- return
-
- path = Path(xy)
- self.update_from_path(path, ignore=ignore,
- updatex=updatex, updatey=updatey)
-
- @BboxBase.x0.setter
- def x0(self, val):
- self._points[0, 0] = val
- self.invalidate()
-
- @BboxBase.y0.setter
- def y0(self, val):
- self._points[0, 1] = val
- self.invalidate()
-
- @BboxBase.x1.setter
- def x1(self, val):
- self._points[1, 0] = val
- self.invalidate()
-
- @BboxBase.y1.setter
- def y1(self, val):
- self._points[1, 1] = val
- self.invalidate()
-
- @BboxBase.p0.setter
- def p0(self, val):
- self._points[0] = val
- self.invalidate()
-
- @BboxBase.p1.setter
- def p1(self, val):
- self._points[1] = val
- self.invalidate()
-
- @BboxBase.intervalx.setter
- def intervalx(self, interval):
- self._points[:, 0] = interval
- self.invalidate()
-
- @BboxBase.intervaly.setter
- def intervaly(self, interval):
- self._points[:, 1] = interval
- self.invalidate()
-
- @BboxBase.bounds.setter
- def bounds(self, bounds):
- l, b, w, h = bounds
- points = np.array([[l, b], [l + w, b + h]], float)
- if np.any(self._points != points):
- self._points = points
- self.invalidate()
-
- @property
- def minpos(self):
- """
- The minimum positive value in both directions within the Bbox.
-
- This is useful when dealing with logarithmic scales and other scales
- where negative bounds result in floating point errors, and will be used
- as the minimum extent instead of *p0*.
- """
- return self._minpos
-
- @property
- def minposx(self):
- """
- The minimum positive value in the *x*-direction within the Bbox.
-
- This is useful when dealing with logarithmic scales and other scales
- where negative bounds result in floating point errors, and will be used
- as the minimum *x*-extent instead of *x0*.
- """
- return self._minpos[0]
-
- @property
- def minposy(self):
- """
- The minimum positive value in the *y*-direction within the Bbox.
-
- This is useful when dealing with logarithmic scales and other scales
- where negative bounds result in floating point errors, and will be used
- as the minimum *y*-extent instead of *y0*.
- """
- return self._minpos[1]
-
- def get_points(self):
- """
- Get the points of the bounding box as an array of the form
- ``[[x0, y0], [x1, y1]]``.
- """
- self._invalid = 0
- return self._points
-
- def set_points(self, points):
- """
- Set the points of the bounding box directly from an array of the form
- ``[[x0, y0], [x1, y1]]``. No error checking is performed, as this
- method is mainly for internal use.
- """
- if np.any(self._points != points):
- self._points = points
- self.invalidate()
-
- def set(self, other):
- """
- Set this bounding box from the "frozen" bounds of another `Bbox`.
- """
- if np.any(self._points != other.get_points()):
- self._points = other.get_points()
- self.invalidate()
-
- def mutated(self):
- """Return whether the bbox has changed since init."""
- return self.mutatedx() or self.mutatedy()
-
- def mutatedx(self):
- """Return whether the x-limits have changed since init."""
- return (self._points[0, 0] != self._points_orig[0, 0] or
- self._points[1, 0] != self._points_orig[1, 0])
-
- def mutatedy(self):
- """Return whether the y-limits have changed since init."""
- return (self._points[0, 1] != self._points_orig[0, 1] or
- self._points[1, 1] != self._points_orig[1, 1])
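-
-    # A small sketch (illustrative): mutation tracking compares the current
-    # points against the points recorded at construction time.
-    #
-    #     bb = Bbox([[0, 0], [1, 1]])
-    #     bb.mutated()                   # -> False
-    #     bb.x1 = 2
-    #     bb.mutatedx(), bb.mutatedy()   # -> (True, False)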
-
-
-class TransformedBbox(BboxBase):
- """
- A `Bbox` that is automatically transformed by a given
- transform. When either the child bounding box or transform
- changes, the bounds of this bbox will update accordingly.
- """
-
- def __init__(self, bbox, transform, **kwargs):
- """
- Parameters
- ----------
- bbox : `Bbox`
- transform : `Transform`
- """
- if not bbox.is_bbox:
- raise ValueError("'bbox' is not a bbox")
- _api.check_isinstance(Transform, transform=transform)
- if transform.input_dims != 2 or transform.output_dims != 2:
- raise ValueError(
- "The input and output dimensions of 'transform' must be 2")
-
- super().__init__(**kwargs)
- self._bbox = bbox
- self._transform = transform
- self.set_children(bbox, transform)
- self._points = None
-
- __str__ = _make_str_method("_bbox", "_transform")
-
- def get_points(self):
- # docstring inherited
- if self._invalid:
- p = self._bbox.get_points()
- # Transform all four points, then make a new bounding box
- # from the result, taking care to make the orientation the
- # same.
- points = self._transform.transform(
- [[p[0, 0], p[0, 1]],
- [p[1, 0], p[0, 1]],
- [p[0, 0], p[1, 1]],
- [p[1, 0], p[1, 1]]])
- points = np.ma.filled(points, 0.0)
-
- xs = min(points[:, 0]), max(points[:, 0])
- if p[0, 0] > p[1, 0]:
- xs = xs[::-1]
-
- ys = min(points[:, 1]), max(points[:, 1])
- if p[0, 1] > p[1, 1]:
- ys = ys[::-1]
-
- self._points = np.array([
- [xs[0], ys[0]],
- [xs[1], ys[1]]
- ])
-
- self._invalid = 0
- return self._points
-
- if DEBUG:
- _get_points = get_points
-
- def get_points(self):
- points = self._get_points()
- self._check(points)
- return points
-
- def contains(self, x, y):
- # Docstring inherited.
- return self._bbox.contains(*self._transform.inverted().transform((x, y)))
-
- def fully_contains(self, x, y):
- # Docstring inherited.
- return self._bbox.fully_contains(*self._transform.inverted().transform((x, y)))
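-
-    # A minimal sketch (illustrative values): the transformed bbox follows
-    # mutations of its child automatically.
-    #
-    #     base = Bbox([[0, 0], [1, 1]])
-    #     tbox = TransformedBbox(base, Affine2D().scale(10))
-    #     tbox.get_points()    # -> [[0., 0.], [10., 10.]]
-    #     base.x1 = 2          # mutating the child invalidates tbox
-    #     tbox.get_points()    # -> [[0., 0.], [20., 10.]]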
-
-
-class LockableBbox(BboxBase):
- """
- A `Bbox` where some elements may be locked at certain values.
-
- When the child bounding box changes, the bounds of this bbox will update
- accordingly with the exception of the locked elements.
- """
- def __init__(self, bbox, x0=None, y0=None, x1=None, y1=None, **kwargs):
- """
- Parameters
- ----------
- bbox : `Bbox`
- The child bounding box to wrap.
-
- x0 : float or None
- The locked value for x0, or None to leave unlocked.
-
- y0 : float or None
- The locked value for y0, or None to leave unlocked.
-
- x1 : float or None
- The locked value for x1, or None to leave unlocked.
-
- y1 : float or None
- The locked value for y1, or None to leave unlocked.
-
- """
- if not bbox.is_bbox:
- raise ValueError("'bbox' is not a bbox")
-
- super().__init__(**kwargs)
- self._bbox = bbox
- self.set_children(bbox)
- self._points = None
- fp = [x0, y0, x1, y1]
- mask = [val is None for val in fp]
- self._locked_points = np.ma.array(fp, float, mask=mask).reshape((2, 2))
-
- __str__ = _make_str_method("_bbox", "_locked_points")
-
- def get_points(self):
- # docstring inherited
- if self._invalid:
- points = self._bbox.get_points()
- self._points = np.where(self._locked_points.mask,
- points,
- self._locked_points)
- self._invalid = 0
- return self._points
-
- if DEBUG:
- _get_points = get_points
-
- def get_points(self):
- points = self._get_points()
- self._check(points)
- return points
-
- @property
- def locked_x0(self):
- """
- float or None: The value used for the locked x0.
- """
- if self._locked_points.mask[0, 0]:
- return None
- else:
- return self._locked_points[0, 0]
-
- @locked_x0.setter
- def locked_x0(self, x0):
- self._locked_points.mask[0, 0] = x0 is None
- self._locked_points.data[0, 0] = x0
- self.invalidate()
-
- @property
- def locked_y0(self):
- """
- float or None: The value used for the locked y0.
- """
- if self._locked_points.mask[0, 1]:
- return None
- else:
- return self._locked_points[0, 1]
-
- @locked_y0.setter
- def locked_y0(self, y0):
- self._locked_points.mask[0, 1] = y0 is None
- self._locked_points.data[0, 1] = y0
- self.invalidate()
-
- @property
- def locked_x1(self):
- """
- float or None: The value used for the locked x1.
- """
- if self._locked_points.mask[1, 0]:
- return None
- else:
- return self._locked_points[1, 0]
-
- @locked_x1.setter
- def locked_x1(self, x1):
- self._locked_points.mask[1, 0] = x1 is None
- self._locked_points.data[1, 0] = x1
- self.invalidate()
-
- @property
- def locked_y1(self):
- """
- float or None: The value used for the locked y1.
- """
- if self._locked_points.mask[1, 1]:
- return None
- else:
- return self._locked_points[1, 1]
-
- @locked_y1.setter
- def locked_y1(self, y1):
- self._locked_points.mask[1, 1] = y1 is None
- self._locked_points.data[1, 1] = y1
- self.invalidate()
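-
-    # A short sketch (illustrative values): locked elements keep their value
-    # even when the child bbox changes.
-    #
-    #     child = Bbox([[0, 0], [1, 1]])
-    #     locked = LockableBbox(child, x1=5)
-    #     child.x1, child.y1 = 2, 3
-    #     locked.get_points()    # -> [[0., 0.], [5., 3.]]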
-
-
-class Transform(TransformNode):
- """
- The base class of all `TransformNode` instances that
- actually perform a transformation.
-
- All non-affine transformations should be subclasses of this class.
- New affine transformations should be subclasses of `Affine2D`.
-
- Subclasses of this class should override the following members (at
- minimum):
-
- - :attr:`input_dims`
- - :attr:`output_dims`
- - :meth:`transform`
- - :meth:`inverted` (if an inverse exists)
-
- The following attributes may be overridden if the default is unsuitable:
-
- - :attr:`is_separable` (defaults to True for 1D -> 1D transforms, False
- otherwise)
- - :attr:`has_inverse` (defaults to True if :meth:`inverted` is overridden,
- False otherwise)
-
- If the transform needs to do something non-standard with
- `matplotlib.path.Path` objects, such as adding curves
- where there were once line segments, it should override:
-
- - :meth:`transform_path`
- """
-
- input_dims = None
- """
- The number of input dimensions of this transform.
- Must be overridden (with integers) in the subclass.
- """
-
- output_dims = None
- """
- The number of output dimensions of this transform.
- Must be overridden (with integers) in the subclass.
- """
-
- is_separable = False
- """True if this transform is separable in the x- and y- dimensions."""
-
- has_inverse = False
- """True if this transform has a corresponding inverse transform."""
-
- def __init_subclass__(cls):
- # 1d transforms are always separable; we assume higher-dimensional ones
- # are not but subclasses can also directly set is_separable -- this is
- # verified by checking whether "is_separable" appears more than once in
- # the class's MRO (it appears once in Transform).
- if (sum("is_separable" in vars(parent) for parent in cls.__mro__) == 1
- and cls.input_dims == cls.output_dims == 1):
- cls.is_separable = True
- # Transform.inverted raises NotImplementedError; we assume that if this
- # is overridden then the transform is invertible but subclass can also
- # directly set has_inverse.
- if (sum("has_inverse" in vars(parent) for parent in cls.__mro__) == 1
- and hasattr(cls, "inverted")
- and cls.inverted is not Transform.inverted):
- cls.has_inverse = True
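-
-    # A hypothetical subclass, shown only to illustrate the defaults applied
-    # above: a 1D -> 1D transform is marked separable automatically, and
-    # has_inverse stays False until inverted() is overridden.
-    #
-    #     class _OneD(Transform):
-    #         input_dims = output_dims = 1
-    #     _OneD.is_separable    # -> True
-    #     _OneD.has_inverse     # -> False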
-
- def __add__(self, other):
- """
- Compose two transforms together so that *self* is followed by *other*.
-
- ``A + B`` returns a transform ``C`` so that
- ``C.transform(x) == B.transform(A.transform(x))``.
- """
- return (composite_transform_factory(self, other)
- if isinstance(other, Transform) else
- NotImplemented)
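-
-    # A quick sketch (illustrative): composition reads left to right, so the
-    # scaling below is applied before the translation.
-    #
-    #     tr = Affine2D().scale(2) + Affine2D().translate(1, 0)
-    #     tr.transform((1, 1))    # -> [3., 2.]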
-
- # Equality is based on object identity for `Transform`s (so we don't
- # override `__eq__`), but some subclasses, such as TransformWrapper &
- # AffineBase, override this behavior.
-
- def _iter_break_from_left_to_right(self):
- """
- Return an iterator breaking down this transform stack from left to
- right recursively. If self == ((A, N), A) then the result will be an
- iterator which yields I : ((A, N), A), followed by A : (N, A),
- followed by (A, N) : (A), but not ((A, N), A) : I.
-
- This is equivalent to flattening the stack then yielding
- ``flat_stack[:i], flat_stack[i:]`` where i=0..(n-1).
- """
- yield IdentityTransform(), self
-
- @property
- def depth(self):
- """
- Return the number of transforms which have been chained
- together to form this Transform instance.
-
- .. note::
-
- For the special case of a Composite transform, the maximum depth
- of the two is returned.
-
- """
- return 1
-
- def contains_branch(self, other):
- """
- Return whether the given transform is a sub-tree of this transform.
-
-        This routine uses transform equality to identify sub-trees; therefore,
-        in many situations it is object identity that will be used.
-
- For the case where the given transform represents the whole
- of this transform, returns True.
- """
- if self.depth < other.depth:
- return False
-
- # check that a subtree is equal to other (starting from self)
- for _, sub_tree in self._iter_break_from_left_to_right():
- if sub_tree == other:
- return True
- return False
-
- def contains_branch_seperately(self, other_transform):
- """
- Return whether the given branch is a sub-tree of this transform on
- each separate dimension.
-
- A common use for this method is to identify if a transform is a blended
- transform containing an Axes' data transform. e.g.::
-
- x_isdata, y_isdata = trans.contains_branch_seperately(ax.transData)
-
- """
- if self.output_dims != 2:
- raise ValueError('contains_branch_seperately only supports '
- 'transforms with 2 output dimensions')
- # for a non-blended transform each separate dimension is the same, so
- # just return the appropriate shape.
- return [self.contains_branch(other_transform)] * 2
-
- def __sub__(self, other):
- """
- Compose *self* with the inverse of *other*, cancelling identical terms
- if any::
-
- # In general:
- A - B == A + B.inverted()
- # (but see note regarding frozen transforms below).
-
- # If A "ends with" B (i.e. A == A' + B for some A') we can cancel
- # out B:
- (A' + B) - B == A'
-
- # Likewise, if B "starts with" A (B = A + B'), we can cancel out A:
- A - (A + B') == B'.inverted() == B'^-1
-
- Cancellation (rather than naively returning ``A + B.inverted()``) is
- important for multiple reasons:
-
- - It avoids floating-point inaccuracies when computing the inverse of
- B: ``B - B`` is guaranteed to cancel out exactly (resulting in the
- identity transform), whereas ``B + B.inverted()`` may differ by a
- small epsilon.
- - ``B.inverted()`` always returns a frozen transform: if one computes
- ``A + B + B.inverted()`` and later mutates ``B``, then
- ``B.inverted()`` won't be updated and the last two terms won't cancel
- out anymore; on the other hand, ``A + B - B`` will always be equal to
- ``A`` even if ``B`` is mutated.
- """
- # we only know how to do this operation if other is a Transform.
- if not isinstance(other, Transform):
- return NotImplemented
- for remainder, sub_tree in self._iter_break_from_left_to_right():
- if sub_tree == other:
- return remainder
- for remainder, sub_tree in other._iter_break_from_left_to_right():
- if sub_tree == self:
- if not remainder.has_inverse:
- raise ValueError(
- "The shortcut cannot be computed since 'other' "
- "includes a non-invertible component")
- return remainder.inverted()
- # if we have got this far, then there was no shortcut possible
- if other.has_inverse:
- return self + other.inverted()
- else:
- raise ValueError('It is not possible to compute transA - transB '
- 'since transB cannot be inverted and there is no '
- 'shortcut possible.')
-
- def __array__(self, *args, **kwargs):
- """Array interface to get at this Transform's affine matrix."""
- return self.get_affine().get_matrix()
-
- def transform(self, values):
- """
- Apply this transformation on the given array of *values*.
-
- Parameters
- ----------
- values : array-like
- The input values as an array of length :attr:`input_dims` or
- shape (N, :attr:`input_dims`).
-
- Returns
- -------
- array
- The output values as an array of length :attr:`output_dims` or
- shape (N, :attr:`output_dims`), depending on the input.
- """
- # Ensure that values is a 2d array (but remember whether
- # we started with a 1d or 2d array).
- values = np.asanyarray(values)
- ndim = values.ndim
- values = values.reshape((-1, self.input_dims))
-
- # Transform the values
- res = self.transform_affine(self.transform_non_affine(values))
-
- # Convert the result back to the shape of the input values.
- if ndim == 0:
- assert not np.ma.is_masked(res) # just to be on the safe side
- return res[0, 0]
- if ndim == 1:
- return res.reshape(-1)
- elif ndim == 2:
- return res
- raise ValueError(
- "Input values must have shape (N, {dims}) or ({dims},)"
- .format(dims=self.input_dims))
-
- def transform_affine(self, values):
- """
- Apply only the affine part of this transformation on the
- given array of values.
-
- ``transform(values)`` is always equivalent to
- ``transform_affine(transform_non_affine(values))``.
-
- In non-affine transformations, this is generally a no-op. In
- affine transformations, this is equivalent to
- ``transform(values)``.
-
- Parameters
- ----------
- values : array
- The input values as an array of length :attr:`input_dims` or
- shape (N, :attr:`input_dims`).
-
- Returns
- -------
- array
- The output values as an array of length :attr:`output_dims` or
- shape (N, :attr:`output_dims`), depending on the input.
- """
- return self.get_affine().transform(values)
-
- def transform_non_affine(self, values):
- """
- Apply only the non-affine part of this transformation.
-
- ``transform(values)`` is always equivalent to
- ``transform_affine(transform_non_affine(values))``.
-
- In non-affine transformations, this is generally equivalent to
- ``transform(values)``. In affine transformations, this is
- always a no-op.
-
- Parameters
- ----------
- values : array
- The input values as an array of length :attr:`input_dims` or
- shape (N, :attr:`input_dims`).
-
- Returns
- -------
- array
- The output values as an array of length :attr:`output_dims` or
- shape (N, :attr:`output_dims`), depending on the input.
- """
- return values
-
- def transform_bbox(self, bbox):
- """
- Transform the given bounding box.
-
- For smarter transforms including caching (a common requirement in
- Matplotlib), see `TransformedBbox`.
- """
- return Bbox(self.transform(bbox.get_points()))
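-
-    # A one-line sketch (illustrative values):
-    #
-    #     Affine2D().scale(2).transform_bbox(Bbox([[0, 0], [1, 1]]))
-    #     # -> Bbox([[0.0, 0.0], [2.0, 2.0]])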
-
- def get_affine(self):
- """Get the affine part of this transform."""
- return IdentityTransform()
-
- def get_matrix(self):
- """Get the matrix for the affine part of this transform."""
- return self.get_affine().get_matrix()
-
- def transform_point(self, point):
- """
- Return a transformed point.
-
-        This function is only kept for backward compatibility; the more general
- `.transform` method is capable of transforming both a list of points
- and a single point.
-
- The point is given as a sequence of length :attr:`input_dims`.
- The transformed point is returned as a sequence of length
- :attr:`output_dims`.
- """
- if len(point) != self.input_dims:
- raise ValueError("The length of 'point' must be 'self.input_dims'")
- return self.transform(point)
-
- def transform_path(self, path):
- """
- Apply the transform to `.Path` *path*, returning a new `.Path`.
-
- In some cases, this transform may insert curves into the path
- that began as line segments.
- """
- return self.transform_path_affine(self.transform_path_non_affine(path))
-
- def transform_path_affine(self, path):
- """
- Apply the affine part of this transform to `.Path` *path*, returning a
- new `.Path`.
-
- ``transform_path(path)`` is equivalent to
-        ``transform_path_affine(transform_path_non_affine(path))``.
- """
- return self.get_affine().transform_path_affine(path)
-
- def transform_path_non_affine(self, path):
- """
- Apply the non-affine part of this transform to `.Path` *path*,
- returning a new `.Path`.
-
- ``transform_path(path)`` is equivalent to
-        ``transform_path_affine(transform_path_non_affine(path))``.
- """
- x = self.transform_non_affine(path.vertices)
- return Path._fast_from_codes_and_verts(x, path.codes, path)
-
- def transform_angles(self, angles, pts, radians=False, pushoff=1e-5):
- """
- Transform a set of angles anchored at specific locations.
-
- Parameters
- ----------
- angles : (N,) array-like
- The angles to transform.
- pts : (N, 2) array-like
- The points where the angles are anchored.
- radians : bool, default: False
- Whether *angles* are radians or degrees.
- pushoff : float
- For each point in *pts* and angle in *angles*, the transformed
- angle is computed by transforming a segment of length *pushoff*
- starting at that point and making that angle relative to the
- horizontal axis, and measuring the angle between the horizontal
- axis and the transformed segment.
-
- Returns
- -------
- (N,) array
- """
- # Must be 2D
- if self.input_dims != 2 or self.output_dims != 2:
- raise NotImplementedError('Only defined in 2D')
- angles = np.asarray(angles)
- pts = np.asarray(pts)
- _api.check_shape((None, 2), pts=pts)
- _api.check_shape((None,), angles=angles)
- if len(angles) != len(pts):
- raise ValueError("There must be as many 'angles' as 'pts'")
- # Convert to radians if desired
- if not radians:
- angles = np.deg2rad(angles)
- # Move a short distance away
- pts2 = pts + pushoff * np.column_stack([np.cos(angles),
- np.sin(angles)])
- # Transform both sets of points
- tpts = self.transform(pts)
- tpts2 = self.transform(pts2)
- # Calculate transformed angles
- d = tpts2 - tpts
- a = np.arctan2(d[:, 1], d[:, 0])
- # Convert back to degrees if desired
- if not radians:
- a = np.rad2deg(a)
- return a
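-
-    # A small sketch (illustrative values): anisotropic scaling flattens a
-    # 45 degree angle anchored at the origin.
-    #
-    #     Affine2D().scale(2, 1).transform_angles([45.0], [[0.0, 0.0]])
-    #     # -> approximately [26.57]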
-
- def inverted(self):
- """
- Return the corresponding inverse transformation.
-
- It holds ``x == self.inverted().transform(self.transform(x))``.
-
- The return value of this method should be treated as
- temporary. An update to *self* does not cause a corresponding
- update to its inverted copy.
- """
- raise NotImplementedError()
-
-
-class TransformWrapper(Transform):
- """
- A helper class that holds a single child transform and acts
- equivalently to it.
-
- This is useful if a node of the transform tree must be replaced at
- run time with a transform of a different type. This class allows
- that replacement to correctly trigger invalidation.
-
- `TransformWrapper` instances must have the same input and output dimensions
- during their entire lifetime, so the child transform may only be replaced
- with another child transform of the same dimensions.
- """
-
- pass_through = True
-
- def __init__(self, child):
- """
- *child*: A `Transform` instance. This child may later
- be replaced with :meth:`set`.
- """
- _api.check_isinstance(Transform, child=child)
- super().__init__()
- self.set(child)
-
- def __eq__(self, other):
- return self._child.__eq__(other)
-
- __str__ = _make_str_method("_child")
-
- def frozen(self):
- # docstring inherited
- return self._child.frozen()
-
- def set(self, child):
- """
- Replace the current child of this transform with another one.
-
- The new child must have the same number of input and output
- dimensions as the current child.
- """
- if hasattr(self, "_child"): # Absent during init.
- self.invalidate()
- new_dims = (child.input_dims, child.output_dims)
- old_dims = (self._child.input_dims, self._child.output_dims)
- if new_dims != old_dims:
- raise ValueError(
- f"The input and output dims of the new child {new_dims} "
- f"do not match those of current child {old_dims}")
- self._child._parents.pop(id(self), None)
-
- self._child = child
- self.set_children(child)
-
- self.transform = child.transform
- self.transform_affine = child.transform_affine
- self.transform_non_affine = child.transform_non_affine
- self.transform_path = child.transform_path
- self.transform_path_affine = child.transform_path_affine
- self.transform_path_non_affine = child.transform_path_non_affine
- self.get_affine = child.get_affine
- self.inverted = child.inverted
- self.get_matrix = child.get_matrix
- # note we do not wrap other properties here since the transform's
-        # child can be changed with TransformWrapper.set and so checking
- # is_affine and other such properties may be dangerous.
-
- self._invalid = 0
- self.invalidate()
- self._invalid = 0
-
- input_dims = property(lambda self: self._child.input_dims)
- output_dims = property(lambda self: self._child.output_dims)
- is_affine = property(lambda self: self._child.is_affine)
- is_separable = property(lambda self: self._child.is_separable)
- has_inverse = property(lambda self: self._child.has_inverse)
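-
-    # A brief sketch (illustrative): swapping the wrapped child invalidates
-    # every transform built on top of the wrapper.
-    #
-    #     wrapper = TransformWrapper(Affine2D().scale(2))
-    #     composed = wrapper + Affine2D().translate(1, 1)
-    #     wrapper.set(Affine2D().scale(3))
-    #     composed.transform((1, 1))    # -> [4., 4.]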
-
-
-class AffineBase(Transform):
- """
- The base class of all affine transformations of any number of dimensions.
- """
- is_affine = True
-
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self._inverted = None
-
- def __array__(self, *args, **kwargs):
- # optimises the access of the transform matrix vs. the superclass
- return self.get_matrix()
-
- def __eq__(self, other):
- if getattr(other, "is_affine", False) and hasattr(other, "get_matrix"):
- return (self.get_matrix() == other.get_matrix()).all()
- return NotImplemented
-
- def transform(self, values):
- # docstring inherited
- return self.transform_affine(values)
-
- def transform_affine(self, values):
- # docstring inherited
- raise NotImplementedError('Affine subclasses should override this '
- 'method.')
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_non_affine(self, values):
- # docstring inherited
- return values
-
- def transform_path(self, path):
- # docstring inherited
- return self.transform_path_affine(path)
-
- def transform_path_affine(self, path):
- # docstring inherited
- return Path(self.transform_affine(path.vertices),
- path.codes, path._interpolation_steps)
-
- def transform_path_non_affine(self, path):
- # docstring inherited
- return path
-
- def get_affine(self):
- # docstring inherited
- return self
-
-
-class Affine2DBase(AffineBase):
- """
- The base class of all 2D affine transformations.
-
- 2D affine transformations are performed using a 3x3 numpy array::
-
- a c e
- b d f
- 0 0 1
-
- This class provides the read-only interface. For a mutable 2D
- affine transformation, use `Affine2D`.
-
- Subclasses of this class will generally only need to override a
- constructor and `~.Transform.get_matrix` that generates a custom 3x3 matrix.
- """
- input_dims = 2
- output_dims = 2
-
- def frozen(self):
- # docstring inherited
- return Affine2D(self.get_matrix().copy())
-
- @property
- def is_separable(self):
- mtx = self.get_matrix()
- return mtx[0, 1] == mtx[1, 0] == 0.0
-
- def to_values(self):
- """
- Return the values of the matrix as an ``(a, b, c, d, e, f)`` tuple.
- """
- mtx = self.get_matrix()
- return tuple(mtx[:2].swapaxes(0, 1).flat)
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_affine(self, values):
- mtx = self.get_matrix()
- if isinstance(values, np.ma.MaskedArray):
- tpoints = affine_transform(values.data, mtx)
- return np.ma.MaskedArray(tpoints, mask=np.ma.getmask(values))
- return affine_transform(values, mtx)
-
- if DEBUG:
- _transform_affine = transform_affine
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_affine(self, values):
- # docstring inherited
-            # The major speed trap here is just converting the
- # points to an array in the first place. If we can use
- # more arrays upstream, that should help here.
- if not isinstance(values, np.ndarray):
- _api.warn_external(
- f'A non-numpy array of type {type(values)} was passed in '
- f'for transformation, which results in poor performance.')
- return self._transform_affine(values)
-
- def inverted(self):
- # docstring inherited
- if self._inverted is None or self._invalid:
- mtx = self.get_matrix()
- shorthand_name = None
- if self._shorthand_name:
- shorthand_name = '(%s)-1' % self._shorthand_name
- self._inverted = Affine2D(inv(mtx), shorthand_name=shorthand_name)
- self._invalid = 0
- return self._inverted
-
-
-class Affine2D(Affine2DBase):
- """
- A mutable 2D affine transformation.
- """
-
- def __init__(self, matrix=None, **kwargs):
- """
- Initialize an Affine transform from a 3x3 numpy float array::
-
- a c e
- b d f
- 0 0 1
-
- If *matrix* is None, initialize with the identity transform.
- """
- super().__init__(**kwargs)
- if matrix is None:
- # A bit faster than np.identity(3).
- matrix = IdentityTransform._mtx
- self._mtx = matrix.copy()
- self._invalid = 0
-
- _base_str = _make_str_method("_mtx")
-
- def __str__(self):
- return (self._base_str()
- if (self._mtx != np.diag(np.diag(self._mtx))).any()
- else f"Affine2D().scale({self._mtx[0, 0]}, {self._mtx[1, 1]})"
- if self._mtx[0, 0] != self._mtx[1, 1]
- else f"Affine2D().scale({self._mtx[0, 0]})")
-
- @staticmethod
- def from_values(a, b, c, d, e, f):
- """
- Create a new Affine2D instance from the given values::
-
- a c e
- b d f
- 0 0 1
-
- .
- """
- return Affine2D(
- np.array([a, c, e, b, d, f, 0.0, 0.0, 1.0], float).reshape((3, 3)))
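-
-    # A quick sketch (illustrative values): (a, b, c, d, e, f) fill the
-    # matrix columns shown above, so e and f are the translation.
-    #
-    #     Affine2D.from_values(2, 0, 0, 2, 1, 1).transform((0, 0))
-    #     # -> [1., 1.]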
-
- def get_matrix(self):
- """
- Get the underlying transformation matrix as a 3x3 array::
-
- a c e
- b d f
- 0 0 1
-
- .
- """
- if self._invalid:
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
- def set_matrix(self, mtx):
- """
- Set the underlying transformation matrix from a 3x3 array::
-
- a c e
- b d f
- 0 0 1
-
- .
- """
- self._mtx = mtx
- self.invalidate()
-
- def set(self, other):
- """
- Set this transformation from the frozen copy of another
- `Affine2DBase` object.
- """
- _api.check_isinstance(Affine2DBase, other=other)
- self._mtx = other.get_matrix()
- self.invalidate()
-
- def clear(self):
- """
- Reset the underlying matrix to the identity transform.
- """
- # A bit faster than np.identity(3).
- self._mtx = IdentityTransform._mtx.copy()
- self.invalidate()
- return self
-
- def rotate(self, theta):
- """
- Add a rotation (in radians) to this transform in place.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- a = math.cos(theta)
- b = math.sin(theta)
- mtx = self._mtx
- # Operating and assigning one scalar at a time is much faster.
- (xx, xy, x0), (yx, yy, y0), _ = mtx.tolist()
- # mtx = [[a -b 0], [b a 0], [0 0 1]] * mtx
- mtx[0, 0] = a * xx - b * yx
- mtx[0, 1] = a * xy - b * yy
- mtx[0, 2] = a * x0 - b * y0
- mtx[1, 0] = b * xx + a * yx
- mtx[1, 1] = b * xy + a * yy
- mtx[1, 2] = b * x0 + a * y0
- self.invalidate()
- return self
-
- def rotate_deg(self, degrees):
- """
- Add a rotation (in degrees) to this transform in place.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- return self.rotate(math.radians(degrees))
-
- def rotate_around(self, x, y, theta):
- """
- Add a rotation (in radians) around the point (x, y) in place.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- return self.translate(-x, -y).rotate(theta).translate(x, y)
-
- def rotate_deg_around(self, x, y, degrees):
- """
- Add a rotation (in degrees) around the point (x, y) in place.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- # Cast to float to avoid wraparound issues with uint8's
- x, y = float(x), float(y)
- return self.translate(-x, -y).rotate_deg(degrees).translate(x, y)
-
- def translate(self, tx, ty):
- """
- Add a translation in place.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- self._mtx[0, 2] += tx
- self._mtx[1, 2] += ty
- self.invalidate()
- return self
-
- def scale(self, sx, sy=None):
- """
- Add a scale in place.
-
- If *sy* is None, the same scale is applied in both the *x*- and
- *y*-directions.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- if sy is None:
- sy = sx
- # explicit element-wise scaling is fastest
- self._mtx[0, 0] *= sx
- self._mtx[0, 1] *= sx
- self._mtx[0, 2] *= sx
- self._mtx[1, 0] *= sy
- self._mtx[1, 1] *= sy
- self._mtx[1, 2] *= sy
- self.invalidate()
- return self
-
- def skew(self, xShear, yShear):
- """
- Add a skew in place.
-
- *xShear* and *yShear* are the shear angles along the *x*- and
- *y*-axes, respectively, in radians.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- rx = math.tan(xShear)
- ry = math.tan(yShear)
- mtx = self._mtx
- # Operating and assigning one scalar at a time is much faster.
- (xx, xy, x0), (yx, yy, y0), _ = mtx.tolist()
- # mtx = [[1 rx 0], [ry 1 0], [0 0 1]] * mtx
- mtx[0, 0] += rx * yx
- mtx[0, 1] += rx * yy
- mtx[0, 2] += rx * y0
- mtx[1, 0] += ry * xx
- mtx[1, 1] += ry * xy
- mtx[1, 2] += ry * x0
- self.invalidate()
- return self
-
- def skew_deg(self, xShear, yShear):
- """
- Add a skew in place.
-
- *xShear* and *yShear* are the shear angles along the *x*- and
- *y*-axes, respectively, in degrees.
-
- Returns *self*, so this method can easily be chained with more
- calls to :meth:`rotate`, :meth:`rotate_deg`, :meth:`translate`
- and :meth:`scale`.
- """
- return self.skew(math.radians(xShear), math.radians(yShear))
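-
-    # A short sketch (illustrative values) of the chaining style used by the
-    # in-place methods above:
-    #
-    #     tr = Affine2D().rotate_deg(90).translate(1, 0)
-    #     tr.transform((1, 0))    # -> approximately [1., 1.]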
-
-
-class IdentityTransform(Affine2DBase):
- """
- A special class that does one thing, the identity transform, in a
- fast way.
- """
- _mtx = np.identity(3)
-
- def frozen(self):
- # docstring inherited
- return self
-
- __str__ = _make_str_method()
-
- def get_matrix(self):
- # docstring inherited
- return self._mtx
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform(self, values):
- # docstring inherited
- return np.asanyarray(values)
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_affine(self, values):
- # docstring inherited
- return np.asanyarray(values)
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_non_affine(self, values):
- # docstring inherited
- return np.asanyarray(values)
-
- def transform_path(self, path):
- # docstring inherited
- return path
-
- def transform_path_affine(self, path):
- # docstring inherited
- return path
-
- def transform_path_non_affine(self, path):
- # docstring inherited
- return path
-
- def get_affine(self):
- # docstring inherited
- return self
-
- def inverted(self):
- # docstring inherited
- return self
-
-
-class _BlendedMixin:
- """Common methods for `BlendedGenericTransform` and `BlendedAffine2D`."""
-
- def __eq__(self, other):
- if isinstance(other, (BlendedAffine2D, BlendedGenericTransform)):
- return (self._x == other._x) and (self._y == other._y)
- elif self._x == self._y:
- return self._x == other
- else:
- return NotImplemented
-
- def contains_branch_seperately(self, transform):
- return (self._x.contains_branch(transform),
- self._y.contains_branch(transform))
-
- __str__ = _make_str_method("_x", "_y")
-
-
-class BlendedGenericTransform(_BlendedMixin, Transform):
- """
- A "blended" transform uses one transform for the *x*-direction, and
- another transform for the *y*-direction.
-
- This "generic" version can handle any given child transform in the
- *x*- and *y*-directions.
- """
- input_dims = 2
- output_dims = 2
- is_separable = True
- pass_through = True
-
- def __init__(self, x_transform, y_transform, **kwargs):
- """
- Create a new "blended" transform using *x_transform* to transform the
- *x*-axis and *y_transform* to transform the *y*-axis.
-
- You will generally not call this constructor directly but use the
- `blended_transform_factory` function instead, which can determine
- automatically which kind of blended transform to create.
- """
- Transform.__init__(self, **kwargs)
- self._x = x_transform
- self._y = y_transform
- self.set_children(x_transform, y_transform)
- self._affine = None
-
- @property
- def depth(self):
- return max(self._x.depth, self._y.depth)
-
- def contains_branch(self, other):
- # A blended transform cannot possibly contain a branch from two
- # different transforms.
- return False
-
- is_affine = property(lambda self: self._x.is_affine and self._y.is_affine)
- has_inverse = property(
- lambda self: self._x.has_inverse and self._y.has_inverse)
-
- def frozen(self):
- # docstring inherited
- return blended_transform_factory(self._x.frozen(), self._y.frozen())
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_non_affine(self, values):
- # docstring inherited
- if self._x.is_affine and self._y.is_affine:
- return values
- x = self._x
- y = self._y
-
- if x == y and x.input_dims == 2:
- return x.transform_non_affine(values)
-
- if x.input_dims == 2:
- x_points = x.transform_non_affine(values)[:, 0:1]
- else:
- x_points = x.transform_non_affine(values[:, 0])
- x_points = x_points.reshape((len(x_points), 1))
-
- if y.input_dims == 2:
- y_points = y.transform_non_affine(values)[:, 1:]
- else:
- y_points = y.transform_non_affine(values[:, 1])
- y_points = y_points.reshape((len(y_points), 1))
-
- if (isinstance(x_points, np.ma.MaskedArray) or
- isinstance(y_points, np.ma.MaskedArray)):
- return np.ma.concatenate((x_points, y_points), 1)
- else:
- return np.concatenate((x_points, y_points), 1)
-
- def inverted(self):
- # docstring inherited
- return BlendedGenericTransform(self._x.inverted(), self._y.inverted())
-
- def get_affine(self):
- # docstring inherited
- if self._invalid or self._affine is None:
- if self._x == self._y:
- self._affine = self._x.get_affine()
- else:
- x_mtx = self._x.get_affine().get_matrix()
- y_mtx = self._y.get_affine().get_matrix()
- # We already know the transforms are separable, so we can skip
- # setting b and c to zero.
- mtx = np.array([x_mtx[0], y_mtx[1], [0.0, 0.0, 1.0]])
- self._affine = Affine2D(mtx)
- self._invalid = 0
- return self._affine
-
-
-class BlendedAffine2D(_BlendedMixin, Affine2DBase):
- """
- A "blended" transform uses one transform for the *x*-direction, and
- another transform for the *y*-direction.
-
- This version is an optimization for the case where both child
- transforms are of type `Affine2DBase`.
- """
-
- is_separable = True
-
- def __init__(self, x_transform, y_transform, **kwargs):
- """
- Create a new "blended" transform using *x_transform* to transform the
- *x*-axis and *y_transform* to transform the *y*-axis.
-
- Both *x_transform* and *y_transform* must be 2D affine transforms.
-
- You will generally not call this constructor directly but use the
- `blended_transform_factory` function instead, which can determine
- automatically which kind of blended transform to create.
- """
- is_affine = x_transform.is_affine and y_transform.is_affine
- is_separable = x_transform.is_separable and y_transform.is_separable
- is_correct = is_affine and is_separable
- if not is_correct:
- raise ValueError("Both *x_transform* and *y_transform* must be 2D "
- "affine transforms")
-
- Transform.__init__(self, **kwargs)
- self._x = x_transform
- self._y = y_transform
- self.set_children(x_transform, y_transform)
-
- Affine2DBase.__init__(self)
- self._mtx = None
-
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- if self._x == self._y:
- self._mtx = self._x.get_matrix()
- else:
- x_mtx = self._x.get_matrix()
- y_mtx = self._y.get_matrix()
- # We already know the transforms are separable, so we can skip
- # setting b and c to zero.
- self._mtx = np.array([x_mtx[0], y_mtx[1], [0.0, 0.0, 1.0]])
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
-
-def blended_transform_factory(x_transform, y_transform):
- """
- Create a new "blended" transform using *x_transform* to transform
- the *x*-axis and *y_transform* to transform the *y*-axis.
-
- A faster version of the blended transform is returned for the case
- where both child transforms are affine.
- """
- if (isinstance(x_transform, Affine2DBase) and
- isinstance(y_transform, Affine2DBase)):
- return BlendedAffine2D(x_transform, y_transform)
- return BlendedGenericTransform(x_transform, y_transform)
-
-
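A short sketch of how `blended_transform_factory` picks the optimized `BlendedAffine2D` when both children are affine and routes *x* and *y* through different transforms. The `Affine2D` inputs here are arbitrary stand-ins; in practice the children are often things like `ax.transData` and `ax.transAxes`:

```python
from matplotlib.transforms import Affine2D, blended_transform_factory

tx = Affine2D().scale(2.0, 2.0)        # supplies the x-coordinate
ty = Affine2D().translate(0.0, 5.0)    # supplies the y-coordinate

blend = blended_transform_factory(tx, ty)
print(type(blend).__name__)            # BlendedAffine2D (both children affine)
print(blend.transform([[1.0, 1.0]]))   # [[2. 6.]]
```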
-class CompositeGenericTransform(Transform):
- """
- A composite transform formed by applying transform *a* then
- transform *b*.
-
- This "generic" version can handle any two arbitrary
- transformations.
- """
- pass_through = True
-
- def __init__(self, a, b, **kwargs):
- """
- Create a new composite transform that is the result of
- applying transform *a* then transform *b*.
-
- You will generally not call this constructor directly but write ``a +
- b`` instead, which will automatically choose the best kind of composite
- transform instance to create.
- """
- if a.output_dims != b.input_dims:
- raise ValueError("The output dimension of 'a' must be equal to "
- "the input dimensions of 'b'")
- self.input_dims = a.input_dims
- self.output_dims = b.output_dims
-
- super().__init__(**kwargs)
- self._a = a
- self._b = b
- self.set_children(a, b)
-
- def frozen(self):
- # docstring inherited
- self._invalid = 0
- frozen = composite_transform_factory(
- self._a.frozen(), self._b.frozen())
- if not isinstance(frozen, CompositeGenericTransform):
- return frozen.frozen()
- return frozen
-
- def _invalidate_internal(self, level, invalidating_node):
- # When the left child is invalidated at AFFINE_ONLY level and the right child is
- # non-affine, the composite transform is FULLY invalidated.
- if invalidating_node is self._a and not self._b.is_affine:
- level = Transform._INVALID_FULL
- super()._invalidate_internal(level, invalidating_node)
-
- def __eq__(self, other):
- if isinstance(other, (CompositeGenericTransform, CompositeAffine2D)):
- return self is other or (self._a == other._a
- and self._b == other._b)
- else:
- return False
-
- def _iter_break_from_left_to_right(self):
- for left, right in self._a._iter_break_from_left_to_right():
- yield left, right + self._b
- for left, right in self._b._iter_break_from_left_to_right():
- yield self._a + left, right
-
- depth = property(lambda self: self._a.depth + self._b.depth)
- is_affine = property(lambda self: self._a.is_affine and self._b.is_affine)
- is_separable = property(
- lambda self: self._a.is_separable and self._b.is_separable)
- has_inverse = property(
- lambda self: self._a.has_inverse and self._b.has_inverse)
-
- __str__ = _make_str_method("_a", "_b")
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_affine(self, values):
- # docstring inherited
- return self.get_affine().transform(values)
-
- @_api.rename_parameter("3.8", "points", "values")
- def transform_non_affine(self, values):
- # docstring inherited
- if self._a.is_affine and self._b.is_affine:
- return values
- elif not self._a.is_affine and self._b.is_affine:
- return self._a.transform_non_affine(values)
- else:
- return self._b.transform_non_affine(self._a.transform(values))
-
- def transform_path_non_affine(self, path):
- # docstring inherited
- if self._a.is_affine and self._b.is_affine:
- return path
- elif not self._a.is_affine and self._b.is_affine:
- return self._a.transform_path_non_affine(path)
- else:
- return self._b.transform_path_non_affine(
- self._a.transform_path(path))
-
- def get_affine(self):
- # docstring inherited
- if not self._b.is_affine:
- return self._b.get_affine()
- else:
- return Affine2D(np.dot(self._b.get_affine().get_matrix(),
- self._a.get_affine().get_matrix()))
-
- def inverted(self):
- # docstring inherited
- return CompositeGenericTransform(
- self._b.inverted(), self._a.inverted())
-
-
-class CompositeAffine2D(Affine2DBase):
- """
- A composite transform formed by applying transform *a* then transform *b*.
-
- This version is an optimization that handles the case where both *a*
- and *b* are 2D affines.
- """
- def __init__(self, a, b, **kwargs):
- """
- Create a new composite transform that is the result of
- applying `Affine2DBase` *a* then `Affine2DBase` *b*.
-
- You will generally not call this constructor directly but write ``a +
- b`` instead, which will automatically choose the best kind of composite
- transform instance to create.
- """
- if not a.is_affine or not b.is_affine:
- raise ValueError("'a' and 'b' must be affine transforms")
- if a.output_dims != b.input_dims:
- raise ValueError("The output dimension of 'a' must be equal to "
- "the input dimensions of 'b'")
- self.input_dims = a.input_dims
- self.output_dims = b.output_dims
-
- super().__init__(**kwargs)
- self._a = a
- self._b = b
- self.set_children(a, b)
- self._mtx = None
-
- @property
- def depth(self):
- return self._a.depth + self._b.depth
-
- def _iter_break_from_left_to_right(self):
- for left, right in self._a._iter_break_from_left_to_right():
- yield left, right + self._b
- for left, right in self._b._iter_break_from_left_to_right():
- yield self._a + left, right
-
- __str__ = _make_str_method("_a", "_b")
-
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- self._mtx = np.dot(
- self._b.get_matrix(),
- self._a.get_matrix())
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
-
-def composite_transform_factory(a, b):
- """
-    Create a new composite transform that is the result of applying
-    transform *a* then transform *b*.
-
-    Shortcut versions of the composite transform are provided for the
-    case where both child transforms are affine, or one or the other
-    is the identity transform.
-
- Composite transforms may also be created using the '+' operator,
- e.g.::
-
- c = a + b
- """
- # check to see if any of a or b are IdentityTransforms. We use
- # isinstance here to guarantee that the transforms will *always*
- # be IdentityTransforms. Since TransformWrappers are mutable,
- # use of equality here would be wrong.
- if isinstance(a, IdentityTransform):
- return b
- elif isinstance(b, IdentityTransform):
- return a
- elif isinstance(a, Affine2D) and isinstance(b, Affine2D):
- return CompositeAffine2D(a, b)
- return CompositeGenericTransform(a, b)
-
-
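A quick illustration of the composition rules described here, assuming nothing beyond the classes in this module: `a + b` applies *a* first, so the resulting matrix is the product of *b*'s matrix with *a*'s, and composing with the identity simply returns the other operand.

```python
import numpy as np
from matplotlib.transforms import Affine2D, IdentityTransform

a = Affine2D().scale(3.0)              # applied first
b = Affine2D().translate(1.0, 2.0)     # applied second

c = a + b                              # composite_transform_factory(a, b)
print(type(c).__name__)                # CompositeAffine2D

# Applying a then b corresponds to the matrix product b_mtx @ a_mtx.
print(np.allclose(c.get_matrix(),
                  np.dot(b.get_matrix(), a.get_matrix())))   # True

# Composing with the identity short-circuits to the other operand.
print((IdentityTransform() + b) is b)  # True
```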
-class BboxTransform(Affine2DBase):
- """
- `BboxTransform` linearly transforms points from one `Bbox` to another.
- """
-
- is_separable = True
-
- def __init__(self, boxin, boxout, **kwargs):
- """
- Create a new `BboxTransform` that linearly transforms
- points from *boxin* to *boxout*.
- """
- if not boxin.is_bbox or not boxout.is_bbox:
- raise ValueError("'boxin' and 'boxout' must be bbox")
-
- super().__init__(**kwargs)
- self._boxin = boxin
- self._boxout = boxout
- self.set_children(boxin, boxout)
- self._mtx = None
- self._inverted = None
-
- __str__ = _make_str_method("_boxin", "_boxout")
-
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- inl, inb, inw, inh = self._boxin.bounds
- outl, outb, outw, outh = self._boxout.bounds
- x_scale = outw / inw
- y_scale = outh / inh
- if DEBUG and (x_scale == 0 or y_scale == 0):
- raise ValueError(
- "Transforming from or to a singular bounding box")
- self._mtx = np.array([[x_scale, 0.0 , (-inl*x_scale+outl)],
- [0.0 , y_scale, (-inb*y_scale+outb)],
- [0.0 , 0.0 , 1.0 ]],
- float)
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
-
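A small worked example of `BboxTransform`, assuming the `Bbox` class defined earlier in this module (importable as `matplotlib.transforms.Bbox`): the unit square is mapped onto a target box by the scale-and-translate matrix built in `get_matrix`.

```python
from matplotlib.transforms import Bbox, BboxTransform

boxin = Bbox.from_extents(0, 0, 1, 1)          # unit square
boxout = Bbox.from_extents(10, 100, 20, 300)   # target box

t = BboxTransform(boxin, boxout)
print(t.transform([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]))
# [[ 10. 100.]
#  [ 15. 200.]
#  [ 20. 300.]]
```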
-class BboxTransformTo(Affine2DBase):
- """
- `BboxTransformTo` is a transformation that linearly transforms points from
- the unit bounding box to a given `Bbox`.
- """
-
- is_separable = True
-
- def __init__(self, boxout, **kwargs):
- """
- Create a new `BboxTransformTo` that linearly transforms
- points from the unit bounding box to *boxout*.
- """
- if not boxout.is_bbox:
- raise ValueError("'boxout' must be bbox")
-
- super().__init__(**kwargs)
- self._boxout = boxout
- self.set_children(boxout)
- self._mtx = None
- self._inverted = None
-
- __str__ = _make_str_method("_boxout")
-
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- outl, outb, outw, outh = self._boxout.bounds
- if DEBUG and (outw == 0 or outh == 0):
- raise ValueError("Transforming to a singular bounding box.")
- self._mtx = np.array([[outw, 0.0, outl],
- [ 0.0, outh, outb],
- [ 0.0, 0.0, 1.0]],
- float)
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
-
-class BboxTransformToMaxOnly(BboxTransformTo):
- """
-    `BboxTransformToMaxOnly` is a transformation that linearly transforms
-    points from the unit bounding box to a given `Bbox`, keeping the origin
-    fixed at (0, 0); only the maximum point of the box is used.
- """
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- xmax, ymax = self._boxout.max
- if DEBUG and (xmax == 0 or ymax == 0):
- raise ValueError("Transforming to a singular bounding box.")
- self._mtx = np.array([[xmax, 0.0, 0.0],
- [ 0.0, ymax, 0.0],
- [ 0.0, 0.0, 1.0]],
- float)
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
-
-class BboxTransformFrom(Affine2DBase):
- """
- `BboxTransformFrom` linearly transforms points from a given `Bbox` to the
- unit bounding box.
- """
- is_separable = True
-
- def __init__(self, boxin, **kwargs):
- if not boxin.is_bbox:
- raise ValueError("'boxin' must be bbox")
-
- super().__init__(**kwargs)
- self._boxin = boxin
- self.set_children(boxin)
- self._mtx = None
- self._inverted = None
-
- __str__ = _make_str_method("_boxin")
-
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- inl, inb, inw, inh = self._boxin.bounds
- if DEBUG and (inw == 0 or inh == 0):
- raise ValueError("Transforming from a singular bounding box.")
- x_scale = 1.0 / inw
- y_scale = 1.0 / inh
- self._mtx = np.array([[x_scale, 0.0 , (-inl*x_scale)],
- [0.0 , y_scale, (-inb*y_scale)],
- [0.0 , 0.0 , 1.0 ]],
- float)
- self._inverted = None
- self._invalid = 0
- return self._mtx
-
-
-class ScaledTranslation(Affine2DBase):
- """
- A transformation that translates by *xt* and *yt*, after *xt* and *yt*
- have been transformed by *scale_trans*.
- """
- def __init__(self, xt, yt, scale_trans, **kwargs):
- super().__init__(**kwargs)
- self._t = (xt, yt)
- self._scale_trans = scale_trans
- self.set_children(scale_trans)
- self._mtx = None
- self._inverted = None
-
- __str__ = _make_str_method("_t")
-
- def get_matrix(self):
- # docstring inherited
- if self._invalid:
- # A bit faster than np.identity(3).
- self._mtx = IdentityTransform._mtx.copy()
- self._mtx[:2, 2] = self._scale_trans.transform(self._t)
- self._invalid = 0
- self._inverted = None
- return self._mtx
-
-
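A sketch of `ScaledTranslation`, using a plain `Affine2D` as a stand-in for the usual `fig.dpi_scale_trans` (inches to display pixels): a 5-point offset is first expressed in inches, scaled to pixels, and only then applied as a translation.

```python
from matplotlib.transforms import Affine2D, ScaledTranslation

dpi_scale = Affine2D().scale(100.0)               # stand-in for fig.dpi_scale_trans
shift = ScaledTranslation(5 / 72, 0, dpi_scale)   # 5 pt = 5/72 inch

print(shift.transform([[0.0, 0.0]]))              # [[6.944...  0.]] -- 5 pt at 100 dpi
```

A common pattern is `ax.transData + ScaledTranslation(dx, dy, fig.dpi_scale_trans)`, which nudges an artist by a fixed physical offset regardless of the data limits.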
-class AffineDeltaTransform(Affine2DBase):
- r"""
- A transform wrapper for transforming displacements between pairs of points.
-
- This class is intended to be used to transform displacements ("position
- deltas") between pairs of points (e.g., as the ``offset_transform``
- of `.Collection`\s): given a transform ``t`` such that ``t =
- AffineDeltaTransform(t) + offset``, ``AffineDeltaTransform``
- satisfies ``AffineDeltaTransform(a - b) == AffineDeltaTransform(a) -
- AffineDeltaTransform(b)``.
-
- This is implemented by forcing the offset components of the transform
- matrix to zero.
-
- This class is experimental as of 3.3, and the API may change.
- """
-
- def __init__(self, transform, **kwargs):
- super().__init__(**kwargs)
- self._base_transform = transform
-
- __str__ = _make_str_method("_base_transform")
-
- def get_matrix(self):
- if self._invalid:
- self._mtx = self._base_transform.get_matrix().copy()
- self._mtx[:2, -1] = 0
- return self._mtx
-
-
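A numeric check of the property stated in the docstring, assuming only `Affine2D` from this module: with the offset column forced to zero, the transform acts linearly on displacements.

```python
import numpy as np
from matplotlib.transforms import Affine2D, AffineDeltaTransform

t = Affine2D().scale(2.0).translate(10.0, 20.0)
dt = AffineDeltaTransform(t)

a = np.array([[3.0, 4.0]])
b = np.array([[1.0, 1.0]])

# Transforming a displacement equals the difference of the transformed
# endpoints, because the translation cancels out.
print(np.allclose(dt.transform(a - b),
                  t.transform(a) - t.transform(b)))   # True
```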
-class TransformedPath(TransformNode):
- """
- A `TransformedPath` caches a non-affine transformed copy of the
- `~.path.Path`. This cached copy is automatically updated when the
- non-affine part of the transform changes.
-
- .. note::
-
- Paths are considered immutable by this class. Any update to the
- path's vertices/codes will not trigger a transform recomputation.
-
- """
- def __init__(self, path, transform):
- """
- Parameters
- ----------
- path : `~.path.Path`
- transform : `Transform`
- """
- _api.check_isinstance(Transform, transform=transform)
- super().__init__()
- self._path = path
- self._transform = transform
- self.set_children(transform)
- self._transformed_path = None
- self._transformed_points = None
-
- def _revalidate(self):
- # only recompute if the invalidation includes the non_affine part of
- # the transform
- if (self._invalid == self._INVALID_FULL
- or self._transformed_path is None):
- self._transformed_path = \
- self._transform.transform_path_non_affine(self._path)
- self._transformed_points = \
- Path._fast_from_codes_and_verts(
- self._transform.transform_non_affine(self._path.vertices),
- None, self._path)
- self._invalid = 0
-
- def get_transformed_points_and_affine(self):
- """
-        Return a copy of the child path, with the non-affine part of
-        the transform already applied, along with the affine part of
-        the transform necessary to complete the transformation. Unlike
-        :meth:`get_transformed_path_and_affine`, no interpolation will
-        be performed.
- """
- self._revalidate()
- return self._transformed_points, self.get_affine()
-
- def get_transformed_path_and_affine(self):
- """
-        Return a copy of the child path, with the non-affine part of
-        the transform already applied, along with the affine part of
-        the transform necessary to complete the transformation.
- """
- self._revalidate()
- return self._transformed_path, self.get_affine()
-
- def get_fully_transformed_path(self):
- """
- Return a fully-transformed copy of the child path.
- """
- self._revalidate()
- return self._transform.transform_path_affine(self._transformed_path)
-
- def get_affine(self):
- return self._transform.get_affine()
-
-
-class TransformedPatchPath(TransformedPath):
- """
- A `TransformedPatchPath` caches a non-affine transformed copy of the
- `~.patches.Patch`. This cached copy is automatically updated when the
- non-affine part of the transform or the patch changes.
- """
-
- def __init__(self, patch):
- """
- Parameters
- ----------
- patch : `~.patches.Patch`
- """
- # Defer to TransformedPath.__init__.
- super().__init__(patch.get_path(), patch.get_transform())
- self._patch = patch
-
- def _revalidate(self):
- patch_path = self._patch.get_path()
- # Force invalidation if the patch path changed; otherwise, let base
- # class check invalidation.
- if patch_path != self._path:
- self._path = patch_path
- self._transformed_path = None
- super()._revalidate()
-
-
-def nonsingular(vmin, vmax, expander=0.001, tiny=1e-15, increasing=True):
- """
- Modify the endpoints of a range as needed to avoid singularities.
-
- Parameters
- ----------
- vmin, vmax : float
- The initial endpoints.
- expander : float, default: 0.001
- Fractional amount by which *vmin* and *vmax* are expanded if
- the original interval is too small, based on *tiny*.
- tiny : float, default: 1e-15
- Threshold for the ratio of the interval to the maximum absolute
- value of its endpoints. If the interval is smaller than
- this, it will be expanded. This value should be around
- 1e-15 or larger; otherwise the interval will be approaching
- the double precision resolution limit.
- increasing : bool, default: True
- If True, swap *vmin*, *vmax* if *vmin* > *vmax*.
-
- Returns
- -------
- vmin, vmax : float
- Endpoints, expanded and/or swapped if necessary.
- If either input is inf or NaN, or if both inputs are 0 or very
- close to zero, it returns -*expander*, *expander*.
- """
-
- if (not np.isfinite(vmin)) or (not np.isfinite(vmax)):
- return -expander, expander
-
- swapped = False
- if vmax < vmin:
- vmin, vmax = vmax, vmin
- swapped = True
-
- # Expand vmin, vmax to float: if they were integer types, they can wrap
- # around in abs (abs(np.int8(-128)) == -128) and vmax - vmin can overflow.
- vmin, vmax = map(float, [vmin, vmax])
-
- maxabsvalue = max(abs(vmin), abs(vmax))
- if maxabsvalue < (1e6 / tiny) * np.finfo(float).tiny:
- vmin = -expander
- vmax = expander
-
- elif vmax - vmin <= maxabsvalue * tiny:
- if vmax == 0 and vmin == 0:
- vmin = -expander
- vmax = expander
- else:
- vmin -= expander*abs(vmin)
- vmax += expander*abs(vmax)
-
- if swapped and not increasing:
- vmin, vmax = vmax, vmin
- return vmin, vmax
-
-
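A few worked calls showing the cases `nonsingular` handles, using its default `expander=0.001` and `tiny=1e-15`:

```python
from matplotlib.transforms import nonsingular

print(nonsingular(0.0, 0.0))            # (-0.001, 0.001)  degenerate at zero
print(nonsingular(float("nan"), 1.0))   # (-0.001, 0.001)  non-finite input
print(nonsingular(1.0, 1.0))            # (0.999, 1.001)   interval too small
print(nonsingular(3.0, 1.0))            # (1.0, 3.0)       endpoints swapped
```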
-def interval_contains(interval, val):
- """
- Check, inclusively, whether an interval includes a given value.
-
- Parameters
- ----------
- interval : (float, float)
- The endpoints of the interval.
- val : float
- Value to check is within interval.
-
- Returns
- -------
- bool
- Whether *val* is within the *interval*.
- """
- a, b = interval
- if a > b:
- a, b = b, a
- return a <= val <= b
-
-
-def _interval_contains_close(interval, val, rtol=1e-10):
- """
- Check, inclusively, whether an interval includes a given value, with the
- interval expanded by a small tolerance to admit floating point errors.
-
- Parameters
- ----------
- interval : (float, float)
- The endpoints of the interval.
- val : float
- Value to check is within interval.
- rtol : float, default: 1e-10
- Relative tolerance slippage allowed outside of the interval.
- For an interval ``[a, b]``, values
- ``a - rtol * (b - a) <= val <= b + rtol * (b - a)`` are considered
- inside the interval.
-
- Returns
- -------
- bool
- Whether *val* is within the *interval* (with tolerance).
- """
- a, b = interval
- if a > b:
- a, b = b, a
- rtol = (b - a) * rtol
- return a - rtol <= val <= b + rtol
-
-
-def interval_contains_open(interval, val):
- """
- Check, excluding endpoints, whether an interval includes a given value.
-
- Parameters
- ----------
- interval : (float, float)
- The endpoints of the interval.
- val : float
- Value to check is within interval.
-
- Returns
- -------
- bool
- Whether *val* is within the *interval*.
- """
- a, b = interval
- return a < val < b or a > val > b
-
-
-def offset_copy(trans, fig=None, x=0.0, y=0.0, units='inches'):
- """
- Return a new transform with an added offset.
-
- Parameters
- ----------
- trans : `Transform` subclass
- Any transform, to which offset will be applied.
- fig : `~matplotlib.figure.Figure`, default: None
- Current figure. It can be None if *units* are 'dots'.
- x, y : float, default: 0.0
- The offset to apply.
- units : {'inches', 'points', 'dots'}, default: 'inches'
- Units of the offset.
-
- Returns
- -------
- `Transform` subclass
- Transform with applied offset.
- """
- _api.check_in_list(['dots', 'points', 'inches'], units=units)
- if units == 'dots':
- return trans + Affine2D().translate(x, y)
- if fig is None:
- raise ValueError('For units of inches or points a fig kwarg is needed')
- if units == 'points':
- x /= 72.0
- y /= 72.0
- # Default units are 'inches'
- return trans + ScaledTranslation(x, y, fig.dpi_scale_trans)
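A sketch of the usual `offset_copy` workflow, assuming a non-interactive Agg backend: labels are positioned in data coordinates but shifted by a fixed number of points.

```python
import matplotlib
matplotlib.use("Agg")                   # render off-screen
import matplotlib.pyplot as plt
from matplotlib.transforms import offset_copy

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], "o")

# Place each label 5 points right of and 5 points below its marker.
text_trans = offset_copy(ax.transData, fig=fig, x=5, y=-5, units="points")
for x, y in [(0, 0), (1, 1)]:
    ax.text(x, y, f"({x}, {y})", transform=text_trans)

fig.savefig("offset_copy_demo.png")
```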
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/lib2def.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/lib2def.py
deleted file mode 100644
index 851682c633109e4d8644d80bb501e5cafcd39d04..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/lib2def.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import re
-import sys
-import subprocess
-
-__doc__ = """This module generates a DEF file from the symbols in
-an MSVC-compiled DLL import library. It correctly discriminates between
-data and functions. The data is collected from the output of the program
-nm(1).
-
-Usage:
- python lib2def.py [libname.lib] [output.def]
-or
- python lib2def.py [libname.lib] > output.def
-
-libname.lib defaults to python.lib and output.def defaults to stdout
-
-Author: Robert Kern
-Last Update: April 30, 1999
-"""
-
-__version__ = '0.1a'
-
-py_ver = "%d%d" % tuple(sys.version_info[:2])
-
-DEFAULT_NM = ['nm', '-Cs']
-
-DEF_HEADER = """LIBRARY python%s.dll
-;CODE PRELOAD MOVEABLE DISCARDABLE
-;DATA PRELOAD SINGLE
-
-EXPORTS
-""" % py_ver
-# the header of the DEF file
-
-FUNC_RE = re.compile(r"^(.*) in python%s\.dll" % py_ver, re.MULTILINE)
-DATA_RE = re.compile(r"^_imp__(.*) in python%s\.dll" % py_ver, re.MULTILINE)
-
-def parse_cmd():
- """Parses the command-line arguments.
-
-libfile, deffile = parse_cmd()"""
- if len(sys.argv) == 3:
- if sys.argv[1][-4:] == '.lib' and sys.argv[2][-4:] == '.def':
- libfile, deffile = sys.argv[1:]
- elif sys.argv[1][-4:] == '.def' and sys.argv[2][-4:] == '.lib':
- deffile, libfile = sys.argv[1:]
-        else:
-            print("I'm assuming that your first argument is the library")
-            print("and the second is the DEF file.")
-            # Follow that assumption so libfile/deffile are always bound
-            # before the final return.
-            libfile, deffile = sys.argv[1:]
- elif len(sys.argv) == 2:
- if sys.argv[1][-4:] == '.def':
- deffile = sys.argv[1]
- libfile = 'python%s.lib' % py_ver
- elif sys.argv[1][-4:] == '.lib':
- deffile = None
- libfile = sys.argv[1]
- else:
- libfile = 'python%s.lib' % py_ver
- deffile = None
- return libfile, deffile
-
-def getnm(nm_cmd=['nm', '-Cs', 'python%s.lib' % py_ver], shell=True):
- """Returns the output of nm_cmd via a pipe.
-
-nm_output = getnm(nm_cmd = 'nm -Cs py_lib')"""
- p = subprocess.Popen(nm_cmd, shell=shell, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, text=True)
- nm_output, nm_err = p.communicate()
- if p.returncode != 0:
- raise RuntimeError('failed to run "%s": "%s"' % (
- ' '.join(nm_cmd), nm_err))
- return nm_output
-
-def parse_nm(nm_output):
- """Returns a tuple of lists: dlist for the list of data
-symbols and flist for the list of function symbols.
-
-dlist, flist = parse_nm(nm_output)"""
- data = DATA_RE.findall(nm_output)
- func = FUNC_RE.findall(nm_output)
-
- flist = []
- for sym in data:
- if sym in func and (sym[:2] == 'Py' or sym[:3] == '_Py' or sym[:4] == 'init'):
- flist.append(sym)
-
- dlist = []
- for sym in data:
- if sym not in flist and (sym[:2] == 'Py' or sym[:3] == '_Py'):
- dlist.append(sym)
-
- dlist.sort()
- flist.sort()
- return dlist, flist
-
-def output_def(dlist, flist, header, file = sys.stdout):
- """Outputs the final DEF file to a file defaulting to stdout.
-
-output_def(dlist, flist, header, file = sys.stdout)"""
- for data_sym in dlist:
- header = header + '\t%s DATA\n' % data_sym
- header = header + '\n' # blank line
- for func_sym in flist:
- header = header + '\t%s\n' % func_sym
- file.write(header)
-
-if __name__ == '__main__':
- libfile, deffile = parse_cmd()
- if deffile is None:
- deffile = sys.stdout
- else:
- deffile = open(deffile, 'w')
- nm_cmd = DEFAULT_NM + [str(libfile)]
- nm_output = getnm(nm_cmd, shell=False)
- dlist, flist = parse_nm(nm_output)
- output_def(dlist, flist, DEF_HEADER, deffile)
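A minimal sketch of the `parse_nm`/`output_def` pipeline driven from Python rather than the command line. It assumes a NumPy version that still ships `numpy.distutils` (removed in NumPy 2.0) and fakes the `nm -Cs` archive-index output instead of running `nm`:

```python
import io
from numpy.distutils import lib2def

# Fake `nm -Cs python<ver>.lib` output: Py_Initialize appears both with and
# without the _imp__ prefix (a function), Py_Version only with the prefix
# (data).
sample = (
    f"Py_Initialize in python{lib2def.py_ver}.dll\n"
    f"_imp__Py_Initialize in python{lib2def.py_ver}.dll\n"
    f"_imp__Py_Version in python{lib2def.py_ver}.dll\n"
)

dlist, flist = lib2def.parse_nm(sample)
print(dlist, flist)                     # ['Py_Version'] ['Py_Initialize']

buf = io.StringIO()
lib2def.output_def(dlist, flist, lib2def.DEF_HEADER, file=buf)
print(buf.getvalue())                   # DEF text with Py_Version marked DATA
```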
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/tests/test_polynomial.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/tests/test_polynomial.py
deleted file mode 100644
index 6b3ef2388f630f0233c79f31a9a1f4039f4e4f4a..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/tests/test_polynomial.py
+++ /dev/null
@@ -1,611 +0,0 @@
-"""Tests for polynomial module.
-
-"""
-from functools import reduce
-
-import numpy as np
-import numpy.polynomial.polynomial as poly
-import pickle
-from copy import deepcopy
-from numpy.testing import (
- assert_almost_equal, assert_raises, assert_equal, assert_,
- assert_warns, assert_array_equal, assert_raises_regex)
-
-
-def trim(x):
- return poly.polytrim(x, tol=1e-6)
-
-T0 = [1]
-T1 = [0, 1]
-T2 = [-1, 0, 2]
-T3 = [0, -3, 0, 4]
-T4 = [1, 0, -8, 0, 8]
-T5 = [0, 5, 0, -20, 0, 16]
-T6 = [-1, 0, 18, 0, -48, 0, 32]
-T7 = [0, -7, 0, 56, 0, -112, 0, 64]
-T8 = [1, 0, -32, 0, 160, 0, -256, 0, 128]
-T9 = [0, 9, 0, -120, 0, 432, 0, -576, 0, 256]
-
-Tlist = [T0, T1, T2, T3, T4, T5, T6, T7, T8, T9]
-
-
-class TestConstants:
-
- def test_polydomain(self):
- assert_equal(poly.polydomain, [-1, 1])
-
- def test_polyzero(self):
- assert_equal(poly.polyzero, [0])
-
- def test_polyone(self):
- assert_equal(poly.polyone, [1])
-
- def test_polyx(self):
- assert_equal(poly.polyx, [0, 1])
-
- def test_copy(self):
- x = poly.Polynomial([1, 2, 3])
- y = deepcopy(x)
- assert_equal(x, y)
-
- def test_pickle(self):
- x = poly.Polynomial([1, 2, 3])
- y = pickle.loads(pickle.dumps(x))
- assert_equal(x, y)
-
-class TestArithmetic:
-
- def test_polyadd(self):
- for i in range(5):
- for j in range(5):
- msg = f"At i={i}, j={j}"
- tgt = np.zeros(max(i, j) + 1)
- tgt[i] += 1
- tgt[j] += 1
- res = poly.polyadd([0]*i + [1], [0]*j + [1])
- assert_equal(trim(res), trim(tgt), err_msg=msg)
-
- def test_polysub(self):
- for i in range(5):
- for j in range(5):
- msg = f"At i={i}, j={j}"
- tgt = np.zeros(max(i, j) + 1)
- tgt[i] += 1
- tgt[j] -= 1
- res = poly.polysub([0]*i + [1], [0]*j + [1])
- assert_equal(trim(res), trim(tgt), err_msg=msg)
-
- def test_polymulx(self):
- assert_equal(poly.polymulx([0]), [0])
- assert_equal(poly.polymulx([1]), [0, 1])
- for i in range(1, 5):
- ser = [0]*i + [1]
- tgt = [0]*(i + 1) + [1]
- assert_equal(poly.polymulx(ser), tgt)
-
- def test_polymul(self):
- for i in range(5):
- for j in range(5):
- msg = f"At i={i}, j={j}"
- tgt = np.zeros(i + j + 1)
- tgt[i + j] += 1
- res = poly.polymul([0]*i + [1], [0]*j + [1])
- assert_equal(trim(res), trim(tgt), err_msg=msg)
-
- def test_polydiv(self):
- # check zero division
- assert_raises(ZeroDivisionError, poly.polydiv, [1], [0])
-
- # check scalar division
- quo, rem = poly.polydiv([2], [2])
- assert_equal((quo, rem), (1, 0))
- quo, rem = poly.polydiv([2, 2], [2])
- assert_equal((quo, rem), ((1, 1), 0))
-
- # check rest.
- for i in range(5):
- for j in range(5):
- msg = f"At i={i}, j={j}"
- ci = [0]*i + [1, 2]
- cj = [0]*j + [1, 2]
- tgt = poly.polyadd(ci, cj)
- quo, rem = poly.polydiv(tgt, ci)
- res = poly.polyadd(poly.polymul(quo, ci), rem)
- assert_equal(res, tgt, err_msg=msg)
-
- def test_polypow(self):
- for i in range(5):
- for j in range(5):
- msg = f"At i={i}, j={j}"
- c = np.arange(i + 1)
- tgt = reduce(poly.polymul, [c]*j, np.array([1]))
- res = poly.polypow(c, j)
- assert_equal(trim(res), trim(tgt), err_msg=msg)
-
-
-class TestEvaluation:
- # coefficients of 1 + 2*x + 3*x**2
- c1d = np.array([1., 2., 3.])
- c2d = np.einsum('i,j->ij', c1d, c1d)
- c3d = np.einsum('i,j,k->ijk', c1d, c1d, c1d)
-
- # some random values in [-1, 1)
- x = np.random.random((3, 5))*2 - 1
- y = poly.polyval(x, [1., 2., 3.])
-
- def test_polyval(self):
- #check empty input
- assert_equal(poly.polyval([], [1]).size, 0)
-
-        #check normal input
- x = np.linspace(-1, 1)
- y = [x**i for i in range(5)]
- for i in range(5):
- tgt = y[i]
- res = poly.polyval(x, [0]*i + [1])
- assert_almost_equal(res, tgt)
- tgt = x*(x**2 - 1)
- res = poly.polyval(x, [0, -1, 0, 1])
- assert_almost_equal(res, tgt)
-
- #check that shape is preserved
- for i in range(3):
- dims = [2]*i
- x = np.zeros(dims)
- assert_equal(poly.polyval(x, [1]).shape, dims)
- assert_equal(poly.polyval(x, [1, 0]).shape, dims)
- assert_equal(poly.polyval(x, [1, 0, 0]).shape, dims)
-
- #check masked arrays are processed correctly
- mask = [False, True, False]
- mx = np.ma.array([1, 2, 3], mask=mask)
- res = np.polyval([7, 5, 3], mx)
- assert_array_equal(res.mask, mask)
-
- #check subtypes of ndarray are preserved
- class C(np.ndarray):
- pass
-
- cx = np.array([1, 2, 3]).view(C)
- assert_equal(type(np.polyval([2, 3, 4], cx)), C)
-
- def test_polyvalfromroots(self):
- # check exception for broadcasting x values over root array with
- # too few dimensions
- assert_raises(ValueError, poly.polyvalfromroots,
- [1], [1], tensor=False)
-
- # check empty input
- assert_equal(poly.polyvalfromroots([], [1]).size, 0)
- assert_(poly.polyvalfromroots([], [1]).shape == (0,))
-
- # check empty input + multidimensional roots
- assert_equal(poly.polyvalfromroots([], [[1] * 5]).size, 0)
- assert_(poly.polyvalfromroots([], [[1] * 5]).shape == (5, 0))
-
- # check scalar input
- assert_equal(poly.polyvalfromroots(1, 1), 0)
- assert_(poly.polyvalfromroots(1, np.ones((3, 3))).shape == (3,))
-
-        # check normal input
- x = np.linspace(-1, 1)
- y = [x**i for i in range(5)]
- for i in range(1, 5):
- tgt = y[i]
- res = poly.polyvalfromroots(x, [0]*i)
- assert_almost_equal(res, tgt)
- tgt = x*(x - 1)*(x + 1)
- res = poly.polyvalfromroots(x, [-1, 0, 1])
- assert_almost_equal(res, tgt)
-
- # check that shape is preserved
- for i in range(3):
- dims = [2]*i
- x = np.zeros(dims)
- assert_equal(poly.polyvalfromroots(x, [1]).shape, dims)
- assert_equal(poly.polyvalfromroots(x, [1, 0]).shape, dims)
- assert_equal(poly.polyvalfromroots(x, [1, 0, 0]).shape, dims)
-
- # check compatibility with factorization
- ptest = [15, 2, -16, -2, 1]
- r = poly.polyroots(ptest)
- x = np.linspace(-1, 1)
- assert_almost_equal(poly.polyval(x, ptest),
- poly.polyvalfromroots(x, r))
-
- # check multidimensional arrays of roots and values
- # check tensor=False
- rshape = (3, 5)
- x = np.arange(-3, 2)
- r = np.random.randint(-5, 5, size=rshape)
- res = poly.polyvalfromroots(x, r, tensor=False)
- tgt = np.empty(r.shape[1:])
- for ii in range(tgt.size):
- tgt[ii] = poly.polyvalfromroots(x[ii], r[:, ii])
- assert_equal(res, tgt)
-
- # check tensor=True
- x = np.vstack([x, 2*x])
- res = poly.polyvalfromroots(x, r, tensor=True)
- tgt = np.empty(r.shape[1:] + x.shape)
- for ii in range(r.shape[1]):
- for jj in range(x.shape[0]):
- tgt[ii, jj, :] = poly.polyvalfromroots(x[jj], r[:, ii])
- assert_equal(res, tgt)
-
- def test_polyval2d(self):
- x1, x2, x3 = self.x
- y1, y2, y3 = self.y
-
- #test exceptions
- assert_raises_regex(ValueError, 'incompatible',
- poly.polyval2d, x1, x2[:2], self.c2d)
-
- #test values
- tgt = y1*y2
- res = poly.polyval2d(x1, x2, self.c2d)
- assert_almost_equal(res, tgt)
-
- #test shape
- z = np.ones((2, 3))
- res = poly.polyval2d(z, z, self.c2d)
- assert_(res.shape == (2, 3))
-
- def test_polyval3d(self):
- x1, x2, x3 = self.x
- y1, y2, y3 = self.y
-
- #test exceptions
- assert_raises_regex(ValueError, 'incompatible',
- poly.polyval3d, x1, x2, x3[:2], self.c3d)
-
- #test values
- tgt = y1*y2*y3
- res = poly.polyval3d(x1, x2, x3, self.c3d)
- assert_almost_equal(res, tgt)
-
- #test shape
- z = np.ones((2, 3))
- res = poly.polyval3d(z, z, z, self.c3d)
- assert_(res.shape == (2, 3))
-
- def test_polygrid2d(self):
- x1, x2, x3 = self.x
- y1, y2, y3 = self.y
-
- #test values
- tgt = np.einsum('i,j->ij', y1, y2)
- res = poly.polygrid2d(x1, x2, self.c2d)
- assert_almost_equal(res, tgt)
-
- #test shape
- z = np.ones((2, 3))
- res = poly.polygrid2d(z, z, self.c2d)
- assert_(res.shape == (2, 3)*2)
-
- def test_polygrid3d(self):
- x1, x2, x3 = self.x
- y1, y2, y3 = self.y
-
- #test values
- tgt = np.einsum('i,j,k->ijk', y1, y2, y3)
- res = poly.polygrid3d(x1, x2, x3, self.c3d)
- assert_almost_equal(res, tgt)
-
- #test shape
- z = np.ones((2, 3))
- res = poly.polygrid3d(z, z, z, self.c3d)
- assert_(res.shape == (2, 3)*3)
-
-
-class TestIntegral:
-
- def test_polyint(self):
- # check exceptions
- assert_raises(TypeError, poly.polyint, [0], .5)
- assert_raises(ValueError, poly.polyint, [0], -1)
- assert_raises(ValueError, poly.polyint, [0], 1, [0, 0])
- assert_raises(ValueError, poly.polyint, [0], lbnd=[0])
- assert_raises(ValueError, poly.polyint, [0], scl=[0])
- assert_raises(TypeError, poly.polyint, [0], axis=.5)
- with assert_warns(DeprecationWarning):
- poly.polyint([1, 1], 1.)
-
- # test integration of zero polynomial
- for i in range(2, 5):
- k = [0]*(i - 2) + [1]
- res = poly.polyint([0], m=i, k=k)
- assert_almost_equal(res, [0, 1])
-
- # check single integration with integration constant
- for i in range(5):
- scl = i + 1
- pol = [0]*i + [1]
- tgt = [i] + [0]*i + [1/scl]
- res = poly.polyint(pol, m=1, k=[i])
- assert_almost_equal(trim(res), trim(tgt))
-
- # check single integration with integration constant and lbnd
- for i in range(5):
- scl = i + 1
- pol = [0]*i + [1]
- res = poly.polyint(pol, m=1, k=[i], lbnd=-1)
- assert_almost_equal(poly.polyval(-1, res), i)
-
- # check single integration with integration constant and scaling
- for i in range(5):
- scl = i + 1
- pol = [0]*i + [1]
- tgt = [i] + [0]*i + [2/scl]
- res = poly.polyint(pol, m=1, k=[i], scl=2)
- assert_almost_equal(trim(res), trim(tgt))
-
- # check multiple integrations with default k
- for i in range(5):
- for j in range(2, 5):
- pol = [0]*i + [1]
- tgt = pol[:]
- for k in range(j):
- tgt = poly.polyint(tgt, m=1)
- res = poly.polyint(pol, m=j)
- assert_almost_equal(trim(res), trim(tgt))
-
- # check multiple integrations with defined k
- for i in range(5):
- for j in range(2, 5):
- pol = [0]*i + [1]
- tgt = pol[:]
- for k in range(j):
- tgt = poly.polyint(tgt, m=1, k=[k])
- res = poly.polyint(pol, m=j, k=list(range(j)))
- assert_almost_equal(trim(res), trim(tgt))
-
- # check multiple integrations with lbnd
- for i in range(5):
- for j in range(2, 5):
- pol = [0]*i + [1]
- tgt = pol[:]
- for k in range(j):
- tgt = poly.polyint(tgt, m=1, k=[k], lbnd=-1)
- res = poly.polyint(pol, m=j, k=list(range(j)), lbnd=-1)
- assert_almost_equal(trim(res), trim(tgt))
-
- # check multiple integrations with scaling
- for i in range(5):
- for j in range(2, 5):
- pol = [0]*i + [1]
- tgt = pol[:]
- for k in range(j):
- tgt = poly.polyint(tgt, m=1, k=[k], scl=2)
- res = poly.polyint(pol, m=j, k=list(range(j)), scl=2)
- assert_almost_equal(trim(res), trim(tgt))
-
- def test_polyint_axis(self):
- # check that axis keyword works
- c2d = np.random.random((3, 4))
-
- tgt = np.vstack([poly.polyint(c) for c in c2d.T]).T
- res = poly.polyint(c2d, axis=0)
- assert_almost_equal(res, tgt)
-
- tgt = np.vstack([poly.polyint(c) for c in c2d])
- res = poly.polyint(c2d, axis=1)
- assert_almost_equal(res, tgt)
-
- tgt = np.vstack([poly.polyint(c, k=3) for c in c2d])
- res = poly.polyint(c2d, k=3, axis=1)
- assert_almost_equal(res, tgt)
-
-
-class TestDerivative:
-
- def test_polyder(self):
- # check exceptions
- assert_raises(TypeError, poly.polyder, [0], .5)
- assert_raises(ValueError, poly.polyder, [0], -1)
-
- # check that zeroth derivative does nothing
- for i in range(5):
- tgt = [0]*i + [1]
- res = poly.polyder(tgt, m=0)
- assert_equal(trim(res), trim(tgt))
-
- # check that derivation is the inverse of integration
- for i in range(5):
- for j in range(2, 5):
- tgt = [0]*i + [1]
- res = poly.polyder(poly.polyint(tgt, m=j), m=j)
- assert_almost_equal(trim(res), trim(tgt))
-
- # check derivation with scaling
- for i in range(5):
- for j in range(2, 5):
- tgt = [0]*i + [1]
- res = poly.polyder(poly.polyint(tgt, m=j, scl=2), m=j, scl=.5)
- assert_almost_equal(trim(res), trim(tgt))
-
- def test_polyder_axis(self):
- # check that axis keyword works
- c2d = np.random.random((3, 4))
-
- tgt = np.vstack([poly.polyder(c) for c in c2d.T]).T
- res = poly.polyder(c2d, axis=0)
- assert_almost_equal(res, tgt)
-
- tgt = np.vstack([poly.polyder(c) for c in c2d])
- res = poly.polyder(c2d, axis=1)
- assert_almost_equal(res, tgt)
-
-
-class TestVander:
- # some random values in [-1, 1)
- x = np.random.random((3, 5))*2 - 1
-
- def test_polyvander(self):
- # check for 1d x
- x = np.arange(3)
- v = poly.polyvander(x, 3)
- assert_(v.shape == (3, 4))
- for i in range(4):
- coef = [0]*i + [1]
- assert_almost_equal(v[..., i], poly.polyval(x, coef))
-
- # check for 2d x
- x = np.array([[1, 2], [3, 4], [5, 6]])
- v = poly.polyvander(x, 3)
- assert_(v.shape == (3, 2, 4))
- for i in range(4):
- coef = [0]*i + [1]
- assert_almost_equal(v[..., i], poly.polyval(x, coef))
-
- def test_polyvander2d(self):
- # also tests polyval2d for non-square coefficient array
- x1, x2, x3 = self.x
- c = np.random.random((2, 3))
- van = poly.polyvander2d(x1, x2, [1, 2])
- tgt = poly.polyval2d(x1, x2, c)
- res = np.dot(van, c.flat)
- assert_almost_equal(res, tgt)
-
- # check shape
- van = poly.polyvander2d([x1], [x2], [1, 2])
- assert_(van.shape == (1, 5, 6))
-
- def test_polyvander3d(self):
- # also tests polyval3d for non-square coefficient array
- x1, x2, x3 = self.x
- c = np.random.random((2, 3, 4))
- van = poly.polyvander3d(x1, x2, x3, [1, 2, 3])
- tgt = poly.polyval3d(x1, x2, x3, c)
- res = np.dot(van, c.flat)
- assert_almost_equal(res, tgt)
-
- # check shape
- van = poly.polyvander3d([x1], [x2], [x3], [1, 2, 3])
- assert_(van.shape == (1, 5, 24))
-
- def test_polyvandernegdeg(self):
- x = np.arange(3)
- assert_raises(ValueError, poly.polyvander, x, -1)
-
-
-class TestCompanion:
-
- def test_raises(self):
- assert_raises(ValueError, poly.polycompanion, [])
- assert_raises(ValueError, poly.polycompanion, [1])
-
- def test_dimensions(self):
- for i in range(1, 5):
- coef = [0]*i + [1]
- assert_(poly.polycompanion(coef).shape == (i, i))
-
- def test_linear_root(self):
- assert_(poly.polycompanion([1, 2])[0, 0] == -.5)
-
-
-class TestMisc:
-
- def test_polyfromroots(self):
- res = poly.polyfromroots([])
- assert_almost_equal(trim(res), [1])
- for i in range(1, 5):
- roots = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2])
- tgt = Tlist[i]
- res = poly.polyfromroots(roots)*2**(i-1)
- assert_almost_equal(trim(res), trim(tgt))
-
- def test_polyroots(self):
- assert_almost_equal(poly.polyroots([1]), [])
- assert_almost_equal(poly.polyroots([1, 2]), [-.5])
- for i in range(2, 5):
- tgt = np.linspace(-1, 1, i)
- res = poly.polyroots(poly.polyfromroots(tgt))
- assert_almost_equal(trim(res), trim(tgt))
-
- def test_polyfit(self):
- def f(x):
- return x*(x - 1)*(x - 2)
-
- def f2(x):
- return x**4 + x**2 + 1
-
- # Test exceptions
- assert_raises(ValueError, poly.polyfit, [1], [1], -1)
- assert_raises(TypeError, poly.polyfit, [[1]], [1], 0)
- assert_raises(TypeError, poly.polyfit, [], [1], 0)
- assert_raises(TypeError, poly.polyfit, [1], [[[1]]], 0)
- assert_raises(TypeError, poly.polyfit, [1, 2], [1], 0)
- assert_raises(TypeError, poly.polyfit, [1], [1, 2], 0)
- assert_raises(TypeError, poly.polyfit, [1], [1], 0, w=[[1]])
- assert_raises(TypeError, poly.polyfit, [1], [1], 0, w=[1, 1])
- assert_raises(ValueError, poly.polyfit, [1], [1], [-1,])
- assert_raises(ValueError, poly.polyfit, [1], [1], [2, -1, 6])
- assert_raises(TypeError, poly.polyfit, [1], [1], [])
-
- # Test fit
- x = np.linspace(0, 2)
- y = f(x)
- #
- coef3 = poly.polyfit(x, y, 3)
- assert_equal(len(coef3), 4)
- assert_almost_equal(poly.polyval(x, coef3), y)
- coef3 = poly.polyfit(x, y, [0, 1, 2, 3])
- assert_equal(len(coef3), 4)
- assert_almost_equal(poly.polyval(x, coef3), y)
- #
- coef4 = poly.polyfit(x, y, 4)
- assert_equal(len(coef4), 5)
- assert_almost_equal(poly.polyval(x, coef4), y)
- coef4 = poly.polyfit(x, y, [0, 1, 2, 3, 4])
- assert_equal(len(coef4), 5)
- assert_almost_equal(poly.polyval(x, coef4), y)
- #
- coef2d = poly.polyfit(x, np.array([y, y]).T, 3)
- assert_almost_equal(coef2d, np.array([coef3, coef3]).T)
- coef2d = poly.polyfit(x, np.array([y, y]).T, [0, 1, 2, 3])
- assert_almost_equal(coef2d, np.array([coef3, coef3]).T)
- # test weighting
- w = np.zeros_like(x)
- yw = y.copy()
- w[1::2] = 1
- yw[0::2] = 0
- wcoef3 = poly.polyfit(x, yw, 3, w=w)
- assert_almost_equal(wcoef3, coef3)
- wcoef3 = poly.polyfit(x, yw, [0, 1, 2, 3], w=w)
- assert_almost_equal(wcoef3, coef3)
- #
- wcoef2d = poly.polyfit(x, np.array([yw, yw]).T, 3, w=w)
- assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T)
- wcoef2d = poly.polyfit(x, np.array([yw, yw]).T, [0, 1, 2, 3], w=w)
- assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T)
- # test scaling with complex values x points whose square
- # is zero when summed.
- x = [1, 1j, -1, -1j]
- assert_almost_equal(poly.polyfit(x, x, 1), [0, 1])
- assert_almost_equal(poly.polyfit(x, x, [0, 1]), [0, 1])
-        # test fitting only even polynomials
- x = np.linspace(-1, 1)
- y = f2(x)
- coef1 = poly.polyfit(x, y, 4)
- assert_almost_equal(poly.polyval(x, coef1), y)
- coef2 = poly.polyfit(x, y, [0, 2, 4])
- assert_almost_equal(poly.polyval(x, coef2), y)
- assert_almost_equal(coef1, coef2)
-
- def test_polytrim(self):
- coef = [2, -1, 1, 0]
-
- # Test exceptions
- assert_raises(ValueError, poly.polytrim, coef, -1)
-
- # Test results
- assert_equal(poly.polytrim(coef), coef[:-1])
- assert_equal(poly.polytrim(coef, 1), coef[:-3])
- assert_equal(poly.polytrim(coef, 2), [0])
-
- def test_polyline(self):
- assert_equal(poly.polyline(3, 4), [3, 4])
-
- def test_polyline_zero(self):
- assert_equal(poly.polyline(3, 0), [3])
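The round trips these tests rely on can be seen directly with a couple of calls to `numpy.polynomial.polynomial`; a short sketch:

```python
import numpy as np
import numpy.polynomial.polynomial as poly

# Roots -> coefficients -> roots, as in TestMisc.
roots = np.array([-1.0, 0.0, 2.0])
coef = poly.polyfromroots(roots)        # [ 0., -2., -1.,  1.], low degree first
print(np.allclose(poly.polyroots(coef), roots))   # True

# Fit -> evaluate, as in test_polyfit.
x = np.linspace(0, 2, 50)
y = x * (x - 1) * (x - 2)
c3 = poly.polyfit(x, y, 3)
print(np.allclose(poly.polyval(x, c3), y))        # True
```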
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_compat.py
deleted file mode 100644
index f91856c3948a0673a6cdea2a9b1c368f7be13099..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_compat.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import numpy as np
-import pytest
-
-import pandas as pd
-from pandas import MultiIndex
-import pandas._testing as tm
-
-
-def test_numeric_compat(idx):
- with pytest.raises(TypeError, match="cannot perform __mul__"):
- idx * 1
-
- with pytest.raises(TypeError, match="cannot perform __rmul__"):
- 1 * idx
-
- div_err = "cannot perform __truediv__"
- with pytest.raises(TypeError, match=div_err):
- idx / 1
-
- div_err = div_err.replace(" __", " __r")
- with pytest.raises(TypeError, match=div_err):
- 1 / idx
-
- with pytest.raises(TypeError, match="cannot perform __floordiv__"):
- idx // 1
-
- with pytest.raises(TypeError, match="cannot perform __rfloordiv__"):
- 1 // idx
-
-
-@pytest.mark.parametrize("method", ["all", "any", "__invert__"])
-def test_logical_compat(idx, method):
- msg = f"cannot perform {method}"
-
- with pytest.raises(TypeError, match=msg):
- getattr(idx, method)()
-
-
-def test_inplace_mutation_resets_values():
- levels = [["a", "b", "c"], [4]]
- levels2 = [[1, 2, 3], ["a"]]
- codes = [[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]]
-
- mi1 = MultiIndex(levels=levels, codes=codes)
- mi2 = MultiIndex(levels=levels2, codes=codes)
-
- # instantiating MultiIndex should not access/cache _.values
- assert "_values" not in mi1._cache
- assert "_values" not in mi2._cache
-
- vals = mi1.values.copy()
- vals2 = mi2.values.copy()
-
- # accessing .values should cache ._values
- assert mi1._values is mi1._cache["_values"]
- assert mi1.values is mi1._cache["_values"]
- assert isinstance(mi1._cache["_values"], np.ndarray)
-
- # Make sure level setting works
- new_vals = mi1.set_levels(levels2).values
- tm.assert_almost_equal(vals2, new_vals)
-
- # Doesn't drop _values from _cache [implementation detail]
- tm.assert_almost_equal(mi1._cache["_values"], vals)
-
- # ...and values is still same too
- tm.assert_almost_equal(mi1.values, vals)
-
- # Make sure label setting works too
- codes2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
- exp_values = np.empty((6,), dtype=object)
- exp_values[:] = [(1, "a")] * 6
-
- # Must be 1d array of tuples
- assert exp_values.shape == (6,)
-
- new_mi = mi2.set_codes(codes2)
- assert "_values" not in new_mi._cache
- new_values = new_mi.values
- assert "_values" in new_mi._cache
-
- # Shouldn't change cache
- tm.assert_almost_equal(mi2._cache["_values"], vals2)
-
- # Should have correct values
- tm.assert_almost_equal(exp_values, new_values)
-
-
-def test_boxable_categorical_values():
- cat = pd.Categorical(pd.date_range("2012-01-01", periods=3, freq="H"))
- result = MultiIndex.from_product([["a", "b", "c"], cat]).values
- expected = pd.Series(
- [
- ("a", pd.Timestamp("2012-01-01 00:00:00")),
- ("a", pd.Timestamp("2012-01-01 01:00:00")),
- ("a", pd.Timestamp("2012-01-01 02:00:00")),
- ("b", pd.Timestamp("2012-01-01 00:00:00")),
- ("b", pd.Timestamp("2012-01-01 01:00:00")),
- ("b", pd.Timestamp("2012-01-01 02:00:00")),
- ("c", pd.Timestamp("2012-01-01 00:00:00")),
- ("c", pd.Timestamp("2012-01-01 01:00:00")),
- ("c", pd.Timestamp("2012-01-01 02:00:00")),
- ]
- ).values
- tm.assert_numpy_array_equal(result, expected)
- result = pd.DataFrame({"a": ["a", "b", "c"], "b": cat, "c": np.array(cat)}).values
- expected = pd.DataFrame(
- {
- "a": ["a", "b", "c"],
- "b": [
- pd.Timestamp("2012-01-01 00:00:00"),
- pd.Timestamp("2012-01-01 01:00:00"),
- pd.Timestamp("2012-01-01 02:00:00"),
- ],
- "c": [
- pd.Timestamp("2012-01-01 00:00:00"),
- pd.Timestamp("2012-01-01 01:00:00"),
- pd.Timestamp("2012-01-01 02:00:00"),
- ],
- }
- ).values
- tm.assert_numpy_array_equal(result, expected)
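A condensed version of the caching behaviour `test_inplace_mutation_resets_values` checks, poking at the same private `_cache` attribute the test uses (an implementation detail, not public API):

```python
import numpy as np
from pandas import MultiIndex

mi = MultiIndex(levels=[["a", "b", "c"], [4]],
                codes=[[0, 1, 0, 2, 2, 0], [0, 0, 0, 0, 0, 0]])

assert "_values" not in mi._cache       # nothing cached at construction
vals = mi.values                        # first access materialises the cache
assert isinstance(mi._cache["_values"], np.ndarray)

# set_levels returns a new index reflecting the new levels; the original
# index and its cached values are untouched.
mi2 = mi.set_levels([[1, 2, 3], ["x"]])
print(mi2.values[:2])                   # [(1, 'x') (2, 'x')]
print(mi.values[:2])                    # [('a', 4) ('b', 4)]
```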
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_sorting.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_sorting.py
deleted file mode 100644
index 08c1a4092952c292da4e95e11063a3dbac147e07..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_sorting.py
+++ /dev/null
@@ -1,340 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas.errors import (
- PerformanceWarning,
- UnsortedIndexError,
-)
-
-from pandas import (
- CategoricalIndex,
- DataFrame,
- Index,
- MultiIndex,
- RangeIndex,
- Series,
- Timestamp,
-)
-import pandas._testing as tm
-from pandas.core.indexes.frozen import FrozenList
-
-
-def test_sortlevel(idx):
- tuples = list(idx)
- np.random.default_rng(2).shuffle(tuples)
-
- index = MultiIndex.from_tuples(tuples)
-
- sorted_idx, _ = index.sortlevel(0)
- expected = MultiIndex.from_tuples(sorted(tuples))
- assert sorted_idx.equals(expected)
-
- sorted_idx, _ = index.sortlevel(0, ascending=False)
- assert sorted_idx.equals(expected[::-1])
-
- sorted_idx, _ = index.sortlevel(1)
- by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
- expected = MultiIndex.from_tuples(by1)
- assert sorted_idx.equals(expected)
-
- sorted_idx, _ = index.sortlevel(1, ascending=False)
- assert sorted_idx.equals(expected[::-1])
-
-
-def test_sortlevel_not_sort_remaining():
- mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list("ABC"))
- sorted_idx, _ = mi.sortlevel("A", sort_remaining=False)
- assert sorted_idx.equals(mi)
-
-
-def test_sortlevel_deterministic():
- tuples = [
- ("bar", "one"),
- ("foo", "two"),
- ("qux", "two"),
- ("foo", "one"),
- ("baz", "two"),
- ("qux", "one"),
- ]
-
- index = MultiIndex.from_tuples(tuples)
-
- sorted_idx, _ = index.sortlevel(0)
- expected = MultiIndex.from_tuples(sorted(tuples))
- assert sorted_idx.equals(expected)
-
- sorted_idx, _ = index.sortlevel(0, ascending=False)
- assert sorted_idx.equals(expected[::-1])
-
- sorted_idx, _ = index.sortlevel(1)
- by1 = sorted(tuples, key=lambda x: (x[1], x[0]))
- expected = MultiIndex.from_tuples(by1)
- assert sorted_idx.equals(expected)
-
- sorted_idx, _ = index.sortlevel(1, ascending=False)
- assert sorted_idx.equals(expected[::-1])
-
-
-def test_sortlevel_na_position():
- # GH#51612
- midx = MultiIndex.from_tuples([(1, np.nan), (1, 1)])
- result = midx.sortlevel(level=[0, 1], na_position="last")[0]
- expected = MultiIndex.from_tuples([(1, 1), (1, np.nan)])
- tm.assert_index_equal(result, expected)
-
-
-def test_numpy_argsort(idx):
- result = np.argsort(idx)
- expected = idx.argsort()
- tm.assert_numpy_array_equal(result, expected)
-
- # these are the only two types that perform
- # pandas compatibility input validation - the
- # rest already perform separate (or no) such
- # validation via their 'values' attribute as
- # defined in pandas.core.indexes/base.py - they
- # cannot be changed at the moment due to
- # backwards compatibility concerns
- if isinstance(type(idx), (CategoricalIndex, RangeIndex)):
- msg = "the 'axis' parameter is not supported"
- with pytest.raises(ValueError, match=msg):
- np.argsort(idx, axis=1)
-
- msg = "the 'kind' parameter is not supported"
- with pytest.raises(ValueError, match=msg):
- np.argsort(idx, kind="mergesort")
-
- msg = "the 'order' parameter is not supported"
- with pytest.raises(ValueError, match=msg):
- np.argsort(idx, order=("a", "b"))
-
-
-def test_unsortedindex():
- # GH 11897
- mi = MultiIndex.from_tuples(
- [("z", "a"), ("x", "a"), ("y", "b"), ("x", "b"), ("y", "a"), ("z", "b")],
- names=["one", "two"],
- )
- df = DataFrame([[i, 10 * i] for i in range(6)], index=mi, columns=["one", "two"])
-
- # GH 16734: not sorted, but no real slicing
- result = df.loc(axis=0)["z", "a"]
- expected = df.iloc[0]
- tm.assert_series_equal(result, expected)
-
- msg = (
- "MultiIndex slicing requires the index to be lexsorted: "
- r"slicing on levels \[1\], lexsort depth 0"
- )
- with pytest.raises(UnsortedIndexError, match=msg):
- df.loc(axis=0)["z", slice("a")]
- df.sort_index(inplace=True)
- assert len(df.loc(axis=0)["z", :]) == 2
-
- with pytest.raises(KeyError, match="'q'"):
- df.loc(axis=0)["q", :]
-
-
-def test_unsortedindex_doc_examples():
- # https://pandas.pydata.org/pandas-docs/stable/advanced.html#sorting-a-multiindex
- dfm = DataFrame(
- {
- "jim": [0, 0, 1, 1],
- "joe": ["x", "x", "z", "y"],
- "jolie": np.random.default_rng(2).random(4),
- }
- )
-
- dfm = dfm.set_index(["jim", "joe"])
- with tm.assert_produces_warning(PerformanceWarning):
- dfm.loc[(1, "z")]
-
- msg = r"Key length \(2\) was greater than MultiIndex lexsort depth \(1\)"
- with pytest.raises(UnsortedIndexError, match=msg):
- dfm.loc[(0, "y"):(1, "z")]
-
- assert not dfm.index._is_lexsorted()
- assert dfm.index._lexsort_depth == 1
-
- # sort it
- dfm = dfm.sort_index()
- dfm.loc[(1, "z")]
- dfm.loc[(0, "y"):(1, "z")]
-
- assert dfm.index._is_lexsorted()
- assert dfm.index._lexsort_depth == 2
-
-
-def test_reconstruct_sort():
- # starts off lexsorted & monotonic
- mi = MultiIndex.from_arrays([["A", "A", "B", "B", "B"], [1, 2, 1, 2, 3]])
- assert mi.is_monotonic_increasing
- recons = mi._sort_levels_monotonic()
- assert recons.is_monotonic_increasing
- assert mi is recons
-
- assert mi.equals(recons)
- assert Index(mi.values).equals(Index(recons.values))
-
- # cannot convert to lexsorted
- mi = MultiIndex.from_tuples(
- [("z", "a"), ("x", "a"), ("y", "b"), ("x", "b"), ("y", "a"), ("z", "b")],
- names=["one", "two"],
- )
- assert not mi.is_monotonic_increasing
- recons = mi._sort_levels_monotonic()
- assert not recons.is_monotonic_increasing
- assert mi.equals(recons)
- assert Index(mi.values).equals(Index(recons.values))
-
- # cannot convert to lexsorted
- mi = MultiIndex(
- levels=[["b", "d", "a"], [1, 2, 3]],
- codes=[[0, 1, 0, 2], [2, 0, 0, 1]],
- names=["col1", "col2"],
- )
- assert not mi.is_monotonic_increasing
- recons = mi._sort_levels_monotonic()
- assert not recons.is_monotonic_increasing
- assert mi.equals(recons)
- assert Index(mi.values).equals(Index(recons.values))
-
-
-def test_reconstruct_remove_unused():
- # xref to GH 2770
- df = DataFrame(
- [["deleteMe", 1, 9], ["keepMe", 2, 9], ["keepMeToo", 3, 9]],
- columns=["first", "second", "third"],
- )
- df2 = df.set_index(["first", "second"], drop=False)
- df2 = df2[df2["first"] != "deleteMe"]
-
- # removed levels are there
- expected = MultiIndex(
- levels=[["deleteMe", "keepMe", "keepMeToo"], [1, 2, 3]],
- codes=[[1, 2], [1, 2]],
- names=["first", "second"],
- )
- result = df2.index
- tm.assert_index_equal(result, expected)
-
- expected = MultiIndex(
- levels=[["keepMe", "keepMeToo"], [2, 3]],
- codes=[[0, 1], [0, 1]],
- names=["first", "second"],
- )
- result = df2.index.remove_unused_levels()
- tm.assert_index_equal(result, expected)
-
- # idempotent
- result2 = result.remove_unused_levels()
- tm.assert_index_equal(result2, expected)
- assert result2.is_(result)
-
-
-@pytest.mark.parametrize(
- "first_type,second_type", [("int64", "int64"), ("datetime64[D]", "str")]
-)
-def test_remove_unused_levels_large(first_type, second_type):
- # GH16556
-
- # because tests should be deterministic (and this test in particular
- # checks that levels are removed, which is not the case for every
- # random input):
- rng = np.random.default_rng(10) # seed is arbitrary value that works
-
- size = 1 << 16
- df = DataFrame(
- {
- "first": rng.integers(0, 1 << 13, size).astype(first_type),
- "second": rng.integers(0, 1 << 10, size).astype(second_type),
- "third": rng.random(size),
- }
- )
- df = df.groupby(["first", "second"]).sum()
- df = df[df.third < 0.1]
-
- result = df.index.remove_unused_levels()
- assert len(result.levels[0]) < len(df.index.levels[0])
- assert len(result.levels[1]) < len(df.index.levels[1])
- assert result.equals(df.index)
-
- expected = df.reset_index().set_index(["first", "second"]).index
- tm.assert_index_equal(result, expected)
-
-
-@pytest.mark.parametrize("level0", [["a", "d", "b"], ["a", "d", "b", "unused"]])
-@pytest.mark.parametrize(
- "level1", [["w", "x", "y", "z"], ["w", "x", "y", "z", "unused"]]
-)
-def test_remove_unused_nan(level0, level1):
- # GH 18417
- mi = MultiIndex(levels=[level0, level1], codes=[[0, 2, -1, 1, -1], [0, 1, 2, 3, 2]])
-
- result = mi.remove_unused_levels()
- tm.assert_index_equal(result, mi)
- for level in 0, 1:
- assert "unused" not in result.levels[level]
-
-
-def test_argsort(idx):
- result = idx.argsort()
- expected = idx.values.argsort()
- tm.assert_numpy_array_equal(result, expected)
-
-
-def test_remove_unused_levels_with_nan():
- # GH 37510
- idx = Index([(1, np.nan), (3, 4)]).rename(["id1", "id2"])
- idx = idx.set_levels(["a", np.nan], level="id1")
- idx = idx.remove_unused_levels()
- result = idx.levels
- expected = FrozenList([["a", np.nan], [4]])
- assert str(result) == str(expected)
-
-
-def test_sort_values_nan():
- # GH48495, GH48626
- midx = MultiIndex(levels=[["A", "B", "C"], ["D"]], codes=[[1, 0, 2], [-1, -1, 0]])
- result = midx.sort_values()
- expected = MultiIndex(
- levels=[["A", "B", "C"], ["D"]], codes=[[0, 1, 2], [-1, -1, 0]]
- )
- tm.assert_index_equal(result, expected)
-
-
-def test_sort_values_incomparable():
- # GH48495
- mi = MultiIndex.from_arrays(
- [
- [1, Timestamp("2000-01-01")],
- [3, 4],
- ]
- )
- match = "'<' not supported between instances of 'Timestamp' and 'int'"
- with pytest.raises(TypeError, match=match):
- mi.sort_values()
-
-
-@pytest.mark.parametrize("na_position", ["first", "last"])
-@pytest.mark.parametrize("dtype", ["float64", "Int64", "Float64"])
-def test_sort_values_with_na_na_position(dtype, na_position):
- # 51612
- arrays = [
- Series([1, 1, 2], dtype=dtype),
- Series([1, None, 3], dtype=dtype),
- ]
- index = MultiIndex.from_arrays(arrays)
- result = index.sort_values(na_position=na_position)
- if na_position == "first":
- arrays = [
- Series([1, 1, 2], dtype=dtype),
- Series([None, 1, 3], dtype=dtype),
- ]
- else:
- arrays = [
- Series([1, 1, 2], dtype=dtype),
- Series([1, None, 3], dtype=dtype),
- ]
- expected = MultiIndex.from_arrays(arrays)
- tm.assert_index_equal(result, expected)
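The deleted tests above pin down two MultiIndex behaviours: levels go stale after filtering until `remove_unused_levels()` is called, and `sort_values()` honours `na_position`. A minimal sketch of the same behaviour against the public pandas API (the data values are illustrative, not taken from the tests):

```python
import pandas as pd

# Filtering rows keeps the original index levels until they are dropped explicitly.
df = pd.DataFrame(
    {"first": ["a", "b", "c"], "second": [1, 2, 3], "third": [9, 9, 9]}
).set_index(["first", "second"])
df = df[df.index.get_level_values("first") != "a"]
print(df.index.levels[0])                         # still contains 'a'
print(df.index.remove_unused_levels().levels[0])  # 'a' is gone

# sort_values honours na_position on a MultiIndex containing missing values.
mi = pd.MultiIndex.from_arrays([[1, 1, 2], [1.0, None, 3.0]])
print(mi.sort_values(na_position="first"))  # the NaN entry sorts first
```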
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tseries/holiday.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tseries/holiday.py
deleted file mode 100644
index 75cb7f78500137bbe8461c2b5658a5a10221715d..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tseries/holiday.py
+++ /dev/null
@@ -1,634 +0,0 @@
-from __future__ import annotations
-
-from datetime import (
- datetime,
- timedelta,
-)
-import warnings
-
-from dateutil.relativedelta import (
- FR,
- MO,
- SA,
- SU,
- TH,
- TU,
- WE,
-)
-import numpy as np
-
-from pandas.errors import PerformanceWarning
-
-from pandas import (
- DateOffset,
- DatetimeIndex,
- Series,
- Timestamp,
- concat,
- date_range,
-)
-
-from pandas.tseries.offsets import (
- Day,
- Easter,
-)
-
-
-def next_monday(dt: datetime) -> datetime:
- """
- If holiday falls on Saturday, use following Monday instead;
- if holiday falls on Sunday, use Monday instead
- """
- if dt.weekday() == 5:
- return dt + timedelta(2)
- elif dt.weekday() == 6:
- return dt + timedelta(1)
- return dt
-
-
-def next_monday_or_tuesday(dt: datetime) -> datetime:
- """
- For second holiday of two adjacent ones!
- If holiday falls on Saturday, use following Monday instead;
- if holiday falls on Sunday or Monday, use following Tuesday instead
- (because Monday is already taken by adjacent holiday on the day before)
- """
- dow = dt.weekday()
- if dow in (5, 6):
- return dt + timedelta(2)
- if dow == 0:
- return dt + timedelta(1)
- return dt
-
-
-def previous_friday(dt: datetime) -> datetime:
- """
- If holiday falls on Saturday or Sunday, use previous Friday instead.
- """
- if dt.weekday() == 5:
- return dt - timedelta(1)
- elif dt.weekday() == 6:
- return dt - timedelta(2)
- return dt
-
-
-def sunday_to_monday(dt: datetime) -> datetime:
- """
- If holiday falls on Sunday, use day thereafter (Monday) instead.
- """
- if dt.weekday() == 6:
- return dt + timedelta(1)
- return dt
-
-
-def weekend_to_monday(dt: datetime) -> datetime:
- """
- If holiday falls on Sunday or Saturday,
- use day thereafter (Monday) instead.
- Needed for holidays such as Christmas observation in Europe
- """
- if dt.weekday() == 6:
- return dt + timedelta(1)
- elif dt.weekday() == 5:
- return dt + timedelta(2)
- return dt
-
-
-def nearest_workday(dt: datetime) -> datetime:
- """
- If holiday falls on Saturday, use day before (Friday) instead;
- if holiday falls on Sunday, use day thereafter (Monday) instead.
- """
- if dt.weekday() == 5:
- return dt - timedelta(1)
- elif dt.weekday() == 6:
- return dt + timedelta(1)
- return dt
-
-
-def next_workday(dt: datetime) -> datetime:
- """
- returns next weekday used for observances
- """
- dt += timedelta(days=1)
- while dt.weekday() > 4:
- # Mon-Fri are 0-4
- dt += timedelta(days=1)
- return dt
-
-
-def previous_workday(dt: datetime) -> datetime:
- """
- returns previous weekday used for observances
- """
- dt -= timedelta(days=1)
- while dt.weekday() > 4:
- # Mon-Fri are 0-4
- dt -= timedelta(days=1)
- return dt
-
-
-def before_nearest_workday(dt: datetime) -> datetime:
- """
- returns previous workday after nearest workday
- """
- return previous_workday(nearest_workday(dt))
-
-
-def after_nearest_workday(dt: datetime) -> datetime:
- """
- returns next workday after nearest workday
- needed for Boxing day or multiple holidays in a series
- """
- return next_workday(nearest_workday(dt))
-
-
-class Holiday:
- """
- Class that defines a holiday with start/end dates and rules
- for observance.
- """
-
- start_date: Timestamp | None
- end_date: Timestamp | None
- days_of_week: tuple[int, ...] | None
-
- def __init__(
- self,
- name: str,
- year=None,
- month=None,
- day=None,
- offset=None,
- observance=None,
- start_date=None,
- end_date=None,
- days_of_week=None,
- ) -> None:
- """
- Parameters
- ----------
- name : str
-            Name of the holiday, defaults to class name
-        offset : array of pandas.tseries.offsets or
-                 class from pandas.tseries.offsets
-            Computes the offset from the date.
-        observance : function
-            Computes when the holiday is observed, given a pandas Timestamp.
-        days_of_week : tuple
-            Provide a tuple of days, e.g. (0, 1, 2, 3) for Monday through Thursday;
-            Monday=0, ..., Sunday=6.
-
- Examples
- --------
- >>> from dateutil.relativedelta import MO
-
- >>> USMemorialDay = pd.tseries.holiday.Holiday(
- ... "Memorial Day", month=5, day=31, offset=pd.DateOffset(weekday=MO(-1))
- ... )
- >>> USMemorialDay
-        Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>)
-
- >>> USLaborDay = pd.tseries.holiday.Holiday(
- ... "Labor Day", month=9, day=1, offset=pd.DateOffset(weekday=MO(1))
- ... )
- >>> USLaborDay
-        Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>)
-
- >>> July3rd = pd.tseries.holiday.Holiday("July 3rd", month=7, day=3)
- >>> July3rd
- Holiday: July 3rd (month=7, day=3, )
-
- >>> NewYears = pd.tseries.holiday.Holiday(
- ... "New Years Day", month=1, day=1,
- ... observance=pd.tseries.holiday.nearest_workday
- ... )
- >>> NewYears # doctest: +SKIP
- Holiday: New Years Day (
-            month=1, day=1, observance=<function nearest_workday at 0x...>
- )
-
- >>> July3rd = pd.tseries.holiday.Holiday(
- ... "July 3rd", month=7, day=3,
- ... days_of_week=(0, 1, 2, 3)
- ... )
- >>> July3rd
- Holiday: July 3rd (month=7, day=3, )
- """
- if offset is not None and observance is not None:
- raise NotImplementedError("Cannot use both offset and observance.")
-
- self.name = name
- self.year = year
- self.month = month
- self.day = day
- self.offset = offset
- self.start_date = (
- Timestamp(start_date) if start_date is not None else start_date
- )
- self.end_date = Timestamp(end_date) if end_date is not None else end_date
- self.observance = observance
- assert days_of_week is None or type(days_of_week) == tuple
- self.days_of_week = days_of_week
-
- def __repr__(self) -> str:
- info = ""
- if self.year is not None:
- info += f"year={self.year}, "
- info += f"month={self.month}, day={self.day}, "
-
- if self.offset is not None:
- info += f"offset={self.offset}"
-
- if self.observance is not None:
- info += f"observance={self.observance}"
-
- repr = f"Holiday: {self.name} ({info})"
- return repr
-
- def dates(
- self, start_date, end_date, return_name: bool = False
- ) -> Series | DatetimeIndex:
- """
- Calculate holidays observed between start date and end date
-
- Parameters
- ----------
- start_date : starting date, datetime-like, optional
- end_date : ending date, datetime-like, optional
- return_name : bool, optional, default=False
- If True, return a series that has dates and holiday names.
- False will only return dates.
-
- Returns
- -------
- Series or DatetimeIndex
- Series if return_name is True
- """
- start_date = Timestamp(start_date)
- end_date = Timestamp(end_date)
-
- filter_start_date = start_date
- filter_end_date = end_date
-
- if self.year is not None:
- dt = Timestamp(datetime(self.year, self.month, self.day))
- dti = DatetimeIndex([dt])
- if return_name:
- return Series(self.name, index=dti)
- else:
- return dti
-
- dates = self._reference_dates(start_date, end_date)
- holiday_dates = self._apply_rule(dates)
- if self.days_of_week is not None:
- holiday_dates = holiday_dates[
- np.isin(
- # error: "DatetimeIndex" has no attribute "dayofweek"
- holiday_dates.dayofweek, # type: ignore[attr-defined]
- self.days_of_week,
- ).ravel()
- ]
-
- if self.start_date is not None:
- filter_start_date = max(
- self.start_date.tz_localize(filter_start_date.tz), filter_start_date
- )
- if self.end_date is not None:
- filter_end_date = min(
- self.end_date.tz_localize(filter_end_date.tz), filter_end_date
- )
- holiday_dates = holiday_dates[
- (holiday_dates >= filter_start_date) & (holiday_dates <= filter_end_date)
- ]
- if return_name:
- return Series(self.name, index=holiday_dates)
- return holiday_dates
-
- def _reference_dates(
- self, start_date: Timestamp, end_date: Timestamp
- ) -> DatetimeIndex:
- """
- Get reference dates for the holiday.
-
- Return reference dates for the holiday also returning the year
- prior to the start_date and year following the end_date. This ensures
- that any offsets to be applied will yield the holidays within
- the passed in dates.
- """
- if self.start_date is not None:
- start_date = self.start_date.tz_localize(start_date.tz)
-
- if self.end_date is not None:
- end_date = self.end_date.tz_localize(start_date.tz)
-
- year_offset = DateOffset(years=1)
- reference_start_date = Timestamp(
- datetime(start_date.year - 1, self.month, self.day)
- )
-
- reference_end_date = Timestamp(
- datetime(end_date.year + 1, self.month, self.day)
- )
- # Don't process unnecessary holidays
- dates = date_range(
- start=reference_start_date,
- end=reference_end_date,
- freq=year_offset,
- tz=start_date.tz,
- )
-
- return dates
-
- def _apply_rule(self, dates: DatetimeIndex) -> DatetimeIndex:
- """
- Apply the given offset/observance to a DatetimeIndex of dates.
-
- Parameters
- ----------
- dates : DatetimeIndex
- Dates to apply the given offset/observance rule
-
- Returns
- -------
- Dates with rules applied
- """
- if dates.empty:
- return DatetimeIndex([])
-
- if self.observance is not None:
- return dates.map(lambda d: self.observance(d))
-
- if self.offset is not None:
- if not isinstance(self.offset, list):
- offsets = [self.offset]
- else:
- offsets = self.offset
- for offset in offsets:
- # if we are adding a non-vectorized value
- # ignore the PerformanceWarnings:
- with warnings.catch_warnings():
- warnings.simplefilter("ignore", PerformanceWarning)
- dates += offset
- return dates
-
-
-holiday_calendars = {}
-
-
-def register(cls) -> None:
- try:
- name = cls.name
- except AttributeError:
- name = cls.__name__
- holiday_calendars[name] = cls
-
-
-def get_calendar(name: str):
- """
- Return an instance of a calendar based on its name.
-
- Parameters
- ----------
- name : str
- Calendar name to return an instance of
- """
- return holiday_calendars[name]()
-
-
-class HolidayCalendarMetaClass(type):
- def __new__(cls, clsname: str, bases, attrs):
- calendar_class = super().__new__(cls, clsname, bases, attrs)
- register(calendar_class)
- return calendar_class
-
-
-class AbstractHolidayCalendar(metaclass=HolidayCalendarMetaClass):
- """
- Abstract interface to create holidays following certain rules.
- """
-
- rules: list[Holiday] = []
- start_date = Timestamp(datetime(1970, 1, 1))
- end_date = Timestamp(datetime(2200, 12, 31))
- _cache = None
-
- def __init__(self, name: str = "", rules=None) -> None:
- """
-        Initializes holiday object with a given set of rules. Normally
- classes just have the rules defined within them.
-
- Parameters
- ----------
- name : str
- Name of the holiday calendar, defaults to class name
- rules : array of Holiday objects
- A set of rules used to create the holidays.
- """
- super().__init__()
- if not name:
- name = type(self).__name__
- self.name = name
-
- if rules is not None:
- self.rules = rules
-
- def rule_from_name(self, name: str):
- for rule in self.rules:
- if rule.name == name:
- return rule
-
- return None
-
- def holidays(self, start=None, end=None, return_name: bool = False):
- """
-        Returns holidays between start_date and end_date
-
- Parameters
- ----------
- start : starting date, datetime-like, optional
- end : ending date, datetime-like, optional
- return_name : bool, optional
- If True, return a series that has dates and holiday names.
- False will only return a DatetimeIndex of dates.
-
- Returns
- -------
- DatetimeIndex of holidays
- """
- if self.rules is None:
- raise Exception(
- f"Holiday Calendar {self.name} does not have any rules specified"
- )
-
- if start is None:
- start = AbstractHolidayCalendar.start_date
-
- if end is None:
- end = AbstractHolidayCalendar.end_date
-
- start = Timestamp(start)
- end = Timestamp(end)
-
- # If we don't have a cache or the dates are outside the prior cache, we
- # get them again
- if self._cache is None or start < self._cache[0] or end > self._cache[1]:
- pre_holidays = [
- rule.dates(start, end, return_name=True) for rule in self.rules
- ]
- if pre_holidays:
- # error: Argument 1 to "concat" has incompatible type
- # "List[Union[Series, DatetimeIndex]]"; expected
- # "Union[Iterable[DataFrame], Mapping[, DataFrame]]"
- holidays = concat(pre_holidays) # type: ignore[arg-type]
- else:
- # error: Incompatible types in assignment (expression has type
- # "Series", variable has type "DataFrame")
- holidays = Series(
- index=DatetimeIndex([]), dtype=object
- ) # type: ignore[assignment]
-
- self._cache = (start, end, holidays.sort_index())
-
- holidays = self._cache[2]
- holidays = holidays[start:end]
-
- if return_name:
- return holidays
- else:
- return holidays.index
-
- @staticmethod
- def merge_class(base, other):
- """
- Merge holiday calendars together. The base calendar
-        will take precedence over other. The merge will be done
- based on each holiday's name.
-
- Parameters
- ----------
- base : AbstractHolidayCalendar
- instance/subclass or array of Holiday objects
- other : AbstractHolidayCalendar
- instance/subclass or array of Holiday objects
- """
- try:
- other = other.rules
- except AttributeError:
- pass
-
- if not isinstance(other, list):
- other = [other]
- other_holidays = {holiday.name: holiday for holiday in other}
-
- try:
- base = base.rules
- except AttributeError:
- pass
-
- if not isinstance(base, list):
- base = [base]
- base_holidays = {holiday.name: holiday for holiday in base}
-
- other_holidays.update(base_holidays)
- return list(other_holidays.values())
-
- def merge(self, other, inplace: bool = False):
- """
- Merge holiday calendars together. The caller's class
- rules take precedence. The merge will be done
- based on each holiday's name.
-
- Parameters
- ----------
- other : holiday calendar
- inplace : bool (default=False)
- If True set rule_table to holidays, else return array of Holidays
- """
- holidays = self.merge_class(self, other)
- if inplace:
- self.rules = holidays
- else:
- return holidays
-
-
-USMemorialDay = Holiday(
- "Memorial Day", month=5, day=31, offset=DateOffset(weekday=MO(-1))
-)
-USLaborDay = Holiday("Labor Day", month=9, day=1, offset=DateOffset(weekday=MO(1)))
-USColumbusDay = Holiday(
- "Columbus Day", month=10, day=1, offset=DateOffset(weekday=MO(2))
-)
-USThanksgivingDay = Holiday(
- "Thanksgiving Day", month=11, day=1, offset=DateOffset(weekday=TH(4))
-)
-USMartinLutherKingJr = Holiday(
- "Birthday of Martin Luther King, Jr.",
- start_date=datetime(1986, 1, 1),
- month=1,
- day=1,
- offset=DateOffset(weekday=MO(3)),
-)
-USPresidentsDay = Holiday(
- "Washington's Birthday", month=2, day=1, offset=DateOffset(weekday=MO(3))
-)
-GoodFriday = Holiday("Good Friday", month=1, day=1, offset=[Easter(), Day(-2)])
-
-EasterMonday = Holiday("Easter Monday", month=1, day=1, offset=[Easter(), Day(1)])
-
-
-class USFederalHolidayCalendar(AbstractHolidayCalendar):
- """
- US Federal Government Holiday Calendar based on rules specified by:
- https://www.opm.gov/policy-data-oversight/pay-leave/federal-holidays/
- """
-
- rules = [
- Holiday("New Year's Day", month=1, day=1, observance=nearest_workday),
- USMartinLutherKingJr,
- USPresidentsDay,
- USMemorialDay,
- Holiday(
- "Juneteenth National Independence Day",
- month=6,
- day=19,
- start_date="2021-06-18",
- observance=nearest_workday,
- ),
- Holiday("Independence Day", month=7, day=4, observance=nearest_workday),
- USLaborDay,
- USColumbusDay,
- Holiday("Veterans Day", month=11, day=11, observance=nearest_workday),
- USThanksgivingDay,
- Holiday("Christmas Day", month=12, day=25, observance=nearest_workday),
- ]
-
-
-def HolidayCalendarFactory(name: str, base, other, base_class=AbstractHolidayCalendar):
- rules = AbstractHolidayCalendar.merge_class(base, other)
- calendar_class = type(name, (base_class,), {"rules": rules, "name": name})
- return calendar_class
-
-
-__all__ = [
- "after_nearest_workday",
- "before_nearest_workday",
- "FR",
- "get_calendar",
- "HolidayCalendarFactory",
- "MO",
- "nearest_workday",
- "next_monday",
- "next_monday_or_tuesday",
- "next_workday",
- "previous_friday",
- "previous_workday",
- "register",
- "SA",
- "SU",
- "sunday_to_monday",
- "TH",
- "TU",
- "WE",
- "weekend_to_monday",
-]
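The module removed above is the same `pandas.tseries.holiday` that ships with pandas itself, so its calendars stay available from the public package. A hedged usage sketch; the `CompanyCalendar` rule is a made-up example, not something defined in the module:

```python
from pandas.tseries.holiday import (
    AbstractHolidayCalendar,
    Holiday,
    USFederalHolidayCalendar,
    nearest_workday,
)

# Built-in federal calendar: observance rules shift weekend holidays to Friday/Monday.
cal = USFederalHolidayCalendar()
print(cal.holidays(start="2021-12-01", end="2022-01-31", return_name=True))

# Custom calendar: a single hypothetical rule with a nearest-workday observance.
class CompanyCalendar(AbstractHolidayCalendar):
    rules = [Holiday("Founders Day", month=8, day=15, observance=nearest_workday)]

print(CompanyCalendar().holidays("2024-01-01", "2024-12-31"))
```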
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/utils.py
deleted file mode 100644
index 1e5857ad8afb501373e9191d8a413a1db47b09a3..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/utils.py
+++ /dev/null
@@ -1,1060 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""
-requests.utils
-~~~~~~~~~~~~~~
-
-This module provides utility functions that are used within Requests
-that are also useful for external consumption.
-"""
-
-import codecs
-import contextlib
-import io
-import os
-import re
-import socket
-import struct
-import sys
-import tempfile
-import warnings
-import zipfile
-from collections import OrderedDict
-from pip._vendor.urllib3.util import make_headers
-from pip._vendor.urllib3.util import parse_url
-
-from .__version__ import __version__
-from . import certs
-# to_native_string is unused here, but imported here for backwards compatibility
-from ._internal_utils import to_native_string
-from .compat import parse_http_list as _parse_list_header
-from .compat import (
- quote, urlparse, bytes, str, unquote, getproxies,
- proxy_bypass, urlunparse, basestring, integer_types, is_py3,
- proxy_bypass_environment, getproxies_environment, Mapping)
-from .cookies import cookiejar_from_dict
-from .structures import CaseInsensitiveDict
-from .exceptions import (
- InvalidURL, InvalidHeader, FileModeWarning, UnrewindableBodyError)
-
-NETRC_FILES = ('.netrc', '_netrc')
-
-DEFAULT_CA_BUNDLE_PATH = certs.where()
-
-DEFAULT_PORTS = {'http': 80, 'https': 443}
-
-# Ensure that ', ' is used to preserve previous delimiter behavior.
-DEFAULT_ACCEPT_ENCODING = ", ".join(
- re.split(r",\s*", make_headers(accept_encoding=True)["accept-encoding"])
-)
-
-
-if sys.platform == 'win32':
- # provide a proxy_bypass version on Windows without DNS lookups
-
- def proxy_bypass_registry(host):
- try:
- if is_py3:
- import winreg
- else:
- import _winreg as winreg
- except ImportError:
- return False
-
- try:
- internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
- r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
- # ProxyEnable could be REG_SZ or REG_DWORD, normalizing it
- proxyEnable = int(winreg.QueryValueEx(internetSettings,
- 'ProxyEnable')[0])
- # ProxyOverride is almost always a string
- proxyOverride = winreg.QueryValueEx(internetSettings,
- 'ProxyOverride')[0]
- except OSError:
- return False
- if not proxyEnable or not proxyOverride:
- return False
-
- # make a check value list from the registry entry: replace the
-            # '<local>' string by the localhost entry and the corresponding
- # canonical entry.
- proxyOverride = proxyOverride.split(';')
- # now check if we match one of the registry values.
- for test in proxyOverride:
-            if test == '<local>':
- if '.' not in host:
- return True
- test = test.replace(".", r"\.") # mask dots
- test = test.replace("*", r".*") # change glob sequence
- test = test.replace("?", r".") # change glob char
- if re.match(test, host, re.I):
- return True
- return False
-
- def proxy_bypass(host): # noqa
- """Return True, if the host should be bypassed.
-
- Checks proxy settings gathered from the environment, if specified,
- or the registry.
- """
- if getproxies_environment():
- return proxy_bypass_environment(host)
- else:
- return proxy_bypass_registry(host)
-
-
-def dict_to_sequence(d):
- """Returns an internal sequence dictionary update."""
-
- if hasattr(d, 'items'):
- d = d.items()
-
- return d
-
-
-def super_len(o):
- total_length = None
- current_position = 0
-
- if hasattr(o, '__len__'):
- total_length = len(o)
-
- elif hasattr(o, 'len'):
- total_length = o.len
-
- elif hasattr(o, 'fileno'):
- try:
- fileno = o.fileno()
- except (io.UnsupportedOperation, AttributeError):
- # AttributeError is a surprising exception, seeing as how we've just checked
- # that `hasattr(o, 'fileno')`. It happens for objects obtained via
- # `Tarfile.extractfile()`, per issue 5229.
- pass
- else:
- total_length = os.fstat(fileno).st_size
-
- # Having used fstat to determine the file length, we need to
- # confirm that this file was opened up in binary mode.
- if 'b' not in o.mode:
- warnings.warn((
- "Requests has determined the content-length for this "
- "request using the binary size of the file: however, the "
- "file has been opened in text mode (i.e. without the 'b' "
- "flag in the mode). This may lead to an incorrect "
- "content-length. In Requests 3.0, support will be removed "
- "for files in text mode."),
- FileModeWarning
- )
-
- if hasattr(o, 'tell'):
- try:
- current_position = o.tell()
- except (OSError, IOError):
- # This can happen in some weird situations, such as when the file
- # is actually a special file descriptor like stdin. In this
- # instance, we don't know what the length is, so set it to zero and
- # let requests chunk it instead.
- if total_length is not None:
- current_position = total_length
- else:
- if hasattr(o, 'seek') and total_length is None:
- # StringIO and BytesIO have seek but no usable fileno
- try:
- # seek to end of file
- o.seek(0, 2)
- total_length = o.tell()
-
- # seek back to current position to support
- # partially read file-like objects
- o.seek(current_position or 0)
- except (OSError, IOError):
- total_length = 0
-
- if total_length is None:
- total_length = 0
-
- return max(0, total_length - current_position)
-
-
-def get_netrc_auth(url, raise_errors=False):
- """Returns the Requests tuple auth for a given url from netrc."""
-
- netrc_file = os.environ.get('NETRC')
- if netrc_file is not None:
- netrc_locations = (netrc_file,)
- else:
- netrc_locations = ('~/{}'.format(f) for f in NETRC_FILES)
-
- try:
- from netrc import netrc, NetrcParseError
-
- netrc_path = None
-
- for f in netrc_locations:
- try:
- loc = os.path.expanduser(f)
- except KeyError:
- # os.path.expanduser can fail when $HOME is undefined and
- # getpwuid fails. See https://bugs.python.org/issue20164 &
- # https://github.com/psf/requests/issues/1846
- return
-
- if os.path.exists(loc):
- netrc_path = loc
- break
-
- # Abort early if there isn't one.
- if netrc_path is None:
- return
-
- ri = urlparse(url)
-
- # Strip port numbers from netloc. This weird `if...encode`` dance is
- # used for Python 3.2, which doesn't support unicode literals.
- splitstr = b':'
- if isinstance(url, str):
- splitstr = splitstr.decode('ascii')
- host = ri.netloc.split(splitstr)[0]
-
- try:
- _netrc = netrc(netrc_path).authenticators(host)
- if _netrc:
- # Return with login / password
- login_i = (0 if _netrc[0] else 1)
- return (_netrc[login_i], _netrc[2])
- except (NetrcParseError, IOError):
- # If there was a parsing error or a permissions issue reading the file,
- # we'll just skip netrc auth unless explicitly asked to raise errors.
- if raise_errors:
- raise
-
- # App Engine hackiness.
- except (ImportError, AttributeError):
- pass
-
-
-def guess_filename(obj):
- """Tries to guess the filename of the given object."""
- name = getattr(obj, 'name', None)
- if (name and isinstance(name, basestring) and name[0] != '<' and
- name[-1] != '>'):
- return os.path.basename(name)
-
-
-def extract_zipped_paths(path):
- """Replace nonexistent paths that look like they refer to a member of a zip
- archive with the location of an extracted copy of the target, or else
- just return the provided path unchanged.
- """
- if os.path.exists(path):
- # this is already a valid path, no need to do anything further
- return path
-
- # find the first valid part of the provided path and treat that as a zip archive
- # assume the rest of the path is the name of a member in the archive
- archive, member = os.path.split(path)
- while archive and not os.path.exists(archive):
- archive, prefix = os.path.split(archive)
- if not prefix:
- # If we don't check for an empty prefix after the split (in other words, archive remains unchanged after the split),
- # we _can_ end up in an infinite loop on a rare corner case affecting a small number of users
- break
- member = '/'.join([prefix, member])
-
- if not zipfile.is_zipfile(archive):
- return path
-
- zip_file = zipfile.ZipFile(archive)
- if member not in zip_file.namelist():
- return path
-
- # we have a valid zip archive and a valid member of that archive
- tmp = tempfile.gettempdir()
- extracted_path = os.path.join(tmp, member.split('/')[-1])
- if not os.path.exists(extracted_path):
-        # use read + write to avoid creating nested folders; we only want the file, and this avoids an mkdir race condition
- with atomic_open(extracted_path) as file_handler:
- file_handler.write(zip_file.read(member))
- return extracted_path
-
-
-@contextlib.contextmanager
-def atomic_open(filename):
- """Write a file to the disk in an atomic fashion"""
- replacer = os.rename if sys.version_info[0] == 2 else os.replace
- tmp_descriptor, tmp_name = tempfile.mkstemp(dir=os.path.dirname(filename))
- try:
- with os.fdopen(tmp_descriptor, 'wb') as tmp_handler:
- yield tmp_handler
- replacer(tmp_name, filename)
- except BaseException:
- os.remove(tmp_name)
- raise
-
-
-def from_key_val_list(value):
- """Take an object and test to see if it can be represented as a
- dictionary. Unless it can not be represented as such, return an
- OrderedDict, e.g.,
-
- ::
-
- >>> from_key_val_list([('key', 'val')])
- OrderedDict([('key', 'val')])
- >>> from_key_val_list('string')
- Traceback (most recent call last):
- ...
- ValueError: cannot encode objects that are not 2-tuples
- >>> from_key_val_list({'key': 'val'})
- OrderedDict([('key', 'val')])
-
- :rtype: OrderedDict
- """
- if value is None:
- return None
-
- if isinstance(value, (str, bytes, bool, int)):
- raise ValueError('cannot encode objects that are not 2-tuples')
-
- return OrderedDict(value)
-
-
-def to_key_val_list(value):
- """Take an object and test to see if it can be represented as a
- dictionary. If it can be, return a list of tuples, e.g.,
-
- ::
-
- >>> to_key_val_list([('key', 'val')])
- [('key', 'val')]
- >>> to_key_val_list({'key': 'val'})
- [('key', 'val')]
- >>> to_key_val_list('string')
- Traceback (most recent call last):
- ...
- ValueError: cannot encode objects that are not 2-tuples
-
- :rtype: list
- """
- if value is None:
- return None
-
- if isinstance(value, (str, bytes, bool, int)):
- raise ValueError('cannot encode objects that are not 2-tuples')
-
- if isinstance(value, Mapping):
- value = value.items()
-
- return list(value)
-
-
-# From mitsuhiko/werkzeug (used with permission).
-def parse_list_header(value):
- """Parse lists as described by RFC 2068 Section 2.
-
- In particular, parse comma-separated lists where the elements of
- the list may include quoted-strings. A quoted-string could
- contain a comma. A non-quoted string could have quotes in the
- middle. Quotes are removed automatically after parsing.
-
- It basically works like :func:`parse_set_header` just that items
- may appear multiple times and case sensitivity is preserved.
-
- The return value is a standard :class:`list`:
-
- >>> parse_list_header('token, "quoted value"')
- ['token', 'quoted value']
-
- To create a header from the :class:`list` again, use the
- :func:`dump_header` function.
-
- :param value: a string with a list header.
- :return: :class:`list`
- :rtype: list
- """
- result = []
- for item in _parse_list_header(value):
- if item[:1] == item[-1:] == '"':
- item = unquote_header_value(item[1:-1])
- result.append(item)
- return result
-
-
-# From mitsuhiko/werkzeug (used with permission).
-def parse_dict_header(value):
- """Parse lists of key, value pairs as described by RFC 2068 Section 2 and
- convert them into a python dict:
-
- >>> d = parse_dict_header('foo="is a fish", bar="as well"')
- >>> type(d) is dict
- True
- >>> sorted(d.items())
- [('bar', 'as well'), ('foo', 'is a fish')]
-
- If there is no value for a key it will be `None`:
-
- >>> parse_dict_header('key_without_value')
- {'key_without_value': None}
-
- To create a header from the :class:`dict` again, use the
- :func:`dump_header` function.
-
- :param value: a string with a dict header.
- :return: :class:`dict`
- :rtype: dict
- """
- result = {}
- for item in _parse_list_header(value):
- if '=' not in item:
- result[item] = None
- continue
- name, value = item.split('=', 1)
- if value[:1] == value[-1:] == '"':
- value = unquote_header_value(value[1:-1])
- result[name] = value
- return result
-
-
-# From mitsuhiko/werkzeug (used with permission).
-def unquote_header_value(value, is_filename=False):
- r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
- This does not use the real unquoting but what browsers are actually
- using for quoting.
-
- :param value: the header value to unquote.
- :rtype: str
- """
- if value and value[0] == value[-1] == '"':
- # this is not the real unquoting, but fixing this so that the
- # RFC is met will result in bugs with internet explorer and
- # probably some other browsers as well. IE for example is
- # uploading files with "C:\foo\bar.txt" as filename
- value = value[1:-1]
-
- # if this is a filename and the starting characters look like
- # a UNC path, then just return the value without quotes. Using the
- # replace sequence below on a UNC path has the effect of turning
- # the leading double slash into a single slash and then
- # _fix_ie_filename() doesn't work correctly. See #458.
- if not is_filename or value[:2] != '\\\\':
- return value.replace('\\\\', '\\').replace('\\"', '"')
- return value
-
-
-def dict_from_cookiejar(cj):
- """Returns a key/value dictionary from a CookieJar.
-
- :param cj: CookieJar object to extract cookies from.
- :rtype: dict
- """
-
- cookie_dict = {}
-
- for cookie in cj:
- cookie_dict[cookie.name] = cookie.value
-
- return cookie_dict
-
-
-def add_dict_to_cookiejar(cj, cookie_dict):
- """Returns a CookieJar from a key/value dictionary.
-
- :param cj: CookieJar to insert cookies into.
- :param cookie_dict: Dict of key/values to insert into CookieJar.
- :rtype: CookieJar
- """
-
- return cookiejar_from_dict(cookie_dict, cj)
-
-
-def get_encodings_from_content(content):
- """Returns encodings from given content string.
-
- :param content: bytestring to extract encodings from.
- """
- warnings.warn((
- 'In requests 3.0, get_encodings_from_content will be removed. For '
- 'more information, please see the discussion on issue #2266. (This'
- ' warning should only appear once.)'),
- DeprecationWarning)
-
-    charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
-    pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
- xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')
-
- return (charset_re.findall(content) +
- pragma_re.findall(content) +
- xml_re.findall(content))
-
-
-def _parse_content_type_header(header):
- """Returns content type and parameters from given header
-
- :param header: string
- :return: tuple containing content type and dictionary of
- parameters
- """
-
- tokens = header.split(';')
- content_type, params = tokens[0].strip(), tokens[1:]
- params_dict = {}
- items_to_strip = "\"' "
-
- for param in params:
- param = param.strip()
- if param:
- key, value = param, True
- index_of_equals = param.find("=")
- if index_of_equals != -1:
- key = param[:index_of_equals].strip(items_to_strip)
- value = param[index_of_equals + 1:].strip(items_to_strip)
- params_dict[key.lower()] = value
- return content_type, params_dict
-
-
-def get_encoding_from_headers(headers):
- """Returns encodings from given HTTP Header Dict.
-
- :param headers: dictionary to extract encoding from.
- :rtype: str
- """
-
- content_type = headers.get('content-type')
-
- if not content_type:
- return None
-
- content_type, params = _parse_content_type_header(content_type)
-
- if 'charset' in params:
- return params['charset'].strip("'\"")
-
- if 'text' in content_type:
- return 'ISO-8859-1'
-
- if 'application/json' in content_type:
- # Assume UTF-8 based on RFC 4627: https://www.ietf.org/rfc/rfc4627.txt since the charset was unset
- return 'utf-8'
-
-
-def stream_decode_response_unicode(iterator, r):
- """Stream decodes a iterator."""
-
- if r.encoding is None:
- for item in iterator:
- yield item
- return
-
- decoder = codecs.getincrementaldecoder(r.encoding)(errors='replace')
- for chunk in iterator:
- rv = decoder.decode(chunk)
- if rv:
- yield rv
- rv = decoder.decode(b'', final=True)
- if rv:
- yield rv
-
-
-def iter_slices(string, slice_length):
- """Iterate over slices of a string."""
- pos = 0
- if slice_length is None or slice_length <= 0:
- slice_length = len(string)
- while pos < len(string):
- yield string[pos:pos + slice_length]
- pos += slice_length
-
-
-def get_unicode_from_response(r):
- """Returns the requested content back in unicode.
-
- :param r: Response object to get unicode content from.
-
- Tried:
-
- 1. charset from content-type
- 2. fall back and replace all unicode characters
-
- :rtype: str
- """
- warnings.warn((
- 'In requests 3.0, get_unicode_from_response will be removed. For '
- 'more information, please see the discussion on issue #2266. (This'
- ' warning should only appear once.)'),
- DeprecationWarning)
-
- tried_encodings = []
-
- # Try charset from content-type
- encoding = get_encoding_from_headers(r.headers)
-
- if encoding:
- try:
- return str(r.content, encoding)
- except UnicodeError:
- tried_encodings.append(encoding)
-
- # Fall back:
- try:
- return str(r.content, encoding, errors='replace')
- except TypeError:
- return r.content
-
-
-# The unreserved URI characters (RFC 3986)
-UNRESERVED_SET = frozenset(
- "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" + "0123456789-._~")
-
-
-def unquote_unreserved(uri):
- """Un-escape any percent-escape sequences in a URI that are unreserved
- characters. This leaves all reserved, illegal and non-ASCII bytes encoded.
-
- :rtype: str
- """
- parts = uri.split('%')
- for i in range(1, len(parts)):
- h = parts[i][0:2]
- if len(h) == 2 and h.isalnum():
- try:
- c = chr(int(h, 16))
- except ValueError:
- raise InvalidURL("Invalid percent-escape sequence: '%s'" % h)
-
- if c in UNRESERVED_SET:
- parts[i] = c + parts[i][2:]
- else:
- parts[i] = '%' + parts[i]
- else:
- parts[i] = '%' + parts[i]
- return ''.join(parts)
-
-
-def requote_uri(uri):
- """Re-quote the given URI.
-
- This function passes the given URI through an unquote/quote cycle to
- ensure that it is fully and consistently quoted.
-
- :rtype: str
- """
- safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
- safe_without_percent = "!#$&'()*+,/:;=?@[]~"
- try:
- # Unquote only the unreserved characters
- # Then quote only illegal characters (do not quote reserved,
- # unreserved, or '%')
- return quote(unquote_unreserved(uri), safe=safe_with_percent)
- except InvalidURL:
- # We couldn't unquote the given URI, so let's try quoting it, but
- # there may be unquoted '%'s in the URI. We need to make sure they're
- # properly quoted so they do not cause issues elsewhere.
- return quote(uri, safe=safe_without_percent)
-
-
-def address_in_network(ip, net):
- """This function allows you to check if an IP belongs to a network subnet
-
- Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24
- returns False if ip = 192.168.1.1 and net = 192.168.100.0/24
-
- :rtype: bool
- """
- ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0]
- netaddr, bits = net.split('/')
- netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0]
- network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask
- return (ipaddr & netmask) == (network & netmask)
-
-
-def dotted_netmask(mask):
- """Converts mask from /xx format to xxx.xxx.xxx.xxx
-
- Example: if mask is 24 function returns 255.255.255.0
-
- :rtype: str
- """
- bits = 0xffffffff ^ (1 << 32 - mask) - 1
- return socket.inet_ntoa(struct.pack('>I', bits))
-
-
-def is_ipv4_address(string_ip):
- """
- :rtype: bool
- """
- try:
- socket.inet_aton(string_ip)
- except socket.error:
- return False
- return True
-
-
-def is_valid_cidr(string_network):
- """
- Very simple check of the cidr format in no_proxy variable.
-
- :rtype: bool
- """
- if string_network.count('/') == 1:
- try:
- mask = int(string_network.split('/')[1])
- except ValueError:
- return False
-
- if mask < 1 or mask > 32:
- return False
-
- try:
- socket.inet_aton(string_network.split('/')[0])
- except socket.error:
- return False
- else:
- return False
- return True
-
-
-@contextlib.contextmanager
-def set_environ(env_name, value):
- """Set the environment variable 'env_name' to 'value'
-
- Save previous value, yield, and then restore the previous value stored in
- the environment variable 'env_name'.
-
- If 'value' is None, do nothing"""
- value_changed = value is not None
- if value_changed:
- old_value = os.environ.get(env_name)
- os.environ[env_name] = value
- try:
- yield
- finally:
- if value_changed:
- if old_value is None:
- del os.environ[env_name]
- else:
- os.environ[env_name] = old_value
-
-
-def should_bypass_proxies(url, no_proxy):
- """
- Returns whether we should bypass proxies or not.
-
- :rtype: bool
- """
- # Prioritize lowercase environment variables over uppercase
- # to keep a consistent behaviour with other http projects (curl, wget).
- get_proxy = lambda k: os.environ.get(k) or os.environ.get(k.upper())
-
- # First check whether no_proxy is defined. If it is, check that the URL
- # we're getting isn't in the no_proxy list.
- no_proxy_arg = no_proxy
- if no_proxy is None:
- no_proxy = get_proxy('no_proxy')
- parsed = urlparse(url)
-
- if parsed.hostname is None:
- # URLs don't always have hostnames, e.g. file:/// urls.
- return True
-
- if no_proxy:
- # We need to check whether we match here. We need to see if we match
- # the end of the hostname, both with and without the port.
- no_proxy = (
- host for host in no_proxy.replace(' ', '').split(',') if host
- )
-
- if is_ipv4_address(parsed.hostname):
- for proxy_ip in no_proxy:
- if is_valid_cidr(proxy_ip):
- if address_in_network(parsed.hostname, proxy_ip):
- return True
- elif parsed.hostname == proxy_ip:
- # If no_proxy ip was defined in plain IP notation instead of cidr notation &
- # matches the IP of the index
- return True
- else:
- host_with_port = parsed.hostname
- if parsed.port:
- host_with_port += ':{}'.format(parsed.port)
-
- for host in no_proxy:
- if parsed.hostname.endswith(host) or host_with_port.endswith(host):
- # The URL does match something in no_proxy, so we don't want
- # to apply the proxies on this URL.
- return True
-
- with set_environ('no_proxy', no_proxy_arg):
- # parsed.hostname can be `None` in cases such as a file URI.
- try:
- bypass = proxy_bypass(parsed.hostname)
- except (TypeError, socket.gaierror):
- bypass = False
-
- if bypass:
- return True
-
- return False
-
-
-def get_environ_proxies(url, no_proxy=None):
- """
- Return a dict of environment proxies.
-
- :rtype: dict
- """
- if should_bypass_proxies(url, no_proxy=no_proxy):
- return {}
- else:
- return getproxies()
-
-
-def select_proxy(url, proxies):
- """Select a proxy for the url, if applicable.
-
-    :param url: The url of the request
- :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
- """
- proxies = proxies or {}
- urlparts = urlparse(url)
- if urlparts.hostname is None:
- return proxies.get(urlparts.scheme, proxies.get('all'))
-
- proxy_keys = [
- urlparts.scheme + '://' + urlparts.hostname,
- urlparts.scheme,
- 'all://' + urlparts.hostname,
- 'all',
- ]
- proxy = None
- for proxy_key in proxy_keys:
- if proxy_key in proxies:
- proxy = proxies[proxy_key]
- break
-
- return proxy
-
-
-def resolve_proxies(request, proxies, trust_env=True):
- """This method takes proxy information from a request and configuration
- input to resolve a mapping of target proxies. This will consider settings
- such a NO_PROXY to strip proxy configurations.
-
- :param request: Request or PreparedRequest
- :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
- :param trust_env: Boolean declaring whether to trust environment configs
-
- :rtype: dict
- """
- proxies = proxies if proxies is not None else {}
- url = request.url
- scheme = urlparse(url).scheme
- no_proxy = proxies.get('no_proxy')
- new_proxies = proxies.copy()
-
- if trust_env and not should_bypass_proxies(url, no_proxy=no_proxy):
- environ_proxies = get_environ_proxies(url, no_proxy=no_proxy)
-
- proxy = environ_proxies.get(scheme, environ_proxies.get('all'))
-
- if proxy:
- new_proxies.setdefault(scheme, proxy)
- return new_proxies
-
-
-def default_user_agent(name="python-requests"):
- """
- Return a string representing the default user agent.
-
- :rtype: str
- """
- return '%s/%s' % (name, __version__)
-
-
-def default_headers():
- """
- :rtype: requests.structures.CaseInsensitiveDict
- """
- return CaseInsensitiveDict({
- 'User-Agent': default_user_agent(),
- 'Accept-Encoding': DEFAULT_ACCEPT_ENCODING,
- 'Accept': '*/*',
- 'Connection': 'keep-alive',
- })
-
-
-def parse_header_links(value):
- """Return a list of parsed link headers proxies.
-
-    i.e. Link: <http:/.../front.jpeg>; rel=front; type="image/jpeg",<http://.../back.jpeg>; rel=back;type="image/jpeg"
-
- :rtype: list
- """
-
- links = []
-
- replace_chars = ' \'"'
-
- value = value.strip(replace_chars)
- if not value:
- return links
-
- for val in re.split(', *<', value):
- try:
- url, params = val.split(';', 1)
- except ValueError:
- url, params = val, ''
-
- link = {'url': url.strip('<> \'"')}
-
- for param in params.split(';'):
- try:
- key, value = param.split('=')
- except ValueError:
- break
-
- link[key.strip(replace_chars)] = value.strip(replace_chars)
-
- links.append(link)
-
- return links
-
-
-# Null bytes; no need to recreate these on each call to guess_json_utf
-_null = '\x00'.encode('ascii') # encoding to ASCII for Python 3
-_null2 = _null * 2
-_null3 = _null * 3
-
-
-def guess_json_utf(data):
- """
- :rtype: str
- """
-    # JSON always starts with two ASCII characters, so detection is as
-    # easy as counting the nulls and, from their location and count,
-    # determining the encoding. Also detect a BOM, if present.
- sample = data[:4]
- if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
- return 'utf-32' # BOM included
- if sample[:3] == codecs.BOM_UTF8:
- return 'utf-8-sig' # BOM included, MS style (discouraged)
- if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
- return 'utf-16' # BOM included
- nullcount = sample.count(_null)
- if nullcount == 0:
- return 'utf-8'
- if nullcount == 2:
- if sample[::2] == _null2: # 1st and 3rd are null
- return 'utf-16-be'
- if sample[1::2] == _null2: # 2nd and 4th are null
- return 'utf-16-le'
- # Did not detect 2 valid UTF-16 ascii-range characters
- if nullcount == 3:
- if sample[:3] == _null3:
- return 'utf-32-be'
- if sample[1:] == _null3:
- return 'utf-32-le'
- # Did not detect a valid UTF-32 ascii-range character
- return None
-
-
-def prepend_scheme_if_needed(url, new_scheme):
- """Given a URL that may or may not have a scheme, prepend the given scheme.
- Does not replace a present scheme with the one provided as an argument.
-
- :rtype: str
- """
- parsed = parse_url(url)
- scheme, auth, host, port, path, query, fragment = parsed
-
- # A defect in urlparse determines that there isn't a netloc present in some
- # urls. We previously assumed parsing was overly cautious, and swapped the
- # netloc and path. Due to a lack of tests on the original defect, this is
- # maintained with parse_url for backwards compatibility.
- netloc = parsed.netloc
- if not netloc:
- netloc, path = path, netloc
-
- if auth:
- # parse_url doesn't provide the netloc with auth
- # so we'll add it ourselves.
- netloc = '@'.join([auth, netloc])
- if scheme is None:
- scheme = new_scheme
- if path is None:
- path = ''
-
- return urlunparse((scheme, netloc, path, '', query, fragment))
-
-
-def get_auth_from_url(url):
- """Given a url with authentication components, extract them into a tuple of
- username,password.
-
- :rtype: (str,str)
- """
- parsed = urlparse(url)
-
- try:
- auth = (unquote(parsed.username), unquote(parsed.password))
- except (AttributeError, TypeError):
- auth = ('', '')
-
- return auth
-
-
-# Moved outside of function to avoid recompile every call
-_CLEAN_HEADER_REGEX_BYTE = re.compile(b'^\\S[^\\r\\n]*$|^$')
-_CLEAN_HEADER_REGEX_STR = re.compile(r'^\S[^\r\n]*$|^$')
-
-
-def check_header_validity(header):
- """Verifies that header value is a string which doesn't contain
- leading whitespace or return characters. This prevents unintended
- header injection.
-
- :param header: tuple, in the format (name, value).
- """
- name, value = header
-
- if isinstance(value, bytes):
- pat = _CLEAN_HEADER_REGEX_BYTE
- else:
- pat = _CLEAN_HEADER_REGEX_STR
- try:
- if not pat.match(value):
- raise InvalidHeader("Invalid return character or leading space in header: %s" % name)
- except TypeError:
- raise InvalidHeader("Value for header {%s: %s} must be of type str or "
- "bytes, not %s" % (name, value, type(value)))
-
-
-def urldefragauth(url):
- """
- Given a url remove the fragment and the authentication part.
-
- :rtype: str
- """
- scheme, netloc, path, params, query, fragment = urlparse(url)
-
- # see func:`prepend_scheme_if_needed`
- if not netloc:
- netloc, path = path, netloc
-
- netloc = netloc.rsplit('@', 1)[-1]
-
- return urlunparse((scheme, netloc, path, params, query, ''))
-
-
-def rewind_body(prepared_request):
- """Move file pointer back to its recorded starting position
- so it can be read again on redirect.
- """
- body_seek = getattr(prepared_request.body, 'seek', None)
- if body_seek is not None and isinstance(prepared_request._body_position, integer_types):
- try:
- body_seek(prepared_request._body_position)
- except (IOError, OSError):
- raise UnrewindableBodyError("An error occurred when rewinding request "
- "body for redirect.")
- else:
- raise UnrewindableBodyError("Unable to rewind request body for redirect.")
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/special.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/special.py
deleted file mode 100644
index 45565ac0fd2ca323dcb73439ef0622d789609f34..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/special.py
+++ /dev/null
@@ -1,116 +0,0 @@
-"""
- pygments.lexers.special
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Special lexers.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import ast
-
-from pygments.lexer import Lexer, line_re
-from pygments.token import Token, Error, Text, Generic
-from pygments.util import get_choice_opt
-
-
-__all__ = ['TextLexer', 'OutputLexer', 'RawTokenLexer']
-
-
-class TextLexer(Lexer):
- """
- "Null" lexer, doesn't highlight anything.
- """
- name = 'Text only'
- aliases = ['text']
- filenames = ['*.txt']
- mimetypes = ['text/plain']
- priority = 0.01
-
- def get_tokens_unprocessed(self, text):
- yield 0, Text, text
-
- def analyse_text(text):
- return TextLexer.priority
-
-
-class OutputLexer(Lexer):
- """
- Simple lexer that highlights everything as ``Token.Generic.Output``.
-
- .. versionadded:: 2.10
- """
- name = 'Text output'
- aliases = ['output']
-
- def get_tokens_unprocessed(self, text):
- yield 0, Generic.Output, text
-
-
-_ttype_cache = {}
-
-
-class RawTokenLexer(Lexer):
- """
- Recreate a token stream formatted with the `RawTokenFormatter`.
-
- Additional options accepted:
-
- `compress`
- If set to ``"gz"`` or ``"bz2"``, decompress the token stream with
- the given compression algorithm before lexing (default: ``""``).
- """
- name = 'Raw token data'
- aliases = []
- filenames = []
- mimetypes = ['application/x-pygments-tokens']
-
- def __init__(self, **options):
- self.compress = get_choice_opt(options, 'compress',
- ['', 'none', 'gz', 'bz2'], '')
- Lexer.__init__(self, **options)
-
- def get_tokens(self, text):
- if self.compress:
- if isinstance(text, str):
- text = text.encode('latin1')
- try:
- if self.compress == 'gz':
- import gzip
- text = gzip.decompress(text)
- elif self.compress == 'bz2':
- import bz2
- text = bz2.decompress(text)
- except OSError:
- yield Error, text.decode('latin1')
- if isinstance(text, bytes):
- text = text.decode('latin1')
-
- # do not call Lexer.get_tokens() because stripping is not optional.
- text = text.strip('\n') + '\n'
- for i, t, v in self.get_tokens_unprocessed(text):
- yield t, v
-
- def get_tokens_unprocessed(self, text):
- length = 0
- for match in line_re.finditer(text):
- try:
- ttypestr, val = match.group().rstrip().split('\t', 1)
- ttype = _ttype_cache.get(ttypestr)
- if not ttype:
- ttype = Token
- ttypes = ttypestr.split('.')[1:]
- for ttype_ in ttypes:
- if not ttype_ or not ttype_[0].isupper():
- raise ValueError('malformed token name')
- ttype = getattr(ttype, ttype_)
- _ttype_cache[ttypestr] = ttype
- val = ast.literal_eval(val)
- if not isinstance(val, str):
- raise ValueError('expected str')
- except (SyntaxError, ValueError):
- val = match.group()
- ttype = Error
- yield length, ttype, val
- length += len(val)
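`RawTokenLexer` above exists to re-read token dumps produced by Pygments' `RawTokenFormatter`; a minimal round-trip sketch using the public pygments API:

```python
from pygments import highlight
from pygments.formatters import RawTokenFormatter, TerminalFormatter
from pygments.lexers import PythonLexer
from pygments.lexers.special import RawTokenLexer

source = "print('hello')\n"

# Dump the token stream as tab-separated "<token type>\t<repr of value>" lines (bytes).
raw = highlight(source, PythonLexer(), RawTokenFormatter())

# Feed the dump back through RawTokenLexer: token types are rebuilt from their names,
# so the original highlighting survives the round trip.
print(highlight(raw.decode("latin1"), RawTokenLexer(), TerminalFormatter()))
```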
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/console.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/console.py
deleted file mode 100644
index 1232cd5ce7c6b3db62fc3b1fe92b0e86cc7f6ae9..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/console.py
+++ /dev/null
@@ -1,2633 +0,0 @@
-import inspect
-import os
-import platform
-import sys
-import threading
-import zlib
-from abc import ABC, abstractmethod
-from dataclasses import dataclass, field
-from datetime import datetime
-from functools import wraps
-from getpass import getpass
-from html import escape
-from inspect import isclass
-from itertools import islice
-from math import ceil
-from time import monotonic
-from types import FrameType, ModuleType, TracebackType
-from typing import (
- IO,
- TYPE_CHECKING,
- Any,
- Callable,
- Dict,
- Iterable,
- List,
- Mapping,
- NamedTuple,
- Optional,
- TextIO,
- Tuple,
- Type,
- Union,
- cast,
-)
-
-from rich._null_file import NULL_FILE
-
-if sys.version_info >= (3, 8):
- from typing import Literal, Protocol, runtime_checkable
-else:
- from typing_extensions import (
- Literal,
- Protocol,
- runtime_checkable,
- ) # pragma: no cover
-
-from . import errors, themes
-from ._emoji_replace import _emoji_replace
-from ._export_format import CONSOLE_HTML_FORMAT, CONSOLE_SVG_FORMAT
-from ._fileno import get_fileno
-from ._log_render import FormatTimeCallable, LogRender
-from .align import Align, AlignMethod
-from .color import ColorSystem, blend_rgb
-from .control import Control
-from .emoji import EmojiVariant
-from .highlighter import NullHighlighter, ReprHighlighter
-from .markup import render as render_markup
-from .measure import Measurement, measure_renderables
-from .pager import Pager, SystemPager
-from .pretty import Pretty, is_expandable
-from .protocol import rich_cast
-from .region import Region
-from .scope import render_scope
-from .screen import Screen
-from .segment import Segment
-from .style import Style, StyleType
-from .styled import Styled
-from .terminal_theme import DEFAULT_TERMINAL_THEME, SVG_EXPORT_THEME, TerminalTheme
-from .text import Text, TextType
-from .theme import Theme, ThemeStack
-
-if TYPE_CHECKING:
- from ._windows import WindowsConsoleFeatures
- from .live import Live
- from .status import Status
-
-JUPYTER_DEFAULT_COLUMNS = 115
-JUPYTER_DEFAULT_LINES = 100
-WINDOWS = platform.system() == "Windows"
-
-HighlighterType = Callable[[Union[str, "Text"]], "Text"]
-JustifyMethod = Literal["default", "left", "center", "right", "full"]
-OverflowMethod = Literal["fold", "crop", "ellipsis", "ignore"]
-
-
-class NoChange:
- pass
-
-
-NO_CHANGE = NoChange()
-
-try:
- _STDIN_FILENO = sys.__stdin__.fileno()
-except Exception:
- _STDIN_FILENO = 0
-try:
- _STDOUT_FILENO = sys.__stdout__.fileno()
-except Exception:
- _STDOUT_FILENO = 1
-try:
- _STDERR_FILENO = sys.__stderr__.fileno()
-except Exception:
- _STDERR_FILENO = 2
-
-_STD_STREAMS = (_STDIN_FILENO, _STDOUT_FILENO, _STDERR_FILENO)
-_STD_STREAMS_OUTPUT = (_STDOUT_FILENO, _STDERR_FILENO)
-
-
-_TERM_COLORS = {
- "kitty": ColorSystem.EIGHT_BIT,
- "256color": ColorSystem.EIGHT_BIT,
- "16color": ColorSystem.STANDARD,
-}
-
-
-class ConsoleDimensions(NamedTuple):
- """Size of the terminal."""
-
- width: int
- """The width of the console in 'cells'."""
- height: int
- """The height of the console in lines."""
-
-
-@dataclass
-class ConsoleOptions:
- """Options for __rich_console__ method."""
-
- size: ConsoleDimensions
- """Size of console."""
- legacy_windows: bool
- """legacy_windows: flag for legacy windows."""
- min_width: int
- """Minimum width of renderable."""
- max_width: int
- """Maximum width of renderable."""
- is_terminal: bool
- """True if the target is a terminal, otherwise False."""
- encoding: str
- """Encoding of terminal."""
- max_height: int
- """Height of container (starts as terminal)"""
- justify: Optional[JustifyMethod] = None
- """Justify value override for renderable."""
- overflow: Optional[OverflowMethod] = None
- """Overflow value override for renderable."""
- no_wrap: Optional[bool] = False
- """Disable wrapping for text."""
- highlight: Optional[bool] = None
- """Highlight override for render_str."""
- markup: Optional[bool] = None
- """Enable markup when rendering strings."""
- height: Optional[int] = None
-
- @property
- def ascii_only(self) -> bool:
- """Check if renderables should use ascii only."""
- return not self.encoding.startswith("utf")
-
- def copy(self) -> "ConsoleOptions":
- """Return a copy of the options.
-
- Returns:
- ConsoleOptions: a copy of self.
- """
- options: ConsoleOptions = ConsoleOptions.__new__(ConsoleOptions)
- options.__dict__ = self.__dict__.copy()
- return options
-
- def update(
- self,
- *,
- width: Union[int, NoChange] = NO_CHANGE,
- min_width: Union[int, NoChange] = NO_CHANGE,
- max_width: Union[int, NoChange] = NO_CHANGE,
- justify: Union[Optional[JustifyMethod], NoChange] = NO_CHANGE,
- overflow: Union[Optional[OverflowMethod], NoChange] = NO_CHANGE,
- no_wrap: Union[Optional[bool], NoChange] = NO_CHANGE,
- highlight: Union[Optional[bool], NoChange] = NO_CHANGE,
- markup: Union[Optional[bool], NoChange] = NO_CHANGE,
- height: Union[Optional[int], NoChange] = NO_CHANGE,
- ) -> "ConsoleOptions":
- """Update values, return a copy."""
- options = self.copy()
- if not isinstance(width, NoChange):
- options.min_width = options.max_width = max(0, width)
- if not isinstance(min_width, NoChange):
- options.min_width = min_width
- if not isinstance(max_width, NoChange):
- options.max_width = max_width
- if not isinstance(justify, NoChange):
- options.justify = justify
- if not isinstance(overflow, NoChange):
- options.overflow = overflow
- if not isinstance(no_wrap, NoChange):
- options.no_wrap = no_wrap
- if not isinstance(highlight, NoChange):
- options.highlight = highlight
- if not isinstance(markup, NoChange):
- options.markup = markup
- if not isinstance(height, NoChange):
- if height is not None:
- options.max_height = height
- options.height = None if height is None else max(0, height)
- return options
-
- def update_width(self, width: int) -> "ConsoleOptions":
- """Update just the width, return a copy.
-
- Args:
- width (int): New width (sets both min_width and max_width)
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.min_width = options.max_width = max(0, width)
- return options
-
- def update_height(self, height: int) -> "ConsoleOptions":
- """Update the height, and return a copy.
-
- Args:
- height (int): New height
-
- Returns:
- ~ConsoleOptions: New Console options instance.
- """
- options = self.copy()
- options.max_height = options.height = height
- return options
-
- def reset_height(self) -> "ConsoleOptions":
- """Return a copy of the options with height set to ``None``.
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.height = None
- return options
-
- def update_dimensions(self, width: int, height: int) -> "ConsoleOptions":
- """Update the width and height, and return a copy.
-
- Args:
- width (int): New width (sets both min_width and max_width).
- height (int): New height.
-
- Returns:
- ~ConsoleOptions: New console options instance.
- """
- options = self.copy()
- options.min_width = options.max_width = max(0, width)
- options.height = options.max_height = height
- return options
-
-
-@runtime_checkable
-class RichCast(Protocol):
- """An object that may be 'cast' to a console renderable."""
-
- def __rich__(
- self,
- ) -> Union["ConsoleRenderable", "RichCast", str]: # pragma: no cover
- ...
-
-
-@runtime_checkable
-class ConsoleRenderable(Protocol):
- """An object that supports the console protocol."""
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult": # pragma: no cover
- ...
-
-
-# A type that may be rendered by Console.
-RenderableType = Union[ConsoleRenderable, RichCast, str]
-"""A string or any object that may be rendered by Rich."""
-
-# The result of calling a __rich_console__ method.
-RenderResult = Iterable[Union[RenderableType, Segment]]
-
-_null_highlighter = NullHighlighter()
-
-
-class CaptureError(Exception):
- """An error in the Capture context manager."""
-
-
-class NewLine:
- """A renderable to generate new line(s)"""
-
- def __init__(self, count: int = 1) -> None:
- self.count = count
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> Iterable[Segment]:
- yield Segment("\n" * self.count)
-
-
-class ScreenUpdate:
- """Render a list of lines at a given offset."""
-
- def __init__(self, lines: List[List[Segment]], x: int, y: int) -> None:
- self._lines = lines
- self.x = x
- self.y = y
-
- def __rich_console__(
- self, console: "Console", options: ConsoleOptions
- ) -> RenderResult:
- x = self.x
- move_to = Control.move_to
- for offset, line in enumerate(self._lines, self.y):
- yield move_to(x, offset)
- yield from line
-
-
-class Capture:
- """Context manager to capture the result of printing to the console.
- See :meth:`~rich.console.Console.capture` for how to use.
-
- Args:
- console (Console): A console instance to capture output.
- """
-
- def __init__(self, console: "Console") -> None:
- self._console = console
- self._result: Optional[str] = None
-
- def __enter__(self) -> "Capture":
- self._console.begin_capture()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self._result = self._console.end_capture()
-
- def get(self) -> str:
- """Get the result of the capture."""
- if self._result is None:
- raise CaptureError(
- "Capture result is not available until context manager exits."
- )
- return self._result
-
-
-class ThemeContext:
- """A context manager to use a temporary theme. See :meth:`~rich.console.Console.use_theme` for usage."""
-
- def __init__(self, console: "Console", theme: Theme, inherit: bool = True) -> None:
- self.console = console
- self.theme = theme
- self.inherit = inherit
-
- def __enter__(self) -> "ThemeContext":
- self.console.push_theme(self.theme)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- self.console.pop_theme()
-
-
-class PagerContext:
- """A context manager that 'pages' content. See :meth:`~rich.console.Console.pager` for usage."""
-
- def __init__(
- self,
- console: "Console",
- pager: Optional[Pager] = None,
- styles: bool = False,
- links: bool = False,
- ) -> None:
- self._console = console
- self.pager = SystemPager() if pager is None else pager
- self.styles = styles
- self.links = links
-
- def __enter__(self) -> "PagerContext":
- self._console._enter_buffer()
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- if exc_type is None:
- with self._console._lock:
- buffer: List[Segment] = self._console._buffer[:]
- del self._console._buffer[:]
- segments: Iterable[Segment] = buffer
- if not self.styles:
- segments = Segment.strip_styles(segments)
- elif not self.links:
- segments = Segment.strip_links(segments)
- content = self._console._render_buffer(segments)
- self.pager.show(content)
- self._console._exit_buffer()
-
-
-class ScreenContext:
- """A context manager that enables an alternative screen. See :meth:`~rich.console.Console.screen` for usage."""
-
- def __init__(
- self, console: "Console", hide_cursor: bool, style: StyleType = ""
- ) -> None:
- self.console = console
- self.hide_cursor = hide_cursor
- self.screen = Screen(style=style)
- self._changed = False
-
- def update(
- self, *renderables: RenderableType, style: Optional[StyleType] = None
- ) -> None:
- """Update the screen.
-
- Args:
- renderables (RenderableType, optional): Optional renderable(s) to replace the current renderable,
- or leave empty for no change.
- style (StyleType, optional): Replacement style, or None for no change. Defaults to None.
- """
- if renderables:
- self.screen.renderable = (
- Group(*renderables) if len(renderables) > 1 else renderables[0]
- )
- if style is not None:
- self.screen.style = style
- self.console.print(self.screen, end="")
-
- def __enter__(self) -> "ScreenContext":
- self._changed = self.console.set_alt_screen(True)
- if self._changed and self.hide_cursor:
- self.console.show_cursor(False)
- return self
-
- def __exit__(
- self,
- exc_type: Optional[Type[BaseException]],
- exc_val: Optional[BaseException],
- exc_tb: Optional[TracebackType],
- ) -> None:
- if self._changed:
- self.console.set_alt_screen(False)
- if self.hide_cursor:
- self.console.show_cursor(True)
-
-
-class Group:
- """Takes a group of renderables and returns a renderable object that renders the group.
-
- Args:
- renderables (Iterable[RenderableType]): An iterable of renderable objects.
- fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.
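-
- Example (a minimal sketch; assumes the rich.panel module is available):
- >>> from rich.console import Console, Group
- >>> from rich.panel import Panel
- >>> console = Console()
- >>> console.print(Group(Panel("Hello"), Panel("World"), fit=True))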
- """
-
- def __init__(self, *renderables: "RenderableType", fit: bool = True) -> None:
- self._renderables = renderables
- self.fit = fit
- self._render: Optional[List[RenderableType]] = None
-
- @property
- def renderables(self) -> List["RenderableType"]:
- if self._render is None:
- self._render = list(self._renderables)
- return self._render
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- if self.fit:
- return measure_renderables(console, options, self.renderables)
- else:
- return Measurement(options.max_width, options.max_width)
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> RenderResult:
- yield from self.renderables
-
-
-def group(fit: bool = True) -> Callable[..., Callable[..., Group]]:
- """A decorator that turns an iterable of renderables in to a group.
-
- Args:
- fit (bool, optional): Fit dimension of group to contents, or fill available space. Defaults to True.
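-
- Example (a minimal sketch; the ``get_content`` function is illustrative):
- >>> from rich.console import Console, group
- >>> @group()
- ... def get_content():
- ... yield "First line"
- ... yield "[bold]Second line[/bold]"
- >>> Console().print(get_content())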
- """
-
- def decorator(
- method: Callable[..., Iterable[RenderableType]]
- ) -> Callable[..., Group]:
- """Convert a method that returns an iterable of renderables in to a Group."""
-
- @wraps(method)
- def _replace(*args: Any, **kwargs: Any) -> Group:
- renderables = method(*args, **kwargs)
- return Group(*renderables, fit=fit)
-
- return _replace
-
- return decorator
-
-
-def _is_jupyter() -> bool: # pragma: no cover
- """Check if we're running in a Jupyter notebook."""
- try:
- get_ipython # type: ignore[name-defined]
- except NameError:
- return False
- ipython = get_ipython() # type: ignore[name-defined]
- shell = ipython.__class__.__name__
- if (
- "google.colab" in str(ipython.__class__)
- or os.getenv("DATABRICKS_RUNTIME_VERSION")
- or shell == "ZMQInteractiveShell"
- ):
- return True # Jupyter notebook or qtconsole
- elif shell == "TerminalInteractiveShell":
- return False # Terminal running IPython
- else:
- return False # Other type (?)
-
-
-COLOR_SYSTEMS = {
- "standard": ColorSystem.STANDARD,
- "256": ColorSystem.EIGHT_BIT,
- "truecolor": ColorSystem.TRUECOLOR,
- "windows": ColorSystem.WINDOWS,
-}
-
-_COLOR_SYSTEMS_NAMES = {system: name for name, system in COLOR_SYSTEMS.items()}
-
-
-@dataclass
-class ConsoleThreadLocals(threading.local):
- """Thread local values for Console context."""
-
- theme_stack: ThemeStack
- buffer: List[Segment] = field(default_factory=list)
- buffer_index: int = 0
-
-
-class RenderHook(ABC):
- """Provides hooks in to the render process."""
-
- @abstractmethod
- def process_renderables(
- self, renderables: List[ConsoleRenderable]
- ) -> List[ConsoleRenderable]:
- """Called with a list of objects to render.
-
- This method can return a new list of renderables, or modify and return the same list.
-
- Args:
- renderables (List[ConsoleRenderable]): A number of renderable objects.
-
- Returns:
- List[ConsoleRenderable]: A replacement list of renderables.
- """
-
-
-_windows_console_features: Optional["WindowsConsoleFeatures"] = None
-
-
-def get_windows_console_features() -> "WindowsConsoleFeatures": # pragma: no cover
- global _windows_console_features
- if _windows_console_features is not None:
- return _windows_console_features
- from ._windows import get_windows_console_features
-
- _windows_console_features = get_windows_console_features()
- return _windows_console_features
-
-
-def detect_legacy_windows() -> bool:
- """Detect legacy Windows."""
- return WINDOWS and not get_windows_console_features().vt
-
-
-class Console:
- """A high level console interface.
-
- Args:
- color_system (str, optional): The color system supported by your terminal,
- either ``"standard"``, ``"256"`` or ``"truecolor"``. Leave as ``"auto"`` to autodetect.
- force_terminal (Optional[bool], optional): Enable/disable terminal control codes, or None to auto-detect terminal. Defaults to None.
- force_jupyter (Optional[bool], optional): Enable/disable Jupyter rendering, or None to auto-detect Jupyter. Defaults to None.
- force_interactive (Optional[bool], optional): Enable/disable interactive mode, or None to auto detect. Defaults to None.
- soft_wrap (Optional[bool], optional): Set soft wrap default on print method. Defaults to False.
- theme (Theme, optional): An optional style theme object, or ``None`` for default theme.
- stderr (bool, optional): Use stderr rather than stdout if ``file`` is not specified. Defaults to False.
- file (IO, optional): A file object where the console should write to. Defaults to stdout.
- quiet (bool, Optional): Boolean to suppress all output. Defaults to False.
- width (int, optional): The width of the terminal. Leave as default to auto-detect width.
- height (int, optional): The height of the terminal. Leave as default to auto-detect height.
- style (StyleType, optional): Style to apply to all output, or None for no style. Defaults to None.
- no_color (Optional[bool], optional): Enable no color mode, or None to auto detect. Defaults to None.
- tab_size (int, optional): Number of spaces used to replace a tab character. Defaults to 8.
- record (bool, optional): Boolean to enable recording of terminal output,
- required to call :meth:`export_html`, :meth:`export_svg`, and :meth:`export_text`. Defaults to False.
- markup (bool, optional): Boolean to enable :ref:`console_markup`. Defaults to True.
- emoji (bool, optional): Enable emoji code. Defaults to True.
- emoji_variant (str, optional): Optional emoji variant, either "text" or "emoji". Defaults to None.
- highlight (bool, optional): Enable automatic highlighting. Defaults to True.
- log_time (bool, optional): Boolean to enable logging of time by :meth:`log` methods. Defaults to True.
- log_path (bool, optional): Boolean to enable the logging of the caller by :meth:`log`. Defaults to True.
- log_time_format (Union[str, TimeFormatterCallable], optional): If ``log_time`` is enabled, either string for strftime or callable that formats the time. Defaults to "[%X]".
- highlighter (HighlighterType, optional): Default highlighter.
- legacy_windows (bool, optional): Enable legacy Windows mode, or ``None`` to auto detect. Defaults to ``None``.
- safe_box (bool, optional): Restrict box options that don't render on legacy Windows.
- get_datetime (Callable[[], datetime], optional): Callable that gets the current time as a datetime.datetime object (used by Console.log),
- or None for datetime.now.
- get_time (Callable[[], time], optional): Callable that gets the current time in seconds, default uses time.monotonic.
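-
- Example (a minimal sketch of typical construction and use):
- >>> from rich.console import Console
- >>> console = Console(width=100, record=True)
- >>> console.print("Hello, [bold magenta]World[/]!")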
- """
-
- _environ: Mapping[str, str] = os.environ
-
- def __init__(
- self,
- *,
- color_system: Optional[
- Literal["auto", "standard", "256", "truecolor", "windows"]
- ] = "auto",
- force_terminal: Optional[bool] = None,
- force_jupyter: Optional[bool] = None,
- force_interactive: Optional[bool] = None,
- soft_wrap: bool = False,
- theme: Optional[Theme] = None,
- stderr: bool = False,
- file: Optional[IO[str]] = None,
- quiet: bool = False,
- width: Optional[int] = None,
- height: Optional[int] = None,
- style: Optional[StyleType] = None,
- no_color: Optional[bool] = None,
- tab_size: int = 8,
- record: bool = False,
- markup: bool = True,
- emoji: bool = True,
- emoji_variant: Optional[EmojiVariant] = None,
- highlight: bool = True,
- log_time: bool = True,
- log_path: bool = True,
- log_time_format: Union[str, FormatTimeCallable] = "[%X]",
- highlighter: Optional["HighlighterType"] = ReprHighlighter(),
- legacy_windows: Optional[bool] = None,
- safe_box: bool = True,
- get_datetime: Optional[Callable[[], datetime]] = None,
- get_time: Optional[Callable[[], float]] = None,
- _environ: Optional[Mapping[str, str]] = None,
- ):
- # Copy of os.environ allows us to replace it for testing
- if _environ is not None:
- self._environ = _environ
-
- self.is_jupyter = _is_jupyter() if force_jupyter is None else force_jupyter
- if self.is_jupyter:
- if width is None:
- jupyter_columns = self._environ.get("JUPYTER_COLUMNS")
- if jupyter_columns is not None and jupyter_columns.isdigit():
- width = int(jupyter_columns)
- else:
- width = JUPYTER_DEFAULT_COLUMNS
- if height is None:
- jupyter_lines = self._environ.get("JUPYTER_LINES")
- if jupyter_lines is not None and jupyter_lines.isdigit():
- height = int(jupyter_lines)
- else:
- height = JUPYTER_DEFAULT_LINES
-
- self.tab_size = tab_size
- self.record = record
- self._markup = markup
- self._emoji = emoji
- self._emoji_variant: Optional[EmojiVariant] = emoji_variant
- self._highlight = highlight
- self.legacy_windows: bool = (
- (detect_legacy_windows() and not self.is_jupyter)
- if legacy_windows is None
- else legacy_windows
- )
-
- if width is None:
- columns = self._environ.get("COLUMNS")
- if columns is not None and columns.isdigit():
- width = int(columns) - self.legacy_windows
- if height is None:
- lines = self._environ.get("LINES")
- if lines is not None and lines.isdigit():
- height = int(lines)
-
- self.soft_wrap = soft_wrap
- self._width = width
- self._height = height
-
- self._color_system: Optional[ColorSystem]
-
- self._force_terminal = None
- if force_terminal is not None:
- self._force_terminal = force_terminal
-
- self._file = file
- self.quiet = quiet
- self.stderr = stderr
-
- if color_system is None:
- self._color_system = None
- elif color_system == "auto":
- self._color_system = self._detect_color_system()
- else:
- self._color_system = COLOR_SYSTEMS[color_system]
-
- self._lock = threading.RLock()
- self._log_render = LogRender(
- show_time=log_time,
- show_path=log_path,
- time_format=log_time_format,
- )
- self.highlighter: HighlighterType = highlighter or _null_highlighter
- self.safe_box = safe_box
- self.get_datetime = get_datetime or datetime.now
- self.get_time = get_time or monotonic
- self.style = style
- self.no_color = (
- no_color if no_color is not None else "NO_COLOR" in self._environ
- )
- self.is_interactive = (
- (self.is_terminal and not self.is_dumb_terminal)
- if force_interactive is None
- else force_interactive
- )
-
- self._record_buffer_lock = threading.RLock()
- self._thread_locals = ConsoleThreadLocals(
- theme_stack=ThemeStack(themes.DEFAULT if theme is None else theme)
- )
- self._record_buffer: List[Segment] = []
- self._render_hooks: List[RenderHook] = []
- self._live: Optional["Live"] = None
- self._is_alt_screen = False
-
- def __repr__(self) -> str:
- return f""
-
- @property
- def file(self) -> IO[str]:
- """Get the file object to write to."""
- file = self._file or (sys.stderr if self.stderr else sys.stdout)
- file = getattr(file, "rich_proxied_file", file)
- if file is None:
- file = NULL_FILE
- return file
-
- @file.setter
- def file(self, new_file: IO[str]) -> None:
- """Set a new file object."""
- self._file = new_file
-
- @property
- def _buffer(self) -> List[Segment]:
- """Get a thread local buffer."""
- return self._thread_locals.buffer
-
- @property
- def _buffer_index(self) -> int:
- """Get a thread local buffer."""
- return self._thread_locals.buffer_index
-
- @_buffer_index.setter
- def _buffer_index(self, value: int) -> None:
- self._thread_locals.buffer_index = value
-
- @property
- def _theme_stack(self) -> ThemeStack:
- """Get the thread local theme stack."""
- return self._thread_locals.theme_stack
-
- def _detect_color_system(self) -> Optional[ColorSystem]:
- """Detect color system from env vars."""
- if self.is_jupyter:
- return ColorSystem.TRUECOLOR
- if not self.is_terminal or self.is_dumb_terminal:
- return None
- if WINDOWS: # pragma: no cover
- if self.legacy_windows: # pragma: no cover
- return ColorSystem.WINDOWS
- windows_console_features = get_windows_console_features()
- return (
- ColorSystem.TRUECOLOR
- if windows_console_features.truecolor
- else ColorSystem.EIGHT_BIT
- )
- else:
- color_term = self._environ.get("COLORTERM", "").strip().lower()
- if color_term in ("truecolor", "24bit"):
- return ColorSystem.TRUECOLOR
- term = self._environ.get("TERM", "").strip().lower()
- _term_name, _hyphen, colors = term.rpartition("-")
- color_system = _TERM_COLORS.get(colors, ColorSystem.STANDARD)
- return color_system
-
- def _enter_buffer(self) -> None:
- """Enter in to a buffer context, and buffer all output."""
- self._buffer_index += 1
-
- def _exit_buffer(self) -> None:
- """Leave buffer context, and render content if required."""
- self._buffer_index -= 1
- self._check_buffer()
-
- def set_live(self, live: "Live") -> None:
- """Set Live instance. Used by Live context manager.
-
- Args:
- live (Live): Live instance using this Console.
-
- Raises:
- errors.LiveError: If this Console has a Live context currently active.
- """
- with self._lock:
- if self._live is not None:
- raise errors.LiveError("Only one live display may be active at once")
- self._live = live
-
- def clear_live(self) -> None:
- """Clear the Live instance."""
- with self._lock:
- self._live = None
-
- def push_render_hook(self, hook: RenderHook) -> None:
- """Add a new render hook to the stack.
-
- Args:
- hook (RenderHook): Render hook instance.
- """
- with self._lock:
- self._render_hooks.append(hook)
-
- def pop_render_hook(self) -> None:
- """Pop the last renderhook from the stack."""
- with self._lock:
- self._render_hooks.pop()
-
- def __enter__(self) -> "Console":
- """Own context manager to enter buffer context."""
- self._enter_buffer()
- return self
-
- def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
- """Exit buffer context."""
- self._exit_buffer()
-
- def begin_capture(self) -> None:
- """Begin capturing console output. Call :meth:`end_capture` to exit capture mode and return output."""
- self._enter_buffer()
-
- def end_capture(self) -> str:
- """End capture mode and return captured string.
-
- Returns:
- str: Console output.
- """
- render_result = self._render_buffer(self._buffer)
- del self._buffer[:]
- self._exit_buffer()
- return render_result
-
- def push_theme(self, theme: Theme, *, inherit: bool = True) -> None:
- """Push a new theme on to the top of the stack, replacing the styles from the previous theme.
- Generally speaking, you should call :meth:`~rich.console.Console.use_theme` to get a context manager, rather
- than calling this method directly.
-
- Args:
- theme (Theme): A theme instance.
- inherit (bool, optional): Inherit existing styles. Defaults to True.
- """
- self._theme_stack.push_theme(theme, inherit=inherit)
-
- def pop_theme(self) -> None:
- """Remove theme from top of stack, restoring previous theme."""
- self._theme_stack.pop_theme()
-
- def use_theme(self, theme: Theme, *, inherit: bool = True) -> ThemeContext:
- """Use a different theme for the duration of the context manager.
-
- Args:
- theme (Theme): Theme instance to use.
- inherit (bool, optional): Inherit existing console styles. Defaults to True.
-
- Returns:
- ThemeContext: A context manager that applies the theme on enter and restores the previous theme on exit.
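-
- Example (a minimal sketch; assumes an existing ``console`` instance, and the ``"warning"`` style name is illustrative):
- >>> from rich.theme import Theme
- >>> with console.use_theme(Theme({"warning": "bold red"})):
- ... console.print("Look out!", style="warning")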
- """
- return ThemeContext(self, theme, inherit)
-
- @property
- def color_system(self) -> Optional[str]:
- """Get color system string.
-
- Returns:
- Optional[str]: "standard", "256", "truecolor" or "windows", or ``None`` if the color system is disabled.
- """
-
- if self._color_system is not None:
- return _COLOR_SYSTEMS_NAMES[self._color_system]
- else:
- return None
-
- @property
- def encoding(self) -> str:
- """Get the encoding of the console file, e.g. ``"utf-8"``.
-
- Returns:
- str: A standard encoding string.
- """
- return (getattr(self.file, "encoding", "utf-8") or "utf-8").lower()
-
- @property
- def is_terminal(self) -> bool:
- """Check if the console is writing to a terminal.
-
- Returns:
- bool: True if the console writing to a device capable of
- understanding terminal codes, otherwise False.
- """
- if self._force_terminal is not None:
- return self._force_terminal
-
- if hasattr(sys.stdin, "__module__") and sys.stdin.__module__.startswith(
- "idlelib"
- ):
- # Return False for Idle which claims to be a tty but can't handle ansi codes
- return False
-
- if self.is_jupyter:
- # return False for Jupyter, which may have FORCE_COLOR set
- return False
-
- # If FORCE_COLOR env var has any value at all, we assume a terminal.
- force_color = self._environ.get("FORCE_COLOR")
- if force_color is not None:
- self._force_terminal = True
- return True
-
- isatty: Optional[Callable[[], bool]] = getattr(self.file, "isatty", None)
- try:
- return False if isatty is None else isatty()
- except ValueError:
- # in some situation (at the end of a pytest run for example) isatty() can raise
- # ValueError: I/O operation on closed file
- # return False because we aren't in a terminal anymore
- return False
-
- @property
- def is_dumb_terminal(self) -> bool:
- """Detect dumb terminal.
-
- Returns:
- bool: True if writing to a dumb terminal, otherwise False.
-
- """
- _term = self._environ.get("TERM", "")
- is_dumb = _term.lower() in ("dumb", "unknown")
- return self.is_terminal and is_dumb
-
- @property
- def options(self) -> ConsoleOptions:
- """Get default console options."""
- return ConsoleOptions(
- max_height=self.size.height,
- size=self.size,
- legacy_windows=self.legacy_windows,
- min_width=1,
- max_width=self.width,
- encoding=self.encoding,
- is_terminal=self.is_terminal,
- )
-
- @property
- def size(self) -> ConsoleDimensions:
- """Get the size of the console.
-
- Returns:
- ConsoleDimensions: A named tuple containing the dimensions.
- """
-
- if self._width is not None and self._height is not None:
- return ConsoleDimensions(self._width - self.legacy_windows, self._height)
-
- if self.is_dumb_terminal:
- return ConsoleDimensions(80, 25)
-
- width: Optional[int] = None
- height: Optional[int] = None
-
- if WINDOWS: # pragma: no cover
- try:
- width, height = os.get_terminal_size()
- except (AttributeError, ValueError, OSError): # Probably not a terminal
- pass
- else:
- for file_descriptor in _STD_STREAMS:
- try:
- width, height = os.get_terminal_size(file_descriptor)
- except (AttributeError, ValueError, OSError):
- pass
- else:
- break
-
- columns = self._environ.get("COLUMNS")
- if columns is not None and columns.isdigit():
- width = int(columns)
- lines = self._environ.get("LINES")
- if lines is not None and lines.isdigit():
- height = int(lines)
-
- # get_terminal_size can report 0, 0 if run from pseudo-terminal
- width = width or 80
- height = height or 25
- return ConsoleDimensions(
- width - self.legacy_windows if self._width is None else self._width,
- height if self._height is None else self._height,
- )
-
- @size.setter
- def size(self, new_size: Tuple[int, int]) -> None:
- """Set a new size for the terminal.
-
- Args:
- new_size (Tuple[int, int]): New width and height.
- """
- width, height = new_size
- self._width = width
- self._height = height
-
- @property
- def width(self) -> int:
- """Get the width of the console.
-
- Returns:
- int: The width (in characters) of the console.
- """
- return self.size.width
-
- @width.setter
- def width(self, width: int) -> None:
- """Set width.
-
- Args:
- width (int): New width.
- """
- self._width = width
-
- @property
- def height(self) -> int:
- """Get the height of the console.
-
- Returns:
- int: The height (in lines) of the console.
- """
- return self.size.height
-
- @height.setter
- def height(self, height: int) -> None:
- """Set height.
-
- Args:
- height (int): new height.
- """
- self._height = height
-
- def bell(self) -> None:
- """Play a 'bell' sound (if supported by the terminal)."""
- self.control(Control.bell())
-
- def capture(self) -> Capture:
- """A context manager to *capture* the result of print() or log() in a string,
- rather than writing it to the console.
-
- Example:
- >>> from rich.console import Console
- >>> console = Console()
- >>> with console.capture() as capture:
- ... console.print("[bold magenta]Hello World[/]")
- >>> print(capture.get())
-
- Returns:
- Capture: Context manager that disables writing to the terminal.
- """
- capture = Capture(self)
- return capture
-
- def pager(
- self, pager: Optional[Pager] = None, styles: bool = False, links: bool = False
- ) -> PagerContext:
- """A context manager to display anything printed within a "pager". The pager application
- is defined by the system and will typically support at least pressing a key to scroll.
-
- Args:
- pager (Pager, optional): A pager object, or None to use :class:`~rich.pager.SystemPager`. Defaults to None.
- styles (bool, optional): Show styles in pager. Defaults to False.
- links (bool, optional): Show links in pager. Defaults to False.
-
- Example:
- >>> from rich.console import Console
- >>> from rich.__main__ import make_test_card
- >>> console = Console()
- >>> with console.pager():
- ... console.print(make_test_card())
-
- Returns:
- PagerContext: A context manager.
- """
- return PagerContext(self, pager=pager, styles=styles, links=links)
-
- def line(self, count: int = 1) -> None:
- """Write new line(s).
-
- Args:
- count (int, optional): Number of new lines. Defaults to 1.
- """
-
- assert count >= 0, "count must be >= 0"
- self.print(NewLine(count))
-
- def clear(self, home: bool = True) -> None:
- """Clear the screen.
-
- Args:
- home (bool, optional): Also move the cursor to 'home' position. Defaults to True.
- """
- if home:
- self.control(Control.clear(), Control.home())
- else:
- self.control(Control.clear())
-
- def status(
- self,
- status: RenderableType,
- *,
- spinner: str = "dots",
- spinner_style: StyleType = "status.spinner",
- speed: float = 1.0,
- refresh_per_second: float = 12.5,
- ) -> "Status":
- """Display a status and spinner.
-
- Args:
- status (RenderableType): A status renderable (str or Text typically).
- spinner (str, optional): Name of spinner animation (see python -m rich.spinner). Defaults to "dots".
- spinner_style (StyleType, optional): Style of spinner. Defaults to "status.spinner".
- speed (float, optional): Speed factor for spinner animation. Defaults to 1.0.
- refresh_per_second (float, optional): Number of refreshes per second. Defaults to 12.5.
-
- Returns:
- Status: A Status object that may be used as a context manager.
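-
- Example (a minimal sketch; assumes an existing ``console`` instance, and ``do_work`` is a placeholder for your own code):
- >>> with console.status("Working...", spinner="dots"):
- ... do_work()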
- """
- from .status import Status
-
- status_renderable = Status(
- status,
- console=self,
- spinner=spinner,
- spinner_style=spinner_style,
- speed=speed,
- refresh_per_second=refresh_per_second,
- )
- return status_renderable
-
- def show_cursor(self, show: bool = True) -> bool:
- """Show or hide the cursor.
-
- Args:
- show (bool, optional): Set visibility of the cursor.
- """
- if self.is_terminal:
- self.control(Control.show_cursor(show))
- return True
- return False
-
- def set_alt_screen(self, enable: bool = True) -> bool:
- """Enables alternative screen mode.
-
- Note, if you enable this mode, you should ensure that it is disabled before
- the application exits. See :meth:`~rich.Console.screen` for a context manager
- that handles this for you.
-
- Args:
- enable (bool, optional): Enable (True) or disable (False) alternate screen. Defaults to True.
-
- Returns:
- bool: True if the control codes were written.
-
- """
- changed = False
- if self.is_terminal and not self.legacy_windows:
- self.control(Control.alt_screen(enable))
- changed = True
- self._is_alt_screen = enable
- return changed
-
- @property
- def is_alt_screen(self) -> bool:
- """Check if the alt screen was enabled.
-
- Returns:
- bool: True if the alt screen was enabled, otherwise False.
- """
- return self._is_alt_screen
-
- def set_window_title(self, title: str) -> bool:
- """Set the title of the console terminal window.
-
- Warning: There is no means within Rich of "resetting" the window title to its
- previous value, meaning the title you set will persist even after your application
- exits.
-
- ``fish`` shell resets the window title before and after each command by default,
- negating this issue. Windows Terminal and command prompt will also reset the title for you.
- Most other shells and terminals, however, do not do this.
-
- Some terminals may require configuration changes before you can set the title.
- Some terminals may not support setting the title at all.
-
- Other software (including the terminal itself, the shell, custom prompts, plugins, etc.)
- may also set the terminal window title. This could result in whatever value you write
- using this method being overwritten.
-
- Args:
- title (str): The new title of the terminal window.
-
- Returns:
- bool: True if the control code to change the terminal title was
- written, otherwise False. Note that a return value of True
- does not guarantee that the window title has actually changed,
- since the feature may be unsupported/disabled in some terminals.
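-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> if console.set_window_title("My Application"):
- ... console.log("Window title updated")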
- """
- if self.is_terminal:
- self.control(Control.title(title))
- return True
- return False
-
- def screen(
- self, hide_cursor: bool = True, style: Optional[StyleType] = None
- ) -> "ScreenContext":
- """Context manager to enable and disable 'alternative screen' mode.
-
- Args:
- hide_cursor (bool, optional): Also hide the cursor. Defaults to True.
- style (Style, optional): Optional style for screen. Defaults to None.
-
- Returns:
- ~ScreenContext: Context which enables alternate screen on enter, and disables it on exit.
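-
- Example (a minimal sketch; assumes an existing ``console`` instance, and the pause only keeps the screen visible):
- >>> from time import sleep
- >>> with console.screen(style="bold white on blue") as screen:
- ... screen.update("[blink]Have a nice day![/blink]")
- ... sleep(5)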
- """
- return ScreenContext(self, hide_cursor=hide_cursor, style=style or "")
-
- def measure(
- self, renderable: RenderableType, *, options: Optional[ConsoleOptions] = None
- ) -> Measurement:
- """Measure a renderable. Returns a :class:`~rich.measure.Measurement` object which contains
- information regarding the number of characters required to print the renderable.
-
- Args:
- renderable (RenderableType): Any renderable or string.
- options (Optional[ConsoleOptions], optional): Options to use when measuring, or None
- to use default options. Defaults to None.
-
- Returns:
- Measurement: A measurement of the renderable.
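-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> measurement = console.measure("Hello, [bold]World[/bold]!")
- >>> measurement.minimum, measurement.maximum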
- """
- measurement = Measurement.get(self, options or self.options, renderable)
- return measurement
-
- def render(
- self, renderable: RenderableType, options: Optional[ConsoleOptions] = None
- ) -> Iterable[Segment]:
- """Render an object in to an iterable of `Segment` instances.
-
- This method contains the logic for rendering objects with the console protocol.
- You are unlikely to need to use it directly, unless you are extending the library.
-
- Args:
- renderable (RenderableType): An object supporting the console protocol, or
- an object that may be converted to a string.
- options (ConsoleOptions, optional): An options object, or None to use self.options. Defaults to None.
-
- Returns:
- Iterable[Segment]: An iterable of segments that may be rendered.
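-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> segments = list(console.render("Hello, World!"))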
- """
-
- _options = options or self.options
- if _options.max_width < 1:
- # No space to render anything. This prevents potential recursion errors.
- return
- render_iterable: RenderResult
-
- renderable = rich_cast(renderable)
- if hasattr(renderable, "__rich_console__") and not isclass(renderable):
- render_iterable = renderable.__rich_console__(self, _options) # type: ignore[union-attr]
- elif isinstance(renderable, str):
- text_renderable = self.render_str(
- renderable, highlight=_options.highlight, markup=_options.markup
- )
- render_iterable = text_renderable.__rich_console__(self, _options)
- else:
- raise errors.NotRenderableError(
- f"Unable to render {renderable!r}; "
- "A str, Segment or object with __rich_console__ method is required"
- )
-
- try:
- iter_render = iter(render_iterable)
- except TypeError:
- raise errors.NotRenderableError(
- f"object {render_iterable!r} is not renderable"
- )
- _Segment = Segment
- _options = _options.reset_height()
- for render_output in iter_render:
- if isinstance(render_output, _Segment):
- yield render_output
- else:
- yield from self.render(render_output, _options)
-
- def render_lines(
- self,
- renderable: RenderableType,
- options: Optional[ConsoleOptions] = None,
- *,
- style: Optional[Style] = None,
- pad: bool = True,
- new_lines: bool = False,
- ) -> List[List[Segment]]:
- """Render objects in to a list of lines.
-
- The output of render_lines is useful when further formatting of rendered console text
- is required, such as the Panel class which draws a border around any renderable object.
-
- Args:
- renderable (RenderableType): Any object renderable in the console.
- options (Optional[ConsoleOptions], optional): Console options, or None to use self.options. Default to ``None``.
- style (Style, optional): Optional style to apply to renderables. Defaults to ``None``.
- pad (bool, optional): Pad lines shorter than render width. Defaults to ``True``.
- new_lines (bool, optional): Include "\n" characters at end of lines.
-
- Returns:
- List[List[Segment]]: A list of lines, where a line is a list of Segment objects.
- """
- with self._lock:
- render_options = options or self.options
- _rendered = self.render(renderable, render_options)
- if style:
- _rendered = Segment.apply_style(_rendered, style)
-
- render_height = render_options.height
- if render_height is not None:
- render_height = max(0, render_height)
-
- lines = list(
- islice(
- Segment.split_and_crop_lines(
- _rendered,
- render_options.max_width,
- include_new_lines=new_lines,
- pad=pad,
- style=style,
- ),
- None,
- render_height,
- )
- )
- if render_options.height is not None:
- extra_lines = render_options.height - len(lines)
- if extra_lines > 0:
- pad_line = [
- [Segment(" " * render_options.max_width, style), Segment("\n")]
- if new_lines
- else [Segment(" " * render_options.max_width, style)]
- ]
- lines.extend(pad_line * extra_lines)
-
- return lines
-
- def render_str(
- self,
- text: str,
- *,
- style: Union[str, Style] = "",
- justify: Optional[JustifyMethod] = None,
- overflow: Optional[OverflowMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- highlighter: Optional[HighlighterType] = None,
- ) -> "Text":
- """Convert a string to a Text instance. This is called automatically if
- you print or log a string.
-
- Args:
- text (str): Text to render.
- style (Union[str, Style], optional): Style to apply to rendered text.
- justify (str, optional): Justify method: "default", "left", "center", "full", or "right". Defaults to ``None``.
- overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji, or ``None`` to use Console default.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use Console default.
- highlight (Optional[bool], optional): Enable highlighting, or ``None`` to use Console default.
- highlighter (HighlighterType, optional): Optional highlighter to apply.
- Returns:
- Text: A renderable Text instance.
-
- """
- emoji_enabled = emoji or (emoji is None and self._emoji)
- markup_enabled = markup or (markup is None and self._markup)
- highlight_enabled = highlight or (highlight is None and self._highlight)
-
- if markup_enabled:
- rich_text = render_markup(
- text,
- style=style,
- emoji=emoji_enabled,
- emoji_variant=self._emoji_variant,
- )
- rich_text.justify = justify
- rich_text.overflow = overflow
- else:
- rich_text = Text(
- _emoji_replace(text, default_variant=self._emoji_variant)
- if emoji_enabled
- else text,
- justify=justify,
- overflow=overflow,
- style=style,
- )
-
- _highlighter = (highlighter or self.highlighter) if highlight_enabled else None
- if _highlighter is not None:
- highlight_text = _highlighter(str(rich_text))
- highlight_text.copy_styles(rich_text)
- return highlight_text
-
- return rich_text
-
- def get_style(
- self, name: Union[str, Style], *, default: Optional[Union[Style, str]] = None
- ) -> Style:
- """Get a Style instance by its theme name or parse a definition.
-
- Args:
- name (str): The name of a style or a style definition.
-
- Returns:
- Style: A Style object.
-
- Raises:
- MissingStyle: If no style could be parsed from name.
-
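- Example (a minimal sketch; assumes an existing ``console`` instance, and ``"no-such-style"`` is deliberately invalid to show the ``default`` fallback):
- >>> style = console.get_style("bold red on black")
- >>> style = console.get_style("no-such-style", default="dim")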
- """
- if isinstance(name, Style):
- return name
-
- try:
- style = self._theme_stack.get(name)
- if style is None:
- style = Style.parse(name)
- return style.copy() if style.link else style
- except errors.StyleSyntaxError as error:
- if default is not None:
- return self.get_style(default)
- raise errors.MissingStyle(
- f"Failed to get style {name!r}; {error}"
- ) from None
-
- def _collect_renderables(
- self,
- objects: Iterable[Any],
- sep: str,
- end: str,
- *,
- justify: Optional[JustifyMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- ) -> List[ConsoleRenderable]:
- """Combine a number of renderables and text into one renderable.
-
- Args:
- objects (Iterable[Any]): Anything that Rich can render.
- sep (str): String to write between print data.
- end (str): String to write at end of print data.
- justify (str, optional): One of "left", "right", "center", or "full". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default.
-
- Returns:
- List[ConsoleRenderable]: A list of things to render.
- """
- renderables: List[ConsoleRenderable] = []
- _append = renderables.append
- text: List[Text] = []
- append_text = text.append
-
- append = _append
- if justify in ("left", "center", "right"):
-
- def align_append(renderable: RenderableType) -> None:
- _append(Align(renderable, cast(AlignMethod, justify)))
-
- append = align_append
-
- _highlighter: HighlighterType = _null_highlighter
- if highlight or (highlight is None and self._highlight):
- _highlighter = self.highlighter
-
- def check_text() -> None:
- if text:
- sep_text = Text(sep, justify=justify, end=end)
- append(sep_text.join(text))
- text.clear()
-
- for renderable in objects:
- renderable = rich_cast(renderable)
- if isinstance(renderable, str):
- append_text(
- self.render_str(
- renderable, emoji=emoji, markup=markup, highlighter=_highlighter
- )
- )
- elif isinstance(renderable, Text):
- append_text(renderable)
- elif isinstance(renderable, ConsoleRenderable):
- check_text()
- append(renderable)
- elif is_expandable(renderable):
- check_text()
- append(Pretty(renderable, highlighter=_highlighter))
- else:
- append_text(_highlighter(str(renderable)))
-
- check_text()
-
- if self.style is not None:
- style = self.get_style(self.style)
- renderables = [Styled(renderable, style) for renderable in renderables]
-
- return renderables
-
- def rule(
- self,
- title: TextType = "",
- *,
- characters: str = "─",
- style: Union[str, Style] = "rule.line",
- align: AlignMethod = "center",
- ) -> None:
- """Draw a line with optional centered title.
-
- Args:
- title (str, optional): Text to render over the rule. Defaults to "".
- characters (str, optional): Character(s) to form the line. Defaults to "─".
- style (str, optional): Style of line. Defaults to "rule.line".
- align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center".
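-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> console.rule("[bold red]Chapter One", align="center")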
- """
- from .rule import Rule
-
- rule = Rule(title=title, characters=characters, style=style, align=align)
- self.print(rule)
-
- def control(self, *control: Control) -> None:
- """Insert non-printing control codes.
-
- Args:
- control_codes (str): Control codes, such as those that may move the cursor.
- """
- if not self.is_dumb_terminal:
- with self:
- self._buffer.extend(_control.segment for _control in control)
-
- def out(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- highlight: Optional[bool] = None,
- ) -> None:
- """Output to the terminal. This is a low-level way of writing to the terminal which unlike
- :meth:`~rich.console.Console.print` won't pretty print, wrap text, or apply markup, but will
- optionally apply highlighting and a basic style.
-
- Args:
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use
- console default. Defaults to ``None``.
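-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> console.out("processed", 42, "records", style="dim")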
- """
- raw_output: str = sep.join(str(_object) for _object in objects)
- self.print(
- raw_output,
- style=style,
- highlight=highlight,
- emoji=False,
- markup=False,
- no_wrap=True,
- overflow="ignore",
- crop=False,
- end=end,
- )
-
- def print(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- justify: Optional[JustifyMethod] = None,
- overflow: Optional[OverflowMethod] = None,
- no_wrap: Optional[bool] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- width: Optional[int] = None,
- height: Optional[int] = None,
- crop: bool = True,
- soft_wrap: Optional[bool] = None,
- new_line_start: bool = False,
- ) -> None:
- """Print to the console.
-
- Args:
- objects (positional args): Objects to log to the terminal.
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- justify (str, optional): Justify method: "default", "left", "right", "center", or "full". Defaults to ``None``.
- overflow (str, optional): Overflow method: "ignore", "crop", "fold", or "ellipsis". Defaults to None.
- no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to None.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to ``None``.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to ``None``.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to ``None``.
- width (Optional[int], optional): Width of output, or ``None`` to auto-detect. Defaults to ``None``.
- height (Optional[int], optional): Height of output, or ``None`` for no height constraint. Defaults to ``None``.
- crop (bool, optional): Crop output to width of terminal. Defaults to True.
- soft_wrap (bool, optional): Enable soft wrap mode which disables word wrapping and cropping of text or ``None`` for
- Console default. Defaults to ``None``.
- new_line_start (bool, optional): Insert a new line at the start if the output contains more than one line. Defaults to ``False``.
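-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> console.print("[bold]Hello[/bold], World!", justify="center")
- >>> console.print({"name": "example", "values": [1, 2, 3]})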
- """
- if not objects:
- objects = (NewLine(),)
-
- if soft_wrap is None:
- soft_wrap = self.soft_wrap
- if soft_wrap:
- if no_wrap is None:
- no_wrap = True
- if overflow is None:
- overflow = "ignore"
- crop = False
- render_hooks = self._render_hooks[:]
- with self:
- renderables = self._collect_renderables(
- objects,
- sep,
- end,
- justify=justify,
- emoji=emoji,
- markup=markup,
- highlight=highlight,
- )
- for hook in render_hooks:
- renderables = hook.process_renderables(renderables)
- render_options = self.options.update(
- justify=justify,
- overflow=overflow,
- width=min(width, self.width) if width is not None else NO_CHANGE,
- height=height,
- no_wrap=no_wrap,
- markup=markup,
- highlight=highlight,
- )
-
- new_segments: List[Segment] = []
- extend = new_segments.extend
- render = self.render
- if style is None:
- for renderable in renderables:
- extend(render(renderable, render_options))
- else:
- for renderable in renderables:
- extend(
- Segment.apply_style(
- render(renderable, render_options), self.get_style(style)
- )
- )
- if new_line_start:
- if (
- len("".join(segment.text for segment in new_segments).splitlines())
- > 1
- ):
- new_segments.insert(0, Segment.line())
- if crop:
- buffer_extend = self._buffer.extend
- for line in Segment.split_and_crop_lines(
- new_segments, self.width, pad=False
- ):
- buffer_extend(line)
- else:
- self._buffer.extend(new_segments)
-
- def print_json(
- self,
- json: Optional[str] = None,
- *,
- data: Any = None,
- indent: Union[None, int, str] = 2,
- highlight: bool = True,
- skip_keys: bool = False,
- ensure_ascii: bool = False,
- check_circular: bool = True,
- allow_nan: bool = True,
- default: Optional[Callable[[Any], Any]] = None,
- sort_keys: bool = False,
- ) -> None:
- """Pretty prints JSON. Output will be valid JSON.
-
- Args:
- json (Optional[str]): A string containing JSON.
- data (Any): If json is not supplied, then encode this data.
- indent (Union[None, int, str], optional): Number of spaces to indent. Defaults to 2.
- highlight (bool, optional): Enable highlighting of output: Defaults to True.
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False.
- check_circular (bool, optional): Check for circular references. Defaults to True.
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
- default (Callable, optional): A callable that converts values that cannot be encoded
- into something that can be JSON encoded. Defaults to None.
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
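-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> console.print_json('[false, true, null, "foo"]')
- >>> console.print_json(data={"name": "example", "values": [1, 2, 3]}, indent=4)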
- """
- from rich.json import JSON
-
- if json is None:
- json_renderable = JSON.from_data(
- data,
- indent=indent,
- highlight=highlight,
- skip_keys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- else:
- if not isinstance(json, str):
- raise TypeError(
- f"json must be str. Did you mean print_json(data={json!r}) ?"
- )
- json_renderable = JSON(
- json,
- indent=indent,
- highlight=highlight,
- skip_keys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- self.print(json_renderable, soft_wrap=True)
-
- def update_screen(
- self,
- renderable: RenderableType,
- *,
- region: Optional[Region] = None,
- options: Optional[ConsoleOptions] = None,
- ) -> None:
- """Update the screen at a given offset.
-
- Args:
- renderable (RenderableType): A Rich renderable.
- region (Region, optional): Region of screen to update, or None for entire screen. Defaults to None.
- options (Optional[ConsoleOptions], optional): Console options, or None to use self.options. Defaults to None.
-
- Raises:
- errors.NoAltScreen: If the Console isn't in alt screen mode.
-
- """
- if not self.is_alt_screen:
- raise errors.NoAltScreen("Alt screen must be enabled to call update_screen")
- render_options = options or self.options
- if region is None:
- x = y = 0
- render_options = render_options.update_dimensions(
- render_options.max_width, render_options.height or self.height
- )
- else:
- x, y, width, height = region
- render_options = render_options.update_dimensions(width, height)
-
- lines = self.render_lines(renderable, options=render_options)
- self.update_screen_lines(lines, x, y)
-
- def update_screen_lines(
- self, lines: List[List[Segment]], x: int = 0, y: int = 0
- ) -> None:
- """Update lines of the screen at a given offset.
-
- Args:
- lines (List[List[Segment]]): Rendered lines (as produced by :meth:`~rich.Console.render_lines`).
- x (int, optional): x offset (column no). Defaults to 0.
- y (int, optional): y offset (line no). Defaults to 0.
-
- Raises:
- errors.NoAltScreen: If the Console isn't in alt screen mode.
- """
- if not self.is_alt_screen:
- raise errors.NoAltScreen("Alt screen must be enabled to call update_screen")
- screen_update = ScreenUpdate(lines, x, y)
- segments = self.render(screen_update)
- self._buffer.extend(segments)
- self._check_buffer()
-
- def print_exception(
- self,
- *,
- width: Optional[int] = 100,
- extra_lines: int = 3,
- theme: Optional[str] = None,
- word_wrap: bool = False,
- show_locals: bool = False,
- suppress: Iterable[Union[str, ModuleType]] = (),
- max_frames: int = 100,
- ) -> None:
- """Prints a rich render of the last exception and traceback.
-
- Args:
- width (Optional[int], optional): Number of characters used to render code. Defaults to 100.
- extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
- theme (str, optional): Override pygments theme used in traceback
- word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
- show_locals (bool, optional): Enable display of local variables. Defaults to False.
- suppress (Iterable[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
- max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
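-
- Example (a minimal sketch; assumes an existing ``console`` instance):
- >>> try:
- ... 1 / 0
- ... except ZeroDivisionError:
- ... console.print_exception(show_locals=True)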
- """
- from .traceback import Traceback
-
- traceback = Traceback(
- width=width,
- extra_lines=extra_lines,
- theme=theme,
- word_wrap=word_wrap,
- show_locals=show_locals,
- suppress=suppress,
- max_frames=max_frames,
- )
- self.print(traceback)
-
- @staticmethod
- def _caller_frame_info(
- offset: int,
- currentframe: Callable[[], Optional[FrameType]] = inspect.currentframe,
- ) -> Tuple[str, int, Dict[str, Any]]:
- """Get caller frame information.
-
- Args:
- offset (int): the caller offset within the current frame stack.
- currentframe (Callable[[], Optional[FrameType]], optional): the callable to use to
- retrieve the current frame. Defaults to ``inspect.currentframe``.
-
- Returns:
- Tuple[str, int, Dict[str, Any]]: A tuple containing the filename, the line number and
- the dictionary of local variables associated with the caller frame.
-
- Raises:
- RuntimeError: If the stack offset is invalid.
- """
- # Ignore the frame of this local helper
- offset += 1
-
- frame = currentframe()
- if frame is not None:
- # Use the faster currentframe where implemented
- while offset and frame is not None:
- frame = frame.f_back
- offset -= 1
- assert frame is not None
- return frame.f_code.co_filename, frame.f_lineno, frame.f_locals
- else:
- # Fallback to the slower stack
- frame_info = inspect.stack()[offset]
- return frame_info.filename, frame_info.lineno, frame_info.frame.f_locals
-
- def log(
- self,
- *objects: Any,
- sep: str = " ",
- end: str = "\n",
- style: Optional[Union[str, Style]] = None,
- justify: Optional[JustifyMethod] = None,
- emoji: Optional[bool] = None,
- markup: Optional[bool] = None,
- highlight: Optional[bool] = None,
- log_locals: bool = False,
- _stack_offset: int = 1,
- ) -> None:
- """Log rich content to the terminal.
-
- Args:
- objects (positional args): Objects to log to the terminal.
- sep (str, optional): String to write between print data. Defaults to " ".
- end (str, optional): String to write at end of print data. Defaults to "\\n".
- style (Union[str, Style], optional): A style to apply to output. Defaults to None.
- justify (str, optional): One of "left", "right", "center", or "full". Defaults to ``None``.
- emoji (Optional[bool], optional): Enable emoji code, or ``None`` to use console default. Defaults to None.
- markup (Optional[bool], optional): Enable markup, or ``None`` to use console default. Defaults to None.
- highlight (Optional[bool], optional): Enable automatic highlighting, or ``None`` to use console default. Defaults to None.
- log_locals (bool, optional): Boolean to enable logging of locals where ``log()``
- was called. Defaults to False.
- _stack_offset (int, optional): Offset of caller from end of call stack. Defaults to 1.
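-
- Example (a minimal sketch; assumes an existing ``console`` instance, and ``user`` is an illustrative local variable):
- >>> user = "admin"
- >>> console.log("Server starting...", log_locals=True)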
- """
- if not objects:
- objects = (NewLine(),)
-
- render_hooks = self._render_hooks[:]
-
- with self:
- renderables = self._collect_renderables(
- objects,
- sep,
- end,
- justify=justify,
- emoji=emoji,
- markup=markup,
- highlight=highlight,
- )
- if style is not None:
- renderables = [Styled(renderable, style) for renderable in renderables]
-
- filename, line_no, locals = self._caller_frame_info(_stack_offset)
- link_path = None if filename.startswith("<") else os.path.abspath(filename)
- path = filename.rpartition(os.sep)[-1]
- if log_locals:
- locals_map = {
- key: value
- for key, value in locals.items()
- if not key.startswith("__")
- }
- renderables.append(render_scope(locals_map, title="[i]locals"))
-
- renderables = [
- self._log_render(
- self,
- renderables,
- log_time=self.get_datetime(),
- path=path,
- line_no=line_no,
- link_path=link_path,
- )
- ]
- for hook in render_hooks:
- renderables = hook.process_renderables(renderables)
- new_segments: List[Segment] = []
- extend = new_segments.extend
- render = self.render
- render_options = self.options
- for renderable in renderables:
- extend(render(renderable, render_options))
- buffer_extend = self._buffer.extend
- for line in Segment.split_and_crop_lines(
- new_segments, self.width, pad=False
- ):
- buffer_extend(line)
-
- def _check_buffer(self) -> None:
- """Check if the buffer may be rendered. Render it if it can (e.g. Console.quiet is False)
- Rendering is supported on Windows, Unix and Jupyter environments. For
- legacy Windows consoles, the win32 API is called directly.
- This method will also record what it renders if recording is enabled via Console.record.
- """
- if self.quiet:
- del self._buffer[:]
- return
- with self._lock:
- if self.record:
- with self._record_buffer_lock:
- self._record_buffer.extend(self._buffer[:])
-
- if self._buffer_index == 0:
- if self.is_jupyter: # pragma: no cover
- from .jupyter import display
-
- display(self._buffer, self._render_buffer(self._buffer[:]))
- del self._buffer[:]
- else:
- if WINDOWS:
- use_legacy_windows_render = False
- if self.legacy_windows:
- fileno = get_fileno(self.file)
- if fileno is not None:
- use_legacy_windows_render = (
- fileno in _STD_STREAMS_OUTPUT
- )
-
- if use_legacy_windows_render:
- from rich._win32_console import LegacyWindowsTerm
- from rich._windows_renderer import legacy_windows_render
-
- buffer = self._buffer[:]
- if self.no_color and self._color_system:
- buffer = list(Segment.remove_color(buffer))
-
- legacy_windows_render(buffer, LegacyWindowsTerm(self.file))
- else:
- # Either a non-std stream on legacy Windows, or modern Windows.
- text = self._render_buffer(self._buffer[:])
- # https://bugs.python.org/issue37871
- # https://github.com/python/cpython/issues/82052
- # We need to avoid writing more than 32Kb in a single write, due to the above bug
- write = self.file.write
-                            # Worst-case scenario, every character is 4 bytes of utf-8
- MAX_WRITE = 32 * 1024 // 4
- try:
- if len(text) <= MAX_WRITE:
- write(text)
- else:
- batch: List[str] = []
- batch_append = batch.append
- size = 0
- for line in text.splitlines(True):
- if size + len(line) > MAX_WRITE and batch:
- write("".join(batch))
- batch.clear()
- size = 0
- batch_append(line)
- size += len(line)
- if batch:
- write("".join(batch))
- batch.clear()
- except UnicodeEncodeError as error:
- error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
- raise
- else:
- text = self._render_buffer(self._buffer[:])
- try:
- self.file.write(text)
- except UnicodeEncodeError as error:
- error.reason = f"{error.reason}\n*** You may need to add PYTHONIOENCODING=utf-8 to your environment ***"
- raise
-
- self.file.flush()
- del self._buffer[:]
-
- def _render_buffer(self, buffer: Iterable[Segment]) -> str:
- """Render buffered output, and clear buffer."""
- output: List[str] = []
- append = output.append
- color_system = self._color_system
- legacy_windows = self.legacy_windows
- not_terminal = not self.is_terminal
- if self.no_color and color_system:
- buffer = Segment.remove_color(buffer)
- for text, style, control in buffer:
- if style:
- append(
- style.render(
- text,
- color_system=color_system,
- legacy_windows=legacy_windows,
- )
- )
- elif not (not_terminal and control):
- append(text)
-
- rendered = "".join(output)
- return rendered
-
- def input(
- self,
- prompt: TextType = "",
- *,
- markup: bool = True,
- emoji: bool = True,
- password: bool = False,
- stream: Optional[TextIO] = None,
- ) -> str:
- """Displays a prompt and waits for input from the user. The prompt may contain color / style.
-
- It works in the same way as Python's builtin :func:`input` function and provides elaborate line editing and history features if Python's builtin :mod:`readline` module is previously loaded.
-
- Args:
- prompt (Union[str, Text]): Text to render in the prompt.
- markup (bool, optional): Enable console markup (requires a str prompt). Defaults to True.
- emoji (bool, optional): Enable emoji (requires a str prompt). Defaults to True.
- password: (bool, optional): Hide typed text. Defaults to False.
- stream: (TextIO, optional): Optional file to read input from (rather than stdin). Defaults to None.
-
- Returns:
- str: Text read from stdin.
- """
- if prompt:
- self.print(prompt, markup=markup, emoji=emoji, end="")
- if password:
- result = getpass("", stream=stream)
- else:
- if stream:
- result = stream.readline()
- else:
- result = input()
- return result
-
- def export_text(self, *, clear: bool = True, styles: bool = False) -> str:
- """Generate text from console contents (requires record=True argument in constructor).
-
- Args:
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- styles (bool, optional): If ``True``, ansi escape codes will be included. ``False`` for plain text.
- Defaults to ``False``.
-
- Returns:
- str: String containing console contents.
-
- """
- assert (
- self.record
- ), "To export console contents set record=True in the constructor or instance"
-
- with self._record_buffer_lock:
- if styles:
- text = "".join(
- (style.render(text) if style else text)
- for text, style, _ in self._record_buffer
- )
- else:
- text = "".join(
- segment.text
- for segment in self._record_buffer
- if not segment.control
- )
- if clear:
- del self._record_buffer[:]
- return text
-
- def save_text(self, path: str, *, clear: bool = True, styles: bool = False) -> None:
- """Generate text from console and save to a given location (requires record=True argument in constructor).
-
- Args:
- path (str): Path to write text files.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- styles (bool, optional): If ``True``, ansi style codes will be included. ``False`` for plain text.
- Defaults to ``False``.
-
- """
- text = self.export_text(clear=clear, styles=styles)
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(text)
-
- def export_html(
- self,
- *,
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: Optional[str] = None,
- inline_styles: bool = False,
- ) -> str:
- """Generate HTML from console contents (requires record=True argument in constructor).
-
- Args:
- theme (TerminalTheme, optional): TerminalTheme object containing console colors.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- code_format (str, optional): Format string to render HTML. In addition to '{foreground}',
- '{background}', and '{code}', should contain '{stylesheet}' if inline_styles is ``False``.
- inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files
- larger but easier to cut and paste markup. If ``False``, styles will be embedded in a style tag.
- Defaults to False.
-
- Returns:
- str: String containing console contents as HTML.
- """
- assert (
- self.record
- ), "To export console contents set record=True in the constructor or instance"
- fragments: List[str] = []
- append = fragments.append
- _theme = theme or DEFAULT_TERMINAL_THEME
- stylesheet = ""
-
- render_code_format = CONSOLE_HTML_FORMAT if code_format is None else code_format
-
- with self._record_buffer_lock:
- if inline_styles:
- for text, style, _ in Segment.filter_control(
- Segment.simplify(self._record_buffer)
- ):
- text = escape(text)
- if style:
- rule = style.get_html_style(_theme)
- if style.link:
-                            text = f'<a href="{style.link}">{text}</a>'
-                        text = f'<span style="{rule}">{text}</span>' if rule else text
- append(text)
- else:
- styles: Dict[str, int] = {}
- for text, style, _ in Segment.filter_control(
- Segment.simplify(self._record_buffer)
- ):
- text = escape(text)
- if style:
- rule = style.get_html_style(_theme)
- style_number = styles.setdefault(rule, len(styles) + 1)
- if style.link:
-                            text = f'<a href="{style.link}"><span class="r{style_number}">{text}</span></a>'
-                        else:
-                            text = f'<span class="r{style_number}">{text}</span>'
- append(text)
- stylesheet_rules: List[str] = []
- stylesheet_append = stylesheet_rules.append
- for style_rule, style_number in styles.items():
- if style_rule:
- stylesheet_append(f".r{style_number} {{{style_rule}}}")
- stylesheet = "\n".join(stylesheet_rules)
-
- rendered_code = render_code_format.format(
- code="".join(fragments),
- stylesheet=stylesheet,
- foreground=_theme.foreground_color.hex,
- background=_theme.background_color.hex,
- )
- if clear:
- del self._record_buffer[:]
- return rendered_code
-
- def save_html(
- self,
- path: str,
- *,
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_HTML_FORMAT,
- inline_styles: bool = False,
- ) -> None:
- """Generate HTML from console contents and write to a file (requires record=True argument in constructor).
-
- Args:
- path (str): Path to write html file.
- theme (TerminalTheme, optional): TerminalTheme object containing console colors.
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``.
- code_format (str, optional): Format string to render HTML. In addition to '{foreground}',
- '{background}', and '{code}', should contain '{stylesheet}' if inline_styles is ``False``.
- inline_styles (bool, optional): If ``True`` styles will be inlined in to spans, which makes files
- larger but easier to cut and paste markup. If ``False``, styles will be embedded in a style tag.
- Defaults to False.
-
- """
- html = self.export_html(
- theme=theme,
- clear=clear,
- code_format=code_format,
- inline_styles=inline_styles,
- )
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(html)
-
- def export_svg(
- self,
- *,
- title: str = "Rich",
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_SVG_FORMAT,
- font_aspect_ratio: float = 0.61,
- unique_id: Optional[str] = None,
- ) -> str:
- """
- Generate an SVG from the console contents (requires record=True in Console constructor).
-
- Args:
- title (str, optional): The title of the tab in the output image
- theme (TerminalTheme, optional): The ``TerminalTheme`` object to use to style the terminal
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``
- code_format (str, optional): Format string used to generate the SVG. Rich will inject a number of variables
- into the string in order to form the final SVG output. The default template used and the variables
- injected by Rich can be found by inspecting the ``console.CONSOLE_SVG_FORMAT`` variable.
- font_aspect_ratio (float, optional): The width to height ratio of the font used in the ``code_format``
- string. Defaults to 0.61, which is the width to height ratio of Fira Code (the default font).
- If you aren't specifying a different font inside ``code_format``, you probably don't need this.
- unique_id (str, optional): unique id that is used as the prefix for various elements (CSS styles, node
- ids). If not set, this defaults to a computed value based on the recorded content.
- """
-
- from rich.cells import cell_len
-
- style_cache: Dict[Style, str] = {}
-
- def get_svg_style(style: Style) -> str:
- """Convert a Style to CSS rules for SVG."""
- if style in style_cache:
- return style_cache[style]
- css_rules = []
- color = (
- _theme.foreground_color
- if (style.color is None or style.color.is_default)
- else style.color.get_truecolor(_theme)
- )
- bgcolor = (
- _theme.background_color
- if (style.bgcolor is None or style.bgcolor.is_default)
- else style.bgcolor.get_truecolor(_theme)
- )
- if style.reverse:
- color, bgcolor = bgcolor, color
- if style.dim:
- color = blend_rgb(color, bgcolor, 0.4)
- css_rules.append(f"fill: {color.hex}")
- if style.bold:
- css_rules.append("font-weight: bold")
- if style.italic:
- css_rules.append("font-style: italic;")
- if style.underline:
- css_rules.append("text-decoration: underline;")
- if style.strike:
- css_rules.append("text-decoration: line-through;")
-
- css = ";".join(css_rules)
- style_cache[style] = css
- return css
-
- _theme = theme or SVG_EXPORT_THEME
-
- width = self.width
- char_height = 20
- char_width = char_height * font_aspect_ratio
- line_height = char_height * 1.22
-
- margin_top = 1
- margin_right = 1
- margin_bottom = 1
- margin_left = 1
-
- padding_top = 40
- padding_right = 8
- padding_bottom = 8
- padding_left = 8
-
- padding_width = padding_left + padding_right
- padding_height = padding_top + padding_bottom
- margin_width = margin_left + margin_right
- margin_height = margin_top + margin_bottom
-
- text_backgrounds: List[str] = []
- text_group: List[str] = []
- classes: Dict[str, int] = {}
- style_no = 1
-
- def escape_text(text: str) -> str:
- """HTML escape text and replace spaces with nbsp."""
- return escape(text).replace(" ", " ")
-
- def make_tag(
- name: str, content: Optional[str] = None, **attribs: object
- ) -> str:
- """Make a tag from name, content, and attributes."""
-
- def stringify(value: object) -> str:
- if isinstance(value, (float)):
- return format(value, "g")
- return str(value)
-
- tag_attribs = " ".join(
- f'{k.lstrip("_").replace("_", "-")}="{stringify(v)}"'
- for k, v in attribs.items()
- )
- return (
-                f"<{name} {tag_attribs}>{content}</{name}>"
- if content
- else f"<{name} {tag_attribs}/>"
- )
-
- with self._record_buffer_lock:
- segments = list(Segment.filter_control(self._record_buffer))
- if clear:
- self._record_buffer.clear()
-
- if unique_id is None:
- unique_id = "terminal-" + str(
- zlib.adler32(
- ("".join(repr(segment) for segment in segments)).encode(
- "utf-8",
- "ignore",
- )
- + title.encode("utf-8", "ignore")
- )
- )
- y = 0
- for y, line in enumerate(Segment.split_and_crop_lines(segments, length=width)):
- x = 0
- for text, style, _control in line:
- style = style or Style()
- rules = get_svg_style(style)
- if rules not in classes:
- classes[rules] = style_no
- style_no += 1
- class_name = f"r{classes[rules]}"
-
- if style.reverse:
- has_background = True
- background = (
- _theme.foreground_color.hex
- if style.color is None
- else style.color.get_truecolor(_theme).hex
- )
- else:
- bgcolor = style.bgcolor
- has_background = bgcolor is not None and not bgcolor.is_default
- background = (
- _theme.background_color.hex
- if style.bgcolor is None
- else style.bgcolor.get_truecolor(_theme).hex
- )
-
- text_length = cell_len(text)
- if has_background:
- text_backgrounds.append(
- make_tag(
- "rect",
- fill=background,
- x=x * char_width,
- y=y * line_height + 1.5,
- width=char_width * text_length,
- height=line_height + 0.25,
- shape_rendering="crispEdges",
- )
- )
-
- if text != " " * len(text):
- text_group.append(
- make_tag(
- "text",
- escape_text(text),
- _class=f"{unique_id}-{class_name}",
- x=x * char_width,
- y=y * line_height + char_height,
- textLength=char_width * len(text),
- clip_path=f"url(#{unique_id}-line-{y})",
- )
- )
- x += cell_len(text)
-
- line_offsets = [line_no * line_height + 1.5 for line_no in range(y)]
- lines = "\n".join(
-            f"""<clipPath id="{unique_id}-line-{line_no}">
-    {make_tag("rect", x=0, y=offset, width=char_width * width, height=line_height + 0.25)}
-            </clipPath>"""
- for line_no, offset in enumerate(line_offsets)
- )
-
- styles = "\n".join(
- f".{unique_id}-r{rule_no} {{ {css} }}" for css, rule_no in classes.items()
- )
- backgrounds = "".join(text_backgrounds)
- matrix = "".join(text_group)
-
- terminal_width = ceil(width * char_width + padding_width)
- terminal_height = (y + 1) * line_height + padding_height
- chrome = make_tag(
- "rect",
- fill=_theme.background_color.hex,
- stroke="rgba(255,255,255,0.35)",
- stroke_width="1",
- x=margin_left,
- y=margin_top,
- width=terminal_width,
- height=terminal_height,
- rx=8,
- )
-
- title_color = _theme.foreground_color.hex
- if title:
- chrome += make_tag(
- "text",
- escape_text(title),
- _class=f"{unique_id}-title",
- fill=title_color,
- text_anchor="middle",
- x=terminal_width // 2,
- y=margin_top + char_height + 6,
- )
- chrome += f"""
-            <g transform="translate(26,22)">
-            <circle cx="0" cy="0" r="7" fill="#ff5f57"/>
-            <circle cx="22" cy="0" r="7" fill="#febc2e"/>
-            <circle cx="44" cy="0" r="7" fill="#28c840"/>
-            </g>
- """
-
- svg = code_format.format(
- unique_id=unique_id,
- char_width=char_width,
- char_height=char_height,
- line_height=line_height,
- terminal_width=char_width * width - 1,
- terminal_height=(y + 1) * line_height - 1,
- width=terminal_width + margin_width,
- height=terminal_height + margin_height,
- terminal_x=margin_left + padding_left,
- terminal_y=margin_top + padding_top,
- styles=styles,
- chrome=chrome,
- backgrounds=backgrounds,
- matrix=matrix,
- lines=lines,
- )
- return svg
-
- def save_svg(
- self,
- path: str,
- *,
- title: str = "Rich",
- theme: Optional[TerminalTheme] = None,
- clear: bool = True,
- code_format: str = CONSOLE_SVG_FORMAT,
- font_aspect_ratio: float = 0.61,
- unique_id: Optional[str] = None,
- ) -> None:
- """Generate an SVG file from the console contents (requires record=True in Console constructor).
-
- Args:
- path (str): The path to write the SVG to.
- title (str, optional): The title of the tab in the output image
- theme (TerminalTheme, optional): The ``TerminalTheme`` object to use to style the terminal
- clear (bool, optional): Clear record buffer after exporting. Defaults to ``True``
- code_format (str, optional): Format string used to generate the SVG. Rich will inject a number of variables
- into the string in order to form the final SVG output. The default template used and the variables
- injected by Rich can be found by inspecting the ``console.CONSOLE_SVG_FORMAT`` variable.
- font_aspect_ratio (float, optional): The width to height ratio of the font used in the ``code_format``
- string. Defaults to 0.61, which is the width to height ratio of Fira Code (the default font).
- If you aren't specifying a different font inside ``code_format``, you probably don't need this.
- unique_id (str, optional): unique id that is used as the prefix for various elements (CSS styles, node
- ids). If not set, this defaults to a computed value based on the recorded content.
- """
- svg = self.export_svg(
- title=title,
- theme=theme,
- clear=clear,
- code_format=code_format,
- font_aspect_ratio=font_aspect_ratio,
- unique_id=unique_id,
- )
- with open(path, "wt", encoding="utf-8") as write_file:
- write_file.write(svg)
-
-
-def _svg_hash(svg_main_code: str) -> str:
- """Returns a unique hash for the given SVG main code.
-
- Args:
- svg_main_code (str): The content we're going to inject in the SVG envelope.
-
- Returns:
- str: a hash of the given content
- """
- return str(zlib.adler32(svg_main_code.encode()))
-
-
-if __name__ == "__main__": # pragma: no cover
- console = Console(record=True)
-
- console.log(
- "JSONRPC [i]request[/i]",
- 5,
- 1.3,
- True,
- False,
- None,
- {
- "jsonrpc": "2.0",
- "method": "subtract",
- "params": {"minuend": 42, "subtrahend": 23},
- "id": 3,
- },
- )
-
- console.log("Hello, World!", "{'a': 1}", repr(console))
-
- console.print(
- {
- "name": None,
- "empty": [],
- "quiz": {
- "sport": {
- "answered": True,
- "q1": {
- "question": "Which one is correct team name in NBA?",
- "options": [
- "New York Bulls",
- "Los Angeles Kings",
- "Golden State Warriors",
- "Huston Rocket",
- ],
- "answer": "Huston Rocket",
- },
- },
- "maths": {
- "answered": False,
- "q1": {
- "question": "5 + 7 = ?",
- "options": [10, 11, 12, 13],
- "answer": 12,
- },
- "q2": {
- "question": "12 - 8 = ?",
- "options": [1, 2, 3, 4],
- "answer": 4,
- },
- },
- },
- }
- )
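
For reference, the recording and export methods deleted above are normally driven as in the short sketch below. It assumes the published `rich` package rather than this vendored copy, and the file name `demo.svg` is only an illustration.

```python
from rich.console import Console

# record=True is required for the export_* / save_* methods shown above.
console = Console(record=True)
console.log("Hello, [bold]World[/bold]!")

text = console.export_text(clear=False)    # plain text, keep the record buffer
html = console.export_html(clear=False)    # standalone HTML with a <style> block
console.save_svg("demo.svg", title="Demo")  # render the record buffer as an SVG file
```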
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_deprecation_warning.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_deprecation_warning.py
deleted file mode 100644
index 086b64dd3817c0c1a194ffc1959eeffdd2695bef..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_deprecation_warning.py
+++ /dev/null
@@ -1,7 +0,0 @@
-class SetuptoolsDeprecationWarning(Warning):
- """
- Base class for warning deprecations in ``setuptools``
-
- This class is not derived from ``DeprecationWarning``, and as such is
- visible by default.
- """
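
To illustrate the point made in the docstring above (the class deliberately does not derive from ``DeprecationWarning``, so the default warning filters do not hide it), here is a small self-contained sketch; the `old_feature` function is hypothetical.

```python
import warnings

class SetuptoolsDeprecationWarning(Warning):
    """Shown by default because it is not a DeprecationWarning subclass."""

def old_feature():
    # Hypothetical deprecated entry point, used only for this illustration.
    warnings.warn(
        "old_feature() is deprecated",
        SetuptoolsDeprecationWarning,
        stacklevel=2,
    )

old_feature()  # printed without any -W flags, unlike a DeprecationWarning raised from library code
```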
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/functoolz.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/functoolz.py
deleted file mode 100644
index 2c75d3a42aec42d66ada7fe8a2541bce3e1db3aa..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/functoolz.py
+++ /dev/null
@@ -1,1048 +0,0 @@
-from functools import reduce, partial
-import inspect
-import sys
-from operator import attrgetter, not_
-from importlib import import_module
-from types import MethodType
-
-from .utils import no_default
-
-PYPY = hasattr(sys, 'pypy_version_info') and sys.version_info[0] > 2
-
-
-__all__ = ('identity', 'apply', 'thread_first', 'thread_last', 'memoize',
- 'compose', 'compose_left', 'pipe', 'complement', 'juxt', 'do',
- 'curry', 'flip', 'excepts')
-
-PYPY = hasattr(sys, 'pypy_version_info')
-
-
-def identity(x):
- """ Identity function. Return x
-
- >>> identity(3)
- 3
- """
- return x
-
-
-def apply(*func_and_args, **kwargs):
- """ Applies a function and returns the results
-
- >>> def double(x): return 2*x
- >>> def inc(x): return x + 1
- >>> apply(double, 5)
- 10
-
- >>> tuple(map(apply, [double, inc, double], [10, 500, 8000]))
- (20, 501, 16000)
- """
- if not func_and_args:
- raise TypeError('func argument is required')
- func, args = func_and_args[0], func_and_args[1:]
- return func(*args, **kwargs)
-
-
-def thread_first(val, *forms):
- """ Thread value through a sequence of functions/forms
-
- >>> def double(x): return 2*x
- >>> def inc(x): return x + 1
- >>> thread_first(1, inc, double)
- 4
-
- If the function expects more than one input you can specify those inputs
- in a tuple. The value is used as the first input.
-
- >>> def add(x, y): return x + y
- >>> def pow(x, y): return x**y
- >>> thread_first(1, (add, 4), (pow, 2)) # pow(add(1, 4), 2)
- 25
-
- So in general
- thread_first(x, f, (g, y, z))
- expands to
- g(f(x), y, z)
-
- See Also:
- thread_last
- """
- def evalform_front(val, form):
- if callable(form):
- return form(val)
- if isinstance(form, tuple):
- func, args = form[0], form[1:]
- args = (val,) + args
- return func(*args)
- return reduce(evalform_front, forms, val)
-
-
-def thread_last(val, *forms):
- """ Thread value through a sequence of functions/forms
-
- >>> def double(x): return 2*x
- >>> def inc(x): return x + 1
- >>> thread_last(1, inc, double)
- 4
-
- If the function expects more than one input you can specify those inputs
- in a tuple. The value is used as the last input.
-
- >>> def add(x, y): return x + y
- >>> def pow(x, y): return x**y
- >>> thread_last(1, (add, 4), (pow, 2)) # pow(2, add(4, 1))
- 32
-
- So in general
- thread_last(x, f, (g, y, z))
- expands to
- g(y, z, f(x))
-
- >>> def iseven(x):
- ... return x % 2 == 0
- >>> list(thread_last([1, 2, 3], (map, inc), (filter, iseven)))
- [2, 4]
-
- See Also:
- thread_first
- """
- def evalform_back(val, form):
- if callable(form):
- return form(val)
- if isinstance(form, tuple):
- func, args = form[0], form[1:]
- args = args + (val,)
- return func(*args)
- return reduce(evalform_back, forms, val)
-
-
-def instanceproperty(fget=None, fset=None, fdel=None, doc=None, classval=None):
- """ Like @property, but returns ``classval`` when used as a class attribute
-
- >>> class MyClass(object):
- ... '''The class docstring'''
- ... @instanceproperty(classval=__doc__)
- ... def __doc__(self):
- ... return 'An object docstring'
- ... @instanceproperty
- ... def val(self):
- ... return 42
- ...
- >>> MyClass.__doc__
- 'The class docstring'
- >>> MyClass.val is None
- True
- >>> obj = MyClass()
- >>> obj.__doc__
- 'An object docstring'
- >>> obj.val
- 42
- """
- if fget is None:
- return partial(instanceproperty, fset=fset, fdel=fdel, doc=doc,
- classval=classval)
- return InstanceProperty(fget=fget, fset=fset, fdel=fdel, doc=doc,
- classval=classval)
-
-
-class InstanceProperty(property):
- """ Like @property, but returns ``classval`` when used as a class attribute
-
- Should not be used directly. Use ``instanceproperty`` instead.
- """
- def __init__(self, fget=None, fset=None, fdel=None, doc=None,
- classval=None):
- self.classval = classval
- property.__init__(self, fget=fget, fset=fset, fdel=fdel, doc=doc)
-
- def __get__(self, obj, type=None):
- if obj is None:
- return self.classval
- return property.__get__(self, obj, type)
-
- def __reduce__(self):
- state = (self.fget, self.fset, self.fdel, self.__doc__, self.classval)
- return InstanceProperty, state
-
-
-class curry(object):
- """ Curry a callable function
-
- Enables partial application of arguments through calling a function with an
- incomplete set of arguments.
-
- >>> def mul(x, y):
- ... return x * y
- >>> mul = curry(mul)
-
- >>> double = mul(2)
- >>> double(10)
- 20
-
- Also supports keyword arguments
-
- >>> @curry # Can use curry as a decorator
- ... def f(x, y, a=10):
- ... return a * (x + y)
-
- >>> add = f(a=1)
- >>> add(2, 3)
- 5
-
- See Also:
- toolz.curried - namespace of curried functions
- https://toolz.readthedocs.io/en/latest/curry.html
- """
- def __init__(self, *args, **kwargs):
- if not args:
- raise TypeError('__init__() takes at least 2 arguments (1 given)')
- func, args = args[0], args[1:]
- if not callable(func):
- raise TypeError("Input must be callable")
-
- # curry- or functools.partial-like object? Unpack and merge arguments
- if (
- hasattr(func, 'func')
- and hasattr(func, 'args')
- and hasattr(func, 'keywords')
- and isinstance(func.args, tuple)
- ):
- _kwargs = {}
- if func.keywords:
- _kwargs.update(func.keywords)
- _kwargs.update(kwargs)
- kwargs = _kwargs
- args = func.args + args
- func = func.func
-
- if kwargs:
- self._partial = partial(func, *args, **kwargs)
- else:
- self._partial = partial(func, *args)
-
- self.__doc__ = getattr(func, '__doc__', None)
- self.__name__ = getattr(func, '__name__', '')
- self.__module__ = getattr(func, '__module__', None)
- self.__qualname__ = getattr(func, '__qualname__', None)
- self._sigspec = None
- self._has_unknown_args = None
-
- @instanceproperty
- def func(self):
- return self._partial.func
-
- @instanceproperty
- def __signature__(self):
- sig = inspect.signature(self.func)
- args = self.args or ()
- keywords = self.keywords or {}
- if is_partial_args(self.func, args, keywords, sig) is False:
- raise TypeError('curry object has incorrect arguments')
-
- params = list(sig.parameters.values())
- skip = 0
- for param in params[:len(args)]:
- if param.kind == param.VAR_POSITIONAL:
- break
- skip += 1
-
- kwonly = False
- newparams = []
- for param in params[skip:]:
- kind = param.kind
- default = param.default
- if kind == param.VAR_KEYWORD:
- pass
- elif kind == param.VAR_POSITIONAL:
- if kwonly:
- continue
- elif param.name in keywords:
- default = keywords[param.name]
- kind = param.KEYWORD_ONLY
- kwonly = True
- else:
- if kwonly:
- kind = param.KEYWORD_ONLY
- if default is param.empty:
- default = no_default
- newparams.append(param.replace(default=default, kind=kind))
-
- return sig.replace(parameters=newparams)
-
- @instanceproperty
- def args(self):
- return self._partial.args
-
- @instanceproperty
- def keywords(self):
- return self._partial.keywords
-
- @instanceproperty
- def func_name(self):
- return self.__name__
-
- def __str__(self):
- return str(self.func)
-
- def __repr__(self):
- return repr(self.func)
-
- def __hash__(self):
- return hash((self.func, self.args,
- frozenset(self.keywords.items()) if self.keywords
- else None))
-
- def __eq__(self, other):
- return (isinstance(other, curry) and self.func == other.func and
- self.args == other.args and self.keywords == other.keywords)
-
- def __ne__(self, other):
- return not self.__eq__(other)
-
- def __call__(self, *args, **kwargs):
- try:
- return self._partial(*args, **kwargs)
- except TypeError as exc:
- if self._should_curry(args, kwargs, exc):
- return self.bind(*args, **kwargs)
- raise
-
- def _should_curry(self, args, kwargs, exc=None):
- func = self.func
- args = self.args + args
- if self.keywords:
- kwargs = dict(self.keywords, **kwargs)
- if self._sigspec is None:
- sigspec = self._sigspec = _sigs.signature_or_spec(func)
- self._has_unknown_args = has_varargs(func, sigspec) is not False
- else:
- sigspec = self._sigspec
-
- if is_partial_args(func, args, kwargs, sigspec) is False:
- # Nothing can make the call valid
- return False
- elif self._has_unknown_args:
- # The call may be valid and raised a TypeError, but we curry
- # anyway because the function may have `*args`. This is useful
- # for decorators with signature `func(*args, **kwargs)`.
- return True
- elif not is_valid_args(func, args, kwargs, sigspec):
- # Adding more arguments may make the call valid
- return True
- else:
- # There was a genuine TypeError
- return False
-
- def bind(self, *args, **kwargs):
- return type(self)(self, *args, **kwargs)
-
- def call(self, *args, **kwargs):
- return self._partial(*args, **kwargs)
-
- def __get__(self, instance, owner):
- if instance is None:
- return self
- return curry(self, instance)
-
- def __reduce__(self):
- func = self.func
- modname = getattr(func, '__module__', None)
- qualname = getattr(func, '__qualname__', None)
- if qualname is None: # pragma: no cover
- qualname = getattr(func, '__name__', None)
- is_decorated = None
- if modname and qualname:
- attrs = []
- obj = import_module(modname)
- for attr in qualname.split('.'):
- if isinstance(obj, curry):
- attrs.append('func')
- obj = obj.func
- obj = getattr(obj, attr, None)
- if obj is None:
- break
- attrs.append(attr)
- if isinstance(obj, curry) and obj.func is func:
- is_decorated = obj is self
- qualname = '.'.join(attrs)
- func = '%s:%s' % (modname, qualname)
-
- # functools.partial objects can't be pickled
- userdict = tuple((k, v) for k, v in self.__dict__.items()
- if k not in ('_partial', '_sigspec'))
- state = (type(self), func, self.args, self.keywords, userdict,
- is_decorated)
- return _restore_curry, state
-
-
-def _restore_curry(cls, func, args, kwargs, userdict, is_decorated):
- if isinstance(func, str):
- modname, qualname = func.rsplit(':', 1)
- obj = import_module(modname)
- for attr in qualname.split('.'):
- obj = getattr(obj, attr)
- if is_decorated:
- return obj
- func = obj.func
- obj = cls(func, *args, **(kwargs or {}))
- obj.__dict__.update(userdict)
- return obj
-
-
-@curry
-def memoize(func, cache=None, key=None):
- """ Cache a function's result for speedy future evaluation
-
- Considerations:
- Trades memory for speed.
- Only use on pure functions.
-
- >>> def add(x, y): return x + y
- >>> add = memoize(add)
-
- Or use as a decorator
-
- >>> @memoize
- ... def add(x, y):
- ... return x + y
-
- Use the ``cache`` keyword to provide a dict-like object as an initial cache
-
- >>> @memoize(cache={(1, 2): 3})
- ... def add(x, y):
- ... return x + y
-
- Note that the above works as a decorator because ``memoize`` is curried.
-
- It is also possible to provide a ``key(args, kwargs)`` function that
- calculates keys used for the cache, which receives an ``args`` tuple and
- ``kwargs`` dict as input, and must return a hashable value. However,
- the default key function should be sufficient most of the time.
-
- >>> # Use key function that ignores extraneous keyword arguments
- >>> @memoize(key=lambda args, kwargs: args)
- ... def add(x, y, verbose=False):
- ... if verbose:
- ... print('Calculating %s + %s' % (x, y))
- ... return x + y
- """
- if cache is None:
- cache = {}
-
- try:
- may_have_kwargs = has_keywords(func) is not False
- # Is unary function (single arg, no variadic argument or keywords)?
- is_unary = is_arity(1, func)
- except TypeError: # pragma: no cover
- may_have_kwargs = True
- is_unary = False
-
- if key is None:
- if is_unary:
- def key(args, kwargs):
- return args[0]
- elif may_have_kwargs:
- def key(args, kwargs):
- return (
- args or None,
- frozenset(kwargs.items()) if kwargs else None,
- )
- else:
- def key(args, kwargs):
- return args
-
- def memof(*args, **kwargs):
- k = key(args, kwargs)
- try:
- return cache[k]
- except TypeError:
- raise TypeError("Arguments to memoized function must be hashable")
- except KeyError:
- cache[k] = result = func(*args, **kwargs)
- return result
-
- try:
- memof.__name__ = func.__name__
- except AttributeError:
- pass
- memof.__doc__ = func.__doc__
- memof.__wrapped__ = func
- return memof
-
-
-class Compose(object):
- """ A composition of functions
-
- See Also:
- compose
- """
- __slots__ = 'first', 'funcs'
-
- def __init__(self, funcs):
- funcs = tuple(reversed(funcs))
- self.first = funcs[0]
- self.funcs = funcs[1:]
-
- def __call__(self, *args, **kwargs):
- ret = self.first(*args, **kwargs)
- for f in self.funcs:
- ret = f(ret)
- return ret
-
- def __getstate__(self):
- return self.first, self.funcs
-
- def __setstate__(self, state):
- self.first, self.funcs = state
-
- @instanceproperty(classval=__doc__)
- def __doc__(self):
- def composed_doc(*fs):
- """Generate a docstring for the composition of fs.
- """
- if not fs:
- # Argument name for the docstring.
- return '*args, **kwargs'
-
- return '{f}({g})'.format(f=fs[0].__name__, g=composed_doc(*fs[1:]))
-
- try:
- return (
- 'lambda *args, **kwargs: ' +
- composed_doc(*reversed((self.first,) + self.funcs))
- )
- except AttributeError:
- # One of our callables does not have a `__name__`, whatever.
- return 'A composition of functions'
-
- @property
- def __name__(self):
- try:
- return '_of_'.join(
- (f.__name__ for f in reversed((self.first,) + self.funcs))
- )
- except AttributeError:
- return type(self).__name__
-
- def __repr__(self):
- return '{.__class__.__name__}{!r}'.format(
- self, tuple(reversed((self.first, ) + self.funcs)))
-
- def __eq__(self, other):
- if isinstance(other, Compose):
- return other.first == self.first and other.funcs == self.funcs
- return NotImplemented
-
- def __ne__(self, other):
- equality = self.__eq__(other)
- return NotImplemented if equality is NotImplemented else not equality
-
- def __hash__(self):
- return hash(self.first) ^ hash(self.funcs)
-
- # Mimic the descriptor behavior of python functions.
- # i.e. let Compose be called as a method when bound to a class.
- # adapted from
- # docs.python.org/3/howto/descriptor.html#functions-and-methods
- def __get__(self, obj, objtype=None):
- return self if obj is None else MethodType(self, obj)
-
- # introspection with Signature is only possible from py3.3+
- @instanceproperty
- def __signature__(self):
- base = inspect.signature(self.first)
- last = inspect.signature(self.funcs[-1])
- return base.replace(return_annotation=last.return_annotation)
-
- __wrapped__ = instanceproperty(attrgetter('first'))
-
-
-def compose(*funcs):
- """ Compose functions to operate in series.
-
- Returns a function that applies other functions in sequence.
-
- Functions are applied from right to left so that
- ``compose(f, g, h)(x, y)`` is the same as ``f(g(h(x, y)))``.
-
- If no arguments are provided, the identity function (f(x) = x) is returned.
-
- >>> inc = lambda i: i + 1
- >>> compose(str, inc)(3)
- '4'
-
- See Also:
- compose_left
- pipe
- """
- if not funcs:
- return identity
- if len(funcs) == 1:
- return funcs[0]
- else:
- return Compose(funcs)
-
-
-def compose_left(*funcs):
- """ Compose functions to operate in series.
-
- Returns a function that applies other functions in sequence.
-
- Functions are applied from left to right so that
- ``compose_left(f, g, h)(x, y)`` is the same as ``h(g(f(x, y)))``.
-
- If no arguments are provided, the identity function (f(x) = x) is returned.
-
- >>> inc = lambda i: i + 1
- >>> compose_left(inc, str)(3)
- '4'
-
- See Also:
- compose
- pipe
- """
- return compose(*reversed(funcs))
-
-
-def pipe(data, *funcs):
- """ Pipe a value through a sequence of functions
-
- I.e. ``pipe(data, f, g, h)`` is equivalent to ``h(g(f(data)))``
-
- We think of the value as progressing through a pipe of several
- transformations, much like pipes in UNIX
-
- ``$ cat data | f | g | h``
-
- >>> double = lambda i: 2 * i
- >>> pipe(3, double, str)
- '6'
-
- See Also:
- compose
- compose_left
- thread_first
- thread_last
- """
- for func in funcs:
- data = func(data)
- return data
-
-
-def complement(func):
- """ Convert a predicate function to its logical complement.
-
- In other words, return a function that, for inputs that normally
- yield True, yields False, and vice-versa.
-
- >>> def iseven(n): return n % 2 == 0
- >>> isodd = complement(iseven)
- >>> iseven(2)
- True
- >>> isodd(2)
- False
- """
- return compose(not_, func)
-
-
-class juxt(object):
- """ Creates a function that calls several functions with the same arguments
-
- Takes several functions and returns a function that applies its arguments
- to each of those functions then returns a tuple of the results.
-
- Name comes from juxtaposition: the fact of two things being seen or placed
- close together with contrasting effect.
-
- >>> inc = lambda x: x + 1
- >>> double = lambda x: x * 2
- >>> juxt(inc, double)(10)
- (11, 20)
- >>> juxt([inc, double])(10)
- (11, 20)
- """
- __slots__ = ['funcs']
-
- def __init__(self, *funcs):
- if len(funcs) == 1 and not callable(funcs[0]):
- funcs = funcs[0]
- self.funcs = tuple(funcs)
-
- def __call__(self, *args, **kwargs):
- return tuple(func(*args, **kwargs) for func in self.funcs)
-
- def __getstate__(self):
- return self.funcs
-
- def __setstate__(self, state):
- self.funcs = state
-
-
-def do(func, x):
- """ Runs ``func`` on ``x``, returns ``x``
-
- Because the results of ``func`` are not returned, only the side
- effects of ``func`` are relevant.
-
- Logging functions can be made by composing ``do`` with a storage function
- like ``list.append`` or ``file.write``
-
- >>> from toolz import compose
- >>> from toolz.curried import do
-
- >>> log = []
- >>> inc = lambda x: x + 1
- >>> inc = compose(inc, do(log.append))
- >>> inc(1)
- 2
- >>> inc(11)
- 12
- >>> log
- [1, 11]
- """
- func(x)
- return x
-
-
-@curry
-def flip(func, a, b):
- """ Call the function call with the arguments flipped
-
- This function is curried.
-
- >>> def div(a, b):
- ... return a // b
- ...
- >>> flip(div, 2, 6)
- 3
- >>> div_by_two = flip(div, 2)
- >>> div_by_two(4)
- 2
-
- This is particularly useful for built in functions and functions defined
- in C extensions that accept positional only arguments. For example:
- isinstance, issubclass.
-
- >>> data = [1, 'a', 'b', 2, 1.5, object(), 3]
- >>> only_ints = list(filter(flip(isinstance, int), data))
- >>> only_ints
- [1, 2, 3]
- """
- return func(b, a)
-
-
-def return_none(exc):
- """ Returns None.
- """
- return None
-
-
-class excepts(object):
- """A wrapper around a function to catch exceptions and
- dispatch to a handler.
-
- This is like a functional try/except block, in the same way that
- ifexprs are functional if/else blocks.
-
- Examples
- --------
- >>> excepting = excepts(
- ... ValueError,
- ... lambda a: [1, 2].index(a),
- ... lambda _: -1,
- ... )
- >>> excepting(1)
- 0
- >>> excepting(3)
- -1
-
- Multiple exceptions and default except clause.
- >>> excepting = excepts((IndexError, KeyError), lambda a: a[0])
- >>> excepting([])
- >>> excepting([1])
- 1
- >>> excepting({})
- >>> excepting({0: 1})
- 1
- """
- def __init__(self, exc, func, handler=return_none):
- self.exc = exc
- self.func = func
- self.handler = handler
-
- def __call__(self, *args, **kwargs):
- try:
- return self.func(*args, **kwargs)
- except self.exc as e:
- return self.handler(e)
-
- @instanceproperty(classval=__doc__)
- def __doc__(self):
- from textwrap import dedent
-
- exc = self.exc
- try:
- if isinstance(exc, tuple):
- exc_name = '(%s)' % ', '.join(
- map(attrgetter('__name__'), exc),
- )
- else:
- exc_name = exc.__name__
-
- return dedent(
- """\
- A wrapper around {inst.func.__name__!r} that will except:
- {exc}
- and handle any exceptions with {inst.handler.__name__!r}.
-
- Docs for {inst.func.__name__!r}:
- {inst.func.__doc__}
-
- Docs for {inst.handler.__name__!r}:
- {inst.handler.__doc__}
- """
- ).format(
- inst=self,
- exc=exc_name,
- )
- except AttributeError:
- return type(self).__doc__
-
- @property
- def __name__(self):
- exc = self.exc
- try:
- if isinstance(exc, tuple):
- exc_name = '_or_'.join(map(attrgetter('__name__'), exc))
- else:
- exc_name = exc.__name__
- return '%s_excepting_%s' % (self.func.__name__, exc_name)
- except AttributeError:
- return 'excepting'
-
-
-def _check_sigspec(sigspec, func, builtin_func, *builtin_args):
- if sigspec is None:
- try:
- sigspec = inspect.signature(func)
- except (ValueError, TypeError) as e:
- sigspec = e
- if isinstance(sigspec, ValueError):
- return None, builtin_func(*builtin_args)
- elif not isinstance(sigspec, inspect.Signature):
- if (
- func in _sigs.signatures
- and ((
- hasattr(func, '__signature__')
- and hasattr(func.__signature__, '__get__')
- ))
- ):
- val = builtin_func(*builtin_args)
- return None, val
- return None, False
- return sigspec, None
-
-
-if PYPY: # pragma: no cover
- _check_sigspec_orig = _check_sigspec
-
- def _check_sigspec(sigspec, func, builtin_func, *builtin_args):
- # PyPy may lie, so use our registry for builtins instead
- if func in _sigs.signatures:
- val = builtin_func(*builtin_args)
- return None, val
- return _check_sigspec_orig(sigspec, func, builtin_func, *builtin_args)
-
-
-_check_sigspec.__doc__ = """ \
-Private function to aid in introspection compatibly across Python versions.
-
-If a callable doesn't have a signature (Python 3) or an argspec (Python 2),
-the signature registry in toolz._signatures is used.
-"""
-
-
-def num_required_args(func, sigspec=None):
- sigspec, rv = _check_sigspec(sigspec, func, _sigs._num_required_args,
- func)
- if sigspec is None:
- return rv
- return sum(1 for p in sigspec.parameters.values()
- if p.default is p.empty
- and p.kind in (p.POSITIONAL_OR_KEYWORD, p.POSITIONAL_ONLY))
-
-
-def has_varargs(func, sigspec=None):
- sigspec, rv = _check_sigspec(sigspec, func, _sigs._has_varargs, func)
- if sigspec is None:
- return rv
- return any(p.kind == p.VAR_POSITIONAL
- for p in sigspec.parameters.values())
-
-
-def has_keywords(func, sigspec=None):
- sigspec, rv = _check_sigspec(sigspec, func, _sigs._has_keywords, func)
- if sigspec is None:
- return rv
- return any(p.default is not p.empty
- or p.kind in (p.KEYWORD_ONLY, p.VAR_KEYWORD)
- for p in sigspec.parameters.values())
-
-
-def is_valid_args(func, args, kwargs, sigspec=None):
- sigspec, rv = _check_sigspec(sigspec, func, _sigs._is_valid_args,
- func, args, kwargs)
- if sigspec is None:
- return rv
- try:
- sigspec.bind(*args, **kwargs)
- except TypeError:
- return False
- return True
-
-
-def is_partial_args(func, args, kwargs, sigspec=None):
- sigspec, rv = _check_sigspec(sigspec, func, _sigs._is_partial_args,
- func, args, kwargs)
- if sigspec is None:
- return rv
- try:
- sigspec.bind_partial(*args, **kwargs)
- except TypeError:
- return False
- return True
-
-
-def is_arity(n, func, sigspec=None):
- """ Does a function have only n positional arguments?
-
- This function relies on introspection and does not call the function.
- Returns None if validity can't be determined.
-
- >>> def f(x):
- ... return x
- >>> is_arity(1, f)
- True
- >>> def g(x, y=1):
- ... return x + y
- >>> is_arity(1, g)
- False
- """
- sigspec, rv = _check_sigspec(sigspec, func, _sigs._is_arity, n, func)
- if sigspec is None:
- return rv
- num = num_required_args(func, sigspec)
- if num is not None:
- num = num == n
- if not num:
- return False
- varargs = has_varargs(func, sigspec)
- if varargs:
- return False
- keywords = has_keywords(func, sigspec)
- if keywords:
- return False
- if num is None or varargs is None or keywords is None: # pragma: no cover
- return None
- return True
-
-
-num_required_args.__doc__ = """ \
-Number of required positional arguments
-
- This function relies on introspection and does not call the function.
- Returns None if validity can't be determined.
-
- >>> def f(x, y, z=3):
- ... return x + y + z
- >>> num_required_args(f)
- 2
- >>> def g(*args, **kwargs):
- ... pass
- >>> num_required_args(g)
- 0
- """
-
-has_varargs.__doc__ = """ \
-Does a function have variadic positional arguments?
-
- This function relies on introspection and does not call the function.
- Returns None if validity can't be determined.
-
- >>> def f(*args):
- ... return args
- >>> has_varargs(f)
- True
- >>> def g(**kwargs):
- ... return kwargs
- >>> has_varargs(g)
- False
- """
-
-has_keywords.__doc__ = """ \
-Does a function have keyword arguments?
-
- This function relies on introspection and does not call the function.
- Returns None if validity can't be determined.
-
- >>> def f(x, y=0):
- ... return x + y
-
- >>> has_keywords(f)
- True
- """
-
-is_valid_args.__doc__ = """ \
-Is ``func(*args, **kwargs)`` a valid function call?
-
- This function relies on introspection and does not call the function.
- Returns None if validity can't be determined.
-
- >>> def add(x, y):
- ... return x + y
-
- >>> is_valid_args(add, (1,), {})
- False
- >>> is_valid_args(add, (1, 2), {})
- True
- >>> is_valid_args(map, (), {})
- False
-
- **Implementation notes**
- Python 2 relies on ``inspect.getargspec``, which only works for
- user-defined functions. Python 3 uses ``inspect.signature``, which
- works for many more types of callables.
-
- Many builtins in the standard library are also supported.
- """
-
-is_partial_args.__doc__ = """ \
-Can partial(func, *args, **kwargs)(*args2, **kwargs2) be a valid call?
-
- Returns True *only* if the call is valid or if it is possible for the
- call to become valid by adding more positional or keyword arguments.
-
- This function relies on introspection and does not call the function.
- Returns None if validity can't be determined.
-
- >>> def add(x, y):
- ... return x + y
-
- >>> is_partial_args(add, (1,), {})
- True
- >>> is_partial_args(add, (1, 2), {})
- True
- >>> is_partial_args(add, (1, 2, 3), {})
- False
- >>> is_partial_args(map, (), {})
- True
-
- **Implementation notes**
- Python 2 relies on ``inspect.getargspec``, which only works for
- user-defined functions. Python 3 uses ``inspect.signature``, which
- works for many more types of callables.
-
- Many builtins in the standard library are also supported.
- """
-
-from . import _signatures as _sigs
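
A minimal sketch of how the public helpers defined in this file are typically combined, assuming the published `toolz` package (the deleted file is a vendored copy of it):

```python
from toolz import compose, curry, memoize, pipe

@curry
def add(x, y):
    return x + y

inc = add(1)                        # partial application via curry
double_then_inc = compose(inc, lambda x: x * 2)

assert inc(41) == 42
assert double_then_inc(5) == 11     # compose applies right to left
assert pipe(3, inc, str) == "4"     # pipe applies left to right

@memoize
def slow_square(n):
    return n * n

assert slow_square(9) == 81         # computed once...
assert slow_square(9) == 81         # ...then served from the cache
```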
diff --git a/spaces/qingxu98/gpt-academic/themes/theme.py b/spaces/qingxu98/gpt-academic/themes/theme.py
deleted file mode 100644
index 42ee75000d06156f445772c3301cdcbf7f3db411..0000000000000000000000000000000000000000
--- a/spaces/qingxu98/gpt-academic/themes/theme.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import gradio as gr
-from toolbox import get_conf
-THEME, = get_conf('THEME')
-
-def load_dynamic_theme(THEME):
- adjust_dynamic_theme = None
- if THEME == 'Chuanhu-Small-and-Beautiful':
- from .green import adjust_theme, advanced_css
- theme_declaration = "[Chuanhu-Small-and-Beautiful主题] "
- elif THEME == 'High-Contrast':
- from .contrast import adjust_theme, advanced_css
- theme_declaration = ""
- elif '/' in THEME:
- from .gradios import adjust_theme, advanced_css
- from .gradios import dynamic_set_theme
- adjust_dynamic_theme = dynamic_set_theme(THEME)
- theme_declaration = ""
- else:
- from .default import adjust_theme, advanced_css
- theme_declaration = ""
- return adjust_theme, advanced_css, theme_declaration, adjust_dynamic_theme
-
-adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(THEME)
\ No newline at end of file

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Happy New Year Download Movie Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Happy New Year Download Movie Torrent.md
deleted file mode 100644
index 47d3e509c71315ca35bc8a2cf0f9bf63dd75e9e1..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Happy New Year Download Movie Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Happy New Year download movie torrent Download ✪ https://geags.com/2uCrIH
-
- d5da3c52bf
-
-
-
diff --git a/spaces/r3gm/RVC_HF/train/process_ckpt.py b/spaces/r3gm/RVC_HF/train/process_ckpt.py
deleted file mode 100644
index e3c3dba6df4b4f71a4d0865cdc96241d17da8781..0000000000000000000000000000000000000000
--- a/spaces/r3gm/RVC_HF/train/process_ckpt.py
+++ /dev/null
@@ -1,259 +0,0 @@
-import torch, traceback, os, pdb, sys
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from collections import OrderedDict
-from i18n import I18nAuto
-
-i18n = I18nAuto()
-
-
-def savee(ckpt, sr, if_f0, name, epoch, version, hps):
- try:
- opt = OrderedDict()
- opt["weight"] = {}
- for key in ckpt.keys():
- if "enc_q" in key:
- continue
- opt["weight"][key] = ckpt[key].half()
- opt["config"] = [
- hps.data.filter_length // 2 + 1,
- 32,
- hps.model.inter_channels,
- hps.model.hidden_channels,
- hps.model.filter_channels,
- hps.model.n_heads,
- hps.model.n_layers,
- hps.model.kernel_size,
- hps.model.p_dropout,
- hps.model.resblock,
- hps.model.resblock_kernel_sizes,
- hps.model.resblock_dilation_sizes,
- hps.model.upsample_rates,
- hps.model.upsample_initial_channel,
- hps.model.upsample_kernel_sizes,
- hps.model.spk_embed_dim,
- hps.model.gin_channels,
- hps.data.sampling_rate,
- ]
- opt["info"] = "%sepoch" % epoch
- opt["sr"] = sr
- opt["f0"] = if_f0
- opt["version"] = version
- torch.save(opt, "weights/%s.pth" % name)
- return "Success."
- except:
- return traceback.format_exc()
-
-
-def show_info(path):
- try:
- a = torch.load(path, map_location="cpu")
- return "Epochs: %s\nSample rate: %s\nPitch guidance: %s\nRVC Version: %s" % (
- a.get("info", "None"),
- a.get("sr", "None"),
- a.get("f0", "None"),
- a.get("version", "None"),
- )
- except:
- return traceback.format_exc()
-
-
-def extract_small_model(path, name, sr, if_f0, info, version):
- try:
- ckpt = torch.load(path, map_location="cpu")
- if "model" in ckpt:
- ckpt = ckpt["model"]
- opt = OrderedDict()
- opt["weight"] = {}
- for key in ckpt.keys():
- if "enc_q" in key:
- continue
- opt["weight"][key] = ckpt[key].half()
- if sr == "40k":
- opt["config"] = [
- 1025,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 10, 2, 2],
- 512,
- [16, 16, 4, 4],
- 109,
- 256,
- 40000,
- ]
- elif sr == "48k":
- if version == "v1":
- opt["config"] = [
- 1025,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 6, 2, 2, 2],
- 512,
- [16, 16, 4, 4, 4],
- 109,
- 256,
- 48000,
- ]
- else:
- opt["config"] = [
- 1025,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [12, 10, 2, 2],
- 512,
- [24, 20, 4, 4],
- 109,
- 256,
- 48000,
- ]
- elif sr == "32k":
- if version == "v1":
- opt["config"] = [
- 513,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 4, 2, 2, 2],
- 512,
- [16, 16, 4, 4, 4],
- 109,
- 256,
- 32000,
- ]
- else:
- opt["config"] = [
- 513,
- 32,
- 192,
- 192,
- 768,
- 2,
- 6,
- 3,
- 0,
- "1",
- [3, 7, 11],
- [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
- [10, 8, 2, 2],
- 512,
- [20, 16, 4, 4],
- 109,
- 256,
- 32000,
- ]
- if info == "":
- info = "Extracted model."
- opt["info"] = info
- opt["version"] = version
- opt["sr"] = sr
- opt["f0"] = int(if_f0)
- torch.save(opt, "weights/%s.pth" % name)
- return "Success."
- except:
- return traceback.format_exc()
-
-
-def change_info(path, info, name):
- try:
- ckpt = torch.load(path, map_location="cpu")
- ckpt["info"] = info
- if name == "":
- name = os.path.basename(path)
- torch.save(ckpt, "weights/%s" % name)
- return "Success."
- except:
- return traceback.format_exc()
-
-
-def merge(path1, path2, alpha1, sr, f0, info, name, version):
- try:
-
- def extract(ckpt):
- a = ckpt["model"]
- opt = OrderedDict()
- opt["weight"] = {}
- for key in a.keys():
- if "enc_q" in key:
- continue
- opt["weight"][key] = a[key]
- return opt
-
- ckpt1 = torch.load(path1, map_location="cpu")
- ckpt2 = torch.load(path2, map_location="cpu")
- cfg = ckpt1["config"]
- if "model" in ckpt1:
- ckpt1 = extract(ckpt1)
- else:
- ckpt1 = ckpt1["weight"]
- if "model" in ckpt2:
- ckpt2 = extract(ckpt2)
- else:
- ckpt2 = ckpt2["weight"]
- if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())):
-            return "Failed to merge the models. The model architectures are not the same."
- opt = OrderedDict()
- opt["weight"] = {}
- for key in ckpt1.keys():
- # try:
- if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape:
- min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0])
- opt["weight"][key] = (
- alpha1 * (ckpt1[key][:min_shape0].float())
- + (1 - alpha1) * (ckpt2[key][:min_shape0].float())
- ).half()
- else:
- opt["weight"][key] = (
- alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float())
- ).half()
- # except:
- # pdb.set_trace()
- opt["config"] = cfg
- """
- if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000]
- elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000]
- elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000]
- """
- opt["sr"] = sr
- opt["f0"] = 1 if f0 else 0
- opt["version"] = version
- opt["info"] = info
- torch.save(opt, "weights/%s.pth" % name)
- return "Success."
- except:
- return traceback.format_exc()
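
The core of ``merge()`` above is a key-by-key linear interpolation of two checkpoints. A stripped-down sketch of that idea, using hypothetical tiny state dicts instead of real RVC checkpoints:

```python
import torch

def blend_state_dicts(sd_a, sd_b, alpha):
    """Return alpha * A + (1 - alpha) * B for every shared weight, stored as float16."""
    assert sorted(sd_a) == sorted(sd_b), "model architectures must match"
    return {
        key: (alpha * sd_a[key].float() + (1 - alpha) * sd_b[key].float()).half()
        for key in sd_a
    }

a = {"w": torch.ones(2, 2)}
b = {"w": torch.zeros(2, 2)}
print(blend_state_dicts(a, b, 0.25)["w"])  # a 2x2 tensor filled with 0.25
```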
diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/visualize/layouts/plot_landmarks.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/visualize/layouts/plot_landmarks.py
deleted file mode 100644
index 4d91432a024ba4071166b0d43fb9f2f020640641..0000000000000000000000000000000000000000
--- a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/visualize/layouts/plot_landmarks.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import numpy as np
-
-# Demo libs
-from spiga.demo.visualize.layouts.plot_basics import BasicLayout
-
-
-class LandmarkLayout(BasicLayout):
-
- BasicLayout.thickness_dft['lnd'] = 3
-
- def __init__(self):
- super().__init__()
-
- def draw_landmarks(self, image, landmarks, visible=None, mask=None,
- thick=None, colors=(BasicLayout.colors['green'], BasicLayout.colors['red'])):
-
- # Initialize variables if need it
- if visible is None:
- visible = np.ones(len(landmarks))
- if mask is None:
- mask = np.ones(len(landmarks))
- if thick is None:
- thick = self.thickness['lnd']
-
- if isinstance(landmarks, (list, tuple)):
- landmarks = np.array(landmarks)
- if isinstance(visible, (list, tuple)):
- visible = np.array(visible)
- if isinstance(mask, (list, tuple)):
- mask = np.array(mask)
-
- # Clean and split landmarks
- ldm_vis, ldm_notvis = self._split_lnd_by_vis(landmarks, visible, mask)
-
- # PIL images to OpenCV
- if image.shape[0] == 3:
- image = image.transpose(1, 2, 0)
-
- # Plot landmarks
- canvas = self.draw_circles(image, ldm_vis, color=colors[0], thick=thick)
- canvas = self.draw_circles(canvas, ldm_notvis, color=colors[1], thick=thick)
- return canvas
-
-    @staticmethod
- def _split_lnd_by_vis(landmarks, visible, mask):
- mask = np.array(mask, dtype=bool)
- visible = np.array(visible, dtype=bool)
- landmarks = landmarks[mask]
- visible = visible[mask]
- ldm_vis = landmarks[visible]
- not_visible = np.logical_not(visible)
- ldm_notvis = landmarks[not_visible]
- return ldm_vis, ldm_notvis
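
The visibility split performed by ``_split_lnd_by_vis`` above can be illustrated with plain NumPy arrays, without any SPIGA classes; the coordinates below are made up for the example.

```python
import numpy as np

landmarks = np.array([[10, 10], [20, 20], [30, 30], [40, 40]])
visible = np.array([1, 0, 1, 1], dtype=bool)
mask = np.array([1, 1, 1, 0], dtype=bool)      # last point excluded entirely

kept = landmarks[mask]
kept_visible = visible[mask]
print(kept[kept_visible])                      # drawn with the first color (visible)
print(kept[np.logical_not(kept_visible)])      # drawn with the second color (occluded)
```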
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/As 48 Leis Do Poder Livro Pdf Volume.md b/spaces/raedeXanto/academic-chatgpt-beta/As 48 Leis Do Poder Livro Pdf Volume.md
deleted file mode 100644
index 5ed6e226b79b91e4f141ddcd9c43dd81b8340e93..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/As 48 Leis Do Poder Livro Pdf Volume.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-As 48 Leis do Poder (The 48 Laws of Power): The Book That Teaches How to Manipulate People and Situations
-Have you ever wondered why some people manage to reach success while others are always being passed over? Would you like to learn how to use power in your favor without letting others dominate you? Then you need to know the book As 48 Leis do Poder, by Robert Greene.
-In this book the author reveals the strategies used by the great leaders of history, from Machiavelli to Napoleon, to win and hold on to power. He also shows how these laws can be applied in personal and professional life, so that you become more influential, persuasive and respected.
-as 48 leis do poder livro pdf volume Download ❤ https://tinourl.com/2uL0jx
-The 48 Laws of Power are based on the observation of human nature and on principles of psychology, philosophy and politics. They teach you to manipulate people and situations to reach your goals, with no regard for morals or ethics. Some of them are:
-
-Never outshine the master: make the people above you feel comfortably superior and you will reach the heights of power.
-Never put too much trust in friends; learn how to use enemies: friends will betray you more quickly, because they are driven by envy. Enemies, on the other hand, can be useful, because they have more to prove.
-Concentrate your forces: intensity always beats extension. Do not spread yourself across too many projects or activities. Focus on one goal at a time and give your best to achieve it.
-Use absence to increase respect and honor: too much presence lowers your value. If you have already created demand for yourself, withdraw for a while and make people miss you.
-Create compelling spectacles: appearances are everything. People are swayed by emotion and by the impression you make. Use images, symbols and dramatic gestures to seduce and impress your audience.
-
-If you want to know more about the other laws of power and how to apply them in practice, don't waste time and download the book in PDF right now. You will discover how to become a master of the art of manipulation and power.
-Click here to download the book As 48 Leis do Poder in PDF
-
-The 48 Laws of Power are not just a book but a survival manual for a competitive and cruel world. They can be used to protect yourself from your enemies' attacks, to take advantage of opportunities as they arise, to adapt to change and to steer situations in your favor.
-But be careful: using the laws of power can bring negative consequences, both for you and for others. You may become a cold, calculating and unscrupulous person. You may lose friends, allies and admirers. You may awaken the hatred, envy and revenge of those who feel wronged by you.
-That is why, before applying the laws of power, think carefully about your goals and the means you will use to reach them. Do not let greed, ambition or pride carry you away. Remember that power is a double-edged sword: it can cut in your favor or against you.
-If you want to learn to use power with wisdom and responsibility, download As 48 Leis do Poder in PDF right now and discover the secrets of the great masters of history.
-Click here to download the book As 48 Leis do Poder in PDF
-
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/rahgadda/bark-voice-generator/install.sh b/spaces/rahgadda/bark-voice-generator/install.sh
deleted file mode 100644
index 1ef0fac8216dae9e72500e4dbd39c00d41a4c661..0000000000000000000000000000000000000000
--- a/spaces/rahgadda/bark-voice-generator/install.sh
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/bin/bash
-
-# -- Remove Proxy
-unset http_proxy
-unset https_proxy
-
-# -- Install Libs
-yum install -y make gcc zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel tk-devel libffi-devel
-
-# -- Install IPTables
-yum install iptables-services -y
-systemctl start iptables
-systemctl enable iptables
-
-# -- Flushing iptables
-iptables -F
-
-# -- Allowing everything
-iptables -A FORWARD -j ACCEPT
-iptables -A INPUT -j ACCEPT
-iptables -A OUTPUT -j ACCEPT
-
-# -- Saving
-service iptables save
-systemctl restart iptables
-
-# -- Display Settings
-iptables -L -n
-
-# -- Add HTTP proxy
-export http_proxy="http://company.com:80"
-export https_proxy="http://company.com:80"
-export no_proxy=localhost,127.0.0.1
-
-# -- Install Python
-wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz
-tar zxfv Python-3.11.4.tgz
-rm Python-3.11.4.tgz
-cd Python-3.11.4
-./configure --prefix=$PWD/Python
-make
-make install
-# -- Fix permissions on the freshly installed tree (it only exists after make install)
-find ./Python -type d | xargs chmod 0755
-
-# -- Create Virtual Env
-mkdir -p -m777 demo
-python -m venv demo/migration
-source demo/migration/bin/activate
-
-# -- Installing Dependencies
-pip install --upgrade pip
-pip install -r requirements.txt
-
-nohup python voice.py > voice.log &
-
-# -- Exit Virtual Env
-deactivate
\ No newline at end of file
diff --git a/spaces/rajesh1729/question-answering-gradio/README.md b/spaces/rajesh1729/question-answering-gradio/README.md
deleted file mode 100644
index bf454a0e653adb0fa15daa47f931389e56682982..0000000000000000000000000000000000000000
--- a/spaces/rajesh1729/question-answering-gradio/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Question Answering Gradio
-emoji: 🐢
-colorFrom: gray
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/raravena80/trulensplay/style.css b/spaces/raravena80/trulensplay/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/raravena80/trulensplay/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/constants.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/constants.d.ts
deleted file mode 100644
index 208020dcbab4ebcd7955b2abcb7ae49185f5976e..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/constants.d.ts
+++ /dev/null
@@ -1,18 +0,0 @@
-/** @deprecated since v6.3.0 - use constants property exposed by the relevant module instead. */
-declare module 'constants' {
- import { constants as osConstants, SignalConstants } from 'node:os';
- import { constants as cryptoConstants } from 'node:crypto';
- import { constants as fsConstants } from 'node:fs';
-
- const exp: typeof osConstants.errno &
- typeof osConstants.priority &
- SignalConstants &
- typeof cryptoConstants &
- typeof fsConstants;
- export = exp;
-}
-
-declare module 'node:constants' {
- import constants = require('constants');
- export = constants;
-}
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/American Pie Beta House 720p [TOP] Download 122.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/American Pie Beta House 720p [TOP] Download 122.md
deleted file mode 100644
index 2794ecf33213f63173a22c169875f5f0d50937bd..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/American Pie Beta House 720p [TOP] Download 122.md
+++ /dev/null
@@ -1,44 +0,0 @@
-american pie beta house 720p download 122 Download ✫✫✫ https://urlgoal.com/2uCKnX
-
-
- Title: Download Video Download - Free Porn Movies, Sex Videos, HD Sex Clips (at theamericanpie.net)
-
- by the way, to install it I always play the film about halfway through
-
- and I had to restart 2-3 times
-
- start it
-
- because it is a native German film
-
- I even restarted the way the site suggests
-
- you can spare yourself the average
-
- in the film comparison only one
-
- why would anyone take the average out of a video?
-
- what does he want with the higher quality
-
- I understand that it is not a cleanly produced one, with good images and little effort put in; I don't know much about it, but go ahead
-
- I don't do that, and he is a German doctor after all
-
- he is even a physician
-
- so then it would surely be the same with many other privately made videos of that kind
-
- and I'm sorry if I'm not familiar to you here
-
- but a film also does me a lot more good, the kind I need for learning
-
- if you don't have a special region, you should sort in a video from YouTube.
-
- how?
-
- that gets a bit complicated, that's it, you have to figure out a kind of key to tell the individual ones from the others 4fefd39f24
-
-
-
diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/serve/__init__.py b/spaces/riccorl/relik-entity-linking/relik/inference/serve/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/riccorl/relik-entity-linking/relik/reader/data/relik_reader_sample.py b/spaces/riccorl/relik-entity-linking/relik/reader/data/relik_reader_sample.py
deleted file mode 100644
index 3d7570411fbb939f99d73d1cc3318b21552bc7c2..0000000000000000000000000000000000000000
--- a/spaces/riccorl/relik-entity-linking/relik/reader/data/relik_reader_sample.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import json
-from typing import Iterable
-
-
-class RelikReaderSample:
- def __init__(self, **kwargs):
- super().__setattr__("_d", {})
- self._d = kwargs
-
- def __getattribute__(self, item):
- return super(RelikReaderSample, self).__getattribute__(item)
-
- def __getattr__(self, item):
- if item.startswith("__") and item.endswith("__"):
- # this is likely some python library-specific variable (such as __deepcopy__ for copy)
- # better follow standard behavior here
- raise AttributeError(item)
- elif item in self._d:
- return self._d[item]
- else:
- return None
-
- def __setattr__(self, key, value):
- if key in self._d:
- self._d[key] = value
- else:
- super().__setattr__(key, value)
-
-    def to_jsons(self) -> str:
-        if "predicted_window_labels" in self._d:
-            new_obj = {
-                k: v
-                for k, v in self._d.items()
-                if k != "predicted_window_labels" and k != "span_title_probabilities"
-            }
-            new_obj["predicted_window_labels"] = [
-                [ss, se, pred_title]
-                for (ss, se), pred_title in self.predicted_window_labels_chars
-            ]
-            return json.dumps(new_obj)
-        else:
-            return json.dumps(self._d)
-
-
-def load_relik_reader_samples(path: str) -> Iterable[RelikReaderSample]:
- with open(path) as f:
- for line in f:
- jsonl_line = json.loads(line.strip())
- relik_reader_sample = RelikReaderSample(**jsonl_line)
- yield relik_reader_sample
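-
-# Minimal usage sketch (hypothetical field names):
-#   sample = RelikReaderSample(doc_id=0, text="Rome is the capital of Italy.")
-#   sample.text            -> "Rome is the capital of Italy."
-#   sample.not_set_field   -> None (unknown attributes fall back to None)
-#   sample.to_jsons()      -> '{"doc_id": 0, "text": "Rome is the capital of Italy."}'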
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download the Beatles Magical Mystery Tour 1967 in 1080p Blu-ray and DTS X264 Format.md b/spaces/rorallitri/biomedical-language-models/logs/Download the Beatles Magical Mystery Tour 1967 in 1080p Blu-ray and DTS X264 Format.md
deleted file mode 100644
index 24ea73674100eecbbfbfaec22368ba9b8e2e7dd7..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Download the Beatles Magical Mystery Tour 1967 in 1080p Blu-ray and DTS X264 Format.md
+++ /dev/null
@@ -1,6 +0,0 @@
-beatles magical mystery tour 1967 1080p bluray dts x264 publichd Download ————— https://tinurll.com/2uzlIK
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Free [WORK] Masterwriter 2.0 Activation Code.md b/spaces/rorallitri/biomedical-language-models/logs/Free [WORK] Masterwriter 2.0 Activation Code.md
deleted file mode 100644
index b32fc74e762322be2315eca9307c62d579f405cc..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Free [WORK] Masterwriter 2.0 Activation Code.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-If you have not yet received the email with the activation code for the free MasterWriter v2 software, you can also use the rhymes cheat generator here to get the code. You have to have an active Roblox account in order to use the code generator.
-Create any song, poem, novel or screenplay. With MasterWriter v2 you'll be a poet, novelist and screenwriter in no time. We've upgraded MasterWriter so it's easier to use than ever. MasterWriter is compatible with all devices. Get the free rhymes cheat generator now.
-free masterwriter 2.0 activation code DOWNLOAD ✔✔✔ https://tinurll.com/2uzo9t
- it is one of the best way to get free robo-bucks and robux. with roblox hack you can get free robux, rubies, and other roblox items. it is safe and easy to use. it is also a roblox cheat, you don’t need any programming skills. no jailbreak, root or cheat! just follow the instructions below and get free robux from the game. this is a free robux generator, that you can use to get free robux in your account. just follow the instructions, and you will be able to get free robux, no root, no cheat, no jailbreak. all you need is a real robux account and a working internet connection. get free robux today, you can use them to buy your favorite roblox items. another new improved roblox cheat, you can now get 100% free robux directly in your account, just follow the instructions below. the instructions below are simple to use, just follow them step by step and you will be able to get free robux to your account in no time. this is a free robux generator, you don’t need any programming skills. don’t worry, the instructions are completely safe, no jailbreak, root or cheat required. you will be able to use your free robux to buy your favorite items in the game. how does it work? the instructions below will get your robux out to your game account completely free of charge. all you need is a real robux account, and a working internet connection.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/ruslanmv/Video-Translator/README.md b/spaces/ruslanmv/Video-Translator/README.md
deleted file mode 100644
index 74135d421fc6ef39064601afdcf608bc1d7991ed..0000000000000000000000000000000000000000
--- a/spaces/ruslanmv/Video-Translator/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Video Translator
-emoji: 🐨
-colorFrom: pink
-colorTo: gray
-python_version: 3.8.9
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/russellc/BLIP/data/flickr30k_dataset.py b/spaces/russellc/BLIP/data/flickr30k_dataset.py
deleted file mode 100644
index 018ab387014ddaf554c4d3184cfc0e2ba8b2d487..0000000000000000000000000000000000000000
--- a/spaces/russellc/BLIP/data/flickr30k_dataset.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import os
-import json
-
-from torch.utils.data import Dataset
-from torchvision.datasets.utils import download_url
-
-from PIL import Image
-
-from data.utils import pre_caption
-
-class flickr30k_train(Dataset):
- def __init__(self, transform, image_root, ann_root, max_words=30, prompt=''):
- '''
- image_root (string): Root directory of images (e.g. flickr30k/)
- ann_root (string): directory to store the annotation file
- '''
- url = 'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_train.json'
- filename = 'flickr30k_train.json'
-
- download_url(url,ann_root)
-
- self.annotation = json.load(open(os.path.join(ann_root,filename),'r'))
- self.transform = transform
- self.image_root = image_root
- self.max_words = max_words
- self.prompt = prompt
-
- self.img_ids = {}
- n = 0
- for ann in self.annotation:
- img_id = ann['image_id']
- if img_id not in self.img_ids.keys():
- self.img_ids[img_id] = n
- n += 1
-
- def __len__(self):
- return len(self.annotation)
-
- def __getitem__(self, index):
-
- ann = self.annotation[index]
-
- image_path = os.path.join(self.image_root,ann['image'])
- image = Image.open(image_path).convert('RGB')
- image = self.transform(image)
-
- caption = self.prompt+pre_caption(ann['caption'], self.max_words)
-
- return image, caption, self.img_ids[ann['image_id']]
-
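-# Usage sketch (hypothetical paths and transform):
-#   from torchvision import transforms
-#   t = transforms.Compose([transforms.Resize((384, 384)), transforms.ToTensor()])
-#   ds = flickr30k_train(t, image_root='flickr30k/', ann_root='annotation/', prompt='a picture of ')
-#   image, caption, img_id = ds[0]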
-
-class flickr30k_retrieval_eval(Dataset):
- def __init__(self, transform, image_root, ann_root, split, max_words=30):
- '''
- image_root (string): Root directory of images (e.g. flickr30k/)
- ann_root (string): directory to store the annotation file
- split (string): val or test
- '''
- urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_val.json',
- 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_test.json'}
- filenames = {'val':'flickr30k_val.json','test':'flickr30k_test.json'}
-
- download_url(urls[split],ann_root)
-
- self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r'))
- self.transform = transform
- self.image_root = image_root
-
- self.text = []
- self.image = []
- self.txt2img = {}
- self.img2txt = {}
-
- txt_id = 0
- for img_id, ann in enumerate(self.annotation):
- self.image.append(ann['image'])
- self.img2txt[img_id] = []
- for i, caption in enumerate(ann['caption']):
- self.text.append(pre_caption(caption,max_words))
- self.img2txt[img_id].append(txt_id)
- self.txt2img[txt_id] = img_id
- txt_id += 1
-
- def __len__(self):
- return len(self.annotation)
-
- def __getitem__(self, index):
-
- image_path = os.path.join(self.image_root, self.annotation[index]['image'])
- image = Image.open(image_path).convert('RGB')
- image = self.transform(image)
-
- return image, index
\ No newline at end of file
diff --git a/spaces/ryanj/clothing_recommender/app.py b/spaces/ryanj/clothing_recommender/app.py
deleted file mode 100644
index 07185b55431d7b094c44d87042984bc3b7b1234c..0000000000000000000000000000000000000000
--- a/spaces/ryanj/clothing_recommender/app.py
+++ /dev/null
@@ -1,460 +0,0 @@
-import subprocess
-subprocess.run(['wget', '-N', '-q', 'https://matematica.unipv.it/gualandi/solvers/ipopt-linux64.zip'], stdout=subprocess.DEVNULL)
-subprocess.run(['unzip', '-o', '-q', 'ipopt-linux64'], stdout=subprocess.DEVNULL)
-
-from pyomo.environ import *
-import pickle
-
-dict_files = {
- 'banana republic': 'annotations/banana_annotations.pickle',
- 'h&m': 'annotations/hnm_annotations.pickle',
- 'uniqlo': 'annotations/uniqlo_annotations.pickle',
- 'street': 'annotations/street_annotations.pickle',
- 'abercrombie women': 'annotations/abercrombie_women.pickle',
- 'aritzia': 'annotations/aritzia.pickle',
- 'dynamite': 'annotations/dynamite.pickle',
-}
-
-style_gender = {
- 'banana republic': 'male',
- 'h&m': 'male',
- 'uniqlo': 'male',
- 'street': 'male',
- 'abercrombie women': 'female',
- 'aritzia': 'female',
- 'dynamite': 'female',
-}
-
-gender_article_keys = {
- 'male': ['shirt', 'pants_or_shorts', 'hoodie_or_jacket', 'hat', 'shoes'],
- 'female': ['top', 'bottom', 'dress', 'hoodie_or_jacket', 'hat', 'shoes']
-}
-
-style_annotations_dict = {}
-
-def get_annotations(dict_file):
- with open(dict_file, 'rb') as f:
- annotations = pickle.load(f)
-
- for a in annotations:
- if not( ( a.get('shirt', '') and a.get('pants_or_shorts', '') ) or
- ( a.get('top', '') and a.get('bottom', '') ) or
- a.get('dress', '')
- ):
- print( dict_file, '\n', a )
-
- filtered_annotations = [ a for a in annotations if ( (a.get('shirt', '') and a.get('pants_or_shorts', '')) or (
- a.get('gender', '') == 'female' ) ) ]
- return filtered_annotations
-
-style_annotations_dict = {
- k: get_annotations(file_path) for k, file_path in dict_files.items()
-}
-
-import collections
-from collections import defaultdict
-
-optional = [False, False, True, True, False]
-article_keys = ['shirt', 'pants_or_shorts', 'hoodie_or_jacket', 'hat', 'shoes']
-
-def annotation_to_tuple(annotation):
- t = tuple( annotation[key] for key in article_keys )
- return t
-
-def remove_owned_articles(annotations_tuple, clothes_owned):
- return tuple( article if article not in clothes_owned else ''
- for article in annotations_tuple )
-
-def annotations_value(annotation_tuple_count, clothes_owned):
- new_scores = {}
- for annotation_tuple, count in annotation_tuple_count.items():
- num_missing = 0
- for clothing in annotation_tuple:
-
- if not (clothing == '' or clothing in clothes_owned):
- num_missing += 1
- if num_missing > 0:
- new_scores[annotation_tuple] = count/num_missing
- return new_scores
-
-def is_outfit_subset(smaller_outfit, larger_outfit):
- if smaller_outfit == larger_outfit:
- return False
-    for i in range(len(larger_outfit)):  # assume both are tuples of fixed size
- if smaller_outfit[i] != '' and smaller_outfit[i] != larger_outfit[i]:
- return False
- return True
-
-def most_outfits_helper(annotations, capacity=5, clothes_owned=[]):
- annotations = [ annotation_to_tuple(e) for e in annotations ]
- outfit_count = dict(collections.Counter(annotations))
-
- outfits = list(outfit_count.keys())
- for small_outfit in outfit_count:
- for larger_outfit in outfit_count:
- if is_outfit_subset(small_outfit, larger_outfit):
- outfit_count[small_outfit] += outfit_count[larger_outfit]
-
- clothes_owned += ['']
- best_outfits = most_outfits(outfit_count, capacity, set(clothes_owned))
- best_outfits.remove('')
- return best_outfits
-
-# consider every image in the dataset
-# each image has an outfit and casts a vote for that outfit
-# a outfit is a tuple of pants_or_short + shirt + jacket_or_hoodie?
-# images with shoes or hats have an additional vote for the shoes and hats
-# greedily pick the clothes with the highest votes
-# then remove those clothes from all outfits in that dataset and recount
-# can be solved as an integer programming problem
-def most_outfits(annotation_tuple_count, capacity, clothes_owned):
- if capacity == 0 :
- return clothes_owned
-
- # merge counts based on new keys
- updated_annotations = defaultdict(int)
- for tup, count in annotation_tuple_count.items():
- updated_annotations[ remove_owned_articles(tup, clothes_owned) ] += count
- annotation_tuple_count = updated_annotations
-
- outfits_scores = annotations_value(annotation_tuple_count, clothes_owned)
- outfits_scores = sorted(outfits_scores.items(), key=lambda x:-x[1])
- num_new = 1
- for outfit, score in outfits_scores:
- clothes_proposed = clothes_owned.union(set(outfit))
- num_new = len(clothes_proposed) - len(clothes_owned)
- if num_new <= capacity:
- clothes_owned = clothes_proposed
- break
-
- return most_outfits( annotation_tuple_count, capacity-num_new, clothes_owned )
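-
-# Tiny worked example (hypothetical counts; 5-slot tuples follow article_keys order):
-#   counts = {('white tee', 'jeans', '', '', ''): 3,
-#             ('white tee', 'black pants', '', '', ''): 2}
-#   most_outfits(counts, capacity=3, clothes_owned={''})
-#   -> the first pass picks 'white tee' + 'jeans' (3 votes / 2 missing pieces),
-#      and the recursive pass adds 'black pants' with the remaining capacity of 1.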
-
-from functools import reduce
-
-def cover_style_ip(annotations, capacity=10, clothes_owned=[], mask=None, solver='ipopt'):
- """Use integer program to find the set of clothes that makes as many outfits as possible.
- annotations: List[ Dict ], contains maps from clothing categories to string types
- capacity: int, number of new clothes to find
- clothes_owned: List[ Str ], iterable of strings of clothing types
- mask: Optional( List[ Str ] ), optional iterable of categories of clothes to consider.
- Uses all clothing types by default.
- solver: Str, the nonlinear optimization solver to use for max bool sat
- """
- unique_clothes = set()
- interested_clothing_types = article_keys if mask is None else mask
- for a in annotations:
- for key in interested_clothing_types:
- unique_clothes.add( a[key] )
- if '' in unique_clothes:
- unique_clothes.remove('')
-
- model = ConcreteModel()
-
- # 1. Create the variables we want to optimize
- clothing_literals_dict = {clothing: Var(within=Binary) for clothing in unique_clothes}
-
- # set literals to true for clothes already owned
- for clothing in clothes_owned:
- clothing_literals_dict[clothing] = 1
-
- # capacity changes from number of new clothes to total clothes
- capacity += len(clothes_owned)
-
- capacity_constraint = Constraint(expr=sum(list(clothing_literals_dict.values())) <= capacity)
-
- outfit_clauses = []
- for a in annotations:
- # get the clothing literal if it exists, otherwise it's masked and say it's absent
- outfit_literals = [ clothing_literals_dict.get(a[key], 0) for key in article_keys if a[key] != '' ]
- outfit_clauses += [ reduce(lambda x,y: x*y, outfit_literals) ]
-
- # 3. Maximize the objective function
- objective = Objective( expr=sum( outfit_clauses ), sense=maximize)
-
- for name, literal in clothing_literals_dict.items():
- setattr(model, name, literal)
- model.capacity_constraint = capacity_constraint
- model.objective = objective
-
- SolverFactory(solver).solve(model)
-
- basis_clothes = [ name for name, literal in clothing_literals_dict.items() if not isinstance(literal, int) and round(literal()) ]
- return basis_clothes
-
-def annotation_str(annotation):
- output = ""
- output += annotation['hat']
- output += ' + ' + annotation['shirt'] if output and annotation['shirt'] else annotation['shirt']
- output += ' + ' + annotation['hoodie_or_jacket'] if output and annotation['hoodie_or_jacket'] else annotation['hoodie_or_jacket']
- output += ' + ' + annotation['pants_or_shorts'] if output and annotation['pants_or_shorts'] else annotation['pants_or_shorts']
- output += ' + ' + annotation['shoes'] if output and annotation['shoes'] else annotation['shoes']
- # output += ' ' + annotation['url'] if annotation.get('url') else ''
- return output
-
-def annotation_str(annotation):
- gender = annotation.get('gender', 'male')
- output = ""
- for k in gender_article_keys[gender]:
- output += ' + ' + annotation[k] if output and annotation.get(k) else annotation.get(k, '')
- return output
-
-
-def annotation_to_key(annotation):
- gender = annotation.get('gender', 'male')
- output = ""
- if gender == 'male':
- output += ' + ' + annotation['shirt'] if output and annotation['shirt'] else annotation['shirt']
- output += ' + ' + annotation['hoodie_or_jacket'] if output and annotation['hoodie_or_jacket'] else annotation['hoodie_or_jacket']
- output += ' + ' + annotation['pants_or_shorts'] if output and annotation['pants_or_shorts'] else annotation['pants_or_shorts']
- else:
- useful_keys = ['dress', 'top', 'bottom', 'hoodie_or_jacket']
- for k in useful_keys:
- output += ' + ' + annotation[k] if output and annotation.get(k) else annotation.get(k, '')
- return output
-
-
-def get_num_outfits(annotations, articles):
- outfits = set()
- for e in annotations:
- if (sum([(e[key] == '' or e[key] in articles ) for key in article_keys]) == len(article_keys)
- # and e['shirt'] and e['pants_or_shorts']
- ):
- str_form = annotation_to_key(e) # ignore +- hat, shoes otherwise use annotation_str
- outfits.add(str_form)
- return sorted(list(outfits))
-
-def get_outfit_urls(annotations, outfits):
- outfit_urls = defaultdict(list)
-
- for e in annotations:
- str_form = annotation_to_key(e)
- if e.get('url') and str_form in outfits:
- outfit_urls[str_form] += [ e['url'] ]
- return dict(outfit_urls)
-
-def cover_style(annotations, capacity=10, clothes_owned=[]):
- clothes = list(cover_style_ip(annotations, capacity=capacity, clothes_owned=clothes_owned))
- reachable_outfits = get_num_outfits(annotations, set(clothes+clothes_owned) )
-
- if len(clothes_owned) == 0: # return basis outfits from scratch
- return clothes, get_outfit_urls(annotations, reachable_outfits)
-
- elif capacity == 0: # return reach of current clothes
- return clothes, get_outfit_urls(annotations, reachable_outfits)
-
- else: # capacity > 0 and len(clothes_owned) > 0, show new clothes and outfits
- new_clothes = [ c for c in clothes if c not in clothes_owned ]
- reachable_outfits_base = get_num_outfits(annotations, clothes_owned)
- new_outfits = [ o for o in reachable_outfits if o not in reachable_outfits_base ]
- return new_clothes, get_outfit_urls(annotations, new_outfits)
-
-
-def str_to_list(input):
- tokens = [ token.strip() for token in input.split(',') ]
- return [ t for t in tokens if t != '' ]
-
-CLOTHES_HEADER = f'## Most Effective Clothes'
-OUTFITS_HEADER = f'## Possible 👕😎 Outfits'
-
-def cover_style_helper(styles, capacity=10, clothes_owned=''):
-
- if len(styles) == 0:
- return f'{CLOTHES_HEADER}\nPlease pick at least one style from the left.' , OUTFITS_HEADER
-
- clothes_owned = str_to_list(clothes_owned)
- annotations = []
- for style in styles:
- annotations += style_annotations_dict[style][:500]
-
- if len(styles) == 1: # hack for h&m having wayyyyy more examples than other brands
- annotations = style_annotations_dict[style]
-
- clothes, outfit_urls = cover_style(annotations, capacity, clothes_owned)
- clothes_str = f'{CLOTHES_HEADER}\n'
- for c in clothes:
- clothes_str += f'* {c}\n'
-
- outfits_str = f'{OUTFITS_HEADER} ({len(outfit_urls)})\n'
-
- for outfit, urls in outfit_urls.items():
- outfits_str += f'1. {str(outfit)}: '
- for i, u in enumerate(urls):
- outfits_str += f'[{i+1}] ' # f'[\[{i}\]]({u})'
- outfits_str += '\n'
- return clothes_str, outfits_str
-
-# article_keys = gender_article_keys['male']
-# print(cover_style_helper( ['banana republic'] ))
-
-def cover_style_helper_wrapper(markdown, styles, capacity=10, clothes_owned=''):
- if len(styles) > 0:
- global article_keys
- gender = style_gender[styles[0]]
- article_keys = gender_article_keys[gender]
- return cover_style_helper(styles, capacity, clothes_owned)
-
-def filter_test(annotation, filter_set):
- for f in filter_set:
- num_occur = sum([1 for key in article_keys+['caption'] if f in annotation[key]])
- if num_occur == 0:
- return False
- return True
-
-import gradio as gr
-
-def change_gallery(style_choice, start_from=0, text_filter=''):
- global article_keys
- gender = style_gender[style_choice]
- article_keys = gender_article_keys[gender]
-
- chosen_annotations = style_annotations_dict[style_choice]
- if text_filter:
- text_filter = set([t.strip() for t in text_filter.split(',')])
- chosen_annotations = [ a for a in chosen_annotations if filter_test(a, text_filter) ]
- start_from = int(start_from/100*len(chosen_annotations))
- # print(len(chosen_annotations), [a['url'] for a in chosen_annotations[start_from:start_from+20]])
- return [a['url'] for a in chosen_annotations[start_from:start_from+20]]
-
-def update_styles(gender):
- global article_keys
- article_keys = gender_article_keys[gender]
-
- default_values = ["banana republic"] if gender == "male" else ["aritzia"]
- return gr.CheckboxGroup.update(choices=[k for k in style_annotations_dict.keys() if style_gender[k] == gender], value=default_values, label='Styles')
-
-
-
-
-
-article_keys = gender_article_keys['male']
-INTRO_MARKDOWN = """**Good clothes are the ones that you can use to make many outfits.**
-Finding stuff that works with your style and wardrobe is hard.
-This program helps you find the clothes that make as many fashionable outfits as possible,
-given your current wardrobe, style, and budget.
-
-"""
-
-with gr.Blocks() as demo:
- with gr.Tabs(selected=0):
- with gr.TabItem('Find the Literal Optimal Clothes', id=0):
- with gr.Box():
- default_clothes = """black dress pants,
-blue shirt,
-white sweater,
-black sweater,
-white t shirt,
-black t shirt,
-black shorts,
-white shirt,
-grey hoodie,
-black t shirt,
-gray sweater,
-black jacket,
-black sneakers,
-white sneakers,
-blue jeans,
-black baseball hat,
-black jeans"""
-
- with gr.Row():
- with gr.Column():
- gr.Markdown(INTRO_MARKDOWN)
- with gr.Row():
- with gr.Group():
- gender = gr.Radio(["male", "female"], value="male", label='gender')
- styles = gr.CheckboxGroup([k for k in style_annotations_dict.keys() if style_gender[k] == gender.value], value=["banana republic"], label='Styles')
- gender.change(update_styles, inputs=[gender], outputs=[styles])
- with gr.Group():
- capacity = gr.Number(5, precision=0, label='Number of clothes to recommend')
- clothes_owned = gr.Textbox(
- label="Clothes Owned",
- lines=3,
- value=default_clothes, #json.dumps(default_clothes),
- )
- with gr.Row():
- btn = gr.Button("Recommend Clothes")
-
- with gr.Row():
- clothes_markdown, outfits_markdown = cover_style_helper(styles.value, capacity.value, clothes_owned.value)
- clothes_recommended = gr.Markdown(clothes_markdown)
- possible_outfits = gr.Markdown(outfits_markdown)
-
- btn.click(cover_style_helper, inputs=[styles, capacity, clothes_owned], outputs=[clothes_recommended, possible_outfits])
-
- gr.Markdown("## 3 Different Example Uses")
- example_label = gr.Textbox('', label='Explanation', visible=False)
- gr.Examples(examples=[ ['Find the central clothes that make a style.', ['street'], 10, ''],
- ['Find new outfits hidden in your wardrobe.', ['h&m'], 0, 'white t shirt, grey t shirt, black pants, black shorts, black t shirt, blue jeans'],
- ['Find the clothes that best fit your existing wardrobe.', ['banana republic', 'street'], 5, 'white t shirt, grey t shirt, black pants, black shorts, black t shirt, blue jeans']],
- inputs=[example_label, styles, capacity, clothes_owned],
- outputs=[clothes_recommended, possible_outfits],
- fn=cover_style_helper_wrapper,
- cache_examples=False)
-
- EXPLORE_MARKDOWN = """# Explore Styles
-Hint: Click an image to enable arrow key browsing.
-Type clothes you own into the 'Outfit includes' filter to find outfit ideas that use them.
-"""
- with gr.TabItem('Explore Styles', id=1):
- with gr.Box():
- gr.Markdown(EXPLORE_MARKDOWN)
- with gr.Row():
- with gr.Column():
- bad_urls = ['dynamite', 'abercrombie women']
- styles = gr.Radio([ k for k in style_annotations_dict.keys() if k not in bad_urls ], value="uniqlo", label='Styles')
- start_from = gr.Slider(0, 100, label='Start from', value=0, step=10)
- with gr.Group():
- text_filter = gr.Textbox(value='white t shirt, jeans', label='Outfit includes')
- filter_button = gr.Button('Apply Text Filter')
- with gr.Column():
- outfits_gallery = gr.Gallery(label='Outfit Examples', value=change_gallery(styles.value, start_from.value, text_filter.value))
-
- styles.change(fn=change_gallery, inputs=[styles, start_from, text_filter], outputs=outfits_gallery)
- start_from.change(fn=change_gallery, inputs=[styles, start_from, text_filter], outputs=outfits_gallery)
- filter_button.click(fn=change_gallery, inputs=[styles, start_from, text_filter], outputs=outfits_gallery)
-
- with gr.TabItem('How it works', id=2):
- ABOUT_MARKDOWN = """
-# Why
-I don't really enjoy shopping; it takes a while to look at all the clothes,
-and I'm not sure if I should buy what I picked out.
-This program can tell me which clothes are the best for my wardrobe and style,
-so I don't have to worry about buying the wrong stuff.
-
-# How
-This program poses the problem as a nonlinear integer programming problem.
-The problem is to maximize the number of outfits, while constraining the number of clothes to be <= "this max you want to buy".
-Let a, b, c etc be binary literals that represent the use of clothing type a, b, c.
-These are clothing types like white dress shirt, or black hat.
-We maximize the nonlinear expression abc + ade + bec + ... similar to a disjunctive normal form,
-where each clause represents an outfit and is the product of clothing literals.
-Each outfit clause has value 1 iff every article of clothing is used in the solution.
-Pyomo is used to solve this optimization problem.
-
-This objective takes the form of something like,
-
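-maximize  Σ_o Π_{c ∈ o} x_c    subject to    Σ_c x_c ≤ K,   x_c ∈ {0, 1}
-
-(a sketch of the objective; o ranges over outfits, c over clothing types, and K is the max number of clothes to buy)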
-
-Hart, William E., Carl Laird, Jean-Paul Watson, David L. Woodruff, Gabriel A. Hackebeil, Bethany L. Nicholson, and John D. Siirola. Pyomo – Optimization Modeling in Python. Springer, 2017.
-
-
-# Caveats
-This model understands fashion from a macro level.
-It understands what a white shirt is, but is entirely blind to micro features like fit, brand, or shape.
-It also bins different kinds of clothing and colors together.
-So the model could mix up light grey sweatpants and dark grey chinos because they're both grey pants.
-
-There is also error introduced from bad image annotations, approximate solvers, and style dataset size.
-
-Hence it's important to look at the outfit images under Possible Outfits for a human touch!
-"""
- with gr.Box():
- gr.Markdown(ABOUT_MARKDOWN)
-
-
-article_keys = gender_article_keys['male']
-demo.launch(debug=False)
-
-
-
diff --git a/spaces/sblumenf/PDF-text-extractor/README.md b/spaces/sblumenf/PDF-text-extractor/README.md
deleted file mode 100644
index fe88f27ed8847864d79aaa1ffb7afc451286ad65..0000000000000000000000000000000000000000
--- a/spaces/sblumenf/PDF-text-extractor/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PDF Text Extractor
-emoji: 📉
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/scedlatioru/img-to-music/example/PS3 CFW 3.72 Jailbreak Tool V1.0.rar.rar.md b/spaces/scedlatioru/img-to-music/example/PS3 CFW 3.72 Jailbreak Tool V1.0.rar.rar.md
deleted file mode 100644
index 60fcf9e22be09fe330a9e60b689c605d2c7c2ec9..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/PS3 CFW 3.72 Jailbreak Tool V1.0.rar.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-PS3 CFW 3.72 Jailbreak Tool V1.0.rar.rar Download ->>> https://gohhs.com/2uEzj0
-
- 4fefd39f24
-
-
-
diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/time_estimator.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/time_estimator.py
deleted file mode 100644
index 8d429b0404b641472ff84701305c570cc26280b7..0000000000000000000000000000000000000000
--- a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/time_estimator.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import time
-
-
-class TimeEstimator:
- def __init__(self, total_iter, step_size):
- self.avg_time_window = [] # window-based average
- self.exp_avg_time = None # exponential moving average
- self.alpha = 0.7 # for exponential moving average
-
- self.last_time = time.time() # would not be accurate for the first iteration but well
- self.total_iter = total_iter
- self.step_size = step_size
-
- self.buffering_exp = True
-
- # call this at a fixed interval
- # does not have to be every step
- def update(self):
- curr_time = time.time()
- time_per_iter = curr_time - self.last_time
- self.last_time = curr_time
-
- self.avg_time_window.append(time_per_iter)
-
- if self.buffering_exp:
- if self.exp_avg_time is not None:
- # discard the first iteration call to not pollute the ema
- self.buffering_exp = False
- self.exp_avg_time = time_per_iter
- else:
- self.exp_avg_time = self.alpha * self.exp_avg_time + (1 - self.alpha) * time_per_iter
-
- def get_est_remaining(self, it):
- if self.exp_avg_time is None:
- return 0
-
- remaining_iter = self.total_iter - it
- return remaining_iter * self.exp_avg_time / self.step_size
-
- def get_and_reset_avg_time(self):
- avg = sum(self.avg_time_window) / len(self.avg_time_window) / self.step_size
- self.avg_time_window = []
- return avg
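-
-# Usage sketch (hypothetical training loop):
-#   est = TimeEstimator(total_iter=1000, step_size=10)
-#   for it in range(0, 1000, 10):
-#       ...  # train for step_size iterations
-#       est.update()
-#       print(est.get_est_remaining(it), est.get_and_reset_avg_time())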
diff --git a/spaces/sd-dreambooth-library/Baysaa1/train_dreambooth.py b/spaces/sd-dreambooth-library/Baysaa1/train_dreambooth.py
deleted file mode 100644
index f4ff135e549f0d6c72f733092f3df817cb178e01..0000000000000000000000000000000000000000
--- a/spaces/sd-dreambooth-library/Baysaa1/train_dreambooth.py
+++ /dev/null
@@ -1,889 +0,0 @@
-import argparse
-import itertools
-import math
-import os
-from pathlib import Path
-from typing import Optional
-import subprocess
-import sys
-import gc
-import random
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-from torch.utils.data import Dataset
-
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import set_seed
-from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.optimization import get_scheduler
-from huggingface_hub import HfFolder, Repository, whoami
-from PIL import Image
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import CLIPTextModel, CLIPTokenizer
-
-
-logger = get_logger(__name__)
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- #required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--instance_data_dir",
- type=str,
- default=None,
- #required=True,
- help="A folder containing the training data of instance images.",
- )
- parser.add_argument(
- "--class_data_dir",
- type=str,
- default=None,
- #required=False,
- help="A folder containing the training data of class images.",
- )
- parser.add_argument(
- "--instance_prompt",
- type=str,
- default=None,
- help="The prompt with identifier specifying the instance",
- )
- parser.add_argument(
- "--class_prompt",
- type=str,
- default="",
- help="The prompt to specify images in the same class as provided instance images.",
- )
- parser.add_argument(
- "--with_prior_preservation",
- default=False,
- action="store_true",
- help="Flag to add prior preservation loss.",
- )
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
- parser.add_argument(
- "--num_class_images",
- type=int,
- default=100,
- help=(
- "Minimal class images for prior preservation loss. If not have enough images, additional images will be"
-            "Minimal class images for prior preservation loss. If there are not enough images, additional images will be"
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=512,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
- )
- parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
- parser.add_argument(
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument(
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
- )
- parser.add_argument("--num_train_epochs", type=int, default=1)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-6,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default="no",
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose"
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
- "and an Nvidia Ampere GPU."
- ),
- )
-
- parser.add_argument(
- "--save_n_steps",
- type=int,
- default=1,
- help=("Save the model every n global_steps"),
- )
-
-
- parser.add_argument(
- "--save_starting_step",
- type=int,
- default=1,
- help=("The step from which it starts saving intermediary checkpoints"),
- )
-
- parser.add_argument(
- "--stop_text_encoder_training",
- type=int,
- default=1000000,
- help=("The step at which the text_encoder is no longer trained"),
- )
-
-
- parser.add_argument(
- "--image_captions_filename",
- action="store_true",
- help="Get captions from filename",
- )
-
-
- parser.add_argument(
- "--dump_only_text_encoder",
- action="store_true",
- default=False,
- help="Dump only text encoder",
- )
-
- parser.add_argument(
- "--train_only_unet",
- action="store_true",
- default=False,
- help="Train only the unet",
- )
-
- parser.add_argument(
- "--cache_latents",
- action="store_true",
- default=False,
-        help="Cache the VAE latents (and text encoder outputs) once before training to save memory and speed up each step",
- )
-
- parser.add_argument(
- "--Session_dir",
- type=str,
- default="",
- help="Current session directory",
- )
-
-
-
-
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
-
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- #if args.instance_data_dir is None:
- # raise ValueError("You must specify a train data directory.")
-
- #if args.with_prior_preservation:
- # if args.class_data_dir is None:
- # raise ValueError("You must specify a data directory for class images.")
- # if args.class_prompt is None:
- # raise ValueError("You must specify prompt for class images.")
-
- return args
-
-
-class DreamBoothDataset(Dataset):
- """
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
-    It pre-processes the images and tokenizes the prompts.
- """
-
- def __init__(
- self,
- instance_data_root,
- instance_prompt,
- tokenizer,
- args,
- class_data_root=None,
- class_prompt=None,
- size=512,
- center_crop=False,
- ):
- self.size = size
- self.center_crop = center_crop
- self.tokenizer = tokenizer
- self.image_captions_filename = None
-
- self.instance_data_root = Path(instance_data_root)
- if not self.instance_data_root.exists():
-            raise ValueError("Instance images root doesn't exist.")
-
- self.instance_images_path = list(Path(instance_data_root).iterdir())
- self.num_instance_images = len(self.instance_images_path)
- self.instance_prompt = instance_prompt
- self._length = self.num_instance_images
-
- if args.image_captions_filename:
- self.image_captions_filename = True
-
- if class_data_root is not None:
- self.class_data_root = Path(class_data_root)
- self.class_data_root.mkdir(parents=True, exist_ok=True)
- self.class_images_path = list(self.class_data_root.iterdir())
- random.shuffle(self.class_images_path)
- self.num_class_images = len(self.class_images_path)
- self._length = max(self.num_class_images, self.num_instance_images)
- self.class_prompt = class_prompt
- else:
- self.class_data_root = None
-
- self.image_transforms = transforms.Compose(
- [
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
- transforms.ToTensor(),
- transforms.Normalize([0.5], [0.5]),
- ]
- )
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, index):
- example = {}
- path = self.instance_images_path[index % self.num_instance_images]
- instance_image = Image.open(path)
- if not instance_image.mode == "RGB":
- instance_image = instance_image.convert("RGB")
-
- instance_prompt = self.instance_prompt
-
- if self.image_captions_filename:
- filename = Path(path).stem
- pt=''.join([i for i in filename if not i.isdigit()])
- pt=pt.replace("_"," ")
- pt=pt.replace("(","")
- pt=pt.replace(")","")
- pt=pt.replace("-","")
- instance_prompt = pt
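-            # e.g. a file named "photo_of_(zwx)_dog-3.jpg" becomes the caption "photo of zwx dog"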
-            sys.stdout.write("\x1b[0;32m" + instance_prompt + "\x1b[0m")
- sys.stdout.flush()
-
-
- example["instance_images"] = self.image_transforms(instance_image)
- example["instance_prompt_ids"] = self.tokenizer(
- instance_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- if self.class_data_root:
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
- if not class_image.mode == "RGB":
- class_image = class_image.convert("RGB")
- example["class_images"] = self.image_transforms(class_image)
- example["class_prompt_ids"] = self.tokenizer(
- self.class_prompt,
- padding="do_not_pad",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- ).input_ids
-
- return example
-
-
-
-class PromptDataset(Dataset):
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
-
- def __init__(self, prompt, num_samples):
- self.prompt = prompt
- self.num_samples = num_samples
-
- def __len__(self):
- return self.num_samples
-
- def __getitem__(self, index):
- example = {}
- example["prompt"] = self.prompt
- example["index"] = index
- return example
-
-class LatentsDataset(Dataset):
- def __init__(self, latents_cache, text_encoder_cache):
- self.latents_cache = latents_cache
- self.text_encoder_cache = text_encoder_cache
-
- def __len__(self):
- return len(self.latents_cache)
-
- def __getitem__(self, index):
- return self.latents_cache[index], self.text_encoder_cache[index]
-
-def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
- if token is None:
- token = HfFolder.get_token()
- if organization is None:
- username = whoami(token)["name"]
- return f"{username}/{model_id}"
- else:
- return f"{organization}/{model_id}"
-
-def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
- """
- Starts from base starting dict and then adds the remaining key values from updater replacing the values from
- the first starting/base dict with the second updater dict.
-
- For later: how does d = {**d1, **d2} replace collision?
-
- :param starting_dict:
- :param updater_dict:
- :return:
- """
- new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
- new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict
- return new_dict
-
-def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
- """
-
- ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
- :param args1:
- :param args2:
- :return:
- """
- # - the merged args
- # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}.
- merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
- args = argparse.Namespace(**merged_key_values_for_namespace)
- return args
-
-def run_training(args_imported):
- args_default = parse_args()
- args = merge_args(args_default, args_imported)
- print(args)
- logging_dir = Path(args.output_dir, args.logging_dir)
- i=args.save_starting_step
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with="tensorboard",
- logging_dir=logging_dir,
- )
-
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
- # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
- # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
- if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1:
- raise ValueError(
- "Gradient accumulation is not supported when training the text encoder in distributed training. "
- "Please set gradient_accumulation_steps to 1. This feature will be supported in the future."
- )
-
- if args.seed is not None:
- set_seed(args.seed)
-
- if args.with_prior_preservation:
- class_images_dir = Path(args.class_data_dir)
- if not class_images_dir.exists():
- class_images_dir.mkdir(parents=True)
- cur_class_images = len(list(class_images_dir.iterdir()))
-
- if cur_class_images < args.num_class_images:
- torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path, torch_dtype=torch_dtype
- )
- pipeline.set_progress_bar_config(disable=True)
-
- num_new_images = args.num_class_images - cur_class_images
- logger.info(f"Number of class images to sample: {num_new_images}.")
-
- sample_dataset = PromptDataset(args.class_prompt, num_new_images)
- sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
-
- sample_dataloader = accelerator.prepare(sample_dataloader)
- pipeline.to(accelerator.device)
-
- for example in tqdm(
- sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
- ):
- with torch.autocast("cuda"):
- images = pipeline(example["prompt"]).images
-
- for i, image in enumerate(images):
- image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg")
-
- del pipeline
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- repo = Repository(args.output_dir, clone_from=repo_name)
-
- with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
- if "step_*" not in gitignore:
- gitignore.write("step_*\n")
- if "epoch_*" not in gitignore:
- gitignore.write("epoch_*\n")
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- # Load the tokenizer
- if args.tokenizer_name:
- tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
- elif args.pretrained_model_name_or_path:
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-
- # Load models and create wrapper for stable diffusion
- if args.train_only_unet:
- if os.path.exists(str(args.output_dir+"/text_encoder_trained")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained")
- elif os.path.exists(str(args.output_dir+"/text_encoder")):
- text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- else:
- text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
- unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
- if is_xformers_available():
- try:
- print("Enabling memory efficient attention with xformers...")
- unet.enable_xformers_memory_efficient_attention()
- except Exception as e:
- logger.warning(
- f"Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: {e}"
- )
- vae.requires_grad_(False)
- if not args.train_text_encoder:
- text_encoder.requires_grad_(False)
-
- if args.gradient_checkpointing:
- unet.enable_gradient_checkpointing()
- if args.train_text_encoder:
- text_encoder.gradient_checkpointing_enable()
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
- if args.use_8bit_adam:
- try:
- import bitsandbytes as bnb
- except ImportError:
- raise ImportError(
- "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
- )
-
- optimizer_class = bnb.optim.AdamW8bit
- else:
- optimizer_class = torch.optim.AdamW
-
- params_to_optimize = (
- itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters()
- )
- optimizer = optimizer_class(
- params_to_optimize,
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler")
-
- train_dataset = DreamBoothDataset(
- instance_data_root=args.instance_data_dir,
- instance_prompt=args.instance_prompt,
- class_data_root=args.class_data_dir if args.with_prior_preservation else None,
- class_prompt=args.class_prompt,
- tokenizer=tokenizer,
- size=args.resolution,
- center_crop=args.center_crop,
- args=args,
- )
-
- def collate_fn(examples):
- input_ids = [example["instance_prompt_ids"] for example in examples]
- pixel_values = [example["instance_images"] for example in examples]
-
- # Concat class and instance examples for prior preservation.
- # We do this to avoid doing two forward passes.
- if args.with_prior_preservation:
- input_ids += [example["class_prompt_ids"] for example in examples]
- pixel_values += [example["class_images"] for example in examples]
-
- pixel_values = torch.stack(pixel_values)
- pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
-
- input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids
-
- batch = {
- "input_ids": input_ids,
- "pixel_values": pixel_values,
- }
- return batch
-
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- if args.train_text_encoder:
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, text_encoder, optimizer, train_dataloader, lr_scheduler
- )
- else:
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, optimizer, train_dataloader, lr_scheduler
- )
-
- weight_dtype = torch.float32
- if args.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif args.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
-    # Move text_encoder and vae to gpu.
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- vae.to(accelerator.device, dtype=weight_dtype)
- if not args.train_text_encoder:
- text_encoder.to(accelerator.device, dtype=weight_dtype)
-
-
- if args.cache_latents:
- latents_cache = []
- text_encoder_cache = []
- for batch in tqdm(train_dataloader, desc="Caching latents"):
- with torch.no_grad():
- batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype)
- batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True)
- latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
- if args.train_text_encoder:
- text_encoder_cache.append(batch["input_ids"])
- else:
- text_encoder_cache.append(text_encoder(batch["input_ids"])[0])
- train_dataset = LatentsDataset(latents_cache, text_encoder_cache)
- train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True)
-
- del vae
- #if not args.train_text_encoder:
- # del text_encoder
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
-    # The trackers initialize automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("dreambooth", config=vars(args))
-
- def bar(prg):
- br='|'+'█' * prg + ' ' * (25-prg)+'|'
- return br
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num batches each epoch = {len(train_dataloader)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- global_step = 0
-
- for epoch in range(args.num_train_epochs):
- unet.train()
- if args.train_text_encoder:
- text_encoder.train()
- for step, batch in enumerate(train_dataloader):
- with accelerator.accumulate(unet):
- # Convert images to latent space
- with torch.no_grad():
- if args.cache_latents:
- latents_dist = batch[0][0]
- else:
- latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist
- latents = latents_dist.sample() * 0.18215
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # Get the text embedding for conditioning
-                if args.cache_latents:
- if args.train_text_encoder:
- encoder_hidden_states = text_encoder(batch[0][1])[0]
- else:
- encoder_hidden_states = batch[0][1]
- else:
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
- # Predict the noise residual
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- if args.with_prior_preservation:
- # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
- target, target_prior = torch.chunk(target, 2, dim=0)
-
- # Compute instance loss
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
-
- # Compute prior loss
- prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
- # Add the prior loss to the instance loss.
- loss = loss + args.prior_loss_weight * prior_loss
- else:
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- accelerator.backward(loss)
- if accelerator.sync_gradients:
- params_to_clip = (
- itertools.chain(unet.parameters(), text_encoder.parameters())
- if args.train_text_encoder
- else unet.parameters()
- )
- accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
-
- fll=round((global_step*100)/args.max_train_steps)
- fll=round(fll/4)
- pr=bar(fll)
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- progress_bar.set_description_str("Progress:"+pr)
- accelerator.log(logs, step=global_step)
-
- if global_step >= args.max_train_steps:
- break
-
- if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30:
- if accelerator.is_main_process:
- print(" [0;32m" +" Freezing the text_encoder ..."+" [0m")
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if os.path.exists(frz_dir):
- subprocess.call('rm -r '+ frz_dir, shell=True)
- os.mkdir(frz_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(frz_dir)
-
- if args.save_n_steps >= 200:
- if global_step < args.max_train_steps and global_step+1==i:
- ckpt_name = "_step_" + str(global_step+1)
- save_dir = Path(args.output_dir+ckpt_name)
- save_dir=str(save_dir)
- save_dir=save_dir.replace(" ", "_")
- if not os.path.exists(save_dir):
- os.mkdir(save_dir)
- inst=save_dir[16:]
- inst=inst.replace(" ", "_")
- print(" [1;32mSAVING CHECKPOINT: "+args.Session_dir+"/"+inst+".ckpt")
- # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(save_dir)
- frz_dir=args.output_dir + "/text_encoder_frozen"
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('rm -r '+save_dir+'/text_encoder/*.*', shell=True)
- subprocess.call('cp -f '+frz_dir +'/*.* '+ save_dir+'/text_encoder', shell=True)
- chkpth=args.Session_dir+"/"+inst+".ckpt"
- subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True)
- subprocess.call('rm -r '+ save_dir, shell=True)
- i=i+args.save_n_steps
-
- accelerator.wait_for_everyone()
-
-    # Create the pipeline using the trained modules and save it.
- if accelerator.is_main_process:
- if args.dump_only_text_encoder:
- txt_dir=args.output_dir + "/text_encoder_trained"
- if not os.path.exists(txt_dir):
- os.mkdir(txt_dir)
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.text_encoder.save_pretrained(txt_dir)
-
- elif args.train_only_unet:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- pipeline.save_pretrained(args.output_dir)
- txt_dir=args.output_dir + "/text_encoder_trained"
- subprocess.call('rm -r '+txt_dir, shell=True)
-
- else:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
- )
- frz_dir=args.output_dir + "/text_encoder_frozen"
- pipeline.save_pretrained(args.output_dir)
- if args.train_text_encoder and os.path.exists(frz_dir):
- subprocess.call('mv -f '+frz_dir +'/*.* '+ args.output_dir+'/text_encoder', shell=True)
- subprocess.call('rm -r '+ frz_dir, shell=True)
-
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
-
- accelerator.end_training()
- del pipeline
- torch.cuda.empty_cache()
- gc.collect()
-if __name__ == "__main__":
- pass
- #main()
-
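A minimal sketch of the prior-preservation loss pattern used in the training loop above, with random placeholder tensors standing in for the UNet prediction and the noise target (the batch size and `prior_loss_weight` value are assumed for illustration):

```python
import torch
import torch.nn.functional as F

# With prior preservation, instance and class samples are concatenated along the
# batch dimension, so the effective batch is 2 * train_batch_size.
train_batch_size, channels, h, w = 2, 4, 64, 64
model_pred = torch.randn(2 * train_batch_size, channels, h, w)  # stand-in for the UNet output
target = torch.randn(2 * train_batch_size, channels, h, w)      # stand-in for the epsilon target
prior_loss_weight = 1.0                                         # assumed value of args.prior_loss_weight

# Split the concatenated batch back into the instance half and the prior (class) half.
model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
target, target_prior = torch.chunk(target, 2, dim=0)

# Instance loss: per-image MSE averaged over the batch.
instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()

# Prior-preservation loss on the class images.
prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")

loss = instance_loss + prior_loss_weight * prior_loss
print(loss.item())
```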
diff --git a/spaces/segadeds/simpsons/README.md b/spaces/segadeds/simpsons/README.md
deleted file mode 100644
index 9c02aaa81e4663052cbf8d606e95bb73509d3ca0..0000000000000000000000000000000000000000
--- a/spaces/segadeds/simpsons/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Simpsons
-emoji: 📚
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/maskctc/add_mask_token.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/maskctc/add_mask_token.py
deleted file mode 100644
index e503a0235b5c9a211dc6a7702039f16332748c0d..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/maskctc/add_mask_token.py
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2020 Johns Hopkins University (Shinji Watanabe)
-# Waseda University (Yosuke Higuchi)
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Token masking module for Masked LM."""
-
-import numpy
-
-
-def mask_uniform(ys_pad, mask_token, eos, ignore_id):
- """Replace random tokens with label and add label.
-
- The number of is chosen from a uniform distribution
- between one and the target sequence's length.
- :param torch.Tensor ys_pad: batch of padded target sequences (B, Lmax)
- :param int mask_token: index of
- :param int eos: index of
- :param int ignore_id: index of padding
- :return: padded tensor (B, Lmax)
- :rtype: torch.Tensor
- :return: padded tensor (B, Lmax)
- :rtype: torch.Tensor
- """
- from espnet.nets.pytorch_backend.nets_utils import pad_list
-
- ys = [y[y != ignore_id] for y in ys_pad] # parse padded ys
- ys_out = [y.new(y.size()).fill_(ignore_id) for y in ys]
- ys_in = [y.clone() for y in ys]
- for i in range(len(ys)):
- num_samples = numpy.random.randint(1, len(ys[i]) + 1)
- idx = numpy.random.choice(len(ys[i]), num_samples)
-
- ys_in[i][idx] = mask_token
- ys_out[i][idx] = ys[i][idx]
-
- return pad_list(ys_in, eos), pad_list(ys_out, ignore_id)
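A short usage sketch of `mask_uniform`, assuming ESPnet is installed so the module above and its `pad_list` dependency are importable; the token indices below are arbitrary placeholders:

```python
import torch

from espnet.nets.pytorch_backend.maskctc.add_mask_token import mask_uniform

# Placeholder indices chosen for illustration only.
mask_token, eos, ignore_id = 100, 101, -1

# Batch of two padded target sequences (B=2, Lmax=4); ignore_id marks padding.
ys_pad = torch.tensor([[5, 6, 7, 8],
                       [9, 10, ignore_id, ignore_id]])

ys_in, ys_out = mask_uniform(ys_pad, mask_token, eos, ignore_id)
# ys_in:  the original tokens with a random subset replaced by <mask>, padded with <eos>
# ys_out: the masked positions carry the original token, everything else is ignore_id
print(ys_in)
print(ys_out)
```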
diff --git a/spaces/seok07/1JK50/infer_pack/transforms.py b/spaces/seok07/1JK50/infer_pack/transforms.py
deleted file mode 100644
index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000
--- a/spaces/seok07/1JK50/infer_pack/transforms.py
+++ /dev/null
@@ -1,209 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
-
-
-def unconstrained_rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails="linear",
- tail_bound=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == "linear":
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError("{} tails are not implemented.".format(tails))
-
- (
- outputs[inside_interval_mask],
- logabsdet[inside_interval_mask],
- ) = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound,
- right=tail_bound,
- bottom=-tail_bound,
- top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- )
-
- return outputs, logabsdet
-
-
-def rational_quadratic_spline(
- inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0.0,
- right=1.0,
- bottom=0.0,
- top=1.0,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE,
-):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError("Input to a transform is not within its domain")
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError("Minimal bin width too large for the number of bins")
- if min_bin_height * num_bins > 1.0:
- raise ValueError("Minimal bin height too large for the number of bins")
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- ) + input_heights * (input_delta - input_derivatives)
- b = input_heights * input_derivatives - (inputs - input_cumheights) * (
- input_derivatives + input_derivatives_plus_one - 2 * input_delta
- )
- c = -input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (
- input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
- )
- denominator = input_delta + (
- (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta
- )
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (
- input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2)
- )
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
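For orientation, a minimal call of `piecewise_rational_quadratic_transform` with linear tails might look like the sketch below; the batch size and bin count are arbitrary, the inputs are kept inside the tail bound, and with `tails="linear"` the derivatives tensor carries `num_bins - 1` values because the boundary derivatives are padded in internally:

```python
import torch

# Assumes the module above is importable, e.g. as infer_pack.transforms.
from infer_pack.transforms import piecewise_rational_quadratic_transform

batch, num_bins = 8, 10
inputs = torch.rand(batch) * 1.8 - 0.9          # values inside the default tail bound of 1.0
unnormalized_widths = torch.randn(batch, num_bins)
unnormalized_heights = torch.randn(batch, num_bins)
unnormalized_derivatives = torch.randn(batch, num_bins - 1)

outputs, logabsdet = piecewise_rational_quadratic_transform(
    inputs,
    unnormalized_widths,
    unnormalized_heights,
    unnormalized_derivatives,
    inverse=False,
    tails="linear",
    tail_bound=1.0,
)
print(outputs.shape, logabsdet.shape)  # both torch.Size([8])
```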
diff --git a/spaces/sgxz/bingo/src/components/ui/tooltip.tsx b/spaces/sgxz/bingo/src/components/ui/tooltip.tsx
deleted file mode 100644
index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000
--- a/spaces/sgxz/bingo/src/components/ui/tooltip.tsx
+++ /dev/null
@@ -1,30 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as TooltipPrimitive from '@radix-ui/react-tooltip'
-
-import { cn } from '@/lib/utils'
-
-const TooltipProvider = TooltipPrimitive.Provider
-
-const Tooltip = TooltipPrimitive.Root
-
-const TooltipTrigger = TooltipPrimitive.Trigger
-
-const TooltipContent = React.forwardRef<
-  React.ElementRef<typeof TooltipPrimitive.Content>,
-  React.ComponentPropsWithoutRef<typeof TooltipPrimitive.Content>
->(({ className, sideOffset = 4, ...props }, ref) => (
-  <TooltipPrimitive.Content
-    ref={ref}
-    sideOffset={sideOffset}
-    className={cn('z-50 overflow-hidden rounded-md border bg-popover px-3 py-1.5 text-sm text-popover-foreground shadow-md', className)}
-    {...props}
-  />
-))
-TooltipContent.displayName = TooltipPrimitive.Content.displayName
-
-export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider }
diff --git a/spaces/shripadbhat/whisper-demo/app.py b/spaces/shripadbhat/whisper-demo/app.py
deleted file mode 100644
index 32e74ccf3e99216bc3d0b190b15f93834600b5ad..0000000000000000000000000000000000000000
--- a/spaces/shripadbhat/whisper-demo/app.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import torch
-
-import gradio as gr
-import pytube as pt
-from transformers import pipeline
-from huggingface_hub import model_info
-
-MODEL_NAME = "shripadbhat/whisper-medium-hi" #this always needs to stay in line 8 :D sorry for the hackiness
-
-device = 0 if torch.cuda.is_available() else "cpu"
-
-pipe = pipeline(
- task="automatic-speech-recognition",
- model=MODEL_NAME,
- chunk_length_s=30,
- device=device,
-)
-
-def transcribe(microphone, file_upload):
- warn_output = ""
- if (microphone is not None) and (file_upload is not None):
- warn_output = (
- "WARNING: You've uploaded an audio file and used the microphone. "
- "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n"
- )
-
- elif (microphone is None) and (file_upload is None):
- return "ERROR: You have to either use the microphone or upload an audio file"
-
- file = microphone if microphone is not None else file_upload
-
- text = pipe(file)["text"]
-
- return warn_output + text
-
-
-def _return_yt_html_embed(yt_url):
- video_id = yt_url.split("?v=")[-1]
-    HTML_str = (
-        f'<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>'
-        " </center>"
-    )
- return HTML_str
-
-
-def yt_transcribe(yt_url):
- yt = pt.YouTube(yt_url)
- html_embed_str = _return_yt_html_embed(yt_url)
- stream = yt.streams.filter(only_audio=True)[0]
- stream.download(filename="audio.mp3")
-
- text = pipe("audio.mp3")["text"]
-
- return html_embed_str, text
-
-
-demo = gr.Blocks()
-
-mf_transcribe = gr.Interface(
- fn=transcribe,
- inputs=[
- gr.inputs.Audio(source="microphone", type="filepath", optional=True),
- gr.inputs.Audio(source="upload", type="filepath", optional=True),
- ],
- outputs="text",
- layout="horizontal",
- theme="huggingface",
- title="Whisper Demo: Transcribe Hindi Audio",
- description=(
- "Transcribe long-form microphone or audio inputs in Hindi with the click of a button! Demo uses the the fine-tuned"
- f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files"
- " of arbitrary length."
- ),
- allow_flagging="never",
-)
-
-yt_transcribe = gr.Interface(
- fn=yt_transcribe,
- inputs=[gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video (Hindi) here", label="YouTube URL")],
- outputs=["html", "text"],
- layout="horizontal",
- theme="huggingface",
- title="Whisper Demo: Transcribe Hindi YouTube Videos",
- description=(
- "Transcribe long-form YouTube videos with the click of a button! Demo uses the the fine-tuned checkpoint:"
- f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files of"
- " arbitrary length."
- ),
- allow_flagging="never",
-)
-
-with demo:
- gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"])
-
-demo.launch(enable_queue=True)
diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231433.html b/spaces/silencewing/server/youyou/.history/math_20230613231433.html
deleted file mode 100644
index 46757c655b512bed9c17f3ec28e7f41765d55f73..0000000000000000000000000000000000000000
--- a/spaces/silencewing/server/youyou/.history/math_20230613231433.html
+++ /dev/null
@@ -1,235 +0,0 @@
-    Document
diff --git a/spaces/simonduerr/diffdock/utils/training.py b/spaces/simonduerr/diffdock/utils/training.py
deleted file mode 100644
index 83d1043486c24fd7ca858b0457dc1bdcf40c1e99..0000000000000000000000000000000000000000
--- a/spaces/simonduerr/diffdock/utils/training.py
+++ /dev/null
@@ -1,236 +0,0 @@
-import copy
-
-import numpy as np
-from torch_geometric.loader import DataLoader
-from tqdm import tqdm
-
-from confidence.dataset import ListDataset
-from utils import so3, torus
-from utils.sampling import randomize_position, sampling
-import torch
-from utils.diffusion_utils import get_t_schedule
-
-
-def loss_function(tr_pred, rot_pred, tor_pred, data, t_to_sigma, device, tr_weight=1, rot_weight=1,
- tor_weight=1, apply_mean=True, no_torsion=False):
- tr_sigma, rot_sigma, tor_sigma = t_to_sigma(
- *[torch.cat([d.complex_t[noise_type] for d in data]) if device.type == 'cuda' else data.complex_t[noise_type]
- for noise_type in ['tr', 'rot', 'tor']])
- mean_dims = (0, 1) if apply_mean else 1
-
- # translation component
- tr_score = torch.cat([d.tr_score for d in data], dim=0) if device.type == 'cuda' else data.tr_score
- tr_sigma = tr_sigma.unsqueeze(-1)
- tr_loss = ((tr_pred.cpu() - tr_score) ** 2 * tr_sigma ** 2).mean(dim=mean_dims)
- tr_base_loss = (tr_score ** 2 * tr_sigma ** 2).mean(dim=mean_dims).detach()
-
- # rotation component
- rot_score = torch.cat([d.rot_score for d in data], dim=0) if device.type == 'cuda' else data.rot_score
- rot_score_norm = so3.score_norm(rot_sigma.cpu()).unsqueeze(-1)
- rot_loss = (((rot_pred.cpu() - rot_score) / rot_score_norm) ** 2).mean(dim=mean_dims)
- rot_base_loss = ((rot_score / rot_score_norm) ** 2).mean(dim=mean_dims).detach()
-
- # torsion component
- if not no_torsion:
- edge_tor_sigma = torch.from_numpy(
- np.concatenate([d.tor_sigma_edge for d in data] if device.type == 'cuda' else data.tor_sigma_edge))
- tor_score = torch.cat([d.tor_score for d in data], dim=0) if device.type == 'cuda' else data.tor_score
- tor_score_norm2 = torch.tensor(torus.score_norm(edge_tor_sigma.cpu().numpy())).float()
- tor_loss = ((tor_pred.cpu() - tor_score) ** 2 / tor_score_norm2)
- tor_base_loss = ((tor_score ** 2 / tor_score_norm2)).detach()
- if apply_mean:
- tor_loss, tor_base_loss = tor_loss.mean() * torch.ones(1, dtype=torch.float), tor_base_loss.mean() * torch.ones(1, dtype=torch.float)
- else:
- index = torch.cat([torch.ones(d['ligand'].edge_mask.sum()) * i for i, d in
- enumerate(data)]).long() if device.type == 'cuda' else data['ligand'].batch[
- data['ligand', 'ligand'].edge_index[0][data['ligand'].edge_mask]]
- num_graphs = len(data) if device.type == 'cuda' else data.num_graphs
- t_l, t_b_l, c = torch.zeros(num_graphs), torch.zeros(num_graphs), torch.zeros(num_graphs)
- c.index_add_(0, index, torch.ones(tor_loss.shape))
- c = c + 0.0001
- t_l.index_add_(0, index, tor_loss)
- t_b_l.index_add_(0, index, tor_base_loss)
- tor_loss, tor_base_loss = t_l / c, t_b_l / c
- else:
- if apply_mean:
- tor_loss, tor_base_loss = torch.zeros(1, dtype=torch.float), torch.zeros(1, dtype=torch.float)
- else:
- tor_loss, tor_base_loss = torch.zeros(len(rot_loss), dtype=torch.float), torch.zeros(len(rot_loss), dtype=torch.float)
-
- loss = tr_loss * tr_weight + rot_loss * rot_weight + tor_loss * tor_weight
- return loss, tr_loss.detach(), rot_loss.detach(), tor_loss.detach(), tr_base_loss, rot_base_loss, tor_base_loss
-
-
-class AverageMeter():
- def __init__(self, types, unpooled_metrics=False, intervals=1):
- self.types = types
- self.intervals = intervals
- self.count = 0 if intervals == 1 else torch.zeros(len(types), intervals)
- self.acc = {t: torch.zeros(intervals) for t in types}
- self.unpooled_metrics = unpooled_metrics
-
- def add(self, vals, interval_idx=None):
- if self.intervals == 1:
- self.count += 1 if vals[0].dim() == 0 else len(vals[0])
- for type_idx, v in enumerate(vals):
- self.acc[self.types[type_idx]] += v.sum() if self.unpooled_metrics else v
- else:
- for type_idx, v in enumerate(vals):
- self.count[type_idx].index_add_(0, interval_idx[type_idx], torch.ones(len(v)))
- if not torch.allclose(v, torch.tensor(0.0)):
- self.acc[self.types[type_idx]].index_add_(0, interval_idx[type_idx], v)
-
- def summary(self):
- if self.intervals == 1:
- out = {k: v.item() / self.count for k, v in self.acc.items()}
- return out
- else:
- out = {}
- for i in range(self.intervals):
- for type_idx, k in enumerate(self.types):
- out['int' + str(i) + '_' + k] = (
- list(self.acc.values())[type_idx][i] / self.count[type_idx][i]).item()
- return out
-
-
-def train_epoch(model, loader, optimizer, device, t_to_sigma, loss_fn, ema_weigths):
- model.train()
- meter = AverageMeter(['loss', 'tr_loss', 'rot_loss', 'tor_loss', 'tr_base_loss', 'rot_base_loss', 'tor_base_loss'])
-
- for data in tqdm(loader, total=len(loader)):
- if device.type == 'cuda' and len(data) == 1 or device.type == 'cpu' and data.num_graphs == 1:
- print("Skipping batch of size 1 since otherwise batchnorm would not work.")
- optimizer.zero_grad()
- try:
- tr_pred, rot_pred, tor_pred = model(data)
- loss, tr_loss, rot_loss, tor_loss, tr_base_loss, rot_base_loss, tor_base_loss = \
- loss_fn(tr_pred, rot_pred, tor_pred, data=data, t_to_sigma=t_to_sigma, device=device)
- loss.backward()
- optimizer.step()
- ema_weigths.update(model.parameters())
- meter.add([loss.cpu().detach(), tr_loss, rot_loss, tor_loss, tr_base_loss, rot_base_loss, tor_base_loss])
- except RuntimeError as e:
- if 'out of memory' in str(e):
- print('| WARNING: ran out of memory, skipping batch')
- for p in model.parameters():
- if p.grad is not None:
- del p.grad # free some memory
- torch.cuda.empty_cache()
- continue
- elif 'Input mismatch' in str(e):
- print('| WARNING: weird torch_cluster error, skipping batch')
- for p in model.parameters():
- if p.grad is not None:
- del p.grad # free some memory
- torch.cuda.empty_cache()
- continue
- else:
- raise e
-
- return meter.summary()
-
-
-def test_epoch(model, loader, device, t_to_sigma, loss_fn, test_sigma_intervals=False):
- model.eval()
- meter = AverageMeter(['loss', 'tr_loss', 'rot_loss', 'tor_loss', 'tr_base_loss', 'rot_base_loss', 'tor_base_loss'],
- unpooled_metrics=True)
-
- if test_sigma_intervals:
- meter_all = AverageMeter(
- ['loss', 'tr_loss', 'rot_loss', 'tor_loss', 'tr_base_loss', 'rot_base_loss', 'tor_base_loss'],
- unpooled_metrics=True, intervals=10)
-
- for data in tqdm(loader, total=len(loader)):
- try:
- with torch.no_grad():
- tr_pred, rot_pred, tor_pred = model(data)
-
- loss, tr_loss, rot_loss, tor_loss, tr_base_loss, rot_base_loss, tor_base_loss = \
- loss_fn(tr_pred, rot_pred, tor_pred, data=data, t_to_sigma=t_to_sigma, apply_mean=False, device=device)
- meter.add([loss.cpu().detach(), tr_loss, rot_loss, tor_loss, tr_base_loss, rot_base_loss, tor_base_loss])
-
- if test_sigma_intervals > 0:
- complex_t_tr, complex_t_rot, complex_t_tor = [torch.cat([d.complex_t[noise_type] for d in data]) for
- noise_type in ['tr', 'rot', 'tor']]
- sigma_index_tr = torch.round(complex_t_tr.cpu() * (10 - 1)).long()
- sigma_index_rot = torch.round(complex_t_rot.cpu() * (10 - 1)).long()
- sigma_index_tor = torch.round(complex_t_tor.cpu() * (10 - 1)).long()
- meter_all.add(
- [loss.cpu().detach(), tr_loss, rot_loss, tor_loss, tr_base_loss, rot_base_loss, tor_base_loss],
- [sigma_index_tr, sigma_index_tr, sigma_index_rot, sigma_index_tor, sigma_index_tr, sigma_index_rot,
- sigma_index_tor, sigma_index_tr])
-
- except RuntimeError as e:
- if 'out of memory' in str(e):
- print('| WARNING: ran out of memory, skipping batch')
- for p in model.parameters():
- if p.grad is not None:
- del p.grad # free some memory
- torch.cuda.empty_cache()
- continue
- elif 'Input mismatch' in str(e):
- print('| WARNING: weird torch_cluster error, skipping batch')
- for p in model.parameters():
- if p.grad is not None:
- del p.grad # free some memory
- torch.cuda.empty_cache()
- continue
- else:
- raise e
-
- out = meter.summary()
- if test_sigma_intervals > 0: out.update(meter_all.summary())
- return out
-
-
-def inference_epoch(model, complex_graphs, device, t_to_sigma, args):
- t_schedule = get_t_schedule(inference_steps=args.inference_steps)
- tr_schedule, rot_schedule, tor_schedule = t_schedule, t_schedule, t_schedule
-
- dataset = ListDataset(complex_graphs)
- loader = DataLoader(dataset=dataset, batch_size=1, shuffle=False)
- rmsds = []
-
- for orig_complex_graph in tqdm(loader):
- data_list = [copy.deepcopy(orig_complex_graph)]
- randomize_position(data_list, args.no_torsion, False, args.tr_sigma_max)
-
- predictions_list = None
- failed_convergence_counter = 0
-        while predictions_list is None:
- try:
- predictions_list, confidences = sampling(data_list=data_list, model=model.module if device.type=='cuda' else model,
- inference_steps=args.inference_steps,
- tr_schedule=tr_schedule, rot_schedule=rot_schedule,
- tor_schedule=tor_schedule,
- device=device, t_to_sigma=t_to_sigma, model_args=args)
- except Exception as e:
- if 'failed to converge' in str(e):
- failed_convergence_counter += 1
- if failed_convergence_counter > 5:
- print('| WARNING: SVD failed to converge 5 times - skipping the complex')
- break
- print('| WARNING: SVD failed to converge - trying again with a new sample')
- else:
- raise e
- if failed_convergence_counter > 5: continue
- if args.no_torsion:
- orig_complex_graph['ligand'].orig_pos = (orig_complex_graph['ligand'].pos.cpu().numpy() +
- orig_complex_graph.original_center.cpu().numpy())
-
- filterHs = torch.not_equal(predictions_list[0]['ligand'].x[:, 0], 0).cpu().numpy()
-
- if isinstance(orig_complex_graph['ligand'].orig_pos, list):
- orig_complex_graph['ligand'].orig_pos = orig_complex_graph['ligand'].orig_pos[0]
-
- ligand_pos = np.asarray(
- [complex_graph['ligand'].pos.cpu().numpy()[filterHs] for complex_graph in predictions_list])
- orig_ligand_pos = np.expand_dims(
- orig_complex_graph['ligand'].orig_pos[filterHs] - orig_complex_graph.original_center.cpu().numpy(), axis=0)
- rmsd = np.sqrt(((ligand_pos - orig_ligand_pos) ** 2).sum(axis=2).mean(axis=1))
- rmsds.append(rmsd)
-
- rmsds = np.array(rmsds)
- losses = {'rmsds_lt2': (100 * (rmsds < 2).sum() / len(rmsds)),
- 'rmsds_lt5': (100 * (rmsds < 5).sum() / len(rmsds))}
- return losses
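The RMSD reduction at the end of `inference_epoch` can be shown in isolation; the coordinates below are random placeholders with hypothetical shapes (samples, atoms, xyz):

```python
import numpy as np

# Three sampled poses of a 5-atom ligand versus one reference pose.
ligand_pos = np.random.randn(3, 5, 3)        # (num_samples, num_atoms, xyz)
orig_ligand_pos = np.random.randn(1, 5, 3)   # reference pose, broadcast over samples

# Same reduction as in inference_epoch: squared error per atom, mean over atoms, square root.
rmsd = np.sqrt(((ligand_pos - orig_ligand_pos) ** 2).sum(axis=2).mean(axis=1))
print(rmsd.shape)                             # (3,)
print("fraction below 2 Angstrom:", 100 * (rmsd < 2).sum() / len(rmsd))
```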
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Instagram Latest APK for Android and Enjoy Reels Stories and More.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Instagram Latest APK for Android and Enjoy Reels Stories and More.md
deleted file mode 100644
index d50fcb569313977fd84304db86e9da85ca4fe274..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Instagram Latest APK for Android and Enjoy Reels Stories and More.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-Instagram Latest APK: How to Download and Install the App on Your Android Device
-Instagram is one of the most popular social media platforms in the world, with over 1 billion users. It allows you to create and share your photos, stories, reels, and videos with the friends and followers you care about. You can also connect with people, explore content, and express yourself in various ways.
-instagram latest apk Download Zip ⚙⚙⚙ https://ssurll.com/2uO142
-If you are an Android user, you might be wondering how to download and install the latest version of Instagram on your device. In this article, we will show you how to do that using an APK file, which is a simple and convenient way to get the app without going through the Google Play Store. We will also explain what an APK file is, why you should use it, and how to download it from a trusted source.
-What is Instagram and Why You Should Use It
-Instagram is a social networking app that lets you capture and share your life moments with your friends and followers. You can post photos, videos, reels, stories, IGTVs, and live streams on your profile, or send them privately via direct messages. You can also follow other users, like, comment, and share their content, or discover new accounts that match your interests.
-Instagram has many features and benefits that make it a fun and engaging app to use. Here are some of them:
-Instagram Features and Benefits
-
-Instagram Reels: A new way to create and discover entertaining short videos. You can watch, like, comment, and share reels videos in a dedicated space in the Reels tab. You can also add special effects, background music, face filters, emojis, and stickers to bring your reels to life.
-Instagram Stories: A feature that lets you share photos and videos that disappear after 24 hours. You can add fun creative tools like text, music, stickers, GIFs, polls, quizzes, and more to your stories. You can also see what your friends are up to by watching their stories or chatting with them.
-Instagram Direct: A feature that lets you message your friends in private. You can start fun conversations with one or more friends about what you see on Feed, Stories, or Reels. You can also video chat with up to 50 people at once, send voice messages, photos, videos, memes, stickers, GIFs, and more.
-Instagram Feed: A feature that lets you post photos, videos, reels or photo updates from moments in your life. You can also see what your friends and favorite accounts are posting by scrolling through your feed. You can like, comment, save or share their posts with others.
-Instagram Search and Explore: A feature that lets you find content that interests you. You can search for people, hashtags, places or topics using the search bar. You can also browse through different categories like Food, Travel, Fashion or Music in the Explore tab. You can discover new accounts to follow or content to enjoy.
-
-What is an APK and Why You Should Download It
-An APK is an acronym for Android Package Kit, a file format that contains all the components needed to install an app on an Android device. It is similar to an executable file (.exe) on a Windows PC or a DMG file on a Mac. There are several reasons why you might want to download an APK file instead of installing an app from the Google Play Store. Some of them are:
-
-You can get access to the latest version of the app before it is officially released on the Play Store.
-You can install apps that are not available in your region or country due to geo-restrictions or censorship.
-You can install apps that are not compatible with your device model or Android version due to hardware or software limitations.
-You can install apps that have been removed from the Play Store due to policy violations or legal issues.
-You can install apps that have been modified or customized by third-party developers to add extra features or remove ads.
-
-However, downloading and installing APK files also comes with some risks and challenges. Some of them are:
-
-You might download a fake or malicious APK file that contains viruses, malware, spyware, or adware that can harm your device or steal your personal information.
-You might download an outdated or corrupted APK file that does not work properly or causes errors and crashes on your device.
-You might download an APK file that violates the terms and conditions of the original app developer or the Play Store, which can result in legal consequences or account suspension.
-You might need to enable unknown sources in your device settings, which can expose your device to security threats from untrusted sources.
-
-Therefore, it is important to be careful and cautious when downloading and installing APK files. Always download them from reliable and reputable websites with positive reviews and ratings from other users, scan the APK file with a trusted antivirus app before installing it, and back up your data and settings beforehand in case something goes wrong.
How to Download and Install Instagram Latest APK on Your Android Device
-If you want to enjoy the latest features and updates of Instagram on your Android device, you can download and install the Instagram latest APK file from a trusted website. Here are the steps you need to follow:
-Step 1: Enable Unknown Sources on Your Device Settings
-Before you can install any APK file on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Play Store. To do this, follow these steps:
-
-Go to your device settings and tap on Security or Privacy.
-Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
-A warning message will pop up, telling you about the risks of installing apps from unknown sources. Tap on OK or Allow to proceed.
-
-Step 2: Download Instagram Latest APK from a Reliable Website
-Once you have enabled unknown sources on your device settings, you can download the Instagram latest APK file from a reliable website. There are many websites that offer APK files for various apps, but not all of them are safe and trustworthy. To avoid downloading fake or malicious APK files, you should look for websites that have the following characteristics:
-
-They have a secure connection (HTTPS) and a valid SSL certificate.
-They have a clear and transparent privacy policy and terms of service.
-They have positive feedback and ratings from other users.
-They have a large and updated database of APK files.
-They have a fast and easy download process.
-
-One of the websites that meets these criteria is APKPure.com, which is one of the most popular and trusted sources of APK files. To download the Instagram latest APK file from this website, follow these steps:
-
-Go to APKPure.com using your browser.
-Type Instagram in the search bar and hit enter.
-Select the Instagram app from the search results and tap on Download APK.
-A pop-up window will appear, asking you to confirm the download. Tap on OK or Download to start downloading the file.
-The download will take a few seconds or minutes, depending on your internet speed and connection.
-
-Step 3: Locate and Tap on the Downloaded File to Install It
-After you have downloaded the Instagram latest APK file from APKPure.com, you need to locate and tap on it to install it on your device. To do this, follow these steps:
-
-Go to your device file manager or downloads folder and find the Instagram latest APK file. It should have the name com.instagram.android.apk or something similar.
-Tap on the file to open it. A pop-up window will appear, asking you to install the app. Tap on Install or Next to proceed.
-The installation will take a few seconds or minutes, depending on your device and the size of the file.
-
-Step 4: Launch the App and Enjoy Its Features
-After you have installed the Instagram latest APK file on your device, you can launch the app and enjoy its features. To do this, follow these steps:
-
-Go to your device app drawer or home screen and find the Instagram app icon. It should have a camera logo with a rainbow gradient.
-Tap on the icon to open the app. You will see a welcome screen with the Instagram logo and a sign-in or sign-up option.
-If you already have an Instagram account, you can sign in with your username and password, or use Facebook or Google to log in. If you don't have an account, you can sign up with your email address, phone number, or Facebook or Google account.
-Once you are signed in or signed up, you can access all the features and functions of Instagram, such as Reels, Stories, Direct, Feed, Search and Explore. You can also edit your profile, change your settings, and manage your notifications.
-
-Conclusion
-Instagram is a great app to create and share your photos, videos, reels, stories, and more with your friends and followers. You can also discover new content and connect with people who share your interests. If you want to get the latest version of Instagram on your Android device, you can download and install the Instagram latest APK file from a trusted website like APKPure.com. This way, you can enjoy the new features and updates of Instagram without waiting for them to be available on the Play Store. Just follow the steps we have outlined in this article and you will be able to download and install Instagram latest APK on your device in no time.
-FAQs
-Here are some frequently asked questions about Instagram latest APK:
-
-Q: Is it safe to download and install Instagram latest APK?
-A: Yes, it is safe to download and install Instagram latest APK as long as you do it from a reliable and reputable website like APKPure.com. You should also scan the APK file with a trusted antivirus app before installing it on your device. However, you should be aware of the risks and challenges of installing apps from unknown sources, as we have explained in this article.
-Q: What are the benefits of downloading and installing Instagram latest APK?
-A: The benefits of downloading and installing Instagram latest APK are that you can get access to the latest version of the app before it is officially released on the Play Store. You can also install apps that are not available in your region or country, not compatible with your device or Android version, or removed from the Play Store. You can also install apps that have been modified or customized by third-party developers.
-Q: What are the drawbacks of downloading and installing Instagram latest APK?
-A: The drawbacks of downloading and installing Instagram latest APK are that you might download a fake or malicious APK file that contains viruses, malware, spyware, or adware that can harm your device or steal your personal information. You might also download an outdated or corrupted APK file that does not work properly or causes errors and crashes on your device. You might also download an APK file that violates the terms and conditions of the original app developer or the Play Store, which can result in legal consequences or account suspension. You might also need to enable unknown sources on your device settings, which can expose your device to security threats from untrusted sources.
-Q: How often is Instagram updated?
-A: Instagram is updated regularly by its developers to fix bugs, improve performance, add new features, and enhance user experience. The frequency of updates may vary depending on the device model, Android version, region or country, and other factors. However, you can always check for updates on the Play Store or download the latest APK file from a trusted website like APKPure.com.
-Q: How can I contact Instagram support?
-A: If you have any questions, issues, feedbacks, or suggestions about Instagram, you can contact Instagram support by using one of these methods:
-
-Go to your profile page on the app and tap on the menu icon (three horizontal lines) at the top right corner. Then, tap on Settings and scroll down to Help. You can choose from various options such as Help Center, Report a Problem, or Send Feedback.
-Go to the Instagram website and click on the Help link at the bottom of the page. You can access the Help Center, which has various articles and guides on how to use Instagram. You can also submit a request or report a problem using the Contact Us form.
-Go to the Instagram Facebook page and send a message to the page admins. You can also post a comment or a review on the page wall. However, this method may not guarantee a quick or satisfactory response.
-
-
-I hope this article has helped you understand how to download and install Instagram latest APK on your Android device. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy Instagramming!
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download PPSSPP Gold PSP Emulator APK for Android and PC.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download PPSSPP Gold PSP Emulator APK for Android and PC.md
deleted file mode 100644
index d38330dbbd72b06dc3758ab2472c20b741e2afb5..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download PPSSPP Gold PSP Emulator APK for Android and PC.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-PPSSPP Gold: The Best PSP Emulator for Android
-Do you miss playing your favorite PSP games on your Android device? Do you want to enjoy the classics like God of War, Tekken, GTA, and more on your smartphone or tablet? If yes, then you need PPSSPP Gold, the best PSP emulator for Android.
-ppsspp gold psp emulator apk Download File ✸ https://ssurll.com/2uNTvc
-PPSSPP Gold is an app that lets you play PSP games on your Android device with high quality graphics, sound, and speed. You can also customize the settings and controls to suit your preferences. With PPSSPP Gold, you can relive the nostalgia of playing PSP games anytime, anywhere.
-In this article, we will tell you everything you need to know about PPSSPP Gold, including its features, benefits, drawbacks, and how to use it. We will also answer some frequently asked questions about PPSSPP Gold. So, let's get started!
- What is PPSSPP Gold?
-PPSSPP Gold is a premium version of PPSSPP, the original PSP emulator for Android. PPSSPP stands for PlayStation Portable Simulator Suitable for Playing Portably. It is an open-source project that was created by Henrik Rydgård in 2012.
-PPSSPP Gold APK is a paid app that costs $4.99 on Google Play Store. It offers some extra features and supports the development of PPSSPP. However, you can also download PPSSPP Gold APK for free from some third-party websites. But be careful, as some of these websites may contain malware or viruses.
-PPSSPP Gold allows you to play PSP games on your Android device by using ISO or CSO files. These are compressed versions of the original game discs that you can obtain from your own PSP or from online sources. However, you should only download games that you own legally, as downloading pirated games is illegal and unethical.
- Features of PPSSPP Gold
-PPSSPP Gold has many features that make it the best PSP emulator for Android. Here are some of them:
-To download and install PPSSPP Gold on your Android device, you can follow these steps:
-
-Go to Google Play Store and search for PPSSPP Gold - PSP emulator. Alternatively, you can use this link: [text](^2^).
-Tap on the Buy button and complete the payment process.
-Wait for the app to download and install on your device.
-Launch the app and grant the necessary permissions.
-You are now ready to use PPSSPP Gold.
-
-If you want to download PPSSPP Gold APK for free from a third-party website, you can follow these steps:
-
-Go to a trusted website that offers PPSSPP Gold APK for free. For example, you can use this link: [text](^1^).
-Tap on the Download button and wait for the APK file to download on your device.
-Go to your device settings and enable the option to install apps from unknown sources.
-Locate the downloaded APK file and tap on it to install it.
-Launch the app and grant the necessary permissions.
-You are now ready to use PPSSPP Gold.
-
- How to play PSP games on PPSSPP Gold
-To play PSP games on PPSSPP Gold, you need to have ISO or CSO files of the games that you want to play. You can either copy them from your own PSP or download them from online sources. However, as we mentioned before, you should only download games that you own legally.
-Once you have the ISO or CSO files, you can follow these steps to play them on PPSSPP Gold:
-
-Copy the ISO or CSO files to your device storage or SD card. You can use any folder that you prefer, but we recommend creating a separate folder for your PSP games.
-Launch PPSSPP Gold and tap on the Games tab.
-Browse to the folder where you copied the ISO or CSO files and tap on the game that you want to play.
-The game will start loading and you will see the PSP logo and the game menu.
-You can use the on-screen buttons to control the game, or you can use an external controller or keyboard if you have one connected to your device.
-You can also access the PPSSPP Gold settings by tapping on the menu button on the top right corner of the screen. Here, you can adjust the graphics, sound, controls, and other options to enhance your gaming experience.
-Enjoy playing your favorite PSP games on your Android device with PPSSPP Gold!
-
- Benefits of PPSSPP Gold
-PPSSPP Gold has many benefits that make it worth buying or downloading. Here are some of them:
- High compatibility and performance
-PPSSPP Gold is compatible with most of the PSP games that are available in the market. You can play hundreds of games from various genres and categories, such as action, adventure, racing, sports, RPG, and more. PPSSPP Gold also offers high performance and speed, as it can run most of the games smoothly and without lagging or crashing. You can also adjust the frame rate and resolution to optimize the gameplay according to your device capabilities.
- Customizable settings and controls
-PPSSPP Gold allows you to customize the settings and controls of the emulator and the games to suit your preferences. You can change the graphics, sound, network, system, and other options to improve the quality and functionality of the emulator. You can also change the controls and layout of the buttons to make them more comfortable and convenient for you. You can also use gestures, touch screen controls, or external controllers and keyboards to play the games.
- Save and load states
-PPSSPP Gold enables you to save and load states of your games at any point. This means that you can save your progress and resume it later without losing anything. You can also load different states of the same game to try different scenarios or outcomes. This feature is very useful and convenient, as it allows you to play your games without worrying about saving manually or losing data.
- Support for external controllers and keyboards
-PPSSPP Gold supports external controllers and keyboards that are compatible with your Android device. This means that you can use your favorite controller or keyboard to play your PSP games on your Android device. This feature is very beneficial for gamers who prefer physical buttons over touch screen controls. You can also use Bluetooth or USB controllers and keyboards to connect them to your device.
- Drawbacks of PPSSPP Gold
-Despite its many benefits, PPSSPP Gold also has some drawbacks that you should be aware of before using it. Here are some of them:
- Requires a powerful device
-PPSSPP Gold requires a powerful device to run smoothly and without issues. If your device is not powerful enough, you may experience lagging, stuttering, freezing, or crashing while playing some games. You may also encounter graphical glitches, sound distortions, or other errors. Therefore, you should check the minimum requirements of PPSSPP Gold before installing it on your device.
- Some games may not work properly
-Although PPSSPP Gold is compatible with most of the PSP games, some games may not work properly or at all on the emulator. This may be due to various reasons, such as compatibility issues, bugs, glitches, or corrupted files. Therefore, you should not expect every game to work perfectly on PPSSPP Gold. You should also check the compatibility list of PPSSPP Gold before downloading any game.
- May consume a lot of battery and storage
-PPSSPP Gold may consume a lot of battery and storage on your device while running. This is because it uses a lot of resources and power to emulate PSP games on your Android device. Therefore, you should make sure that your device has enough battery and storage space before using PPSSPP Gold. You should also close other apps and background processes that may interfere with the emulator. You can also lower the graphics and sound settings to reduce the battery and storage consumption.
- Conclusion
-PPSSPP Gold is the best PSP emulator for Android that lets you play PSP games on your Android device with high quality graphics, sound, and speed. You can also customize the settings and controls to suit your preferences. With PPSSPP Gold, you can relive the nostalgia of playing PSP games anytime, anywhere.
-However, PPSSPP Gold also has some drawbacks that you should be aware of before using it. You need a powerful device to run it smoothly and without issues. Some games may not work properly or at all on the emulator. And it may consume a lot of battery and storage on your device while running.
-Therefore, you should weigh the pros and cons of PPSSPP Gold before buying or downloading it. You should also check the compatibility list of PPSSPP Gold before downloading any game. And you should only download games that you own legally, as downloading pirated games is illegal and unethical.
-We hope that this article has helped you understand everything you need to know about PPSSPP Gold, the best PSP emulator for Android. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
-Here are some frequently asked questions about PPSSPP Gold:
-
-What is the difference between PPSSPP and PPSSPP Gold?
-PPSSPP is the original PSP emulator for Android and is free and open-source. PPSSPP Gold is the paid version that costs $4.99 on Google Play Store; buying it mainly supports the continued development of PPSSPP, and both versions have the same functionality and compatibility.
-Is PPSSPP Gold safe to use?
-Yes, PPSSPP Gold is safe to use if you download it from Google Play Store or a trusted third-party website. However, you should be careful when downloading games from online sources, as some of them may contain malware or viruses. You should also scan your device regularly with an antivirus app to prevent any infections.
-Is PPSSPP Gold legal to use?
-Yes, PPSSPP Gold is legal to use if you only download games that you own legally. Downloading pirated games is illegal and unethical, as it violates the intellectual property rights of the game developers and publishers. You should also respect the terms and conditions of PPSSPP Gold and Google Play Store when using the app.
-How can I update PPSSPP Gold?
-If you have downloaded PPSSPP Gold from Google Play Store, you can update it automatically or manually through the app store. If you have downloaded PPSSPP Gold APK from a third-party website, you can update it by downloading and installing the latest version of the APK file from the same website.
-How can I contact the developers of PPSSPP Gold?
-If you have any questions, feedback, or suggestions for the developers of PPSSPP Gold, you can contact them through their official website, their GitHub page, their Facebook page, or their Twitter account. You can also join their Discord server or their forum to interact with other users and developers of PPSSPP Gold.
-
-
\ No newline at end of file
diff --git a/spaces/skf15963/summary/fengshen/examples/qa_t5/finetune_t5_cmrc.py b/spaces/skf15963/summary/fengshen/examples/qa_t5/finetune_t5_cmrc.py
deleted file mode 100644
index c8f2c30254f7b907921a83a07dba802279838ac9..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/qa_t5/finetune_t5_cmrc.py
+++ /dev/null
@@ -1,450 +0,0 @@
-# -*- encoding: utf-8 -*-
-'''
-Copyright 2022 The International Digital Economy Academy (IDEA). CCNL team. All rights reserved.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@File : finetune_t5_cmrc.py
-@Time : 2022/10/28 19:57
-@Author : He Junqing
-@Version : 1.0
-@Contact : hejunqing@idea.edu.cn
-@License : (C)Copyright 2022-2023, CCNL-IDEA
-'''
-# here put the import lib
-
-import pytorch_lightning as pl
-import os
-import sys
-import time
-import torch
-import argparse
-from collections import Counter
-from fengshen.utils.utils import chinese_char_tokenize
-from fengshen.data.universal_datamodule import UniversalDataModule
-from pytorch_lightning import Trainer, loggers
-from pytorch_lightning.callbacks import LearningRateMonitor
-from transformers import MT5ForConditionalGeneration, T5Tokenizer, MT5Config
-from torchmetrics.text.rouge import ROUGEScore
-from nltk.translate.bleu_score import corpus_bleu
-
-torch.cuda.empty_cache()
-
-
-class QAFinetuneModel(pl.LightningModule):
- @staticmethod
- def add_model_specific_args(parent_args):
- parser = parent_args.add_argument_group("BaseModel")
- parser.add_argument("--prediction_res_path", default=None, type=str)
- parser.add_argument(
- "--decode_strategy",
- default="greedy",
- choices=["beamsearch", "sampling", "greedy"],
- )
- return parent_args
-
- def __init__(self, args):
- super().__init__()
- self.save_hyperparameters(args)
- self.formator = args.formator
- self.max_target_length = args.max_target_length
- self.decode_strategy = args.decode_strategy
- self.rouge_metric = ROUGEScore(
- rouge_keys=("rougeL", "rouge1", "rouge2"), normalizer=lambda x: x
- )
- self.loss_func = torch.nn.CrossEntropyLoss(reduction="none")
-
- self.model = MT5ForConditionalGeneration.from_pretrained(
- args.pretrained_model_path
- )
- print("using MT5 model")
-
- if args.tokenizer_type == "t5_tokenizer":
- self.tokenizer = T5Tokenizer.from_pretrained(args.pretrained_model_path)
- print("vocab_size:", len(self.tokenizer))
- # self.tokenizer.add_special_tokens(special_token_dict)
- # print('add special tokens to tokenizer,vocab size:',len(self.tokenizer))
- else:
- print("now only the t5_tokenizer is supported")
- self.bleu_val = []
-
- def setup(self, stage=None) -> None:
-
- if stage == "fit":
- train_loader = (
- self.trainer._data_connector._train_dataloader_source.dataloader()
- )
-
- # Calculate total steps
- if self.trainer.max_epochs > 0:
- world_size = self.trainer.world_size
- tb_size = self.hparams.train_batchsize * max(1, world_size)
- ab_size = self.trainer.accumulate_grad_batches * float(
- self.trainer.max_epochs
- )
- self.total_steps = (
- len(train_loader.dataset) * self.trainer.max_epochs // tb_size
- ) // ab_size
- else:
- self.total_steps = (
- self.trainer.max_steps // self.trainer.accumulate_grad_batches
- )
-
- print("Total steps: {}".format(self.total_steps))
- # return super().setup(stage)
-
- def configure_optimizers(self):
- from fengshen.models.model_utils import configure_optimizers
-
- return configure_optimizers(self)
-
- def on_save_checkpoint(self, checkpoint) -> None:
- # Save the current loop info in the mid of epoch
- # if you lightning <= 1.6.0 uncomment the line below
- # checkpoint['loops'] = self.trainer.checkpoint_connector._get_loops_state_dict()
- if (
- self.trainer.global_rank == 0
- and self.trainer.global_step % self.hparams.every_n_train_steps == 0
- ):
- self.model.save_pretrained(
- os.path.join(
- self.trainer.checkpoint_callback.dirpath,
- "hf_pretrained_epoch{}_step{}".format(
- self.trainer.current_epoch, self.trainer.global_step
- ),
- )
- )
-
- def on_load_checkpoint(self, checkpoint) -> None:
- global_step_offset = checkpoint["global_step"]
- if "global_samples" in checkpoint:
- self.consumed_samples = checkpoint["global_samples"]
- self.trainer.fit_loop.epoch_loop._batches_that_stepped = global_step_offset
-
- def training_step(self, batch, batch_idx): # todo: change
- if self.formator == "t5style":
- output = self.model(
- input_ids=batch["input_ids"],
- labels=batch["labels"],
- decoder_input_ids=batch["decoder_input_ids"],
- )
- else:
- output = self.model(
- input_ids=batch["input_ids"],
- input_token_type=batch["token_types"],
- labels=batch["labels"],
- decoder_input_ids=batch["decoder_input_ids"],
- )
- # print(output.logits)
- acc = self.comput_metrix(output.logits, batch["labels"])
- grad = get_gradient_norm(self.model)
- self.log("train_loss", output.loss, sync_dist=True)
- self.log("train_acc", acc, sync_dist=True)
- self.log("train_grad", grad, sync_dist=True)
- return output.loss
-
- def validation_step(self, batch, batch_idx):
- output = self.model(
- input_ids=batch["input_ids"],
- labels=batch["labels"],
- )
- pred_ids = self.model.generate(
- input_ids=batch["input_ids"], max_new_tokens=self.max_target_length
- )
-
- acc = self.comput_metrix(output.logits, batch["labels"])
- # print(output.logits.shape)
- self.log("val_loss", output.loss, sync_dist=True)
- self.log("val_acc", acc, sync_dist=True)
- batch_labels = torch.where(
- batch["labels"] != -100, batch["labels"], self.tokenizer.pad_token_id
- )
-
- ppl = torch.exp(output.loss)
- self.log("val_ppl", ppl, sync_dist=True)
-        pred_tokens = self.tokenizer.batch_decode(
-            pred_ids, clean_up_tokenization_spaces=True, skip_special_tokens=True
-        )
-        label_tokens = self.tokenizer.batch_decode(
-            batch_labels, clean_up_tokenization_spaces=True, skip_special_tokens=True
-        )
- pred_sentences = list(map(remove_pad, pred_tokens))
- # print(label_tokens)
- self.bleu_val.append(compute_bleu(pred_sentences, [[t] for t in label_tokens]))
- candidate = [
- chinese_char_tokenize(p).lstrip("") for p in pred_tokens
- ]
- target = [
- generate_sentence(chinese_char_tokenize(sent)).lstrip("")
- for sent in label_tokens
- ]
- self.rouge_metric.update(preds=candidate, target=target)
- f1 = compute_f1(candidate, label_tokens)
- self.log("val_f1", f1, sync_dist=True)
-
- def on_validation_epoch_end(self) -> None:
- n = len(self.bleu_val)
- avg_bleu = float(sum(self.bleu_val)) / n
- print("bleu:", avg_bleu)
- self.log("val_bleu", avg_bleu)
- self.bleu_val = []
- rouge_dict = self.rouge_metric.compute()
- # reset the metric after once validation
- self.rouge_metric.reset()
- for k, v in rouge_dict.items():
- self.log("val_{}".format(k), v, sync_dist=True)
- if self.trainer._accelerator_connector.cluster_environment.global_rank() == 0:
- print("rouge:\n", rouge_dict)
- return
-
- def predict_step(self, batch, batch_idx):
- num_beams = 1
- do_sample = False
- top_p = None
- if self.decode_strategy == "beamsearch":
- num_beams = 10
- elif self.decode_strategy == "sampling":
- num_beams = 4
- top_p = 0.9
- do_sample = True
-
- prediction_dic = self.model.generate(
- input_ids=batch["input_ids"],
- max_new_tokens=self.max_target_length,
- num_beams=num_beams,
- do_sample=do_sample,
- top_p=top_p,
- no_repeat_ngram_size=3,
- return_dict_in_generate=True,
- output_scores=True,
- )
- output = self.model(
- input_ids=batch["input_ids"],
- labels=batch["labels"],
- )
- prediction_ids = prediction_dic["sequences"]
- loss_tensor = self.loss_func(output.logits.transpose(1, 2), batch["labels"])
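-        # the column index of the EOS token in each label row is used below as the effective target length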
- indexes = torch.where(batch["labels"] == self.tokenizer.eos_token_id)[1]
- loss = torch.sum(loss_tensor, dim=1) / indexes
- return {
- "input_ids": batch["input_ids"],
- "predict_ids": prediction_ids,
- "labels": batch["labels"],
- "decoder_inputs": batch["decoder_input_ids"],
- "loss": loss,
- }
-
- def save_preditions(self, result, args):
- with open(args.prediction_res_path, "w", encoding="utf8") as fw:
- preditions = []
- labels = []
- for batch in result:
- print(batch.keys())
- batch_labels = torch.where(
- batch["labels"] != -100,
- batch["labels"],
- self.tokenizer.pad_token_id,
- )
- for i in range(len(batch["input_ids"])):
-                    context = self.tokenizer.decode(
-                        batch["input_ids"][i],
-                        skip_special_tokens=True,
-                        clean_up_tokenization_spaces=True,
-                    )
-                    pred = self.tokenizer.decode(
-                        batch["predict_ids"][i],
-                        clean_up_tokenization_spaces=True,
-                        skip_special_tokens=True,
-                    )
-                    target = generate_sentence(
-                        self.tokenizer.batch_decode(
-                            batch_labels[i], clean_up_tokenization_spaces=True
-                        )
-                    )
- pred = pred.lstrip("")
- target = target.lstrip("")
- self.rouge_metric.update(
- preds=chinese_char_tokenize(pred),
- target=chinese_char_tokenize(target),
- )
- preditions.append(list(pred))
- labels.append([list(target)])
- fw.write("context:" + "".join(context) + "\n")
- fw.write("pred:" + pred + "\n")
-                    fw.write("target:" + target + "\n")
- fw.write("loss:{:.6f}\n".format(batch["loss"][i].item()))
- fw.write("\n")
- bleu = compute_bleu(preditions, labels)
- fw.write("bleu:{}".format(bleu))
- print("finish prediction, saved in {}".format(args.prediction_res_path))
- return preditions, labels
-
- def comput_metrix(self, logits, labels):
- y_pred = torch.argmax(logits, dim=-1)
- y_true = labels.float()
- pad_num = torch.sum(torch.eq(labels, -100))
- corr = torch.eq(y_pred, y_true)
- acc = (torch.sum(corr.float()) - pad_num) / (
- y_true.view(size=(-1,)).shape[0] - pad_num
- )
- return acc
-
-
-class PredictDataModule(UniversalDataModule):
-
- def predict_dataloader(self):
- return self.test_dataloader()
-
-
-def main():
-
- total_parser = argparse.ArgumentParser("Finetune Dialogue model.")
- total_parser.add_argument("--do_eval_only", action="store_true", default=False)
- total_parser.add_argument("--pretrained_model_path", default=None, type=str)
- total_parser.add_argument("--new_vocab_path", default=None, type=str)
- total_parser.add_argument(
- "--tokenizer_type",
- default="t5_tokenizer",
- choices=["t5_tokenizer", "bert_tokenizer"],
- )
-    total_parser.add_argument("--train_split_size", default=0.995, type=float)
-    total_parser.add_argument("--preprocessing_num_workers", default=10, type=int)
- total_parser.add_argument("--ckpt_path", default=None, type=str)
- total_parser.add_argument("--use_cache", default=False, type=bool)
- total_parser.add_argument(
- "--formator", default="dialog", choices=["dialog", "ccqa", "t5style"]
- )
-
- sys.path.append("../../../")
-
- from fengshen.utils.universal_checkpoint import UniversalCheckpoint
- from qa_dataset import T5StyleDataset, TextGenCollator
-
- total_parser = T5StyleDataset.add_data_specific_args(total_parser)
- total_parser = UniversalDataModule.add_data_specific_args(
- total_parser
- ) # TaskDataModel
- total_parser = Trainer.add_argparse_args(total_parser)
- total_parser = UniversalCheckpoint.add_argparse_args(total_parser)
- total_parser = QAFinetuneModel.add_model_specific_args(
- total_parser
- ) # todo: check names
-
- args = total_parser.parse_args()
- print("Argument parse success.")
- print("superviseT5DataModel load start {}".format(get_time_str()))
-
- config = MT5Config.from_pretrained(args.pretrained_model_path)
- collate_fn = TextGenCollator(
- config=config,
- pad_token_id=config.pad_token_id,
- decoder_start_token_id=config.decoder_start_token_id,
- formator=args.formator)
- if not args.do_eval_only:
- datasets = {'train': T5StyleDataset(args.train_file, args, load_data_type=0, data="train"),
- 'validation': T5StyleDataset(args.val_file, args, load_data_type=0, data="dev")}
-
- model = QAFinetuneModel(args)
- print("superviseT5DataModel load end {}".format(get_time_str()))
-
- data_model = UniversalDataModule(
- tokenizer=None, args=args, collate_fn=collate_fn, datasets=datasets
- )
- print('data loaded')
- checkpoint_callback = UniversalCheckpoint(args)
- lr_monitor = LearningRateMonitor(logging_interval="step")
- logger = loggers.TensorBoardLogger(
- save_dir=os.path.join(args.default_root_dir, "logs/") # TOCHANGE
- )
- trainer = Trainer.from_argparse_args(
- args, logger=logger, callbacks=[checkpoint_callback, lr_monitor]
- )
- trainer.fit(model, data_model)
- else:
- datasets = {'test': T5StyleDataset(args.test_file, args, load_data_type=0, data="test")}
-
- data_model = PredictDataModule(
- tokenizer=None, args=args, collate_fn=collate_fn, datasets=datasets
- )
-
- tokenizer = T5Tokenizer.from_pretrained(args.pretrained_model_path)
- model = QAFinetuneModel(args=args)
- trainer = Trainer.from_argparse_args(args)
- result = trainer.predict(model, data_model, ckpt_path=args.ckpt_path)
- predictions, labels = model.save_preditions(result, args)
- sample = result[0] # first_batch
- batch_labels = torch.where(
- sample["labels"] != -100, sample["labels"], model.tokenizer.pad_token_id
- )
- for i in range(4):
- print(tokenizer.batch_decode(sample["input_ids"][i]))
- print(tokenizer.batch_decode(sample["predict_ids"][i]))
- print(tokenizer.batch_decode(batch_labels[i]))
-
-
-def compute_f1(cand, ref):
- f1_score = []
- for p, t in zip(cand, ref):
- p_tokens = p.split()
- t_tokens = t.split()
-        common = Counter(p_tokens) & Counter(t_tokens)
- num_same = sum(common.values())
- if len(t_tokens) == 0 or len(p_tokens) == 0:
- f1 = int(p == t)
- elif num_same == 0:
- f1 = 0
- else:
- precision = 1.0 * num_same / len(p_tokens)
- recall = 1.0 * num_same / len(t_tokens)
- f1 = (2 * precision * recall) / (precision + recall + 1e-8)
- f1_score.append(f1)
- f1 = sum(f1_score) / float(len(cand))
- return f1
-
-
-def generate_sentence(raw_list):
- words = []
- i = 0
- while i < len(raw_list) and raw_list[i] != "":
- words.append(raw_list[i])
- i += 1
- return "".join(words)
-
-
-def remove_pad(raw_text, ref=False):
- if ref:
- return [raw_text.lstrip("")]
- else:
- return raw_text.lstrip("")
-
-
-def compute_bleu(preditions, labels):
-
- score_nltk = corpus_bleu(labels, preditions)
- return score_nltk
-
-
-def get_gradient_norm(model):
- total_norm = 0
- parameters = [
- p for p in model.parameters() if p.grad is not None and p.requires_grad
- ]
- for p in parameters:
- param_norm = p.grad.detach().data.norm(2)
- total_norm += param_norm.item() ** 2
- total_norm = total_norm**0.5
- return total_norm
-
-
-def get_time_str():
- return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_large_ontonotes4.sh b/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_large_ontonotes4.sh
deleted file mode 100644
index f8bb41316b4cec4bb94fa36ac9bc39c9f3ce41f8..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_large_ontonotes4.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_large_ontonotes4 # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_large_ontonotes4/%x-%j.log # output and error file name (%x=job name, %j=job id)
-
-
-# export CUDA_VISIBLE_DEVICES='2'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_large
-
-TASK=ontonotes4
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/OntoNotes4/
-PRETRAINED_MODEL_PATH=IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.char.bmes \
- --valid_data test.char.bmes \
- --test_data test.char.bmes \
- --train_batchsize 16 \
- --valid_batchsize 16 \
- --max_seq_length 256 \
- --task_name ontonotes4 \
- "
-
-MODEL_ARGS="\
- --learning_rate 3e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --markup bioes \
- --middle_prefix M- \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_f1 \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 200 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_f1:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 30 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 200 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/society-ethics/Average_diffusion_faces/app.py b/spaces/society-ethics/Average_diffusion_faces/app.py
deleted file mode 100644
index df8b983ea97fd73469154af12caa2dfa453cad69..0000000000000000000000000000000000000000
--- a/spaces/society-ethics/Average_diffusion_faces/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from PIL import Image
-import os,csv
-import pandas as pd
-import numpy as np
-import gradio as gr
-
-prompts=pd.read_csv('promptsadjectives.csv')
-masc = prompts['Masc-adj'][:10].tolist()
-fem = prompts['Fem-adj'][:10].tolist()
-adjectives = sorted(masc+fem)
-adjectives.insert(0, '')
-occupations = prompts['Occupation-Noun'][:150].tolist()
-
-
-def get_averages(adj, profession):
- if adj != "":
- prompt = (adj + ' ' + profession).replace(' ','_')
- else:
- prompt = profession.replace(' ','_')
- #TODO: fix upper/lowercase error
- sd14_average = 'facer_faces/SDv14/'+prompt+'.png'
-    if not os.path.isfile(sd14_average):
-        sd14_average = 'facer_faces/blank.png'
-    sdv2_average = 'facer_faces/SDv2/'+prompt+'.png'
-    if not os.path.isfile(sdv2_average):
-        sdv2_average = 'facer_faces/blank.png'
-    dalle_average = 'facer_faces/dalle2/'+prompt.lower()+'.png'
-    if not os.path.isfile(dalle_average):
-        dalle_average = 'facer_faces/blank.png'
-
- return((Image.open(sd14_average), "Stable Diffusion v 1.4"), (Image.open(sdv2_average), "Stable Diffusion v 2"), (Image.open(dalle_average), "Dall-E 2"))
-
-
-with gr.Blocks() as demo:
-
- gr.Markdown("# Text-to-Image Diffusion Model Average Faces")
- gr.Markdown("### We ran 150 professions and 20 adjectives through 3 text-to-image diffusion models to examine what they generate.")
- gr.Markdown("#### Choose one of the professions and adjectives from the dropdown menus and see the average face generated by each model.")
- with gr.Row():
-
- with gr.Column():
- adj = gr.Dropdown(sorted(adjectives, key=str.casefold), value = '',label = "Choose an adjective", interactive= True)
- prof = gr.Dropdown(sorted(occupations, key=str.casefold), value = '', label = "Choose a profession", interactive= True)
- btn = gr.Button("Get average faces!")
- with gr.Column():
- gallery = gr.Gallery(
- label="Average images", show_label=False, elem_id="gallery"
- ).style(grid=[0,3], height="auto")
- gr.Markdown("The three models are: Stable Diffusion v.1.4, Stable Diffusion v.2, and Dall-E 2.")
- gr.Markdown("If you see a black square above, we weren't able to compute an average face for this profession!")
-
-
- btn.click(fn=get_averages, inputs=[adj,prof], outputs=gallery)
-
-demo.launch()
-
diff --git a/spaces/sourav11295/Movie_Recommendation/movie_recommendation.py b/spaces/sourav11295/Movie_Recommendation/movie_recommendation.py
deleted file mode 100644
index 1ba12ca5004cea46ae455fe6d52cbd8b2288f450..0000000000000000000000000000000000000000
--- a/spaces/sourav11295/Movie_Recommendation/movie_recommendation.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Movie Recommendation.ipynb
-
-Automatically generated by Colaboratory.
-
-Original file is located at
- https://colab.research.google.com/drive/16mb8GFViCsAzCEZxBKLbV12h3pQEoU_l
-"""
-
-# Commented out IPython magic to ensure Python compatibility.
-# %pip install gradio
-
-import pandas as pd
-import requests
-
-movies_df = pd.read_csv('./movies.csv')
-links_df = pd.read_csv('./links.csv')
-combined_df = pd.concat([movies_df, links_df[['imdbId','tmdbId']]], axis=1)
-combined_df = combined_df.set_index('title')
-
-combined_df.head()
-
-df = movies_df[['title','genres']]
-df.head()
-
-print(df.isnull().sum())
-
-from sklearn.feature_extraction.text import TfidfVectorizer
-
-tf = TfidfVectorizer(analyzer='word', ngram_range=(1, 3), min_df=0, stop_words='english')
-
-matrix = tf.fit_transform(df['genres'])
-
-from sklearn.metrics.pairwise import linear_kernel
-
-cosine_similarities = linear_kernel(matrix,matrix)
-
-movie_title = df['title']
-
-indices = pd.Series(df.index, index=df['title'])
-
-def movie_recommend(original_title):
-
- id = 'tt'+str(combined_df.loc[[original_title]].imdbId.values).replace('[','').replace(']','').zfill(7)
-
- URL = f"http://www.omdbapi.com/?i={id}&apikey=3bd2165d"
-
- # sending get request and saving the response as response object
- r = requests.get(url = URL)
-
- # extracting data in json format
- data = r.json()
-
- poster_url = data['Poster']
-
- idx = indices[original_title]
-
- sim_scores = list(enumerate(cosine_similarities[idx]))
-
- sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
-
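-    # keep the next ten most similar titles, skipping the top entries (which include the query movie itself)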
- sim_scores = sim_scores[2:12]
-
- movie_indices = [i[0] for i in sim_scores]
-
- results = pd.DataFrame(list(data.items()), columns=['Key','Value']).head(20)
-
- movies = pd.DataFrame(movie_title.iloc[movie_indices].reset_index(drop=True))
-
- return results, movies, poster_url
-
-import gradio as gr
-
-with gr.Blocks(title='Movie Recommendation') as Intf:
- gr.Markdown(value='Content Based Recommendation System')
- with gr.Row():
- with gr.Column():
- inp = gr.Dropdown(choices=list(df['title']), label="Choose Movie")
- btn = gr.Button("Run")
- gr.Markdown(value='Movie Details')
- results = gr.DataFrame()
- poster = gr.Image(label="Poster Image")
- recomms = gr.DataFrame(label='Recommended Content, Similar to this Movie')
- btn.click(fn=movie_recommend, inputs=inp, outputs=[results,recomms,poster])
-
-Intf.launch(debug=False)
\ No newline at end of file
diff --git a/spaces/spacy/gradio_pipeline_visualizer/README.md b/spaces/spacy/gradio_pipeline_visualizer/README.md
deleted file mode 100644
index 45c6804f497c9e475c3cc560671b9b5f3a2ece87..0000000000000000000000000000000000000000
--- a/spaces/spacy/gradio_pipeline_visualizer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Gradio Pipeline Visualizer
-emoji: 👀
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sporg/Ongo/README.md b/spaces/sporg/Ongo/README.md
deleted file mode 100644
index 20f059fc6e23732f7892b6fee236c59200221584..0000000000000000000000000000000000000000
--- a/spaces/sporg/Ongo/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: Ongo
-emoji: 👀
-colorFrom: blue
-colorTo: green
-sdk: docker
-pinned: false
-app_port: 8080
----
\ No newline at end of file
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/data_utils.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/data_utils.py
deleted file mode 100644
index 41afac0bf8f6d70e06bee1a34e220ab396ec247d..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/data_utils.py
+++ /dev/null
@@ -1,382 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import csv
-from pathlib import Path
-import zipfile
-from functools import reduce
-from multiprocessing import cpu_count
-from typing import Any, Dict, List, Optional, Union
-import io
-
-import numpy as np
-import pandas as pd
-import sentencepiece as sp
-from fairseq.data.audio.audio_utils import (
- convert_waveform, _get_kaldi_fbank, _get_torchaudio_fbank, is_npy_data,
- is_sf_audio_data
-)
-import torch
-import soundfile as sf
-from tqdm import tqdm
-
-
-UNK_TOKEN, UNK_TOKEN_ID = "<unk>", 3
-BOS_TOKEN, BOS_TOKEN_ID = "<s>", 0
-EOS_TOKEN, EOS_TOKEN_ID = "</s>", 2
-PAD_TOKEN, PAD_TOKEN_ID = "<pad>", 1
-
-
-def gen_vocab(
- input_path: Path, output_path_prefix: Path, model_type="bpe",
- vocab_size=1000, special_symbols: Optional[List[str]] = None
-):
- # Train SentencePiece Model
- arguments = [
- f"--input={input_path.as_posix()}",
- f"--model_prefix={output_path_prefix.as_posix()}",
- f"--model_type={model_type}",
- f"--vocab_size={vocab_size}",
- "--character_coverage=1.0",
- f"--num_threads={cpu_count()}",
- f"--unk_id={UNK_TOKEN_ID}",
- f"--bos_id={BOS_TOKEN_ID}",
- f"--eos_id={EOS_TOKEN_ID}",
- f"--pad_id={PAD_TOKEN_ID}",
- ]
- if special_symbols is not None:
- _special_symbols = ",".join(special_symbols)
- arguments.append(f"--user_defined_symbols={_special_symbols}")
- sp.SentencePieceTrainer.Train(" ".join(arguments))
- # Export fairseq dictionary
- spm = sp.SentencePieceProcessor()
- spm.Load(output_path_prefix.as_posix() + ".model")
- vocab = {i: spm.IdToPiece(i) for i in range(spm.GetPieceSize())}
- assert (
- vocab.get(UNK_TOKEN_ID) == UNK_TOKEN
- and vocab.get(PAD_TOKEN_ID) == PAD_TOKEN
- and vocab.get(BOS_TOKEN_ID) == BOS_TOKEN
- and vocab.get(EOS_TOKEN_ID) == EOS_TOKEN
- )
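-    # fairseq's Dictionary adds these special symbols itself, so they are dropped from the exported vocab file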
- vocab = {
- i: s
- for i, s in vocab.items()
- if s not in {UNK_TOKEN, BOS_TOKEN, EOS_TOKEN, PAD_TOKEN}
- }
- with open(output_path_prefix.as_posix() + ".txt", "w") as f_out:
- for _, s in sorted(vocab.items(), key=lambda x: x[0]):
- f_out.write(f"{s} 1\n")
-
-
-def extract_fbank_features(
- waveform: torch.FloatTensor,
- sample_rate: int,
- output_path: Optional[Path] = None,
- n_mel_bins: int = 80,
- overwrite: bool = False,
-):
- if output_path is not None and output_path.is_file() and not overwrite:
- return
-
- _waveform = convert_waveform(waveform, sample_rate, to_mono=True)
- # Kaldi compliance: 16-bit signed integers
- _waveform = _waveform * (2 ** 15)
- _waveform = _waveform.numpy()
-
- features = _get_kaldi_fbank(_waveform, sample_rate, n_mel_bins)
- if features is None:
- features = _get_torchaudio_fbank(_waveform, sample_rate, n_mel_bins)
- if features is None:
- raise ImportError(
- "Please install pyKaldi or torchaudio to enable fbank feature extraction"
- )
-
- if output_path is not None:
- np.save(output_path.as_posix(), features)
- return features
-
-
-def create_zip(data_root: Path, zip_path: Path):
- paths = list(data_root.glob("*.npy"))
- with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as f:
- for path in tqdm(paths):
- f.write(path, arcname=path.name)
-
-
-def get_zip_manifest(
- zip_path: Path, zip_root: Optional[Path] = None, is_audio=False
-):
- _zip_path = Path.joinpath(zip_root or Path(""), zip_path)
- with zipfile.ZipFile(_zip_path, mode="r") as f:
- info = f.infolist()
- paths, lengths = {}, {}
- for i in tqdm(info):
- utt_id = Path(i.filename).stem
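-        # payload starts right after the 30-byte fixed ZIP local file header plus the stored filename (no extra field assumed)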
- offset, file_size = i.header_offset + 30 + len(i.filename), i.file_size
- paths[utt_id] = f"{zip_path.as_posix()}:{offset}:{file_size}"
- with open(_zip_path, "rb") as f:
- f.seek(offset)
- byte_data = f.read(file_size)
- assert len(byte_data) > 1
- if is_audio:
- assert is_sf_audio_data(byte_data), i
- else:
- assert is_npy_data(byte_data), i
- byte_data_fp = io.BytesIO(byte_data)
- if is_audio:
- lengths[utt_id] = sf.info(byte_data_fp).frames
- else:
- lengths[utt_id] = np.load(byte_data_fp).shape[0]
- return paths, lengths
-
-
-def gen_config_yaml(
- manifest_root: Path,
- spm_filename: Optional[str] = None,
- vocab_name: Optional[str] = None,
- yaml_filename: str = "config.yaml",
- specaugment_policy: Optional[str] = "lb",
- prepend_tgt_lang_tag: bool = False,
- sampling_alpha: Optional[float] = None,
- input_channels: Optional[int] = 1,
- input_feat_per_channel: Optional[int] = 80,
- audio_root: str = "",
- cmvn_type: str = "utterance",
- gcmvn_path: Optional[Path] = None,
- extra=None
-):
- manifest_root = manifest_root.absolute()
- writer = S2TDataConfigWriter(manifest_root / yaml_filename)
- assert spm_filename is not None or vocab_name is not None
- vocab_name = spm_filename.replace(".model", ".txt") if vocab_name is None \
- else vocab_name
- writer.set_vocab_filename(vocab_name)
- if input_channels is not None:
- writer.set_input_channels(input_channels)
- if input_feat_per_channel is not None:
- writer.set_input_feat_per_channel(input_feat_per_channel)
- specaugment_setters = {
- "lb": writer.set_specaugment_lb_policy,
- "ld": writer.set_specaugment_ld_policy,
- "sm": writer.set_specaugment_sm_policy,
- "ss": writer.set_specaugment_ss_policy,
- }
- specaugment_setter = specaugment_setters.get(specaugment_policy, None)
- if specaugment_setter is not None:
- specaugment_setter()
- if spm_filename is not None:
- writer.set_bpe_tokenizer(
- {
- "bpe": "sentencepiece",
- "sentencepiece_model": (manifest_root / spm_filename).as_posix(),
- }
- )
- if prepend_tgt_lang_tag:
- writer.set_prepend_tgt_lang_tag(True)
- if sampling_alpha is not None:
- writer.set_sampling_alpha(sampling_alpha)
-
- if cmvn_type not in ["global", "utterance"]:
- raise NotImplementedError
-
- if specaugment_policy is not None:
- writer.set_feature_transforms(
- "_train", [f"{cmvn_type}_cmvn", "specaugment"]
- )
- writer.set_feature_transforms("*", [f"{cmvn_type}_cmvn"])
-
- if cmvn_type == "global":
- if gcmvn_path is None:
- raise ValueError("Please provide path of global cmvn file.")
- else:
- writer.set_global_cmvn(gcmvn_path.as_posix())
-
- if len(audio_root) > 0:
- writer.set_audio_root(audio_root)
-
- if extra is not None:
- writer.set_extra(extra)
- writer.flush()
-
-
-def load_df_from_tsv(path: Union[str, Path]) -> pd.DataFrame:
- _path = path if isinstance(path, str) else path.as_posix()
- return pd.read_csv(
- _path,
- sep="\t",
- header=0,
- encoding="utf-8",
- escapechar="\\",
- quoting=csv.QUOTE_NONE,
- na_filter=False,
- )
-
-
-def save_df_to_tsv(dataframe, path: Union[str, Path]):
- _path = path if isinstance(path, str) else path.as_posix()
- dataframe.to_csv(
- _path,
- sep="\t",
- header=True,
- index=False,
- encoding="utf-8",
- escapechar="\\",
- quoting=csv.QUOTE_NONE,
- )
-
-
-def load_tsv_to_dicts(path: Union[str, Path]) -> List[dict]:
- with open(path, "r") as f:
- reader = csv.DictReader(
- f,
- delimiter="\t",
- quotechar=None,
- doublequote=False,
- lineterminator="\n",
- quoting=csv.QUOTE_NONE,
- )
- rows = [dict(e) for e in reader]
- return rows
-
-
-def filter_manifest_df(
- df, is_train_split=False, extra_filters=None, min_n_frames=5, max_n_frames=3000
-):
- filters = {
- "no speech": df["audio"] == "",
- f"short speech (<{min_n_frames} frames)": df["n_frames"] < min_n_frames,
- "empty sentence": df["tgt_text"] == "",
- }
- if is_train_split:
- filters[f"long speech (>{max_n_frames} frames)"] = df["n_frames"] > max_n_frames
- if extra_filters is not None:
- filters.update(extra_filters)
- invalid = reduce(lambda x, y: x | y, filters.values())
- valid = ~invalid
- print(
- "| "
- + ", ".join(f"{n}: {f.sum()}" for n, f in filters.items())
- + f", total {invalid.sum()} filtered, {valid.sum()} remained."
- )
- return df[valid]
-
-
-def cal_gcmvn_stats(features_list):
- features = np.concatenate(features_list)
- square_sums = (features ** 2).sum(axis=0)
- mean = features.mean(axis=0)
- features = np.subtract(features, mean)
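-    # var = E[x^2] - (E[x])^2, computed per feature dimension from the accumulated squared sums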
- var = square_sums / features.shape[0] - mean ** 2
- std = np.sqrt(np.maximum(var, 1e-8))
- return {"mean": mean.astype("float32"), "std": std.astype("float32")}
-
-
-class S2TDataConfigWriter(object):
- DEFAULT_VOCAB_FILENAME = "dict.txt"
- DEFAULT_INPUT_FEAT_PER_CHANNEL = 80
- DEFAULT_INPUT_CHANNELS = 1
-
- def __init__(self, yaml_path: Path):
- try:
- import yaml
-        except ImportError:
-            raise ImportError("Please install PyYAML for S2T data config YAML files")
- self.yaml = yaml
- self.yaml_path = yaml_path
- self.config = {}
-
- def flush(self):
- with open(self.yaml_path, "w") as f:
- self.yaml.dump(self.config, f)
-
- def set_audio_root(self, audio_root=""):
- self.config["audio_root"] = audio_root
-
- def set_vocab_filename(self, vocab_filename: str = "dict.txt"):
- self.config["vocab_filename"] = vocab_filename
-
- def set_specaugment(
- self,
- time_wrap_w: int,
- freq_mask_n: int,
- freq_mask_f: int,
- time_mask_n: int,
- time_mask_t: int,
- time_mask_p: float,
- ):
- self.config["specaugment"] = {
- "time_wrap_W": time_wrap_w,
- "freq_mask_N": freq_mask_n,
- "freq_mask_F": freq_mask_f,
- "time_mask_N": time_mask_n,
- "time_mask_T": time_mask_t,
- "time_mask_p": time_mask_p,
- }
-
- def set_specaugment_lb_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=1,
- freq_mask_f=27,
- time_mask_n=1,
- time_mask_t=100,
- time_mask_p=1.0,
- )
-
- def set_specaugment_ld_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=2,
- freq_mask_f=27,
- time_mask_n=2,
- time_mask_t=100,
- time_mask_p=1.0,
- )
-
- def set_specaugment_sm_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=2,
- freq_mask_f=15,
- time_mask_n=2,
- time_mask_t=70,
- time_mask_p=0.2,
- )
-
- def set_specaugment_ss_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=2,
- freq_mask_f=27,
- time_mask_n=2,
- time_mask_t=70,
- time_mask_p=0.2,
- )
-
- def set_input_channels(self, input_channels: int = 1):
- self.config["input_channels"] = input_channels
-
- def set_input_feat_per_channel(self, input_feat_per_channel: int = 80):
- self.config["input_feat_per_channel"] = input_feat_per_channel
-
- def set_bpe_tokenizer(self, bpe_tokenizer: Dict[str, Any]):
- self.config["bpe_tokenizer"] = bpe_tokenizer
-
- def set_global_cmvn(self, stats_npz_path: str):
- self.config["global_cmvn"] = {"stats_npz_path": stats_npz_path}
-
- def set_feature_transforms(self, split: str, transforms: List[str]):
- if "transforms" not in self.config:
- self.config["transforms"] = {}
- self.config["transforms"][split] = transforms
-
- def set_prepend_tgt_lang_tag(self, flag: bool = True):
- self.config["prepend_tgt_lang_tag"] = flag
-
- def set_sampling_alpha(self, sampling_alpha: float = 1.0):
- self.config["sampling_alpha"] = sampling_alpha
-
- def set_extra(self, data):
- self.config.update(data)
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py
deleted file mode 100644
index a5dd7ae6c15b358206e067385be260c94021bf20..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py
+++ /dev/null
@@ -1,128 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-import os.path as osp
-import numpy as np
-import tqdm
-import torch
-import sys
-
-import faiss
-import torch.nn.functional as F
-
-from wav2vec_cluster_faiss import parse_faiss_specs, Wav2VecFeatureReader
-
-
-def get_parser():
- parser = argparse.ArgumentParser(description="apply clusters")
- # fmt: off
- parser.add_argument('data', help='location of tsv files')
- parser.add_argument('--split', help='split to process', required=True)
- parser.add_argument('--labels', help='split to process', default="phn")
- parser.add_argument('--path', help='path to pca and centroids', required=True)
- parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True)
- parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14)
- parser.add_argument('--max-tsz', type=int, help='batch kmeans up to this much', default=14)
- # fmt: on
-
- return parser
-
-
-def get_iterator(args):
- label_path = osp.join(args.data, f"{args.split}.{args.labels}")
- if osp.exists(label_path):
- lp = open(label_path, "r")
- else:
- lp = None
-
- with open(osp.join(args.data, f"{args.split}.tsv"), "r") as fp:
- lines = fp.read().split("\n")
- root = lines.pop(0).strip()
- files = [line.rstrip() for line in lines if len(line) > 0]
-
- if lp is not None:
- lbls = [line.rstrip() for line in lp]
- else:
- lbls = [None] * len(files)
-
- num = len(files)
- reader = Wav2VecFeatureReader(args.checkpoint, args.layer)
-
- def iterate():
- for fname, lbl in zip(files, lbls):
- file = osp.join(root, fname.split("\t")[0])
- feats = reader.get_feats(file)
- yield feats.data, fname, lbl
-
- return iterate, num, root
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- spec = osp.basename(args.path)
-
- try:
- faiss_spec = parse_faiss_specs(spec.rstrip("/"))[0]
- except:
- print(spec)
- raise
-
- print("Faiss Spec:", faiss_spec, file=sys.stderr)
-
- if faiss_spec.pca:
- A = torch.from_numpy(np.load(osp.join(args.path, "pca_A.npy"))).cuda()
- b = torch.from_numpy(np.load(osp.join(args.path, "pca_b.npy"))).cuda()
- print("Loaded PCA", file=sys.stderr)
-
- centroids = np.load(osp.join(args.path, "centroids.npy"))
- print("Loaded centroids", centroids.shape, file=sys.stderr)
-
- res = faiss.StandardGpuResources()
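-    # "sphere" specs use an inner-product index (cosine similarity once the features are L2-normalized below); otherwise plain L2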
- index_flat = (
- faiss.IndexFlatL2(centroids.shape[1])
- if not faiss_spec.sphere
- else faiss.IndexFlatIP(centroids.shape[1])
- )
- faiss_index = faiss.index_cpu_to_gpu(res, 0, index_flat)
- faiss_index.add(centroids)
-
- generator, num, root = get_iterator(args)
- iterator = generator()
-
- had_labels = False
- label_path = osp.join(args.path, f"{args.split}.{args.labels}")
-
- with torch.no_grad():
- with open(osp.join(args.path, f"{args.split}.src"), "w") as fp, open(
- osp.join(args.path, f"{args.split}.tsv"), "w"
- ) as pp, open(label_path, "w") as lp:
- print(root, file=pp)
- for f, fname, lbl in tqdm.tqdm(iterator, total=num):
- if faiss_spec.pca:
- f = torch.mm(f, A) + b
- if faiss_spec.norm:
- f = F.normalize(f, p=2, dim=-1)
-
- f = f.cpu().numpy()
-
- _, z = faiss_index.search(f, 1)
-
- print(" ".join(str(x.item()) for x in z), file=fp)
- print(fname, file=pp)
-
- if lbl is not None:
- print(lbl, file=lp)
- had_labels = True
- if not had_labels:
- os.remove(label_path)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_fp16_optimizer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_fp16_optimizer.py
deleted file mode 100644
index ce4f1c055ce68b8e3933636fae66cca73c5e9d18..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_fp16_optimizer.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import copy
-import logging
-import unittest
-
-import torch
-from fairseq.optim.fp16_optimizer import FP16Optimizer, MemoryEfficientFP16Optimizer
-from omegaconf import OmegaConf
-
-
-@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU")
-class TestGradientScaling(unittest.TestCase):
- def setUp(self):
- self.x = torch.tensor([2.0]).cuda().half()
- weight = 3.0
- bias = 5.0
- self.error = 1.0
- self.target = torch.tensor([self.x * weight + bias + self.error]).cuda().half()
- self.loss_fn = torch.nn.L1Loss()
-
- self.model = torch.nn.Linear(1, 1)
- self.model.weight.data = torch.tensor([[weight]])
- self.model.bias.data = torch.tensor([bias])
- self.model.cuda().half()
- self.params = list(self.model.parameters())
-
- self.cfg_dls = OmegaConf.create(
- {
- "optimization": {
- "lr": [0.1],
- },
- "optimizer": {
- "_name": "adam",
- "lr": [0.1],
- "adam_betas": "(0.9, 0.999)",
- "adam_eps": 1e-8,
- "weight_decay": 0.0,
- },
- "common": {
- "fp16_init_scale": 1,
- "fp16_scale_window": 1,
- "fp16_scale_tolerance": 1,
- "threshold_loss_scale": 1,
- "min_loss_scale": 1e-4,
- "tpu": False,
- },
- }
- )
- logging.disable(logging.CRITICAL)
-
- def tearDown(self):
- logging.disable(logging.NOTSET)
-
- def run_iter(self, model, params, optimizer):
- optimizer.zero_grad()
- y = model(self.x)
- loss = self.loss_fn(y, self.target)
- optimizer.backward(loss)
- self.assertEqual(loss, torch.tensor(1.0, device="cuda:0", dtype=torch.float16))
-
- grad_norm = optimizer.clip_grad_norm(0)
- self.assertAlmostEqual(grad_norm.item(), 2.2361, 4)
-
- optimizer.step()
- self.assertEqual(
- model.weight,
- torch.tensor(
- [[3.0996]], device="cuda:0", dtype=torch.float16, requires_grad=True
- ),
- )
- self.assertEqual(
- model.bias,
- torch.tensor(
- [5.1016], device="cuda:0", dtype=torch.float16, requires_grad=True
- ),
- )
- self.assertEqual(optimizer.scaler.loss_scale, 2.0)
-
- def test_mixed_precision(self):
- model = copy.deepcopy(self.model)
- params = list(model.parameters())
- optimizer = FP16Optimizer.build_optimizer(self.cfg_dls, params)
-
- self.run_iter(model, params, optimizer)
- self.assertTrue(
- all(
- torch.all(
- fp32_params.eq(
- torch.tensor(
- [3.1000, 5.1000], device="cuda:0", requires_grad=True
- )
- )
- )
- for fp32_params in optimizer.fp32_params.values()
- )
- )
-
- def test_memory_efficient(self):
- model = copy.deepcopy(self.model)
- params = list(model.parameters())
- optimizer = MemoryEfficientFP16Optimizer.build_optimizer(self.cfg_dls, params)
-
- self.run_iter(model, params, optimizer)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/stomexserde/gpt4-ui/Ansi-Paint-In-Ral-Equivalent-HOT.md b/spaces/stomexserde/gpt4-ui/Ansi-Paint-In-Ral-Equivalent-HOT.md
deleted file mode 100644
index b484948b09e69b6d5c51f5e2c42a9fa87854950b..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Ansi-Paint-In-Ral-Equivalent-HOT.md
+++ /dev/null
@@ -1,85 +0,0 @@
-## Ansi Paint In Ral Equivalent
-
-
-
-**Download ->->->-> [https://urluso.com/2tx1UR](https://urluso.com/2tx1UR)**
-
-
-
-# How to Convert ANSI Paint Colors to RAL Colors
-
-
-
-ANSI and RAL are two different color systems that are used for paints, coatings and plastics. ANSI stands for American National Standards Institute, and it is a standard that defines a set of colors for industrial applications. RAL stands for Reichs-Ausschuß für Lieferbedingungen und Gütesicherung, and it is a German color system that is widely used in Europe and other parts of the world.
-
-
-
-Converting ANSI paint colors to RAL colors can be useful if you want to match the color of your enclosure, equipment or product with a different color system. However, there is no direct or official conversion between the two systems, as they are based on different color models and measurements. Therefore, the conversion can only be approximate and may not result in an exact match.
-
-
-
-One way to convert ANSI paint colors to RAL colors is to use online tools that provide color charts or calculators for both systems. For example, you can visit [Converting Colors](https://convertingcolors.com/ral-colors.html) [^1^] to see a list of all RAL colors and their corresponding RGB, CMYK, HSL and HSV values. You can also use the search function to find a specific RAL color by name or number. Similarly, you can visit [McMaster-Carr](https://www.mcmaster.com/ansi-61-paint) [^2^] to see a selection of ANSI paint colors and their product codes. You can also use the filter function to narrow down your choices by color family, finish or application.
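-
-
-
-If you prefer to script the lookup rather than compare charts by eye, the nearest-match idea behind these online tools can be sketched in a few lines of Python. The snippet below is only an illustration: the three RAL entries and their RGB values are rough sample figures rather than official references, and a real conversion would load a complete RAL-to-RGB table from one of the sources above.
-
-
-
-```python
-# Illustrative nearest-RAL lookup by Euclidean distance in RGB space.
-# RAL_SAMPLE is a tiny, approximate sample table for this sketch only;
-# substitute a complete RAL-to-RGB reference table for real conversions.
-RAL_SAMPLE = {
-    "RAL 7035 Light grey": (203, 208, 204),
-    "RAL 5012 Light blue": (59, 131, 189),
-    "RAL 9010 Pure white": (241, 236, 226),
-}
-
-def nearest_ral(rgb):
-    """Return the sample RAL colour whose reference RGB is closest to the given (r, g, b)."""
-    return min(
-        RAL_SAMPLE,
-        key=lambda name: sum((a - b) ** 2 for a, b in zip(RAL_SAMPLE[name], rgb)),
-    )
-
-# Example: a light grey panel measured at roughly (200, 205, 202) maps to the closest sample entry.
-print(nearest_ral((200, 205, 202)))
-```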
-
-
-
-Another way to convert ANSI paint colors to RAL colors is to use a physical color chart or fan deck that shows samples of both systems. You can compare the colors visually and find the closest match by eye. However, this method may not be very accurate, as the colors may vary depending on the lighting conditions, the quality of the print or the screen resolution. Therefore, it is advisable to test the colors on a small area before applying them on a large scale.
-
-
-
-In conclusion, converting ANSI paint colors to RAL colors can be done by using online tools or physical color charts, but the results may not be exact. It is important to consider the purpose and context of the color conversion and to verify the colors before using them.
-
-
-
-## Advantages and Disadvantages of ANSI and RAL Color Systems
-
-
-
-Both ANSI and RAL color systems have their own advantages and disadvantages for different purposes and applications. Here are some of the main pros and cons of each system:
-
-
-
-### ANSI Color System
-
-
-
-- **Advantages:**
-
- - It is a widely recognized and accepted standard for industrial colors in the US.
-
- - It provides a consistent and reliable way of identifying and matching colors for various products and materials.
-
- - It covers a wide range of colors, from basic to metallic, that can suit different needs and preferences.
-
-- **Disadvantages:**
-
- - It is not compatible with other color systems, such as RAL or Pantone, which can cause confusion or inconsistency when working with international clients or suppliers.
-
- - It does not have a digital version or a color conversion tool that can help designers and users to visualize the colors on screen or print.
-
- - It may not reflect the actual appearance of the colors due to variations in printing, lighting, or surface conditions.
-
-
-
-### RAL Color System
-
-
-
-- **Advantages:**
-
- - It is a globally recognized and respected standard for colors, especially in Europe and other parts of the world.
-
- - It offers a comprehensive and versatile color system that includes four different collections: RAL Classic, RAL Design, RAL Effect, and RAL Digital.[^1^]
-
- - It provides a high level of accuracy and quality control for colors, as each color is assigned a unique number and name, and verified by a hologram.[^2^]
-
-- **Disadvantages:**
-
- - It can be expensive and difficult to obtain the official RAL color charts or fan decks, as they are only available from authorized dealers or distributors.
-
- - It can be overwhelming and confusing to choose from the thousands of colors available in the RAL system, especially for beginners or non-professionals.
-
- - It may not match the exact colors of other color systems, such as ANSI or Pantone, which can cause discrepancies or errors when working with different mediums or formats.
-
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Aces Of The Luftwaffe - Squadron Download For Pc Ocean Of Games.md b/spaces/stomexserde/gpt4-ui/Examples/Aces Of The Luftwaffe - Squadron Download For Pc Ocean Of Games.md
deleted file mode 100644
index 5d1c8fe721780da954bece6a9ce0157e4bb4ec50..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Aces Of The Luftwaffe - Squadron Download For Pc Ocean Of Games.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-Aces of the Luftwaffe - Squadron: A thrilling shoot 'em up game for PC
-If you are looking for a fast-paced and action-packed game that will test your reflexes and skills, you might want to check out Aces of the Luftwaffe - Squadron. This is a vertical scrolling shoot 'em up game that features a captivating storyline, epic co-op multiplayer, spectacular boss fights and dogfights, and a variety of power ups and abilities to customize your squadron.
-Aces of the Luftwaffe - Squadron download for pc ocean of games Download Zip › https://urlgoal.com/2uI6xE
-The game is set in an alternate history where a secret German commando called The Aces of the Luftwaffe invaded the USA after Europe was freed from the Nazis. You play as one of the four American pilots who must restore peace and liberty by fighting against their fearsome war machines in bombastic air battles. You can also team up with up to three friends in an epic 4 player local co-op mode.
-The game has a fully voiced narrative that will immerse you in the story and the characters. You can also upgrade each member of your squadron with their personal skill tree as you level up. The game has a creative haptics system that enhances the gameplay and cut scenes with realistic vibrations. The game also boasts an epic movie-like orchestral soundtrack that will make you feel like you are in a blockbuster film.
-If you want to download Aces of the Luftwaffe - Squadron for PC, you can find it on Steam or on Ocean of Games. The game requires Windows Vista or higher, 3.0 GHz CPU, 6 GB RAM, and 3 GB hard disk space. The game is rated E10+ for everyone 10 and older.
-
-Aces of the Luftwaffe - Squadron is a game that will keep you on the edge of your seat with its thrilling gameplay and stunning graphics. If you are a fan of shoot 'em up games or WWII-themed games, you should definitely give this game a try.
-
-One of the most appealing features of Aces of the Luftwaffe - Squadron is the co-op multiplayer mode. You can play with up to three friends on the same screen and enjoy the game together. Each player can choose their own pilot and wingman, and each wingman has a unique ability that can help the team. For example, one wingman can heal the team, another can slow down time, and another can unleash a powerful laser beam. You can also combine your abilities for more devastating effects.
-The co-op mode also adds more challenge and fun to the game. You have to coordinate with your teammates and avoid friendly fire. You also have to share the power ups and lives that you collect during the game. The game has a friendly fire option that you can turn on or off depending on your preference. The game also has a dynamic difficulty system that adjusts the game's difficulty according to the number of players and their skill level.
-The co-op mode is a great way to enjoy Aces of the Luftwaffe - Squadron with your friends and family. You can experience the game's story together and have a blast shooting down enemies and bosses. The co-op mode is available for both the main campaign and the DLC campaign, which adds a new storyline, new characters, new enemies, and new bosses.
-7b8c122e87
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Black Hawk Down 2001 1080p Mkv BEST.md b/spaces/stomexserde/gpt4-ui/Examples/Black Hawk Down 2001 1080p Mkv BEST.md
deleted file mode 100644
index 424e75bad62c8402bb0a594a089fa2d3209b20e6..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Black Hawk Down 2001 1080p Mkv BEST.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-Black Hawk Down (2001) 1080p MKV: A Review of the Action-Packed War Drama
-Black Hawk Down is a 2001 war drama film directed by Ridley Scott and based on the best-selling book by Mark Bowden. The film depicts the events of October 3, 1993, when a group of US Army Rangers and Special Forces were sent to Mogadishu, Somalia, to capture two top lieutenants of a renegade warlord. The mission went awry when two US Black Hawk helicopters were shot down by Somali militia, resulting in a fierce and prolonged firefight that claimed the lives of 18 Americans and hundreds of Somalis.
-black hawk down 2001 1080p mkv Download >>> https://urlgoal.com/2uI7YK
-The film features an ensemble cast of actors, including Josh Hartnett, Ewan McGregor, Tom Sizemore, Eric Bana, Sam Shepard, and many others. The film was praised for its realistic and intense portrayal of the combat scenes, as well as its cinematography, editing, sound design, and score. The film won two Academy Awards for Best Film Editing and Best Sound Mixing, and was nominated for two more for Best Director and Best Cinematography.
-If you are looking for a high-quality version of this film to watch on your home theater system, you might be interested in the 1080p MKV format. This format offers a high resolution of 1920x1080 pixels, which is ideal for large screens and high-definition displays. The MKV format also supports multiple audio and subtitle tracks, which can enhance your viewing experience. You can find various sources of Black Hawk Down (2001) 1080p MKV online, such as Pahe.in[^1^], YTS.mx[^2^], or BTBTBA.com[^3^]. However, be careful when downloading torrents from these sites, as they might contain viruses or malware that can harm your device. You should always use a VPN when downloading torrents to protect your privacy and avoid legal issues.
-Black Hawk Down (2001) 1080p MKV is a great option for fans of war movies who want to enjoy this film in its full glory. The film is a gripping and thrilling account of one of the most notorious military operations in recent history, and a testament to the courage and sacrifice of the soldiers who fought there.
-
-One of the strengths of Black Hawk Down (2001) 1080p MKV is the cast of actors who portray the soldiers involved in the operation. The film features a mix of established stars and newcomers, who deliver convincing and nuanced performances. Josh Hartnett plays Staff Sergeant Matt Eversmann, the leader of Chalk Four, who has to take charge of his inexperienced men and survive the ordeal. Ewan McGregor plays Specialist John Grimes, a desk clerk who volunteers for the mission and proves his worth in combat. Eric Bana plays Sergeant First Class Norm "Hoot" Gibson, a Delta Force operator who is calm and skilled under fire. Tom Sizemore plays Lieutenant Colonel Danny McKnight, the commander of the ground convoy that tries to rescue the downed crews. William Fichtner plays Sergeant First Class Jeff Sanderson, another Delta Force operator who is loyal and brave. Ewen Bremner plays Specialist Shawn Nelson, a Ranger who suffers a hearing loss after an RPG explosion. Sam Shepard plays Major General William F. Garrison, the overall commander of Task Force Ranger.
-The film also features many actors who would later become famous, such as Orlando Bloom as Private First Class Todd Blackburn, a young Ranger who falls from a helicopter during the insertion; Ty Burrell as Captain Larry Wilkinson, a 160th SOAR pilot; Nikolaj Coster-Waldau as Master Sergeant Gary Gordon, a Delta Force sniper who volunteers to protect one of the crash sites; Tom Hardy as Specialist Lance Twombly, a Ranger who is wounded in the leg; and Ioan Gruffudd as Lieutenant John Beales, a 160th SOAR pilot who flies back to base with a wounded soldier.
-
-Black Hawk Down (2001) 1080p MKV showcases the talent and diversity of these actors, who bring to life the personalities and emotions of the real soldiers they portray. The film does not focus on any single character, but rather gives each one a moment to shine and contribute to the story. The film also does not glorify war or take sides, but rather shows the harsh realities and challenges faced by both sides of the conflict.
-cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bully Dog Update Agent [WORK] Download.md b/spaces/stomexserde/gpt4-ui/Examples/Bully Dog Update Agent [WORK] Download.md
deleted file mode 100644
index d2afd2171d71d5462361e18d20b75d0777db7f8f..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Bully Dog Update Agent [WORK] Download.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-How to Download and Install the Bully Dog Update Agent
-The Bully Dog Update Agent is a software program that allows you to update your Bully Dog devices with the latest tunes and features. The Update Agent is compatible with Windows-based PCs and requires an Internet connection. Here are the steps to download and install the Update Agent:
-
-Click here to go to the Update Agent download link[^1^] [^2^].
-Save the file to a known location on your computer.
-Run the program and let the agent install.
-To find the agent, click Start Menu > All Programs > Bully Dog Technologies.
-Plug your device into your computer and follow the new hardware wizard.
-
-Note: The Update Agent is not Mac compatible. You will need a Windows-based PC to update your device. If you have any issues with downloading or installing the Update Agent, you can contact Bully Dog customer support or use TeamViewer for remote assistance[^2^].
-bully dog update agent download Download ✶ https://urlgoal.com/2uI7gb
-How to Update Your Bully Dog Device After Installing the Update Agent
-After you have installed the Update Agent on your computer, you can use it to check for updates and download them to your Bully Dog device. Depending on your device model, you may have different options for updating. Here are the general steps to update your device:
-
-Connect your Bully Dog device to your computer using a USB cable.
-Open the Update Agent program on your computer.
-Click the "Check for Updates" button and wait for the program to scan your device and find any available updates.
-If there are any updates, you will see a list of them on the screen. You can select which ones you want to download and install on your device.
-Click the "Update" button and wait for the program to transfer the updates to your device. Do not disconnect your device or close the program during this process.
-When the update is complete, you will see a confirmation message on the screen. You can then disconnect your device and use it normally.
-
-Note: Some devices may require you to update directly from the device over WiFi. To do this, you need to power up your device through your OBDII port on your vehicle or via USB connection from your computer, go to the main menu, scroll down and select "Device Settings", and then select "WiFi Update"[^1^].
-Why You Should Update Your Bully Dog Device Regularly
-Updating your Bully Dog device is important for several reasons. First, updating ensures that your device has the latest firmware and software that can improve its performance, stability, and compatibility with your vehicle. Second, updating allows you to access new features and functions that Bully Dog may release over time, such as new tunes, customizations, data logging options, and more. Third, updating can fix any bugs or issues that may affect your device or your vehicle. Updating can also help you avoid any potential problems that may arise from using outdated software or firmware.
-Therefore, it is recommended that you check for updates regularly and install them as soon as possible. You can also enable automatic updates on some devices so that they can update themselves whenever they are connected to WiFi[^1^]. By keeping your device updated, you can enjoy the best performance and experience from your Bully Dog product.
-
-7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Extensoft Artisteer 4.3 Keygen [UPD] Generator.md b/spaces/stomexserde/gpt4-ui/Examples/Extensoft Artisteer 4.3 Keygen [UPD] Generator.md
deleted file mode 100644
index f7db84a857d670ae8a76a0a66d42b0e068aabdd1..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Extensoft Artisteer 4.3 Keygen [UPD] Generator.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-How to Crack Extensoft Artisteer 4.3 with Keygen Generator
-Extensoft Artisteer 4.3 is a web design automation tool that allows you to create stunning and unique website templates and blog themes without any coding skills. You can export your designs as WordPress themes, Joomla templates, Drupal themes, Blogger templates, DotNetNuke skins and more. However, to enjoy the full features of Artisteer 4.3, you need to activate it with a license key.
-In this article, we will show you how to crack Extensoft Artisteer 4.3 with a keygen generator that can generate valid license keys for you. This way, you can use Artisteer 4.3 for free and create unlimited web designs for yourself or your clients.
-extensoft artisteer 4.3 keygen generator Download ✺ https://urlgoal.com/2uI9KD
-Steps to Crack Extensoft Artisteer 4.3 with Keygen Generator
-
-Download and install Extensoft Artisteer 4.3 trial version from the official website or from any trusted source.
-Download the keygen generator from the link below and extract it to a folder on your computer.
-Run the keygen generator as administrator and select the edition of Artisteer 4.3 that you want to crack (Home or Academic).
-Click on the Patch button and browse to the installation directory of Artisteer 4.3 (usually C:\Program Files\Artisteer 4\bin or C:\Program Files (x86)\Artisteer 4\bin).
-Wait for the patching process to complete and then click on the Generate button to generate a license key.
-Copy the license key and paste it into the activation window of Artisteer 4.3.
-Click on the Activate button and enjoy using Extensoft Artisteer 4.3 full version.
-
-Disclaimer
-This article is for educational purposes only. We do not condone or encourage any illegal or unethical use of Extensoft Artisteer 4.3 or any other software. Please respect the intellectual property rights of the developers and purchase a legitimate license if you like their product.
-
-Benefits of Using Extensoft Artisteer 4.3
-Extensoft Artisteer 4.3 is a powerful and easy-to-use web design tool that can help you create professional and attractive web designs in minutes. Here are some of the benefits of using Artisteer 4.3:
-
-
-You can design awesome blogs and cool web templates without learning Photoshop, CSS, HTML or other web technologies.
-You can export your designs to various CMS platforms such as WordPress, Joomla, Drupal, Blogger and more.
-You can preview, download and edit online website and template samples from the Artisteer gallery.
-You can create perfectly correct, validated HTML and CSS that conform to web standards.
-You can generate cool web design ideas and adjust generated designs to suit your preferences.
-You can choose and use many included design elements, from backgrounds to photo objects and buttons.
-You can save money on web design templates and WordPress themes.
-You can generate ideas, prototypes and quick websites for your clients and friends.
-
-Tips and Tricks for Using Extensoft Artisteer 4.3
-To get the most out of Extensoft Artisteer 4.3, here are some tips and tricks that you can use:
-
-Use the Suggest Design button to get random design ideas from Artisteer 4.3.
-Use the Edit in Browser button to edit your design online using the Artisteer online editor.
-Use the Export as HTML button to export your design as a standalone HTML file that you can upload to any web server.
-Use the Export as Theme button to export your design as a theme for your chosen CMS platform.
-Use the Save Project button to save your design as an Artisteer project file that you can open and edit later.
-Use the Options menu to customize various settings of Artisteer 4.3 such as language, interface, output format and more.
-Use the Help menu to access the user manual, tutorials, support forum and other resources of Artisteer 4.3.
- cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gta San Andreas Tokyo Drift Download Pc Free ((INSTALL)).md b/spaces/stomexserde/gpt4-ui/Examples/Gta San Andreas Tokyo Drift Download Pc Free ((INSTALL)).md
deleted file mode 100644
index 2be9706d2ec914c56ba86a008456c0630ff03f2c..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Gta San Andreas Tokyo Drift Download Pc Free ((INSTALL)).md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-How to Download GTA San Andreas Tokyo Drift for PC Free
-GTA San Andreas is one of the most popular and iconic games in the Grand Theft Auto series. It features a vast open-world map, a rich story, and a variety of missions and activities. But what if you want to spice up your gameplay with some fast and furious drifting action? That's where GTA San Andreas Tokyo Drift comes in.
-gta san andreas tokyo drift download pc free Download Zip ○ https://urlgoal.com/2uI745
-GTA San Andreas Tokyo Drift is a mod that transforms the game into a racing adventure inspired by the movie The Fast and the Furious: Tokyo Drift. It adds new cars, new tracks, new missions, new animations, and new effects to the game. You can drift around the streets of San Andreas, compete with other racers, and even customize your own vehicles.
-But how can you download GTA San Andreas Tokyo Drift for PC free? There are several websites that offer this mod for download, but not all of them are safe or reliable. Some may contain viruses, malware, or unwanted software that can harm your computer or compromise your privacy. Others may have broken links, outdated versions, or incomplete files that can cause errors or crashes.
-To avoid these problems, we recommend you to download GTA San Andreas Tokyo Drift from a trusted source. One of them is LibertyCity.net[^1^], a website that hosts various mods for GTA games. Here are the steps to download GTA San Andreas Tokyo Drift from LibertyCity.net:
-
-
-Go to this link to access the mod page.
-Click on the green "Download" button at the bottom of the page.
-Wait for the download to finish. The file size is 396 MB.
-Extract the downloaded file using a program like WinRAR or 7-Zip.
-Copy and paste the contents of the extracted folder into your GTA San Andreas installation directory. This is usually located at C:\Program Files\Rockstar Games\GTA San Andreas.
-Run the game and enjoy GTA San Andreas Tokyo Drift!
-
-Note: Before installing any mod, it is advisable to backup your original game files in case something goes wrong. You can also use a mod manager like SAMI or Mod Loader to make the installation easier and safer.
-If you are looking for other sources to download GTA San Andreas Tokyo Drift for PC free, you can also check out these websites:
-
-GTAinside.com[^2^] : This website offers a different version of GTA San Andreas Tokyo Drift called Ultimate Drifting Mod v1. It claims to allow you to perform effective drifts like in NFS Underground 2 or 2 Fast 2 Furious. It modifies the handling of Hotring Racer, but you can use it for any other two-door car.
-Gta.cz[^3^] : This website offers another version of GTA San Andreas Tokyo Drift called Tokyo Drift. It does not provide much information about the mod, but it has some screenshots that show some of the cars and tracks included.
-
-We hope this article has helped you to download GTA San Andreas Tokyo Drift for PC free. Have fun drifting in San Andreas!
-7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/memory/test_brain_memory.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/memory/test_brain_memory.py
deleted file mode 100644
index b5fc942ca5ed87f85db30c02a3b34b198723fbee..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/tests/metagpt/memory/test_brain_memory.py
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/8/27
-@Author : mashenquan
-@File : test_brain_memory.py
-"""
-import json
-from typing import List
-
-import pydantic
-
-from metagpt.memory.brain_memory import BrainMemory
-from metagpt.schema import Message
-
-
-def test_json():
- class Input(pydantic.BaseModel):
- history: List[str]
- solution: List[str]
- knowledge: List[str]
- stack: List[str]
-
- inputs = [
- {
- "history": ["a", "b"],
- "solution": ["c"],
- "knowledge": ["d", "e"],
- "stack": ["f"]
- }
- ]
-
- for i in inputs:
- v = Input(**i)
- bm = BrainMemory()
- for h in v.history:
- msg = Message(content=h)
- bm.history.append(msg.dict())
- for h in v.solution:
- msg = Message(content=h)
- bm.solution.append(msg.dict())
- for h in v.knowledge:
- msg = Message(content=h)
- bm.knowledge.append(msg.dict())
- for h in v.stack:
- msg = Message(content=h)
- bm.stack.append(msg.dict())
- s = bm.json()
- m = json.loads(s)
- bm = BrainMemory(**m)
- assert bm
- for v in bm.history:
- msg = Message(**v)
- assert msg
-
-if __name__ == '__main__':
- test_json()
\ No newline at end of file
diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/textual_inversion/ui.py b/spaces/supertori/files/stable-diffusion-webui/modules/textual_inversion/ui.py
deleted file mode 100644
index 5b75f799e745fa693cda06763af80069324a964f..0000000000000000000000000000000000000000
--- a/spaces/supertori/files/stable-diffusion-webui/modules/textual_inversion/ui.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import html
-
-import gradio as gr
-
-import modules.textual_inversion.textual_inversion
-import modules.textual_inversion.preprocess
-from modules import sd_hijack, shared
-
-
-def create_embedding(name, initialization_text, nvpt, overwrite_old):
- filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, overwrite_old, init_text=initialization_text)
-
- sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings()
-
- return gr.Dropdown.update(choices=sorted(sd_hijack.model_hijack.embedding_db.word_embeddings.keys())), f"Created: {filename}", ""
-
-
-def preprocess(*args):
- modules.textual_inversion.preprocess.preprocess(*args)
-
- return f"Preprocessing {'interrupted' if shared.state.interrupted else 'finished'}.", ""
-
-
-def train_embedding(*args):
-
- assert not shared.cmd_opts.lowvram, 'Training models with lowvram not possible'
-
- apply_optimizations = shared.opts.training_xattention_optimizations
- try:
- if not apply_optimizations:
- sd_hijack.undo_optimizations()
-
- embedding, filename = modules.textual_inversion.textual_inversion.train_embedding(*args)
-
- res = f"""
-Training {'interrupted' if shared.state.interrupted else 'finished'} at {embedding.step} steps.
-Embedding saved to {html.escape(filename)}
-"""
- return res, ""
- except Exception:
- raise
- finally:
- if not apply_optimizations:
- sd_hijack.apply_optimizations()
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Solucionariofisicageneralalvarengaymaximo.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Solucionariofisicageneralalvarengaymaximo.md
deleted file mode 100644
index 60e377eb3adb11314ab7bc2733341801f8e16a41..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Solucionariofisicageneralalvarengaymaximo.md
+++ /dev/null
@@ -1,6 +0,0 @@
-solucionariofisicageneralalvarengaymaximo Download Zip ->>->>->> https://cinurl.com/2uEXGr
-
-Solucionario Fisica General Alvarenga Y Maximo. Hi everyone & welcome to this money cauldron! With just one click, you can help a person in need cover ... In this guide, I would like to share with you some interesting and important aspects related to working in our country in terms of finance. We live in a world where there is no scarcity. People we didn't know before became our neighbors. 8a78ff9644
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wic Reset Keygen Download Crack.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wic Reset Keygen Download Crack.md
deleted file mode 100644
index 70f51a0efd5ef446d434d472f21d4a65d0a00616..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wic Reset Keygen Download Crack.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-How to Download and Use WIC Reset Keygen Crack
-WIC Reset Keygen Crack is a software tool that allows you to reset the waste ink counter (WIC) of your Epson printer. The WIC is a feature that monitors the amount of ink used by the printer and warns you when it is near or over its limit. When the WIC reaches its maximum value, the printer stops working and displays an error message.
-wic reset keygen download crack Download ✒ https://cinurl.com/2uEX9U
-Resetting the WIC can help you save money and avoid buying new ink cartridges or a new printer. However, resetting the WIC is not an official procedure supported by Epson, and it may void your warranty or damage your printer. Therefore, you should use this tool at your own risk and only when necessary.
-In this article, we will show you how to download and use WIC Reset Keygen Crack to reset your Epson printer's WIC.
-Step 1: Download WIC Reset Keygen Crack
-The first step is to download WIC Reset Keygen Crack from a reliable source. There are many websites that offer this tool, but some of them may contain viruses, malware, or fake files. To avoid these risks, we recommend you to use one of the following links:
-
-[^1^] - This website provides a direct download link for WIC Reset Keygen Crack, as well as a detailed guide on how to use it.
-[^3^] - This website allows you to download WIC Reset Keygen Crack as a PDF file, which you can then open and extract the software.
-[^4^] - This website offers a free download of WIC Reset Keygen Crack, as well as other useful tools for Epson printers.
-
-After downloading the file, you need to unzip it and run the wic-reset.exe file. You may need to disable your antivirus or firewall software temporarily, as they may block the program from running.
-
-Step 2: Generate a Serial Key
-The next step is to generate a serial key for WIC Reset Keygen Crack. The serial key is a code that activates the software and allows you to use its features. To generate a serial key, you need to follow these steps:
-
-Open WIC Reset Keygen Crack and click on the "Generate" button.
-A random serial key will appear in the text box. Copy it and paste it somewhere safe.
-Close WIC Reset Keygen Crack and open WIC Reset Utility.
-Click on the "Enter Key" button and paste the serial key in the dialog box.
-Click on "OK" and wait for the confirmation message.
-
-Congratulations! You have successfully activated WIC Reset Utility with WIC Reset Keygen Crack.
-Step 3: Reset Your Printer's WIC
-The final step is to reset your printer's WIC using WIC Reset Utility. To do this, you need to follow these steps:
-
-Connect your printer to your computer with a USB cable and turn it on.
-Open WIC Reset Utility and select your printer model from the drop-down menu.
-Click on the "Read Waste Counters" button and check the current value of your printer's WIC.
-If the value is more than 100%, click on the "Reset Waste Counters" button and confirm your choice.
-Wait for the process to finish and check the new value of your printer's WIC.
-If the value is less than 10%, click on the "Finish" button and close WIC Reset Utility.
-
-Congratulations! You have successfully reset your printer's WIC using WIC Reset Utility and WIC Reset Keygen Crack.
-d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/((FREE)) Free Ebook Of Mukesh Singhal And N G Shivaratri Advanced Concepts In Operating Systems Mcgraw Hill 2.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/((FREE)) Free Ebook Of Mukesh Singhal And N G Shivaratri Advanced Concepts In Operating Systems Mcgraw Hill 2.md
deleted file mode 100644
index 3006a85ae8332f8c07660ff5a8db8375874210b2..0000000000000000000000000000000000000000
--- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/((FREE)) Free Ebook Of Mukesh Singhal And N G Shivaratri Advanced Concepts In Operating Systems Mcgraw Hill 2.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-How to Download a Free Ebook of Advanced Concepts in Operating Systems by Mukesh Singhal and Niranjan G. Shivaratri
-If you are looking for a free ebook of Advanced Concepts in Operating Systems by Mukesh Singhal and Niranjan G. Shivaratri, you might be disappointed to know that there is no official or legal way to get it online. This book is published by McGraw-Hill and is not available for free download on any website. However, there are some alternative ways to access this book without paying for it.
-free ebook of mukesh singhal and n g shivaratri advanced concepts in operating systems mcgraw hill 2 Download ✵✵✵ https://urluss.com/2uCDHs
-One way is to borrow it from a library or a friend who has a copy of it. You can also try to find a used copy of it on online platforms like Amazon or eBay. Another way is to use an online service like Scribd or Google Books that allows you to read some parts of the book for free or for a subscription fee. However, these services may not have the complete or latest edition of the book, and you may not be able to download or print the pages.
-A third way is to use an online archive like Archive.org that has a scanned version of the book available for free download[^1^]. However, this version may have poor quality, missing pages, or errors in the text. Moreover, downloading this version may violate the copyright of the authors and the publisher, so you should use it at your own risk and only for personal or educational purposes.
-In conclusion, there is no easy or legal way to get a free ebook of Advanced Concepts in Operating Systems by Mukesh Singhal and Niranjan G. Shivaratri online. You may have to buy it from a bookstore or an online retailer, or use one of the alternative methods mentioned above. However, you should be aware of the quality and legality issues involved in using these methods.
-
-Advanced Concepts in Operating Systems by Mukesh Singhal and Niranjan G. Shivaratri is a comprehensive textbook that covers the fundamental concepts and mechanisms of operating systems, with a focus on distributed, database, and multiprocessor operating systems. The book provides a detailed analysis of the design principles, algorithms, and implementation techniques of various types of operating systems, such as process synchronization, deadlock detection and avoidance, distributed mutual exclusion, distributed agreement, distributed file systems, distributed shared memory, distributed scheduling, fault tolerance, security, and parallel programming models. The book also includes numerous examples and case studies of real-world operating systems, such as UNIX, Mach, Amoeba, Chorus, and DASH.
-The book is intended for the second course in operating systems, usually called advanced operating systems or distributed systems, taught in computer science and computer engineering departments. The book assumes that the readers have a basic knowledge of operating systems concepts and data structures. The book is suitable for both undergraduate and graduate students who want to learn more about the theory and practice of advanced operating systems.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Adobe InDesign CC 2014 Multilanguage (64 Bit-crack) Free Download Free.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Adobe InDesign CC 2014 Multilanguage (64 Bit-crack) Free Download Free.md
deleted file mode 100644
index 885618c2325d37558e99aabc0849fd2b7bd2013f..0000000000000000000000000000000000000000
--- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Adobe InDesign CC 2014 Multilanguage (64 Bit-crack) Free Download Free.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Adobe InDesign CC 2014 Multilanguage (64 bit-crack) free download DOWNLOAD ::: https://urluss.com/2uCH9B
-
- 8a78ff9644
-
-
-
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/deeplabv3_r50-d8.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/deeplabv3_r50-d8.py
deleted file mode 100644
index d7a43bee01422ad4795dd27874e0cd4bb6cbfecf..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/deeplabv3_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='ASPPHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- dilations=(1, 12, 24, 36),
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/terfces0erbo/CollegeProjectV2/CRACK VersaCheck Gold 2007 PATCHED.md b/spaces/terfces0erbo/CollegeProjectV2/CRACK VersaCheck Gold 2007 PATCHED.md
deleted file mode 100644
index 72e952613d87bc97361646fba918470df77da0cf..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/CRACK VersaCheck Gold 2007 PATCHED.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-How to Use VersaCheck Gold 2007 for Check Printing
-VersaCheck Gold 2007 is a software that allows you to create and print your own checks. It is designed to help you manage your finances, monitor your transactions and payments, and prevent fraud. Here are some steps on how to use VersaCheck Gold 2007 for check printing:
-CRACK VersaCheck Gold 2007 Download Zip >>> https://bytlly.com/2uGkhI
-
-Install VersaCheck Gold 2007 on your computer. You can download it from the official website[^1^] or use a CD-ROM if you have one. You will need to enter your serial number and activation code during the installation process.
-Launch VersaCheck Gold 2007 and create a new account. You will need to enter your personal and bank information, such as your name, address, phone number, account number, routing number, etc. You can also customize your check design by choosing from various templates, fonts, colors, logos, etc.
-Load blank check paper into your printer. VersaCheck Gold 2007 supports various types of check paper, such as VersaCheck Form #1000[^2^], #1001[^2^], #3000[^2^], etc. Make sure the paper is aligned correctly and the printer settings are adjusted accordingly.
-Write a check using VersaCheck Gold 2007. You can enter the payee name, amount, date, memo, etc. You can also use the software to track your expenses, balance your accounts, schedule payments, etc.
-Print the check using VersaCheck Gold 2007. You can preview the check before printing and make any changes if needed. You can also print multiple checks at once or print a batch of checks from a file.
-Sign the check and deliver it to the payee. You can also use VersaCheck Gold 2007 to print envelopes, labels, deposit slips, etc. You can also use the software to monitor your transactions and payments, receive alerts for suspicious activities, etc.
-
-VersaCheck Gold 2007 is a powerful and secure check printing software that can help you manage your finances and prevent fraud. However, it is not a free software and you will need to purchase it from the official website or other sources. You will also need to buy blank check paper and ink cartridges for your printer. You should also be careful when using VersaCheck Gold 2007 and follow the legal and ethical guidelines for check writing.
-Benefits of Using VersaCheck Gold 2007 for Check Printing
-VersaCheck Gold 2007 is not only a check printing software, but also a comprehensive financial management tool that can help you with various aspects of your personal and business finances. Here are some of the benefits of using VersaCheck Gold 2007 for check printing:
-
-You can save money and time by creating and printing your own checks. You don't have to order pre-printed checks from your bank or other vendors, which can be expensive and take a long time to arrive. You can also avoid fees for bounced checks, overdrafts, etc.
-You can protect yourself and your clients from fraud and identity theft by using VersaCheck Gold 2007. The software uses advanced security features, such as DNA Secure, which embeds a unique fingerprint on each check that can be verified by the bank. You can also use the software to encrypt your data, backup your files, etc.
-You can improve your cash flow and budgeting by using VersaCheck Gold 2007. The software allows you to track your income and expenses, balance your accounts, reconcile your statements, etc. You can also use the software to create invoices, estimates, purchase orders, statements, etc.
-You can enhance your professional image and brand by using VersaCheck Gold 2007. The software allows you to customize your check design with your own logo, slogan, signature, etc. You can also use the software to print business cards, badges, labels, etc.
-
-VersaCheck Gold 2007 is a versatile and reliable check printing software that can help you with various financial tasks and goals. You can use it for personal or business purposes, depending on your needs and preferences. You can also use it with other VersaCheck products, such as VersaInk, VersaToner, etc.
-
-d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Kasumi Rebirth V3 25 [BEST] Cracked Mag.md b/spaces/terfces0erbo/CollegeProjectV2/Kasumi Rebirth V3 25 [BEST] Cracked Mag.md
deleted file mode 100644
index 792866e25dc2319c032f51e0b1f6a2bed8554a1c..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Kasumi Rebirth V3 25 [BEST] Cracked Mag.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-kasumi rebirth v3 25 cracked magazine Crack Winpcsign Pro 2010 25. Download Kasumi Rebirth V3 1 Cracked Full Version 2 31: gistfile1.txt. 3 Benefits Digital Asset Management Provides Game Designers and. VirusBest-v9.9.4.669.Incl.Lic.Key. And majority of the times you are stuck with no way out to find.
-Sawatex (Cracked) Kasumi Rebirth V3 Full Version 1. Kasumi rebirth for iphone and ipod touch, version 2.1, kasumi reborn 2.1. Kasumi rebirth for ios device. Kasumi Rebirth V3 Crack Code is initially launched as a platform. And then they continued with the game. It's a platform game only with a.
-Kasumi Rebirth V3 25 Cracked Mag Download File >>> https://bytlly.com/2uGk9t
-Cazeneuve (Cracked) Kasumi Rebirth V3 Crack Codes. Kasumi Rebirth V3 Crack Codes are initially launched as a platform. The current team. Kasumi rebirth for iphone and ipod touch, version 2.1, kasumi reborn v2.0. The current team.
-Kasumi Rebirth V3 Crack. Kasumi reborn is an android game which will surely. Kasumi Rebirth V3 Crack Code is initially launched as a platform. The current team. Kasumi rebirth for iphone and ipod touch, version 2.1, kasumi reborn v2.0. The current team.
-Was anybody able to crack the latest version of Kasumi Rebirth It's an amazing game and I've seen some of its content and there is a significant . Sawatex., however you may download the full cracked version of it here: (Cracked by Skidrow) http://kasumi-rebirth.blogspot.fr In Kasumi Rebirth v3.25,.
-Download Movies HD Action. Watch Your Favorite Movies Online. Kasumi Rebirth V3 25 Cracked Mag free download. Kasumi reborn v3 cracked software download cracked software download. Download of Kasumi reborn v3. Ffhc Kasumi Rebirth V3 25 Crack Winpcsign Pro 2010 25. Of kasumi reborn i.e., Sawatex., however you may download the full cracked. Ffhc Kasumi Rebirth V3 1 Cracked Full Version 2 31: gistfile1.txt.
-
-899543212b
-
-
\ No newline at end of file
diff --git a/spaces/the-neural-networker/multilingual-language-recognition/README.md b/spaces/the-neural-networker/multilingual-language-recognition/README.md
deleted file mode 100644
index 096020c0a85a6c3adea4adfd4574eaa24cc4283b..0000000000000000000000000000000000000000
--- a/spaces/the-neural-networker/multilingual-language-recognition/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Multilingual Language Recognition
-emoji: 🌍
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/theaster/RVC-New-Arknights/infer_pack/commons.py b/spaces/theaster/RVC-New-Arknights/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/theaster/RVC-New-Arknights/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/((EXCLUSIVE)) 23km C051 07v.pdf.md b/spaces/tialenAdioni/chat-gpt-api/logs/((EXCLUSIVE)) 23km C051 07v.pdf.md
deleted file mode 100644
index a26d22ece81306e4012b595daef8ae23c9d3b6f8..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/((EXCLUSIVE)) 23km C051 07v.pdf.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-How to use 23KM-C051-07V stepper motor for your DIY projects
-A stepper motor is a type of electric motor that can rotate in discrete steps, making it ideal for precise positioning and speed control. One of the common stepper motors used by hobbyists and makers is the 23KM-C051-07V, which is a hybrid type with a bipolar winding configuration. This article will show you how to use this motor for your DIY projects, such as robotics, CNC machines, 3D printers, etc.
-((EXCLUSIVE)) 23km C051 07v.pdf Download Zip ○ https://urlcod.com/2uK9Ez
-What is the 23KM-C051-07V stepper motor?
-The 23KM-C051-07V stepper motor is a product of NMB Technologies Corporation, a subsidiary of MinebeaMitsumi Inc., which is a global leader in precision electro-mechanical components. The 23KM-C051-07V is part of the 23KM series of standard hybrid step motors, which have a 1.8 degree step angle and a NEMA 23 frame size. The motor has four wires (two for each coil) and can be driven by a bipolar stepper driver. The motor has a rated voltage of 7.0 V, a rated current of 1.0 A per phase, a holding torque of 0.55 Nm, and a resistance of 7.0 ohms per phase[^1^]. The motor also has a single shaft with a flat section for easy mounting.
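-As a quick sanity check on these figures, the rated voltage is consistent with Ohm's law applied to one winding: 1.0 A × 7.0 ohms = 7.0 V per phase. (This assumes the listed specifications apply to your exact motor variant; confirm them against the label or datasheet of the unit you have.)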
-How to connect the 23KM-C051-07V stepper motor to a driver?
-To connect the 23KM-C051-07V stepper motor to a driver, you need to identify the wires of the motor and the pins of the driver. The wires of the motor are color-coded as follows[^1^]:
-
-Red: A+ (coil A positive)
-Blue: A- (coil A negative)
-Green: B+ (coil B positive)
-Black: B- (coil B negative)
-
-The pins of the driver may vary depending on the model and manufacturer, but they usually have labels such as A+, A-, B+, B-, VCC, GND, STEP, DIR, EN, etc. The VCC and GND pins supply power to the driver; for a simple constant-voltage driver this supply should match the motor's rated voltage, while a current-limiting (chopper) driver is normally run from a higher supply voltage with its current limit set to the motor's rated current. The STEP and DIR pins are for controlling the step pulses and direction signals from a microcontroller or another source. The EN pin is for enabling or disabling the driver output. The A+, A-, B+, B- pins are for connecting to the wires of the motor according to their polarity.
-A possible wiring diagram for connecting the 23KM-C051-07V stepper motor to a generic bipolar stepper driver is shown below:
-
-
-```
-                     +---------------------------+
-  Supply (+)  ------>| VCC                    A+ |------> Red   (coil A+)
-  Supply (-)  ------>| GND     Bipolar        A- |------> Blue  (coil A-)
-  STEP signal ------>| STEP    stepper        B+ |------> Green (coil B+)
-  DIR signal  ------>| DIR     driver         B- |------> Black (coil B-)
-  EN (optional) ---->| EN                        |
-                     +---------------------------+
-```
-How to control the 23KM-C051-07V stepper motor with Arduino?
-To control the 23KM-C051-07V stepper motor with Arduino, you need to connect the STEP and DIR pins of the driver to two digital pins of the Arduino board, such as pin 2 and pin 3. You also need to connect the VCC and GND pins of the driver to an external power supply that can provide enough current for the motor, such as a battery or an adapter. The EN pin of the driver can be left unconnected or connected to another digital pin of the Arduino board if you want to enable or disable the motor programmatically.
-To make the motor rotate, you need to send pulses to the STEP pin of the driver while holding the DIR pin high or low to set the direction. Each pulse advances the shaft by one 1.8 degree step, so 200 pulses give one full revolution. A minimal example sketch is shown below.
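-The following Arduino sketch is only a minimal illustration of this pulse-and-direction approach, under the wiring assumptions above: the driver's STEP input on Arduino pin 2, DIR on pin 3, and the driver powered from its own supply with its current limit set for the motor. The pin numbers and timing values are placeholders that you may need to adjust for your particular driver.
-
-```
-// Minimal sketch: spin the motor one revolution in each direction.
-// Assumes a bipolar STEP/DIR driver with STEP on pin 2 and DIR on pin 3.
-const int STEP_PIN = 2;
-const int DIR_PIN  = 3;
-const int STEPS_PER_REV = 200;   // 1.8 degrees per step
-
-void setup() {
-  pinMode(STEP_PIN, OUTPUT);
-  pinMode(DIR_PIN, OUTPUT);
-}
-
-// Issue the requested number of step pulses; the delays set the speed.
-void stepMotor(int count) {
-  for (int i = 0; i < count; i++) {
-    digitalWrite(STEP_PIN, HIGH);
-    delayMicroseconds(1000);     // pulse high time
-    digitalWrite(STEP_PIN, LOW);
-    delayMicroseconds(1000);     // pulse low time (about 500 steps per second)
-  }
-}
-
-void loop() {
-  digitalWrite(DIR_PIN, HIGH);   // rotate one way
-  stepMotor(STEPS_PER_REV);
-  delay(1000);
-  digitalWrite(DIR_PIN, LOW);    // rotate the other way
-  stepMotor(STEPS_PER_REV);
-  delay(1000);
-}
-```
-If you would rather not manage the pulse timing yourself, a library such as AccelStepper (in its DRIVER mode) can generate the STEP/DIR signals for you and add acceleration ramps; treat that as an alternative, not a requirement.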
-7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack A Comprehensive Tutorial for Beginners.md b/spaces/tialenAdioni/chat-gpt-api/logs/Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack A Comprehensive Tutorial for Beginners.md
deleted file mode 100644
index 29d2af5850c886ec16558435ff634bd95c9b0e00..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack A Comprehensive Tutorial for Beginners.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack: A Complete Solution for UML Modeling and Code Generation
-If you are looking for a way to create, design, and document software systems using UML diagrams, you may have heard of Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack. This is a software that can help you model any type of software system using UML 2.5, generate code from your models in C#, Java, or Visual Basic, and synchronize your code and models in both directions. In this article, we will explain what Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack is, how to use it, and why it is a reliable and efficient tool for UML modeling and code generation.
-What is Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack?
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack is a software that can help you model any type of software system using UML diagrams, generate code from your models in C#, Java, or Visual Basic, and synchronize your code and models in both directions. It is developed by Altova, a leading provider of software tools for XML, data management, UML, and web services technologies.
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack DOWNLOAD ✓ https://urlcod.com/2uK6JT
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack is the latest version of Altova UModel Enterprise, which is the ultimate edition of Altova UModel that supports all features and functions of the software. It works for both 32-bit and 64-bit systems.
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack is a cracked version of Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 that allows you to use the software without paying for a license or subscription. It is created by X-Force, a team of hackers who specialize in cracking software and games.
-How to use Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack?
-To use Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack, you need to follow these steps:
-
-Download Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack from a trusted source. You can find it on many websites that offer software downloads, but make sure to scan it with an antivirus before opening it.
-Install Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 on your computer system. You can find the installation files on the official Altova website or on other sources.
-Run Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack as administrator. You need to right-click on the file and select "Run as administrator" to avoid any errors or problems.
-Select the product that you want to activate from the drop-down menu.
-Click on "Patch" and wait for the message "Successfully patched" to appear.
-Copy the request code from the Altova activation screen and paste it into the crack.
-Click on "Generate" and copy the activation code from the crack.
-Paste the activation code into the Altova activation screen and click on "Next".
-Enjoy your activated Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64!
-
-Why use Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack?
-There are many reasons why you may want to use Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack to model your software systems using UML diagrams and generate code from them. Here are some of them:
-How to download Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack for free
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack full version download link
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack serial key generator
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack activation code
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack license key
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack patch
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack torrent
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack review
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack features
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack system requirements
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack installation guide
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack user manual
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack tutorial
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack video demo
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack comparison with other software
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack alternatives
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack discount coupon code
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack price
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack customer support
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack testimonials
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack pros and cons
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack benefits and drawbacks
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack advantages and disadvantages
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack strengths and weaknesses
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack tips and tricks
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack best practices
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack use cases and examples
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack FAQs and answers
-Altova UModel Enterprise
-
-You can save money by not having to buy a license or subscription for Altova UModel Enterprise.
-You can access all the features and functions of Altova UModel Enterprise without any limitations or restrictions.
-You can use Altova UModel Enterprise offline without needing an internet connection or an account.
-You can update Altova UModel Enterprise without worrying about losing your activation or getting blocked by Altova.
-You can trust Altova UModel Enterprise as a safe and reliable tool that does not contain any viruses or malware.
-
-Conclusion
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack is a powerful tool that can help you model any type of software system using UML diagrams, generate code from your models in C#, Java, or Visual Basic, and synchronize your code and models in both directions. It is easy to use, safe to download, and reliable to work. With Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack, you can enjoy all the benefits of using Altova UModel Enterprise without spending a dime.
-What are the benefits of using Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack?
-Using Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack can bring you many benefits for your software development projects. Some of these benefits are:
-
-You can create high-quality UML diagrams that can help you visualize, understand, and communicate your software design.
-You can generate code from your UML models in C#, Java, or Visual Basic languages that can save you time and effort.
-You can synchronize your code and models in both directions that can ensure consistency and accuracy.
-You can validate your models against UML standards and rules that can improve your design quality and correctness.
-You can document your models using rich text, images, hyperlinks, and tables that can enhance your documentation readability and usability.
-
-These are some of the benefits that you can enjoy by using Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack for your UML modeling and code generation needs.
-What are the features of Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack?
-Altova UModel Enterprise 2018 20.2.1 R2 SP1 x64 Crack has many features that can help you create, design, and document software systems using UML diagrams and generate code from them. Some of these features are:
-