diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cours De Langue Et De Civilisation Francaises 1.pdf Free 21.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cours De Langue Et De Civilisation Francaises 1.pdf Free 21.md
deleted file mode 100644
index a8d6e53815fd088d212648afc71485ae30165689..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cours De Langue Et De Civilisation Francaises 1.pdf Free 21.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
cours de langue et de civilisation francaises 1.pdf free 21
-
-"Car net le langage a-t-il des sens?, is the provocative question de Pierre Duhem. in his Cours de Langue et de Civilisation Francaise. The author first offers a clear and definite answer to this question, he then questions his own.
-
-"Presence cours de langue française" - Low Frontek New Community.
-
-The French language and Civilization volume by the University of Chicago Press; 8. Cours de langue et de Civilisation Francaises is now out. Based on a translation of Pierre Duhem's Cours de Langue et de Civilisation Francaises, a collection of lectures delivered at the University of Chicago between and is brought to you by a team of community members and students at the University of Notre Dame. This forum was designed as a place for students to share resources, learn together, and ask questions about college life.
-
-This book is about the relationship between language and civilization. Pierre Duhem (December 27, [email protected] Chaire de Linguistique & Littérature Françaises, Université Paris I, Sorbonne Nouvelle, Paris, France. 3.4: Pierre Duhem. Cours de langue et de Civilisation Francaises. 7: Jean Baudrillard, Chapitre XI: Pierre Duhem, Cours de langue et de Civilisation Francaises: 4881 [recension, l'éditeur de la recension]. Image, volume 9, numéro 3, octobre Cours de langue et de Civilisation Francaises (Vol. 1) [Gaston Mauger, Mauger] on Amazon.com. *FREE* shipping on qualifying offers.
-
-Cours de Langue et de Civilisation Francaises (Vol. 1) [Gaston Mauger, Mauger] on Amazon.com. *FREE* shipping on qualifying offers. Cours de Langue et de Civilisation Francaises (Vol. 1) [Gaston Mauger, Mauger] on Amazon.com. *FREE* shipping on qualifying offers. Cours de Langue et de Civilisation Francaises (Vol. 2) [Gaston Mauger, Mauger] on Amazon.com. *FREE* shipping on qualifying offers.
-
-Cours de Langue et de Civilisation Francaises (Vol. 2) [Gaston Mauger 4fefd39f24
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Libro La Condesa Sangrienta Alejandra Pizarnik Pdf 18 __EXCLUSIVE__.md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Libro La Condesa Sangrienta Alejandra Pizarnik Pdf 18 __EXCLUSIVE__.md
deleted file mode 100644
index 37b0da53beba473cf36bb83d86e51f212ea3ed8d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Libro La Condesa Sangrienta Alejandra Pizarnik Pdf 18 __EXCLUSIVE__.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
Download the book La condesa sangrienta by Alejandra Pizarnik in PDF
-
If you like stories of terror, crime, and madness, you cannot miss La condesa sangrienta by the Argentine writer Alejandra Pizarnik. The book is based on the real life of Countess Erzsébet Báthory, one of the most sinister murderers in history, who tortured and killed more than 600 young women in her Carpathian castle around the turn of the seventeenth century.
-
In this book, Pizarnik recreates, in a poetic and unsettling style, the sadism and madness of the countess, who sought to preserve her youth and beauty through the blood of her victims. The work is illustrated by the artist Santiago Caruso, who masterfully captures the details and emotions of this grim ceremony.
-
descargar libro la condesa sangrienta alejandra pizarnik pdf 18
Why read La condesa sangrienta by Alejandra Pizarnik?
-
La condesa sangrienta is one of the key works of Alejandra Pizarnik, one of the most important poets in Argentine and Latin American literature. Her writing is known for exploring pain, death, eroticism, silence, and marginality.
-
In this book, Pizarnik draws on La comtesse sanglante (1962) by Valentine Penrose, which compiles documents and testimony about Countess Báthory. Pizarnik does not simply translate or rewrite the original text, however; she transforms it into a work of her own, with poetic language and a personal vision.
-
La condesa sangrienta will grip you from the first page with its oppressive, fascinating atmosphere, its narrative rhythm, and its literary quality. It is a book that will make you reflect on the limits of horror and beauty, on human nature, and on the power of the word.
-
How to download La condesa sangrienta by Alejandra Pizarnik in PDF?
-
If you want to download La condesa sangrienta by Alejandra Pizarnik in PDF, you have several options. One of them is the Internet Archive website, where you can find the book in a free digital format. Just click the "Download" button and choose the PDF format.
-
Another option is to visit Goodreads, where you will find information about the book, such as its synopsis, genres, editions, and ratings. You can also buy the book on Amazon by clicking the "Buy on Amazon" button and choosing between the physical edition and the Kindle format.
-
Finally, you can also download the book in PDF from Academia.edu, which hosts a PDF file with the full text of the book. You just need to sign up with your email address or your Facebook or Google account and click "Download PDF".
-
-
Conclusion
-
La condesa sangrienta by Alejandra Pizarnik is a book that will not leave you indifferent. It is a literary masterpiece that immerses you in the dark, fascinating world of Countess Báthory, a woman who made evil her way of life. If you want to download the book in PDF, you can do so from the websites recommended above. Don't wait any longer and enjoy this unique and unforgettable read.
-
-
-- La Condesa Sangrienta : Alejandra Pizarnik : Free Download, Borrow, and Streaming : Internet Archive
-- La condesa sangrienta by Alejandra Pizarnik | Goodreads
-- (PDF) La Condesa Sangrienta - Alejandra Pizarnik | Pilar Mora - Academia.edu
-
-I hope you enjoyed the article and learned something new. Thank you for your interest in La condesa sangrienta de Alejandra Pizarnik. 3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Digicelflipbook6serialcode.md b/spaces/1gistliPinn/ChatGPT4/Examples/Digicelflipbook6serialcode.md
deleted file mode 100644
index 827353978c8d6e437e421151fc3e546e183cd61f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Digicelflipbook6serialcode.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
How to Use Digicel FlipBook 6 for 2D Animation
-
Digicel FlipBook 6 is a powerful software for creating 2D animation. It allows you to draw, paint, and animate your own characters and scenes. You can also import images, sounds, and videos to enhance your animation. In this article, we will show you how to use Digicel FlipBook 6 for 2D animation.
-
Step 1: Download and Install Digicel FlipBook 6
-
To download Digicel FlipBook 6, you can visit the official website and choose the version that suits your needs. There are four versions available: Lite, Studio, ProHD, and Pro. Each version has different features and prices. You can compare them on the website. You can also download a free trial version to test the software before buying it.
After downloading the software, you need to install it on your computer. Follow the instructions on the screen to complete the installation process. You will need a serial code to activate the software. You can get the serial code by buying the software online or by contacting the support team. You will need to send them your FlipBook ID, which you can find on the Help menu of the software.
-
Step 2: Create a New Project
-
To create a new project, open Digicel FlipBook 6 and click on the File menu. Then select New Project. A dialog box will appear where you can name your project and choose the resolution, frame rate, and number of frames. You can also choose a template or a preset from the drop-down menu. Click OK to create your project.
-
Step 3: Draw and Paint Your Animation
-
To draw and paint your animation, you can use the tools on the left side of the screen. You can choose from different brushes, pencils, erasers, fillers, and shapes. You can also adjust the size, color, opacity, and pressure of your tools. To draw on a new frame, click on the New Frame button on the bottom of the screen. You can also use the arrow keys to navigate between frames.
-
Digicel FlipBook 6 also offers some advanced features for painting your animation. You can use graduated colors, textures, and patterns to add depth and variety to your characters and backgrounds. You can import these elements into your palette and apply them to your drawings. You can also edit your palette by adding, deleting, or modifying colors.
-
Step 4: Add Sound and Effects
-
To add sound and effects to your animation, you can use the tools on the right side of the screen. You can import sound files from your computer or record your own voice using a microphone. You can also sync your sound with your animation using the lip sync and eye blink features. To add effects, you can use the camera tools to pan, zoom, rotate, blur, or dissolve your animation over time. You can also use special effects layers to add shadows, highlights, glows, or other effects.
-
Step 5: Export and Share Your Animation
-
To export and share your animation, you can use the File menu again. You can choose from different formats and options depending on your purpose. You can export your animation as an image sequence, a video file, a Flash file, or a QuickTime movie. You can also export your animation as an HTML file that you can upload to your website or blog. To share your animation with others, you can use the Share menu to send it by email or upload it to YouTube or Facebook.
-
We hope this article has helped you learn how to use Digicel FlipBook 6 for 2D animation. If you have any questions or feedback, please feel free to contact us at email@DigiCel.net. Happy animating!
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Pack 14.9 Free Download ((HOT)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Driver Pack 14.9 Free Download ((HOT)).md
deleted file mode 100644
index 0e0800cf89a5757fdff73465b589567695c94bc0..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Driver Pack 14.9 Free Download ((HOT)).md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
If you have an active account with Xilinx, you can download free drivers for your device straight from the Xilinx website. If you are not a customer or do not have an account there, you will need to get the drivers another way. That can take time, which is why DriverPack, which aims to publish the most recent driver for every device, is a good bet for a free driver fix. Your experience will be the same either way, as the latest and greatest drivers for many devices are published automatically.
How do you install drivers? That is an age-old question. DriverPack provides a clean and simple solution to the problem. As noted, you can browse by device type, but we recommend starting from the home screen to see all of your installed devices, especially if you are browsing on a PC. On the home screen you will see a My Devices dropdown that has all the information you need. From there you can select the driver for your device and click the Get Driver button, and within seconds your selected driver will be downloaded. Repeat the process as necessary.
-
-As already mentioned, DriverPack has been around for many years, so we know a thing or two about the process. You can use our drag-and-drop feature to save time looking for your driver: click the driver you want and, without opening any folders, your new driver will be put into the proper folder automatically.
-
-With DriverPack, there is nothing to install and nothing to uninstall. That's right: because the program works silently in the background, we have no need to interrupt your workflow. Your installed drivers are cleared as you remove them, and our new drivers are automatically installed after you have selected them. While you could manually upgrade drivers by launching the appropriate installer, we find that with DriverPack it's faster and easier. To see the results for yourself, download DriverPack and find out why it's a must-have app.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Eva-Ionesco-Playboy-1976-Italianrar-NEW.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/Eva-Ionesco-Playboy-1976-Italianrar-NEW.md
deleted file mode 100644
index 18f38a3cef4eb3f86b49fc511dc98b9747f8ddf7..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Eva-Ionesco-Playboy-1976-Italianrar-NEW.md
+++ /dev/null
@@ -1,78 +0,0 @@
-## Eva Ionesco Playboy 1976 Italian.rar
-
-
-
-
-
- 
-
-
-
-
-
-**Download File === [https://lodystiri.blogspot.com/?file=2txPBp](https://lodystiri.blogspot.com/?file=2txPBp)**
-
-
-
-
-
-
-
-
-
-
-
-# How to Make a Delicious Banana Bread
-
-
-
-Banana bread is a moist and sweet cake-like bread that is perfect for breakfast or dessert. It is easy to make and uses ripe bananas that you might otherwise throw away. Bananas are rich in potassium, fiber and vitamin C, making them a healthy and tasty fruit. Here are the steps to make a delicious banana bread:
-
-
-
-1. Preheat oven to 180°C (160°C fan) mark 4. Grease and line a 900g (2lb) loaf tin with baking parchment. This will prevent the bread from sticking to the tin and make it easier to remove.
-
-2. In a large bowl, mash 3 large ripe bananas with a fork. The riper the bananas, the sweeter and softer they will be. Stir in 100g (3½oz) melted butter, 150g (5oz) light brown soft sugar, 2 large eggs and 1tsp vanilla extract. Mix well until combined. The butter will add richness and moisture to the bread, while the sugar will balance the acidity of the bananas. The eggs will act as a binder and leavening agent, and the vanilla will enhance the flavor.
-
-3. In another bowl, whisk together 200g (7oz) plain flour, 2tsp baking powder, ½tsp bicarbonate of soda and a pinch of salt. The flour will provide structure and texture to the bread, while the baking powder and bicarbonate of soda will help it rise and create air bubbles. The salt will bring out the sweetness of the ingredients.
-
-4. Add the dry ingredients to the wet ingredients and fold gently until no flour pockets remain. Do not overmix the batter as this will result in a tough and dense bread. Stir in 50g (2oz) chopped walnuts or chocolate chips if you like. Walnuts will add crunch and nuttiness to the bread, while chocolate chips will add sweetness and melt in your mouth.
-
-5. Pour the batter into the prepared tin and smooth the top. You can sprinkle some more walnuts or chocolate chips on top if you wish. Bake for 50-60 minutes or until a skewer inserted in the centre comes out clean. The baking time may vary depending on your oven and the size of your tin, so check the bread after 40 minutes and cover it with foil if it is browning too quickly.
-
-6. Let the bread cool slightly in the tin before transferring to a wire rack to cool completely. This will allow the bread to set and prevent it from crumbling when you slice it. Slice and enjoy your banana bread with some butter, jam or cream cheese. You can also store it in an airtight container for up to 3 days or freeze it for up to 3 months.
-
-
-
- dfd1c89656
-
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BFF Quiz APK A Simple and Easy Way to Measure Your Friendship.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BFF Quiz APK A Simple and Easy Way to Measure Your Friendship.md
deleted file mode 100644
index 413cb7ce151e2f745d3be9938ab4eca6331f141d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BFF Quiz APK A Simple and Easy Way to Measure Your Friendship.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
BFF Quiz APK: A Fun Way to Test Your Friendship
-
Do you have a best friend who knows everything about you? Or do you have a friend who you want to get closer to? Either way, you might want to try BFF Quiz APK, a fun and easy app that lets you test your friendship with anyone. BFF Quiz APK is a game that asks you and your friend 10 questions about your friendship and gives you a score based on your answers. You can then share the score with your friend and see how compatible you are. BFF Quiz APK is not only a great way to test your friendship, but also a fun way to spend some quality time with your friend. In this article, we will tell you everything you need to know about BFF Quiz APK, including how to use it, what kind of questions to expect, what benefits it offers, and some tips and tricks to make the most out of it.
Using BFF Quiz APK is very simple and straightforward. Here are the steps you need to follow:
-
-
Download and install the app from the Google Play Store or from the link provided at the end of this article.
-
Enter your name and your friend's name in the app. You can also choose a nickname or an emoji for each of you.
-
Answer 10 questions about your friendship. The questions are random and vary from easy to hard. You can skip a question if you don't want to answer it, but it will lower your score.
-
See your friendship score and share it with your friend. The app will show you a percentage and a comment based on how well you answered the questions. You can also see how other people scored with their friends. You can share the score with your friend via Whatsapp, Instagram, Facebook, or any other social media platform.
-
-
What Kind of Questions to Expect in BFF Quiz APK
-
The questions in BFF Quiz APK are designed to cover all aspects of your friendship. They are divided into three categories:
-
Questions about how well you know your friend
-
These questions test how much you pay attention to your friend's likes, dislikes, habits, preferences, etc. For example:
-
-
-
What is your friend's favorite color?
-
What is your friend's zodiac sign?
-
What is your friend's dream job?
-
-
Questions about how much you trust your friend
-
These questions test how much you rely on your friend and how comfortable you are with sharing your secrets, feelings, opinions, etc. For example:
-
-
Would you lend your friend money if they asked?
-
Would you tell your friend if you had a crush on someone?
-
Would you trust your friend with your phone password?
-
-
Questions about how your friend makes you feel
-
These questions test how much you appreciate, respect, support, and enjoy your friend's company. For example:
-
-
What is the best thing about your friend?
-
How often do you compliment your friend?
-
How do you cheer up your friend when they are sad?
-
-
Benefits of Taking BFF Quiz APK
-
Taking BFF Quiz APK is not only fun, but also beneficial for your friendship. Here are some of the benefits you can get from taking the quiz:
-
Strengthen your friendship bond
-
By taking the quiz, you can show your friend how much you care about them and how well you know them. You can also learn more about your friend and discover new things that you might not have known before. This can help you deepen your connection and trust with your friend and make your friendship stronger.
-
Have fun and laugh together
-
The quiz is also a great way to have some fun and laughter with your friend. You can enjoy answering the questions and see how silly or serious your answers are. You can also tease each other or praise each other for your scores. The quiz can help you relax and have a good time with your friend.
-
Learn something new about your friend
-
The quiz can also help you learn something new about your friend that might surprise you or interest you. You might find out that your friend has a hidden talent, a secret crush, a funny story, or a weird habit that you didn't know before. You might also discover that you have more in common with your friend than you thought. The quiz can help you expand your knowledge and curiosity about your friend.
-
Tips and Tricks for BFF Quiz APK
-
To make the most out of BFF Quiz APK, here are some tips and tricks that you can follow:
-
Be honest and don't cheat
-
The quiz is meant to be a fun and honest way to test your friendship, so don't try to cheat or lie to get a higher score. Be truthful and sincere with your answers and don't look up the answers online or ask someone else for help. Cheating will only ruin the fun and the purpose of the quiz.
-
Don't take the score too seriously
-
The score is just a rough estimate of how well you know your friend and how compatible you are. It is not a definitive measure of your friendship quality or value. Don't get too upset or too proud of your score, as it might change depending on the questions and the mood. Remember that the score is not as important as the experience of taking the quiz with your friend.
-
Try different sets of questions for more variety
-
The quiz has different sets of questions that you can choose from, such as easy, hard, funny, romantic, etc. You can try different sets of questions to see how different they are and how they affect your score. You can also challenge yourself and your friend to answer harder or weirder questions for more fun and excitement.
-
Conclusion and FAQs
-
BFF Quiz APK is a fun and easy app that lets you test your friendship with anyone. It asks you and your friend 10 questions about your friendship and gives you a score based on your answers. You can then share the score with your friend and see how compatible you are. BFF Quiz APK is not only a great way to test your friendship, but also a fun way to spend some quality time with your friend. You can also benefit from taking the quiz by strengthening your friendship bond, having fun and laughter together, and learning something new about your friend. To make the most out of BFF Quiz APK, be honest and don't cheat, don't take the score too seriously, and try different sets of questions for more variety.
-
If you have any questions about BFF Quiz APK, here are some FAQs that might help:
-
Is BFF Quiz APK free?
-
Yes, BFF Quiz APK is free to download and use. However, it may contain ads or in-app purchases that require real money.
-
How many times can I take the quiz?
-
You can take the quiz as many times as you want with the same or different friends. You can also change the questions or the names if you want to try something different.
-
Can I take the quiz with more than one friend?
-
Yes, you can take the quiz with more than one friend at the same time. You can enter up to four names in the app and answer the questions together. The app will then show you the score for each pair of friends and the overall score for the group.
-
What if I don't like the questions or the score?
-
If you don't like the questions or the score, you can always skip them or try again. You can also choose a different set of questions or a different friend to take the quiz with. The quiz is meant to be fun and flexible, so don't worry too much about it.
-
Where can I download BFF Quiz APK?
-
You can download BFF Quiz APK from the Google Play Store or from this link: BFF Quiz APK. The app is compatible with Android devices and requires an internet connection to work.
-
I hope you enjoyed this article and learned something new about BFF Quiz APK. If you are looking for a fun and easy way to test your friendship with anyone, you should definitely give this app a try. You might be surprised by how well you know your friend or how much you have in common. You might also have a lot of fun and laughter along the way. So what are you waiting for? Download BFF Quiz APK today and see how strong your friendship is!
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carnival of Terror Roblox Game APK How to Survive the Scary Obby.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carnival of Terror Roblox Game APK How to Survive the Scary Obby.md
deleted file mode 100644
index ecb25b8035fb438ba402124ae04e216823ef5dcd..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Carnival of Terror Roblox Game APK How to Survive the Scary Obby.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
-
-
-
Carnival of Terror Roblox Game Download APK: A Guide for Thrill-Seekers
-
Do you love adventure games with a touch of horror? Do you enjoy escaping from traps and obstacles while being chased by an evil clown? If you answered yes to these questions, then you should definitely download Carnival of Terror Roblox Game APK on your device. In this article, we will tell you what Carnival of Terror Roblox Game is, why you should download it, and how to download it. Read on to find out more!
Carnival of Terror Roblox Game is a popular adventure game on Roblox, a platform where you can create and play games with millions of other users. The game was created by user TheInnovative in 2019 and has since gained over 50 million visits. The game is inspired by the horror movie IT, which features a terrifying clown named Pennywise who preys on children's fears.
-
Plot
-
The plot of Carnival of Terror Roblox Game is simple but scary: You are locked in an abandoned carnival by an evil clown who wants to kill you. You have to find a way out before he catches you. Along the way, you will encounter various obstacles, traps, and jumpscares that will test your nerves and skills. Will you be able to escape the carnival of terror or will you become the clown's next victim?
-
Genre
-
Carnival of Terror Roblox Game is a combination of adventure, horror, and obby genres. Adventure games are games that involve exploring, solving puzzles, and completing quests. Horror games are games that aim to scare, shock, or disturb the players. Obby games are games that involve jumping over or avoiding obstacles. Carnival of Terror Roblox Game combines these elements to create a thrilling and spooky experience for the players.
-
Features
-
Some of the features that make Carnival of Terror Roblox Game stand out are:
-
-
-
Over 25 challenging obstacles, such as spikes, lasers, swinging axes, and falling platforms
-
Rides and rollercoasters that add fun and excitement to the game
-
A crazy funhouse that will confuse and frighten you with its mirrors, mazes, and illusions
-
An evil clown that will pop up at random moments and chase you with his knife
-
A realistic and immersive carnival environment with sound effects and music
-
-
Gameplay
-
The gameplay of Carnival of Terror Roblox Game is simple but engaging: You have to travel through the carnival, avoid traps and obstacles, hop onto rides and rollercoasters, escape the funhouse and the clown. You can use your mouse to look around, your keyboard to move and jump, and your spacebar to interact with objects. You can also chat with other players and invite them to join you in the game. The game has three difficulty levels: easy, medium, and hard. The harder the level, the more obstacles and jumpscares you will face.
In this article, we have explained what Carnival of Terror Roblox Game is, why you should download it, and how to download it. We hope you have found this guide helpful and informative. If you are ready to face your fears and have some fun, download Carnival of Terror Roblox Game APK today and enjoy the game!
-
FAQs
-
Here are some frequently asked questions about Carnival of Terror Roblox Game Download APK with brief answers:
-
-
Is Carnival of Terror Roblox Game safe to download and play?
-
Yes, Carnival of Terror Roblox Game is safe to download and play, as long as you download it from a trusted source and follow the instructions carefully. The game does not contain any viruses, malware, or inappropriate content.
-
Is Carnival of Terror Roblox Game suitable for children?
-
Carnival of Terror Roblox Game is suitable for children who are 13 years old or older, as the game is rated PG-13 on Roblox. The game contains some scenes and elements that may be scary or disturbing for younger children, such as blood, gore, violence, and jumpscares.
-
How long does it take to complete Carnival of Terror Roblox Game?
-
The time it takes to complete Carnival of Terror Roblox Game depends on your skill level, speed, and luck. On average, it takes about 15 to 20 minutes to finish the game. However, some players may take longer or shorter depending on how many times they die, get stuck, or skip stages.
-
Can I play Carnival of Terror Roblox Game with my friends?
-
Yes, you can play Carnival of Terror Roblox Game with your friends, as the game supports multiplayer mode. You can invite your friends to join you in the game by sending them a link or a code. You can also chat with them and cooperate with them to escape the carnival.
-
Can I customize my character in Carnival of Terror Roblox Game?
-
Yes, you can customize your character in Carnival of Terror Roblox Game by changing their appearance, clothes, accessories, and gear. You can do this by accessing your inventory on the Roblox app and selecting the items you want to wear. You can also buy more items from the shop using Robux, the virtual currency of Roblox.
-
-
-
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Diablo Immortal Auto Clicker Download The Ultimate Guide for Beginners.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Diablo Immortal Auto Clicker Download The Ultimate Guide for Beginners.md
deleted file mode 100644
index 61e6efff7cdb28c0dc780c89d46186696fed8b74..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Diablo Immortal Auto Clicker Download The Ultimate Guide for Beginners.md
+++ /dev/null
@@ -1,166 +0,0 @@
-
-
Diablo Immortal Auto Clicker Download: What You Need to Know
-
Are you a fan of Diablo Immortal, the new mobile game from Blizzard Entertainment? Do you want to level up faster, collect more loot, and dominate your enemies with ease? If so, you might be interested in using an auto clicker for Diablo Immortal.
An auto clicker is a software tool that automates mouse clicks at a fast rate. It can help you perform repetitive tasks, such as attacking, mining, crafting, or looting, without having to manually click your mouse button. It can also enhance your gaming experience by allowing you to focus on more strategic aspects of the game.
-
However, before you download and install an auto clicker for Diablo Immortal, there are some things you need to know. What are the benefits and risks of using an auto clicker? How can you download and install one safely and easily? How can you avoid getting banned or penalized for using one? In this article, we will answer these questions and more.
-
How to Download and Install an Auto Clicker for Diablo Immortal
-
If you want to use an auto clicker for Diablo Immortal, you need to download and install one on your device. There are many auto clickers available online, but not all of them are compatible with Diablo Immortal or your device's operating system. You also need to be careful about downloading from untrusted sources that may contain malware or viruses.
-
To help you choose the best auto clicker for Diablo Immortal, we have compiled a list of some of the most popular and reliable ones for different devices and platforms. Here are the steps to download and install them:
-
-
-
-
| Device/Platform | Auto Clicker | Steps |
| --- | --- | --- |
| Windows PC | OP Auto Clicker | 1. Go to the OP Auto Clicker website and click on "Download". 2. Save the file on your computer and run it. 3. Follow the installation wizard instructions. 4. Launch OP Auto Clicker from your desktop or start menu. 5. Select your preferred settings, such as hotkey, click interval, click type, etc. 6. Press the hotkey to start or stop the auto clicker. |
| Mac OS | Mac Auto Clicker | 1. Go to the Mac Auto Clicker website and click on "Download". 2. Save the file on your computer and run it. 3. Follow the installation wizard instructions. 4. Launch Mac Auto Clicker from your applications folder. 5. Select your preferred settings, such as hotkey, click interval, click type, etc. 6. Press the hotkey to start or stop the auto clicker. |
| Android | Auto Clicker - Automatic Tap | 1. Go to the Google Play Store and search for "Auto Clicker - Automatic Tap". 2. Tap on "Install" and accept the permissions. 3. Open the app and grant it accessibility service. 4. Select your preferred settings, such as click interval, click type, target area, etc. 5. Tap on the floating widget to start or stop the auto clicker. |
| iOS | Switch Control | 1. Go to Settings > Accessibility > Switch Control and turn it on. 2. Tap on Switches and add a new switch, choosing a source such as the screen or an external device. 3. Tap on Recipes and create a new recipe, name it "Auto Clicker", and assign it to your switch. 4. Tap on Custom Gesture and record a tap gesture on the screen. 5. Go back to the recipe and set the repeat interval and duration. 6. Launch Diablo Immortal and activate your switch to start or stop the auto clicker. |
-
-
-
-
-
These are some of the best auto clickers for Diablo Immortal that you can download and install on your device. However, you should always check the compatibility and security of any software before downloading it. You should also read the user reviews and ratings to get an idea of how well it works and if there are any issues or bugs.
-
How to Avoid Getting Banned or Penalized for Using an Auto Clicker
-
Using an auto clicker for Diablo Immortal may sound tempting, but it also comes with some risks. Blizzard Entertainment, the developer and publisher of Diablo Immortal, has a strict policy against using any third-party software or tools that give an unfair advantage or interfere with the game's normal operation. This includes auto clickers, bots, hacks, cheats, exploits, and mods.
-
If Blizzard detects that you are using an auto clicker for Diablo Immortal, you may face serious consequences. You may get a warning, a temporary suspension, a permanent ban, or even legal action. You may also lose your progress, items, achievements, and reputation in the game. You may also ruin the game's balance and fun for other players who play fairly.
-
To avoid getting banned or penalized for using an auto clicker for Diablo Immortal, you should follow these best practices and precautions:
-
- Use an auto clicker only for personal use and not for commercial purposes.
- Use an auto clicker only for simple tasks that do not affect the game's economy or PvP.
- Use an auto clicker only for short periods of time and not for hours or days.
- Use an auto clicker only when you are actively playing the game and not when you are away or offline.
- Use an auto clicker only with moderation and discretion and not with excessive frequency or speed.
- Use an auto clicker only with respect and courtesy and not with abuse or harassment.
- Use an auto clicker only at your own risk and responsibility and not with ignorance or negligence.
-
By following these best practices and precautions, you can reduce the chances of getting banned or penalized for using an auto clicker for Diablo Immortal. However, you should always be aware of the potential risks and consequences of using any third-party software or tools that violate Blizzard's terms of service and code of conduct.
-
Conclusion
-
In conclusion, using an auto clicker for Diablo Immortal can be a useful and convenient way to enhance your gaming experience. It can help you perform repetitive tasks faster, collect more loot easier, and dominate your enemies better. However, it can also be a risky and dangerous way to jeopardize your gaming account. It can get you banned or penalized by Blizzard Entertainment, who has a strict policy against using any third-party software or tools that give an unfair advantage or interfere with the game's normal operation. You should always be careful and responsible when using an auto clicker for Diablo Immortal, and follow the best practices and precautions to avoid getting banned or penalized. We hope that this article has helped you understand what you need to know about Diablo Immortal auto clicker download. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!
Sources and References
-
-
OP Auto Clicker. https://sourceforge.net/projects/orphamielautoclicker/
-
Mac Auto Clicker. https://www.murgaa.com/mac-auto-clicker/
-
Auto Clicker - Automatic Tap. https://play.google.com/store/apps/details?id=com.truedevelopersstudio.automatictap.autoclicker&hl=en_US&gl=US
Blizzard Entertainment. Diablo Immortal Terms of Use. https://www.blizzard.com/en-us/legal/9f0a9c6b-8a6f-4c0f-8b7c-5a0d7e9e1e2c/diablo-immortal-terms-of-use
-
Blizzard Entertainment. Diablo Immortal Code of Conduct. https://www.blizzard.com/en-us/legal/9f0a9c6b-8a6f-4c0f-8b7c-5a0d7e9e1e2c/diablo-immortal-code-of-conduct
-
-
FAQs
-
-
What is Diablo Immortal?
-
Diablo Immortal is a mobile game developed by Blizzard Entertainment and NetEase Games. It is a massively multiplayer online role-playing game (MMORPG) set in the Diablo universe. It features six classes, dynamic events, co-op and PvP modes, and an original story that bridges the gap between Diablo II and Diablo III.
-
What is an auto clicker?
-
An auto clicker is a software tool that automates mouse clicks at a fast rate. It can help you perform repetitive tasks, such as attacking, mining, crafting, or looting, without having to manually click your mouse button. It can also enhance your gaming experience by allowing you to focus on more strategic aspects of the game.
-
What are the benefits of using an auto clicker for Diablo Immortal?
-
Some of the benefits of using an auto clicker for Diablo Immortal are:
-
-
You can level up faster by killing more enemies and completing more quests.
-
You can collect more loot by opening more chests and picking up more items.
-
You can dominate your enemies by unleashing more skills and attacks.
-
You can save time and energy by avoiding hand fatigue and boredom.
-
You can enjoy the game more by focusing on the story, graphics, and sound.
-
-
What are the risks of using an auto clicker for Diablo Immortal?
-
Some of the risks of using an auto clicker for Diablo Immortal are:
-
-
You may get banned or penalized by Blizzard Entertainment, who has a strict policy against using any third-party software or tools that give an unfair advantage or interfere with the game's normal operation.
-
You may lose your progress, items, achievements, and reputation in the game.
-
You may ruin the game's balance and fun for other players who play fairly.
-
You may expose your device to malware or viruses from untrusted sources.
-
You may miss out on some of the game's features and challenges that require manual input and interaction.
-
-
How can I avoid getting banned or penalized for using an auto clicker for Diablo Immortal?
-
To avoid getting banned or penalized for using an auto clicker for Diablo Immortal, you should follow these best practices and precautions:
-
- Use an auto clicker only for personal use and not for commercial purposes.
- Use an auto clicker only for simple tasks that do not affect the game's economy or PvP.
- Use an auto clicker only for short periods of time and not for hours or days.
- Use an auto clicker only when you are actively playing the game and not when you are away or offline.
- Use an auto clicker only with moderation and discretion and not with excessive frequency or speed.
- Use an auto clicker only with respect and courtesy and not with abuse or harassment.
- Use an auto clicker only at your own risk and responsibility and not with ignorance or negligence.
-
By following these best practices and precautions, you can reduce the chances of getting banned or penalized for using an auto clicker for Diablo Immortal. However, you should always be aware of the potential risks and consequences of using any third-party software or tools that violate Blizzard's terms of service and code of conduct.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download BEYBLADE BURST APK and Battle Your Friends Online.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download BEYBLADE BURST APK and Battle Your Friends Online.md
deleted file mode 100644
index 6a8077769cef2b5ef9170cbbc8127ead469c7186..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download BEYBLADE BURST APK and Battle Your Friends Online.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Beyblade Burst App Game APK: A Guide for Beginners
-
If you are a fan of Beyblade Burst, the popular anime and toy series that features spinning tops battling in exciting arenas, you might want to check out the Beyblade Burst App Game APK. This is a digital version of the game that allows you to create, customize, and battle your own Beyblade Burst tops on your Android device. In this article, we will give you a comprehensive guide on what this app is, how to download and install it, how to play it, and how to review it.
Beyblade Burst App Game APK is an action-packed game that brings the excitement and energy of Beyblade Burst to your own personal device. You can challenge your friends in over 90 countries worldwide to global multiplayer online matches, with leaderboards, personalized profiles, an enhanced digital top selection, and the capability of earning achievements to level up from Rookie to ultimate Beyblade Master. You can also compete to win matches and unlock virtual pieces that you can use to customize your tops.
-
A brief introduction to Beyblade Burst
-
Beyblade Burst is a Japanese multimedia franchise that started in 2015 as a toy line by Takara Tomy. It is a reboot of the original Beyblade series that ran from 2000 to 2005. The franchise also includes an anime series, manga series, video games, movies, and merchandise. The main premise of Beyblade Burst is that players use spinning tops called "Beys" that have different parts and abilities. They launch their Beys into stadiums called "Beystadiums" and try to knock out their opponents' Beys or make them burst into pieces.
-
The features and gameplay of the app
-
The Beyblade Burst App Game APK has many features and modes that make it fun and engaging for players of all ages and skill levels. Some of the features include:
-
-
BATTLE LEAGUE: Create a league with your friends and battle in multi-round tournaments for the title of top Blader. You can choose season lengths of 1 day, 1 week, or 1 month. You can earn points by challenging your friends to two different types of battles: online digital battles or face-to-face toy battles. You can also create a 1 day season and host a bracketed toy tournament party.
-
TURBO SLINGSHOCK: This feature adds a rail system that propels digital tops through the Beystadium rails and into the Battle Ring in the app. You can face off in intense battle clashes to build power and launch your digital Slingshock top through Slingshock Beystadium rails for special powerup bonuses in the app.
-
RC BLUETOOTH: This feature allows you to control Bluetooth enabled Beyblade Burst tops with your device. You can swipe to control your top's speed, direction, and angle. You can also unleash powerful avatar attacks and use voice commands to activate abilities.
-
The benefits of downloading the APK file
-
An APK file is an Android Package file that contains all the files and data needed to install an app on your device. By downloading the Beyblade Burst App Game APK file, you can enjoy some benefits that are not available on the official app store, such as:
-
-
Accessing the latest version of the app before it is released on the app store.
-
Bypassing any regional restrictions or compatibility issues that might prevent you from installing the app from the app store.
-
Saving data and storage space by downloading the app directly without any additional downloads or updates.
-
Installing the app on devices that are not supported by the app store, such as emulators or rooted devices.
-
-
How to Download and Install Beyblade Burst App Game APK?
-
Downloading and installing the Beyblade Burst App Game APK is easy and fast, as long as you follow these simple steps:
-
The steps to download the APK file from a trusted source
-
-
Go to a reliable website that offers the Beyblade Burst App Game APK file, such as Aptoide, Google Play, or APKCombo.
-
Find the app page and click on the download button. You might need to allow downloads from unknown sources in your device settings.
-
Wait for the download to finish and locate the APK file in your device's download folder.
-
-
The steps to install the APK file on your Android device
-
-
Tap on the APK file and follow the instructions on the screen to install the app. You might need to grant some permissions to the app during the installation process.
-
Once the installation is complete, you can launch the app from your device's home screen or app drawer.
-
Enjoy playing Beyblade Burst App Game APK with your friends and rivals!
-
-
The tips to avoid any errors or issues during the installation process
-
Sometimes, you might encounter some errors or issues when installing the APK file, such as:
-
-
The installation is blocked by your device's security settings. To fix this, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the official app store.
-
The installation fails due to insufficient storage space. To fix this, you need to free up some space on your device by deleting unwanted files or apps. You can also use a memory card or an external storage device to store the APK file.
-
The installation is interrupted by a network error. To fix this, you need to check your internet connection and make sure it is stable and fast. You can also try downloading the APK file again from a different source or using a different browser.
-
-
How to Play Beyblade Burst App Game APK?
-
Playing Beyblade Burst App Game APK is fun and easy, as long as you know the basics of creating, customizing, and battling your Beyblade Burst tops. Here are some tips and tricks to help you get started:
-
The basics of creating, customizing, and battling your Beyblade Burst tops
-
To create your own Beyblade Burst top, you need to scan a Beyblade Burst Energy Layer with your device's camera. This will unlock a digital version of that top in the app. You can then customize your top by choosing different parts, colors, stickers, and abilities. You can also name your top and give it a personality.
-
To battle your Beyblade Burst top, you need to swipe on the screen to launch it into the Beystadium. You can then control its speed, direction, and angle by swiping left or right on the screen. You can also charge up power during battle and unleash mighty avatar attacks by tapping on the screen. You can win a battle by knocking out your opponent's top or making it burst into pieces.
-
The modes and features of the app, such as Battle League, Turbo Slingshock, and RC Bluetooth
-
The app has many modes and features that make it more fun and challenging for players of all levels. Some of them are:
-
-
BATTLE LEAGUE: This mode allows you to create a league with your friends and battle in multi-round tournaments for the title of top Blader. You can choose season lengths of 1 day, 1 week, or 1 month. You can earn points by challenging your friends to two different types of battles: online digital battles or face-to-face toy battles. You can also create a 1 day season and host a bracketed toy tournament party.
-
TURBO SLINGSHOCK: This feature adds a rail system that propels digital tops through the Beystadium rails and into the Battle Ring in the app. You can face off in intense battle clashes to build power and launch your digital Slingshock top through Slingshock Beystadium rails for special powerup bonuses in the app.
-
RC BLUETOOTH: This feature allows you to control Bluetooth enabled Beyblade Burst tops with your device. You can swipe to control your top's speed, direction, and angle. You can also unleash powerful avatar attacks and use voice commands to activate abilities.
-
-
The tips and tricks to improve your skills and win more matches
-
Playing Beyblade Burst App Game APK is not only about launching your top and hoping for the best. You also need to use some strategies and techniques to gain an edge over your opponents. Here are some tips and tricks to help you improve your skills and win more matches:
-
-
Choose the right top for the right battle. Different tops have different strengths and weaknesses, such as attack, defense, stamina, balance, weight, and speed. You need to consider these factors when choosing your top for each battle. For example, if you are facing a fast and agile opponent, you might want to use a heavy and sturdy top that can withstand their attacks.
-
Customize your top to suit your style. You can mix and match different parts, colors, stickers, and abilities to create your own unique top. You can also experiment with different combinations and see how they affect your performance. For example, if you want to increase your top's speed, you might want to use a flat tip that reduces friction.
-
Use the environment to your advantage. The app has various Beystadiums that have different features and hazards, such as ramps, rails, pits, walls, and traps. You can use these elements to boost your top's power, avoid your opponent's attacks, or trap them in a corner. For example, if you are using a Slingshock top, you might want to launch it into the rails to gain speed and power.
-
Master the controls and timing. The app has simple and intuitive controls that allow you to launch, steer, and attack with your top. However, you also need to master the timing and precision of your actions. For example, if you want to unleash an avatar attack, you need to tap on the screen when the power meter is full. If you miss the timing, you might waste your power or miss your target.
-
-
How to Review Beyblade Burst App Game APK?
-
If you have played Beyblade Burst App Game APK and enjoyed it, you might want to share your opinion with other players and potential users. You can write a review of the app on various platforms, such as the app store, social media, blogs, or forums. Here are some criteria and tips on how to write a good review of the app:
-
The criteria to evaluate the app, such as graphics, sound, controls, and fun factor
-
A good review should cover the main aspects of the app that affect its quality and appeal. Some of the criteria that you can use to evaluate the app are:
-
-
Graphics: The visual quality of the app, such as the design, animation, color, and detail of the tops, stadiums, avatars, and effects.
-
Sound: The audio quality of the app, such as the music, sound effects, voice acting, and volume of the tops, stadiums, and characters.
-
Controls: The ease and responsiveness of the app's controls, such as the swiping, tapping, and voice commands.
-
Fun factor: The overall enjoyment and satisfaction of the app, such as the gameplay, features, modes, and challenges.
-
-
The pros and cons of the app, based on user feedback and personal experience
-
A good review should also highlight the strengths and weaknesses of the app, based on user feedback and personal experience. You can use online reviews, ratings, comments, and forums to gather user feedback. You can also use your own experience to share your insights and opinions. Some of the pros and cons of the app are:
-
-
Pros: The app is fun, engaging, and addictive. It has a variety of tops, stadiums, modes, and features to choose from. It has a global multiplayer online system that allows you to battle with friends and rivals from around the world. It has high-quality graphics and sound that enhance the immersion and excitement of the game. It has simple and intuitive controls that make it easy to play.
-
Cons: The app can be buggy, laggy, or prone to crashing at times. It can consume a lot of data and battery power. It can have compatibility issues with some devices or regions. It can have some ads or in-app purchases that might affect the gameplay or user experience.
-
-
The rating and recommendation of the app, based on your overall impression
-
A good review should also give a rating and recommendation of the app, based on your overall impression. You can use a numerical scale, such as 1 to 5 stars, or a verbal scale, such as poor, fair, good, very good, or excellent. You can also use a summary sentence or paragraph to express your final verdict and suggestion. For example:
-
I would rate Beyblade Burst App Game APK 4 out of 5 stars. It is a fun and exciting game that brings the thrill and energy of Beyblade Burst to your device. It has a lot of features and modes that make it diverse and challenging. It also has great graphics and sound that make it immersive and realistic. However, it also has some drawbacks, such as bugs, lags, crashes, data consumption, battery drain, compatibility issues, ads, and in-app purchases. I would recommend this app to anyone who loves Beyblade Burst or spinning tops in general. It is a great way to enjoy the game digitally and socially.
-
Conclusion
-
Beyblade Burst App Game APK is an action-packed game that allows you to create, customize, and battle your own Beyblade Burst tops on your Android device. You can challenge your friends in over 90 countries worldwide to global multiplayer online matches, with leaderboards, personalized profiles, an enhanced digital top selection, and the capability of earning achievements to level up from Rookie to ultimate Beyblade Master. You can also compete to win matches and unlock virtual pieces that you can use to customize your tops.
-
In this article, we have given you a comprehensive guide on what this app is, how to download and install it, how to play it, and how to review it. We hope you find it useful and informative. If you have any feedback, suggestions, or corrections, please let us know.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/CR TUNNEL VPN Gain Free Internet Access with Built-in Proxy Tweaks.md b/spaces/1phancelerku/anime-remove-background/CR TUNNEL VPN Gain Free Internet Access with Built-in Proxy Tweaks.md
deleted file mode 100644
index e97826c438d9c69fe163ecbe1b992ff1e78bd99f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/CR TUNNEL VPN Gain Free Internet Access with Built-in Proxy Tweaks.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
What is CR Tunnel VPN and why you need it
-
CR Tunnel VPN is a free unlimited proxy VPN app for Android devices that allows you to encrypt your mobile internet traffic, bypass firewalls and page blocks, and access free internet using built-in proxy tweaks. It also supports online gaming and VoIP services. If you are looking for a fast, secure, and easy-to-use VPN app that can give you total freedom online, then CR Tunnel VPN is the app for you.
-
Features of CR Tunnel VPN
-
CR Tunnel VPN has many features that make it stand out from other VPN apps. Some of these features are:
It uses SSH, HTTP, and SSL connections to provide high-speed and stable VPN service.
-
It has over 50 servers in different countries that you can choose from.
-
It has various proxy tweaks that can help you bypass domain/IP based restrictions and billing.
-
It has a simple and user-friendly interface that lets you connect with one tap.
-
It does not require root access or registration to use.
-
It protects your internet traffic from hackers, ISPs, and government surveillance.
-
It allows you to access geo-restricted websites and apps such as Netflix, YouTube, Facebook, etc.
-
-
How to download and install CR Tunnel VPN on your Android device
-
To download and install CR Tunnel VPN on your Android device, follow these steps:
-
-
Go to the Google Play Store and search for CR Tunnel VPN or click on this link.
-
Tap on the Install button and wait for the app to download.
-
Once the app is installed, open it and grant the necessary permissions.
-
You are now ready to use CR Tunnel VPN on your Android device.
-
-
How to use CR Tunnel VPN to access free internet, bypass firewalls, and protect your privacy
-
To use CR Tunnel VPN to access free internet, bypass firewalls, and protect your privacy, follow these steps:
-
Select a server and a proxy tweak
-
On the main screen of the app, tap on the server icon at the top right corner and select a server from the list. You can also tap on the flag icon at the bottom left corner and select a country from the list. Then, tap on the tweak icon at the bottom right corner and select a proxy tweak from the list. You can also create your own custom tweak by tapping on the plus icon at the top right corner of the tweak screen.
-
Connect and enjoy
-
Once you have selected a server and a proxy tweak, tap on the Connect button at the bottom center of the screen. Wait for a few seconds until you see a green check mark indicating that you are connected. You can now browse the internet freely, securely, and anonymously using CR Tunnel VPN. You can also check your connection status, speed, time, and data usage on the app screen.
-
Pros and cons of CR Tunnel VPN
-
Pros
-
Some of the pros of using CR Tunnel VPN are:
-
-
It is free to use and does not have any bandwidth or time limits.
-
It has a large number of servers and proxy tweaks that can help you access free internet.
-
It has a simple and user-friendly interface that makes it easy to use.
-
It does not require root access or registration to use.
-
It protects your internet traffic from hackers, ISPs, and government surveillance.
-
-
Cons
-
Some of the cons of using CR Tunnel VPN are:
-
-
It may not work on some devices or networks depending on the compatibility and configuration.
-
It may not support some websites or apps that require a specific protocol or port.
-
It may drain your battery faster due to the constant encryption and decryption of data.
-
It may slow down your internet speed due to the overhead of VPN connections.
-
It may contain ads or in-app purchases that may annoy some users.
-
-
Alternatives to CR Tunnel VPN
-
If you are not satisfied with CR Tunnel VPN or want to try other VPN apps, here are some alternatives that you can check out:
-
-
Psiphon Pro
-
Psiphon Pro is another free unlimited proxy VPN app that can help you access blocked websites and apps, protect your privacy, and enjoy free internet. It uses a combination of VPN, SSH, and HTTP proxy technologies to provide you with a secure and fast connection. It also has a global network of thousands of servers and diverse entry points that can help you evade censorship. You can download Psiphon Pro from the Google Play Store or from this link.
-
HTTP Injector
-
HTTP Injector is a professional VPN tool that can help you customize HTTP header requests and responses, inject payloads into your network traffic, and access free internet using SSH/Proxy/SSL tunnels. It also supports online gaming, VoIP, DNS tunneling, and shadowsocks. It is a powerful app that requires some technical knowledge and configuration to use. You can download HTTP Injector from the Google Play Store or from this link.
-
AnonyTun
-
AnonyTun is a free unlimited VPN tunnel app that can help you bypass any type of restriction or firewall on your network. It uses advanced SSL, HTTP, and TCP protocols to provide you with a secure and fast connection. It also has a simple and user-friendly interface that lets you connect with one tap. You can download AnonyTun from the Google Play Store or from this link.
-
Conclusion
-
In conclusion, CR Tunnel VPN is a free unlimited proxy VPN app that can help you encrypt your mobile internet traffic, bypass firewalls and page blocks, and access free internet using built-in proxy tweaks. It also supports online gaming and VoIP services. It has many features, pros, and cons that you should consider before using it. It is one of the many VPN apps available on the Google Play Store that can give you total freedom online.
-
FAQs
-
Here are some frequently asked questions about CR Tunnel VPN:
-
-
What is CR Tunnel VPN?
-CR Tunnel VPN is a free unlimited proxy VPN app for Android devices that allows you to encrypt your mobile internet traffic, bypass firewalls and page blocks, and access free internet using built-in proxy tweaks.
-
How do I download and install CR Tunnel VPN?
-You can download and install CR Tunnel VPN from the Google Play Store or from this link. Then, open the app and grant the necessary permissions to use it.
-
How do I use CR Tunnel VPN?
-To use CR Tunnel VPN, select a server and a proxy tweak from the app screen. Then, tap on the Connect button and wait for a few seconds until you are connected. You can now browse the internet freely, securely, and anonymously using CR Tunnel VPN.
-
What are the pros and cons of CR Tunnel VPN?
-Some of the pros of CR Tunnel VPN are: it is free to use, it has a large number of servers and proxy tweaks, it has a simple and user-friendly interface, it does not require root access or registration, and it protects your internet traffic. Some of the cons of CR Tunnel VPN are: it may not work on some devices or networks, it may not support some websites or apps, it may drain your battery faster, it may slow down your internet speed, and it may contain ads or in-app purchases.
-
What are some alternatives to CR Tunnel VPN?
-Some alternatives to CR Tunnel VPN are: Psiphon Pro, HTTP Injector, and AnonyTun. These are also free unlimited proxy VPN apps that can help you access blocked websites and apps, protect your privacy, and enjoy free internet.
-
CR Tunnel VPN: https://play.google.com/store/apps/details?id=com.crtunnelvpn.app
Psiphon Pro: https://play.google.com/store/apps/details?id=com.psiphon3.subscription
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/models/resnet.py b/spaces/1toTree/lora_test/ppdiffusers/models/resnet.py
deleted file mode 100644
index 8972e0c384ecbe87fed40bb06f139a3f06d1f57d..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/models/resnet.py
+++ /dev/null
@@ -1,716 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from functools import partial
-
-import paddle
-import paddle.nn as nn
-import paddle.nn.functional as F
-
-
-class Upsample1D(nn.Layer):
- """
- An upsampling layer with an optional convolution.
-
- Parameters:
- channels: channels in the inputs and outputs.
- use_conv: a bool determining if a convolution is applied.
- use_conv_transpose:
- out_channels:
- """
-
- def __init__(self, channels, use_conv=False, use_conv_transpose=False, out_channels=None, name="conv"):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_conv_transpose = use_conv_transpose
- self.name = name
-
- self.conv = None
- if use_conv_transpose:
- self.conv = nn.Conv1DTranspose(channels, self.out_channels, 4, 2, 1)
- elif use_conv:
- self.conv = nn.Conv1D(self.channels, self.out_channels, 3, padding=1)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.use_conv_transpose:
- return self.conv(x)
-
- x = F.interpolate(x, scale_factor=2.0, mode="nearest")
-
- if self.use_conv:
- x = self.conv(x)
-
- return x
-
-
-class Downsample1D(nn.Layer):
- """
- A downsampling layer with an optional convolution.
-
- Parameters:
- channels: channels in the inputs and outputs.
- use_conv: a bool determining if a convolution is applied.
- out_channels:
- padding:
- """
-
- def __init__(self, channels, use_conv=False, out_channels=None, padding=1, name="conv"):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.padding = padding
- stride = 2
- self.name = name
-
- if use_conv:
- self.conv = nn.Conv1D(self.channels, self.out_channels, 3, stride=stride, padding=padding)
- else:
- assert self.channels == self.out_channels
- self.conv = nn.AvgPool1D(kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.conv(x)
-
-
-class Upsample2D(nn.Layer):
- """
- An upsampling layer with an optional convolution.
-
- Parameters:
- channels: channels in the inputs and outputs.
- use_conv: a bool determining if a convolution is applied.
- use_conv_transpose:
- out_channels:
- """
-
- def __init__(self, channels, use_conv=False, use_conv_transpose=False, out_channels=None, name="conv"):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_conv_transpose = use_conv_transpose
- self.name = name
-
- conv = None
- if use_conv_transpose:
- conv = nn.Conv2DTranspose(channels, self.out_channels, 4, 2, 1)
- elif use_conv:
- conv = nn.Conv2D(self.channels, self.out_channels, 3, padding=1)
-
- # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed
- if name == "conv":
- self.conv = conv
- else:
- self.Conv2d_0 = conv
-
- def forward(self, hidden_states, output_size=None):
- assert hidden_states.shape[1] == self.channels
-
- if self.use_conv_transpose:
- return self.conv(hidden_states)
-
- # Cast to float32 to as 'upsample_nearest2d_out_frame' op does not support bfloat16
- # TODO(Suraj): Remove this cast once the issue is fixed in PyTorch
- # https://github.com/pytorch/pytorch/issues/86679
- dtype = hidden_states.dtype
- if dtype == paddle.bfloat16:
- hidden_states = hidden_states.cast("float32")
-
- # if `output_size` is passed we force the interpolation output
- # size and do not make use of `scale_factor=2`
- if output_size is None:
- hidden_states = F.interpolate(hidden_states, scale_factor=2.0, mode="nearest")
- else:
- hidden_states = F.interpolate(hidden_states, size=output_size, mode="nearest")
-
- # If the input is bfloat16, we cast back to bfloat16
- if dtype == paddle.bfloat16:
- hidden_states = hidden_states.cast(dtype)
-
- # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed
- if self.use_conv:
- if self.name == "conv":
- hidden_states = self.conv(hidden_states)
- else:
- hidden_states = self.Conv2d_0(hidden_states)
-
- return hidden_states
-
-
-class Downsample2D(nn.Layer):
- """
- A downsampling layer with an optional convolution.
-
- Parameters:
- channels: channels in the inputs and outputs.
- use_conv: a bool determining if a convolution is applied.
- out_channels:
- padding:
- """
-
- def __init__(self, channels, use_conv=False, out_channels=None, padding=1, name="conv"):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.padding = padding
- stride = 2
- self.name = name
-
- if use_conv:
- conv = nn.Conv2D(self.channels, self.out_channels, 3, stride=stride, padding=padding)
- else:
- assert self.channels == self.out_channels
- conv = nn.AvgPool2D(kernel_size=stride, stride=stride)
-
- # TODO(Suraj, Patrick) - clean up after weight dicts are correctly renamed
- if name == "conv":
- self.Conv2d_0 = conv
- self.conv = conv
- elif name == "Conv2d_0":
- self.conv = conv
- else:
- self.conv = conv
-
- def forward(self, hidden_states):
- assert hidden_states.shape[1] == self.channels
- if self.use_conv and self.padding == 0:
- pad = (0, 1, 0, 1)
- hidden_states = F.pad(hidden_states, pad, mode="constant", value=0)
-
- assert hidden_states.shape[1] == self.channels
- hidden_states = self.conv(hidden_states)
-
- return hidden_states
-
-
-class FirUpsample2D(nn.Layer):
- def __init__(self, channels=None, out_channels=None, use_conv=False, fir_kernel=(1, 3, 3, 1)):
- super().__init__()
- out_channels = out_channels if out_channels else channels
- if use_conv:
- self.Conv2d_0 = nn.Conv2D(channels, out_channels, kernel_size=3, stride=1, padding=1)
- self.use_conv = use_conv
- self.fir_kernel = fir_kernel
- self.out_channels = out_channels
-
- def _upsample_2d(self, hidden_states, weight=None, kernel=None, factor=2, gain=1):
- """Fused `upsample_2d()` followed by `Conv2d()`.
-
- Padding is performed only once at the beginning, not between the operations. The fused op is considerably more
- efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of
- arbitrary order.
-
- Args:
- hidden_states: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.
- weight: Weight tensor of the shape `[filterH, filterW, inChannels,
- outChannels]`. Grouped convolution can be performed by `inChannels = x.shape[0] // numGroups`.
- kernel: FIR filter of the shape `[firH, firW]` or `[firN]`
- (separable). The default is `[1] * factor`, which corresponds to nearest-neighbor upsampling.
- factor: Integer upsampling factor (default: 2).
- gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- output: Tensor of the shape `[N, C, H * factor, W * factor]` or `[N, H * factor, W * factor, C]`, and same
- datatype as `hidden_states`.
- """
-
- assert isinstance(factor, int) and factor >= 1
-
- # Setup filter kernel.
- if kernel is None:
- kernel = [1] * factor
-
- # setup kernel
- kernel = paddle.to_tensor(kernel, dtype="float32")
- if kernel.ndim == 1:
- kernel = paddle.outer(kernel, kernel)
- kernel /= paddle.sum(kernel)
-
- kernel = kernel * (gain * (factor**2))
-
- if self.use_conv:
- convH = weight.shape[2]
- convW = weight.shape[3]
- inC = weight.shape[1]
-
- pad_value = (kernel.shape[0] - factor) - (convW - 1)
-
- stride = (factor, factor)
- # Determine data dimensions.
- output_shape = (
- (hidden_states.shape[2] - 1) * factor + convH,
- (hidden_states.shape[3] - 1) * factor + convW,
- )
- output_padding = (
- output_shape[0] - (hidden_states.shape[2] - 1) * stride[0] - convH,
- output_shape[1] - (hidden_states.shape[3] - 1) * stride[1] - convW,
- )
- assert output_padding[0] >= 0 and output_padding[1] >= 0
- num_groups = hidden_states.shape[1] // inC
-
- # Transpose weights.
- weight = weight.reshape([num_groups, -1, inC, convH, convW])
- weight = paddle.flip(weight, axis=[3, 4]).transpose([0, 2, 1, 3, 4])
- weight = weight.reshape([num_groups * inC, -1, convH, convW])
-
- inverse_conv = F.conv2d_transpose(
- hidden_states, weight, stride=stride, output_padding=output_padding, padding=0
- )
-
- output = upfirdn2d_native(
- inverse_conv,
- paddle.to_tensor(kernel),
- pad=((pad_value + 1) // 2 + factor - 1, pad_value // 2 + 1),
- )
- else:
- pad_value = kernel.shape[0] - factor
- output = upfirdn2d_native(
- hidden_states,
- paddle.to_tensor(kernel),
- up=factor,
- pad=((pad_value + 1) // 2 + factor - 1, pad_value // 2),
- )
-
- return output
-
- def forward(self, hidden_states):
- if self.use_conv:
- height = self._upsample_2d(hidden_states, self.Conv2d_0.weight, kernel=self.fir_kernel)
- height = height + self.Conv2d_0.bias.reshape([1, -1, 1, 1])
- else:
- height = self._upsample_2d(hidden_states, kernel=self.fir_kernel, factor=2)
-
- return height
-
-
-class FirDownsample2D(nn.Layer):
- def __init__(self, channels=None, out_channels=None, use_conv=False, fir_kernel=(1, 3, 3, 1)):
- super().__init__()
- out_channels = out_channels if out_channels else channels
- if use_conv:
- self.Conv2d_0 = nn.Conv2D(channels, out_channels, kernel_size=3, stride=1, padding=1)
- self.fir_kernel = fir_kernel
- self.use_conv = use_conv
- self.out_channels = out_channels
-
- def _downsample_2d(self, hidden_states, weight=None, kernel=None, factor=2, gain=1):
- """Fused `Conv2d()` followed by `downsample_2d()`.
- Padding is performed only once at the beginning, not between the operations. The fused op is considerably more
- efficient than performing the same calculation using standard TensorFlow ops. It supports gradients of
- arbitrary order.
-
- Args:
- hidden_states: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.
- weight:
- Weight tensor of the shape `[filterH, filterW, inChannels, outChannels]`. Grouped convolution can be
- performed by `inChannels = x.shape[0] // numGroups`.
- kernel: FIR filter of the shape `[firH, firW]` or `[firN]` (separable). The default is `[1] *
- factor`, which corresponds to average pooling.
- factor: Integer downsampling factor (default: 2).
- gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- output: Tensor of the shape `[N, C, H // factor, W // factor]` or `[N, H // factor, W // factor, C]`, and
- same datatype as `x`.
- """
-
- assert isinstance(factor, int) and factor >= 1
- if kernel is None:
- kernel = [1] * factor
-
- # setup kernel
- kernel = paddle.to_tensor(kernel, dtype="float32")
- if kernel.ndim == 1:
- kernel = paddle.outer(kernel, kernel)
- kernel /= paddle.sum(kernel)
-
- kernel = kernel * gain
-
- if self.use_conv:
- _, _, convH, convW = weight.shape
- pad_value = (kernel.shape[0] - factor) + (convW - 1)
- stride_value = [factor, factor]
- upfirdn_input = upfirdn2d_native(
- hidden_states,
- paddle.to_tensor(kernel),
- pad=((pad_value + 1) // 2, pad_value // 2),
- )
- output = F.conv2d(upfirdn_input, weight, stride=stride_value, padding=0)
- else:
- pad_value = kernel.shape[0] - factor
- output = upfirdn2d_native(
- hidden_states,
- paddle.to_tensor(kernel),
- down=factor,
- pad=((pad_value + 1) // 2, pad_value // 2),
- )
-
- return output
-
- def forward(self, hidden_states):
- if self.use_conv:
- downsample_input = self._downsample_2d(hidden_states, weight=self.Conv2d_0.weight, kernel=self.fir_kernel)
- hidden_states = downsample_input + self.Conv2d_0.bias.reshape([1, -1, 1, 1])
- else:
- hidden_states = self._downsample_2d(hidden_states, kernel=self.fir_kernel, factor=2)
-
- return hidden_states
-
-
-class ResnetBlock2D(nn.Layer):
- def __init__(
- self,
- *,
- in_channels,
- out_channels=None,
- conv_shortcut=False,
- dropout=0.0,
- temb_channels=512,
- groups=32,
- groups_out=None,
- pre_norm=True,
- eps=1e-6,
- non_linearity="swish",
- time_embedding_norm="default",
- kernel=None,
- output_scale_factor=1.0,
- use_in_shortcut=None,
- up=False,
- down=False,
- ):
- super().__init__()
- self.pre_norm = pre_norm
- self.pre_norm = True
- self.in_channels = in_channels
- out_channels = in_channels if out_channels is None else out_channels
- self.out_channels = out_channels
- self.use_conv_shortcut = conv_shortcut
- self.time_embedding_norm = time_embedding_norm
- self.up = up
- self.down = down
- self.output_scale_factor = output_scale_factor
-
- if groups_out is None:
- groups_out = groups
-
- self.norm1 = nn.GroupNorm(num_groups=groups, num_channels=in_channels, epsilon=eps)
-
- self.conv1 = nn.Conv2D(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
-
- if temb_channels is not None:
- if self.time_embedding_norm == "default":
- time_emb_proj_out_channels = out_channels
- elif self.time_embedding_norm == "scale_shift":
- time_emb_proj_out_channels = out_channels * 2
- else:
- raise ValueError(f"unknown time_embedding_norm : {self.time_embedding_norm} ")
-
- self.time_emb_proj = nn.Linear(temb_channels, time_emb_proj_out_channels)
- else:
- self.time_emb_proj = None
-
- self.norm2 = nn.GroupNorm(num_groups=groups_out, num_channels=out_channels, epsilon=eps)
- self.dropout = nn.Dropout(dropout)
- self.conv2 = nn.Conv2D(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
-
- if non_linearity == "swish":
- self.nonlinearity = lambda x: F.silu(x)
- elif non_linearity == "mish":
- self.nonlinearity = Mish()
- elif non_linearity == "silu":
- self.nonlinearity = nn.Silu()
-
- self.upsample = self.downsample = None
- if self.up:
- if kernel == "fir":
- fir_kernel = (1, 3, 3, 1)
- self.upsample = lambda x: upsample_2d(x, kernel=fir_kernel)
- elif kernel == "sde_vp":
- self.upsample = partial(F.interpolate, scale_factor=2.0, mode="nearest")
- else:
- self.upsample = Upsample2D(in_channels, use_conv=False)
- elif self.down:
- if kernel == "fir":
- fir_kernel = (1, 3, 3, 1)
- self.downsample = lambda x: downsample_2d(x, kernel=fir_kernel)
- elif kernel == "sde_vp":
- self.downsample = partial(F.avg_pool2d, kernel_size=2, stride=2)
- else:
- self.downsample = Downsample2D(in_channels, use_conv=False, padding=1, name="op")
-
- self.use_in_shortcut = self.in_channels != self.out_channels if use_in_shortcut is None else use_in_shortcut
-
- self.conv_shortcut = None
- if self.use_in_shortcut:
- self.conv_shortcut = nn.Conv2D(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
-
- def forward(self, input_tensor, temb):
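-        # GroupNorm -> nonlinearity -> optional up/down-sample -> conv1 -> add projected time
-        # embedding -> GroupNorm (+ optional scale/shift) -> nonlinearity -> dropout -> conv2,
-        # then add the (optionally 1x1-projected) residual and divide by output_scale_factor.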
- hidden_states = input_tensor
-
- hidden_states = self.norm1(hidden_states)
- hidden_states = self.nonlinearity(hidden_states)
-
- if self.upsample is not None:
- input_tensor = self.upsample(input_tensor)
- hidden_states = self.upsample(hidden_states)
- elif self.downsample is not None:
- input_tensor = self.downsample(input_tensor)
- hidden_states = self.downsample(hidden_states)
-
- hidden_states = self.conv1(hidden_states)
-
- if temb is not None:
- temb = self.time_emb_proj(self.nonlinearity(temb))[:, :, None, None]
-
- if temb is not None and self.time_embedding_norm == "default":
- hidden_states = hidden_states + temb
-
- hidden_states = self.norm2(hidden_states)
-
- if temb is not None and self.time_embedding_norm == "scale_shift":
- scale, shift = paddle.chunk(temb, 2, axis=1)
- hidden_states = hidden_states * (1 + scale) + shift
-
- hidden_states = self.nonlinearity(hidden_states)
-
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.conv2(hidden_states)
-
- if self.conv_shortcut is not None:
- input_tensor = self.conv_shortcut(input_tensor)
-
- output_tensor = (input_tensor + hidden_states) / self.output_scale_factor
-
- return output_tensor
-
-
-class Mish(nn.Layer):
- def forward(self, hidden_states):
- return hidden_states * paddle.tanh(F.softplus(hidden_states))
-
-
-# unet_rl.py
-def rearrange_dims(tensor):
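-    # Add or drop a singleton axis: [B, C] -> [B, C, 1], [B, C, T] -> [B, C, 1, T], and
-    # [B, C, 1, T] -> [B, C, T]; used by the 1D blocks below around GroupNorm.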
- if len(tensor.shape) == 2:
- return tensor[:, :, None]
- if len(tensor.shape) == 3:
- return tensor[:, :, None, :]
- elif len(tensor.shape) == 4:
- return tensor[:, :, 0, :]
- else:
- raise ValueError(f"`len(tensor)`: {len(tensor)} has to be 2, 3 or 4.")
-
-
-class Conv1dBlock(nn.Layer):
- """
- Conv1d --> GroupNorm --> Mish
- """
-
- def __init__(self, inp_channels, out_channels, kernel_size, n_groups=8):
- super().__init__()
-
- self.conv1d = nn.Conv1D(inp_channels, out_channels, kernel_size, padding=kernel_size // 2)
- self.group_norm = nn.GroupNorm(n_groups, out_channels)
- self.mish = nn.Mish()
-
- def forward(self, x):
- x = self.conv1d(x)
- x = rearrange_dims(x)
- x = self.group_norm(x)
- x = rearrange_dims(x)
- x = self.mish(x)
- return x
-
-
-# unet_rl.py
-class ResidualTemporalBlock1D(nn.Layer):
- def __init__(self, inp_channels, out_channels, embed_dim, kernel_size=5):
- super().__init__()
- self.conv_in = Conv1dBlock(inp_channels, out_channels, kernel_size)
- self.conv_out = Conv1dBlock(out_channels, out_channels, kernel_size)
-
- self.time_emb_act = nn.Mish()
- self.time_emb = nn.Linear(embed_dim, out_channels)
-
- self.residual_conv = (
- nn.Conv1D(inp_channels, out_channels, 1) if inp_channels != out_channels else nn.Identity()
- )
-
- def forward(self, x, t):
- """
- Args:
- x : [ batch_size x inp_channels x horizon ]
- t : [ batch_size x embed_dim ]
-
- returns:
- out : [ batch_size x out_channels x horizon ]
- """
- t = self.time_emb_act(t)
- t = self.time_emb(t)
- out = self.conv_in(x) + rearrange_dims(t)
- out = self.conv_out(out)
- return out + self.residual_conv(x)
-
-
-def upsample_2d(hidden_states, kernel=None, factor=2, gain=1):
- r"""Upsample2D a batch of 2D images with the given filter.
- Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and upsamples each image with the given
- filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the specified
- `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its shape is
-    a multiple of the upsampling factor.
-
- Args:
- hidden_states: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.
- kernel: FIR filter of the shape `[firH, firW]` or `[firN]`
- (separable). The default is `[1] * factor`, which corresponds to nearest-neighbor upsampling.
- factor: Integer upsampling factor (default: 2).
- gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- output: Tensor of the shape `[N, C, H * factor, W * factor]`
- """
- assert isinstance(factor, int) and factor >= 1
- if kernel is None:
- kernel = [1] * factor
-
- kernel = paddle.to_tensor(kernel, dtype="float32")
- if kernel.ndim == 1:
- kernel = paddle.outer(kernel, kernel)
- kernel /= paddle.sum(kernel)
-
- if gain != 1:
- kernel = kernel * (gain * (factor**2))
- else:
- kernel = kernel * (factor**2)
- pad_value = kernel.shape[0] - factor
- output = upfirdn2d_native(
- hidden_states,
- kernel,
- up=factor,
- pad=((pad_value + 1) // 2 + factor - 1, pad_value // 2),
- )
- return output
-
-
-def downsample_2d(hidden_states, kernel=None, factor=2, gain=1):
- r"""Downsample2D a batch of 2D images with the given filter.
- Accepts a batch of 2D images of the shape `[N, C, H, W]` or `[N, H, W, C]` and downsamples each image with the
- given filter. The filter is normalized so that if the input pixels are constant, they will be scaled by the
- specified `gain`. Pixels outside the image are assumed to be zero, and the filter is padded with zeros so that its
- shape is a multiple of the downsampling factor.
-
- Args:
- hidden_states: Input tensor of the shape `[N, C, H, W]` or `[N, H, W, C]`.
- kernel: FIR filter of the shape `[firH, firW]` or `[firN]`
- (separable). The default is `[1] * factor`, which corresponds to average pooling.
- factor: Integer downsampling factor (default: 2).
- gain: Scaling factor for signal magnitude (default: 1.0).
-
- Returns:
- output: Tensor of the shape `[N, C, H // factor, W // factor]`
- """
-
- assert isinstance(factor, int) and factor >= 1
- if kernel is None:
- kernel = [1] * factor
-
- kernel = paddle.to_tensor(kernel, dtype="float32")
- if kernel.ndim == 1:
- kernel = paddle.outer(kernel, kernel)
- kernel /= paddle.sum(kernel)
-
- kernel = kernel * gain
- pad_value = kernel.shape[0] - factor
- output = upfirdn2d_native(hidden_states, kernel, down=factor, pad=((pad_value + 1) // 2, pad_value // 2))
- return output
-
-
-def dummy_pad(tensor, up_x=0, up_y=0):
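-    # Concatenation-based zero padding of a 6-D tensor (up_x zeros on axis 4, up_y zeros on
-    # axis 2), used as a workaround for the paddle F.pad limitation noted in upfirdn2d_native.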
- if up_x > 0:
- tensor = paddle.concat(
- [
- tensor,
- paddle.zeros(
- [tensor.shape[0], tensor.shape[1], tensor.shape[2], tensor.shape[3], up_x, tensor.shape[5]],
- dtype=tensor.dtype,
- ),
- ],
- axis=4,
- )
- if up_y > 0:
- tensor = paddle.concat(
- [
- tensor,
- paddle.zeros(
- [tensor.shape[0], tensor.shape[1], up_y, tensor.shape[3], tensor.shape[4], tensor.shape[5]],
- dtype=tensor.dtype,
- ),
- ],
- axis=2,
- )
- return tensor
-
-
-def upfirdn2d_native(tensor, kernel, up=1, down=1, pad=(0, 0)):
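-    # upfirdn: upsample by zero-insertion (`up`), pad, convolve with the flipped 2D FIR `kernel`,
-    # then subsample by `down`, implemented with plain reshape/pad/conv2d ops.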
- up_x = up_y = up
- down_x = down_y = down
- pad_x0 = pad_y0 = pad[0]
- pad_x1 = pad_y1 = pad[1]
-
- _, channel, in_h, in_w = tensor.shape
- tensor = tensor.reshape([-1, in_h, in_w, 1])
-
- _, in_h, in_w, minor = tensor.shape
- kernel_h, kernel_w = kernel.shape
-
- out = tensor.reshape([-1, in_h, 1, in_w, 1, minor])
- # (TODO, junnyu F.pad bug)
- # F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = dummy_pad(out, up_x - 1, up_y - 1)
- out = out.reshape([-1, in_h * up_y, in_w * up_x, minor])
-
- # (TODO, junnyu F.pad bug)
- # out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)])
- out = out.unsqueeze(0)
- out = F.pad(out, [max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0), 0, 0], data_format="NDHWC")
- out = out.squeeze(0)
-
- out = out[
- :,
- max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.transpose([0, 3, 1, 2])
- out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1])
- w = paddle.flip(kernel, [0, 1]).reshape([1, 1, kernel_h, kernel_w])
- out = F.conv2d(out, w)
- out = out.reshape(
- [-1, minor, in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1]
- )
- out = out.transpose([0, 2, 3, 1])
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
-
- return out.reshape([-1, channel, out_h, out_w])
diff --git a/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py b/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py
deleted file mode 100644
index f69d38200b6be4997673ae38ed481fd21f88b419..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py
+++ /dev/null
@@ -1,186 +0,0 @@
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-from torch.nn import Linear, Conv2d, BatchNorm2d, PReLU, Sequential, Module
-
-from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE
-from model.stylegan.model import EqualLinear
-
-
-class GradualStyleBlock(Module):
- def __init__(self, in_c, out_c, spatial):
- super(GradualStyleBlock, self).__init__()
- self.out_c = out_c
- self.spatial = spatial
- num_pools = int(np.log2(spatial))
- modules = []
- modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()]
- for i in range(num_pools - 1):
- modules += [
- Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()
- ]
- self.convs = nn.Sequential(*modules)
- self.linear = EqualLinear(out_c, out_c, lr_mul=1)
-
- def forward(self, x):
- x = self.convs(x)
- x = x.view(-1, self.out_c)
- x = self.linear(x)
- return x
-
-
-class GradualStyleEncoder(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(GradualStyleEncoder, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- self.style_count = opts.n_styles
- self.coarse_ind = 3
- self.middle_ind = 7
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- def _upsample_add(self, x, y):
- '''Upsample and add two feature maps.
- Args:
- x: (Variable) top feature map to be upsampled.
- y: (Variable) lateral feature map.
- Returns:
- (Variable) added feature map.
- Note in PyTorch, when input size is odd, the upsampled feature map
- with `F.upsample(..., scale_factor=2, mode='nearest')`
-        may not be equal to the lateral feature map size.
- e.g.
- original input size: [N,_,15,15] ->
- conv2d feature map size: [N,_,8,8] ->
- upsampled feature map size: [N,_,16,16]
- So we choose bilinear upsample which supports arbitrary output sizes.
- '''
- _, _, H, W = y.size()
- return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
-
- def forward(self, x):
- x = self.input_layer(x)
-
- latents = []
- modulelist = list(self.body._modules.values())
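-        # Tap intermediate feature maps at fixed block indices: the deepest map (c3) feeds the
-        # coarse style heads, and lateral connections to c2 and c1 feed the middle and fine heads.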
- for i, l in enumerate(modulelist):
- x = l(x)
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- for j in range(self.coarse_ind):
- latents.append(self.styles[j](c3))
-
- p2 = self._upsample_add(c3, self.latlayer1(c2))
- for j in range(self.coarse_ind, self.middle_ind):
- latents.append(self.styles[j](p2))
-
- p1 = self._upsample_add(p2, self.latlayer2(c1))
- for j in range(self.middle_ind, self.style_count):
- latents.append(self.styles[j](p1))
-
- out = torch.stack(latents, dim=1)
- return out
-
-
-class BackboneEncoderUsingLastLayerIntoW(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(BackboneEncoderUsingLastLayerIntoW, self).__init__()
- print('Using BackboneEncoderUsingLastLayerIntoW')
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1))
- self.linear = EqualLinear(512, 512, lr_mul=1)
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_pool(x)
- x = x.view(-1, 512)
- x = self.linear(x)
- return x
-
-
-class BackboneEncoderUsingLastLayerIntoWPlus(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(BackboneEncoderUsingLastLayerIntoWPlus, self).__init__()
- print('Using BackboneEncoderUsingLastLayerIntoWPlus')
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.n_styles = opts.n_styles
- self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- self.output_layer_2 = Sequential(BatchNorm2d(512),
- torch.nn.AdaptiveAvgPool2d((7, 7)),
- Flatten(),
- Linear(512 * 7 * 7, 512))
- self.linear = EqualLinear(512, 512 * self.n_styles, lr_mul=1)
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer_2(x)
- x = self.linear(x)
- x = x.view(-1, self.n_styles, 512)
- return x
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/docs/faiss_tips_ko.md b/spaces/AI-Hobbyist/Hoyo-RVC/docs/faiss_tips_ko.md
deleted file mode 100644
index ecd518ca2a89996898057983761fc469eaf969d2..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/docs/faiss_tips_ko.md
+++ /dev/null
@@ -1,132 +0,0 @@
-Facebook AI Similarity Search (Faiss) tips
-==================
-# About Faiss
-Faiss is a library for dense-vector nearest-neighbor search developed by Facebook Research. Approximate nearest-neighbor search finds similar vectors quickly by sacrificing a little accuracy.
-
-## Faiss in RVC
-In RVC, the embedding of the features converted by HuBERT is matched against embeddings generated from the training data: similar embeddings are retrieved and mixed in to achieve a conversion closer to the original speech. However, this search takes a fair amount of time if done naively, so approximate nearest-neighbor search is used to keep the conversion fast.
-
-# Implementation overview
-`/logs/your-experiment/3_feature256`, where the model is located, contains the features HuBERT extracted from each piece of audio data. The npy files there are read in file-name order, the vectors are concatenated to build big_npy (a vector of shape [N, 256]), big_npy is saved as `/logs/your-experiment/total_fea.npy`, and the result is then trained with Faiss.
-
-As of 2023/04/18, Faiss's index factory feature is used to build an IVF based on L2 distance. The number of IVF partitions (n_ivf) is N//39, and n_probe is int(np.power(n_ivf, 0.3)). (Look around train_index in infer-web.py.)
-
-These tips first explain what these parameters mean and then offer advice to help developers build a better index later on.
-
-# Explanation of the method
-## Index factory
-The index factory is a notation unique to Faiss that expresses a pipeline chaining several approximate nearest-neighbor methods as a single string. This lets you try out various approximate nearest-neighbor searches just by changing the index factory string. In RVC it is used like this:
-
-```python
-index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
-```
-Among the arguments of `index_factory`, the first is the number of vector dimensions, the second is the index factory string, and the third lets you specify the distance metric to use.
-
-For a more detailed explanation of the notation, see https://github.com/facebookresearch/Faiss/wiki/The-index-factory.
-
-## Index for distance
-The two metrics most commonly used to measure embedding similarity are the following:
-
-- Euclidean distance (METRIC_L2)
-- Inner product (METRIC_INNER_PRODUCT)
-
-Euclidean distance takes the squared difference in each dimension, sums the differences over all dimensions, and then takes the square root. This is the same way distance is computed in the familiar 2D and 3D cases. The inner product is not used as a similarity metric as-is; instead, cosine similarity is used, which L2-normalizes the vectors first and then takes the inner product.
-
-Which is better depends on the case, but cosine similarity is often used for embeddings obtained with word2vec and for image retrieval models based on ArcFace. If you want to L2-normalize a vector X with numpy, choose a sufficiently small eps to avoid division by zero and use the following code.
-
-```python
-X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
-```
-
-You can also change the distance used in the computation by choosing the value passed as the third argument of `index_factory`.
-
-```python
-index = faiss.index_factory(dimention, text, faiss.METRIC_INNER_PRODUCT)
-```
-
-## IVF
-IVF (inverted file indexes) is an algorithm similar to inverted-index search in full-text retrieval. At training time, k-means clustering is run on the search targets and a Voronoi partition is built from the cluster centroids. Each data point is assigned to a cluster, so a dictionary mapping clusters to data points is created.
-
-For example, if clusters are assigned as follows:
-
-|index|Cluster|
-|-----|-------|
-|1|A|
-|2|B|
-|3|A|
-|4|C|
-|5|B|
-
-The result after IVF looks like this:
-
-|cluster|index|
-|-------|-----|
-|A|1, 3|
-|B|2, 5|
-|C|4|
-
-At search time, `n_probe` clusters are searched first, and then the distances to the data points belonging to each of those clusters are computed.
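-
-As a rough sketch (the n_ivf and n_probe formulas are the ones mentioned in the implementation overview above; the random array and the choice of 8 neighbors are placeholders rather than RVC's actual values), building and probing an IVF index might look like this:
-
-```python
-import faiss
-import numpy as np
-
-big_npy = np.random.rand(10000, 256).astype("float32")  # stand-in for total_fea.npy
-n_ivf = big_npy.shape[0] // 39
-index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
-index.train(big_npy)                      # k-means over the data -> Voronoi cells
-index.add(big_npy)                        # assign every vector to its cell
-index.nprobe = int(np.power(n_ivf, 0.3))  # how many cells are visited per query
-dist, neighbor = index.search(big_npy[:1], 8)  # 8 nearest neighbors of the first vector
-```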
-
-# Recommended parameters
-There are official guidelines on how to choose an index, so the explanation here follows them.
-https://github.com/facebookresearch/Faiss/wiki/Guidelines-to-choose-an-index
-
-For datasets of 1M vectors or fewer, 4-bit PQ is, as of April 2023, the most efficient technique available in Faiss. Combine it with IVF to narrow down the candidates with 4-bit PQ, and finally recompute the distances with an exact metric, using the following index factory string.
-
-```python
-index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
-```
-
-## Recommended IVF parameters
-If the number of IVF cells is too large (for example, quantizing with as many IVF cells as there are data points), this becomes equivalent to brute-force search and efficiency suffers. For 1M vectors or fewer, it is recommended to set the IVF value between 4*sqrt(N) and 16*sqrt(N) for N data points.
-
-Since computation time grows in proportion to n_probe, balance accuracy and time appropriately. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 should be fine.
-
-## FastScan
-FastScan is a method that enables fast approximate distance computation by carrying out product quantization in registers. Product quantization clusters every d dimensions independently (usually d=2) at training time and precomputes the distances between clusters into a lookup table. At prediction time, the distance for each dimension can be computed in O(1) by consulting the lookup table, so the number specified after PQ is usually half the dimensionality of the vectors.
-
-For a more detailed explanation of FastScan, see the official documentation.
-https://github.com/facebookresearch/Faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
-
-## RFlat
-RFlat is an instruction to recompute the rough distances produced by FastScan with the exact distance specified as the third argument of the index factory. When retrieving the k nearest neighbors, the distances of k*k_factor points are recomputed.
-
-# Embedding techniques
-## Alpha query expansion
-Query expansion is a technique used in search. In full-text search, for example, adding a few words to the submitted query improves search accuracy. Several methods have also been proposed for vector search; among them, α-query expansion is known to be a very effective method that requires no additional training. It is introduced in [Attention-Based Query Expansion Learning](https://arxiv.org/abs/2007.08019) and in the [2nd place solution of kaggle shopee competition](https://www.kaggle.com/code/lyakaap/2nd-place-solution/notebook).
-
-α-query expansion works by adding, to each vector, its neighboring vectors weighted by their similarity raised to the power α. The code below gives an example; it replaces big_npy with its α-query-expanded version.
-
-```python
-alpha = 3.
-index = faiss.index_factory(256, "IVF512,PQ128x4fs,RFlat")
-original_norm = np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
-big_npy /= original_norm
-index.train(big_npy)
-index.add(big_npy)
-dist, neighbor = index.search(big_npy, num_expand)
-
-expand_arrays = []
-ixs = np.arange(big_npy.shape[0])
-for i in range(-(-big_npy.shape[0]//batch_size)):
- ix = ixs[i*batch_size:(i+1)*batch_size]
- weight = np.power(np.einsum("nd,nmd->nm", big_npy[ix], big_npy[neighbor[ix]]), alpha)
- expand_arrays.append(np.sum(big_npy[neighbor[ix]] * np.expand_dims(weight, axis=2),axis=1))
-big_npy = np.concatenate(expand_arrays, axis=0)
-
-# normalize again for the index version
-big_npy = big_npy / np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
-```
-
-The technique above can be applied both to the query used for the search and to the database being searched.
-
-## Compressing embeddings with MiniBatchKMeans
-
-If total_fea.npy is too large, you can shrink the set of vectors with k-means. The code below compresses the embeddings: specify the target size in n_clusters, and set batch_size to 256 * the number of CPU cores to take full advantage of CPU parallelism.
-
-```python
-import multiprocessing
-from sklearn.cluster import MiniBatchKMeans
-kmeans = MiniBatchKMeans(n_clusters=10000, batch_size=256 * multiprocessing.cpu_count(), init="random")
-kmeans.fit(big_npy)
-sample_npy = kmeans.cluster_centers_
-```
\ No newline at end of file
diff --git a/spaces/AIZeroToHero/3-NLP-MLM-MaskedLanguageModel/app.py b/spaces/AIZeroToHero/3-NLP-MLM-MaskedLanguageModel/app.py
deleted file mode 100644
index fce91f44b5f6858d0571cc4fc4655932fbb4899d..0000000000000000000000000000000000000000
--- a/spaces/AIZeroToHero/3-NLP-MLM-MaskedLanguageModel/app.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import gradio as gr
-title = "Medical Entity Mask Language Modeling (MLM)"
-description = "Medical Entity Feature Extraction uses Masked Language Modeling to fill in the blank with likely word classification based on context."
-article = ""
-examples = [
- ["Scientific breakthroughs in treatment of HIV/AIDS may be solved in our lifetime using a procedure called [MASK] modulation which strengthens the immune system to fight the disease."],["A disease called [MASK] disease involves progressive memory loss and has new treatments to improve memory and delay progression of the disease."],["[MASK] refers to the uncontrolled growth of abnormal cells in the body. With chemotherapy and radiation therapy have improvements and replacements that destroy cancer cells before they become resistant to current treatment methods."],["The hereditary disease [MASK] is caused by mucus abnormally thick preventing lungs and pancreas from doing their jobs correctly."],["[MASK] or atherosclerosis is the buildup of cholesterol, fatty cells, and inflammatory deposits in the arteries. Stem cells, mechanical devices, and lowering cholesterol and blood pressure levels are helping prevention."]
-]
-
-gr.Interface.load("huggingface/ajitrajasekharan/biomedical",title=title,description=description,article=article, examples=examples).launch()
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT/run.py b/spaces/AchyuthGamer/OpenGPT/run.py
deleted file mode 100644
index 3b9ca0f439c4dd6a791f7eed62d942d096562b61..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/run.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import secrets
-
-from server.bp import bp
-from server.website import Website
-from server.backend import Backend_Api
-from server.babel import create_babel
-from json import load
-from flask import Flask
-
-if __name__ == '__main__':
-
- # Load configuration from config.json
- config = load(open('config.json', 'r'))
- site_config = config['site_config']
- url_prefix = config.pop('url_prefix')
-
- # Create the app
- app = Flask(__name__)
- app.secret_key = secrets.token_hex(16)
-
- # Set up Babel
- create_babel(app)
-
- # Set up the website routes
- site = Website(bp, url_prefix)
- for route in site.routes:
- bp.add_url_rule(
- route,
- view_func=site.routes[route]['function'],
- methods=site.routes[route]['methods'],
- )
-
- # Set up the backend API routes
- backend_api = Backend_Api(bp, config)
- for route in backend_api.routes:
- bp.add_url_rule(
- route,
- view_func=backend_api.routes[route]['function'],
- methods=backend_api.routes[route]['methods'],
- )
-
- # Register the blueprint
- app.register_blueprint(bp, url_prefix=url_prefix)
-
- # Run the Flask server
- print(f"Running on {site_config['port']}{url_prefix}")
- app.run(**site_config)
- print(f"Closing port {site_config['port']}")
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/SetChart.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/SetChart.js
deleted file mode 100644
index 7248679bdf7b9d284319038646f72fdf53c1a5d1..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/SetChart.js
+++ /dev/null
@@ -1,66 +0,0 @@
-var SetChart = function (config) {
- if (!window.Chart) {
- var msg = `Can not find chartjs! Load chartjs in preload stage.
-scene.load.script('chartjs', 'https://cdnjs.cloudflare.com/ajax/libs/Chart.js/3.8.0/Chart.min.js');`
- console.error(msg);
- return this;
- }
-
- if (this.chart) {
- this.chart.destroy();
- }
- this.chart = new Chart(this.context, FillConfig(this, config));
- return this;
-}
-
-var FillConfig = function (canvas, config) {
- // Get options
- if (config === undefined) {
- config = {};
- }
- if (config.options === undefined) {
- config.options = {};
- }
- var options = config.options;
-
- // Fill options
- options.responsive = false;
- options.maintainAspectRatio = false;
- if (!options.hasOwnProperty('devicePixelRatio')) {
- options.devicePixelRatio = 1;
- }
-
- // Get animation config
- var noAnimation = false;
- if (options.animation === undefined) {
- options.animation = {};
- } else if (options.animation === false) {
- noAnimation = true;
- options.animation = {};
- }
- var animationConfig = options.animation;
-
- // Fill animation config
- if (noAnimation) {
- animationConfig.duration = 0;
- }
-
- var onProgress = animationConfig.onProgress;
- animationConfig.onProgress = function (animation) {
- if (onProgress) {
- onProgress(animation);
- }
- canvas.needRedraw();
- }
-
- var onComplete = animationConfig.onComplete;
- animationConfig.onComplete = function (animation) {
- if (onComplete) {
- onComplete(animation);
- }
- canvas.needRedraw();
- }
- return config;
-}
-
-export default SetChart;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/methods/listpanel/OpenListPanel.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/methods/listpanel/OpenListPanel.js
deleted file mode 100644
index 30455ed17dcc666dabc8557defea63d9e148190d..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/methods/listpanel/OpenListPanel.js
+++ /dev/null
@@ -1,83 +0,0 @@
-import CreateListPanel from './CreateListPanel.js';
-import DropDown from '../../../dropdown/DropDown.js';
-
-var OpenListPanel = function () {
- if (this.listPanel) {
- return this;
- }
-
- var listPanel = CreateListPanel.call(this);
-
- // Button over/out
- listPanel
- .on('button.over', function (button, index, pointer, event) {
- if (this.listOnButtonOver) {
- this.listOnButtonOver.call(this, button, index, pointer, event);
- }
-
- this.emit('button.over', this, listPanel, button, index, pointer, event);
- }, this)
- .on('button.out', function (button, index, pointer, event) {
- if (this.listOnButtonOut) {
- this.listOnButtonOut.call(this, button, index, pointer, event);
- }
-
- this.emit('button.out', this, listPanel, button, index, pointer, event);
- }, this);
-
-
- var alignTargetX;
- if (!this.listAlignMode || (this.listAlignMode === 'label')) {
- alignTargetX = this;
- } else {
-        alignTargetX = this.getElement(this.listAlignMode);
- }
-
- var dropDownBehavior = new DropDown(listPanel, {
- // Transition
- duration: {
- in: this.listEaseInDuration,
- out: this.listEaseOutDuration
- },
- transitIn: this.listTransitInCallback,
- transitOut: this.listTransitOutCallback,
-
- // Position
- expandDirection: this.listExpandDirection,
-
- alignTargetX: alignTargetX,
- alignTargetY: this,
- alignSide: this.listAlignSide,
-
- bounds: this.listBounds,
-
- // Close condition
- anyTouchClose: true,
- })
- .on('open', function () {
- // After popping up
- // Can click
- listPanel.on('button.click', function (button, index, pointer, event) {
- if (this.listOnButtonClick) {
- this.listOnButtonClick.call(this, button, index, pointer, event);
- }
- this.emit('button.click', this, listPanel, button, index, pointer, event);
- }, this);
-
- this.emit('list.open', this, listPanel);
- }, this)
-
- .on('close', function () {
- this.listPanel = undefined;
- this.dropDownBehavior = undefined;
- }, this)
-
- this.listPanel = listPanel;
- this.dropDownBehavior = dropDownBehavior;
-
- this.pin(listPanel);
-
- return this;
-}
-
-export default OpenListPanel;
\ No newline at end of file
diff --git a/spaces/Ajaxon6255/Emerald_Isle/README.md b/spaces/Ajaxon6255/Emerald_Isle/README.md
deleted file mode 100644
index 224dc618c80e05f4ff15d3d9ca11d65b021ff1f8..0000000000000000000000000000000000000000
--- a/spaces/Ajaxon6255/Emerald_Isle/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
----
-tags: [gradio-theme]
-title: Emerald_Isle
-colorFrom: orange
-colorTo: purple
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-# Emerald_Isle
-## Description
-Add a description of this theme here!
-## Contributions
-Thanks to [@Ajaxon6255](https://huggingface.co/Ajaxon6255) for adding this gradio theme!
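Since this Space publishes a Gradio theme, it can be reused by name in another app. A hedged sketch, assuming Gradio's theme-from-Hub support (available in the 3.24 line listed above) and that the theme is published under this Space id:

```python
# Hedged sketch: reuse the published theme in another Gradio app by its Hub id.
import gradio as gr

with gr.Blocks(theme="Ajaxon6255/Emerald_Isle") as demo:
    gr.Markdown("Hello from the Emerald_Isle theme")
    gr.Button("Click me")

demo.launch()
```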
diff --git a/spaces/Alcom/chaoyi-wu-PMC_LLAMA_7B/app.py b/spaces/Alcom/chaoyi-wu-PMC_LLAMA_7B/app.py
deleted file mode 100644
index 83017a2b91caa036c05f33332b18f436bd4eb841..0000000000000000000000000000000000000000
--- a/spaces/Alcom/chaoyi-wu-PMC_LLAMA_7B/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/chaoyi-wu/PMC_LLAMA_7B").launch()
\ No newline at end of file
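The one-liner above defers everything to the hosted inference widget. A rough local equivalent, assuming `chaoyi-wu/PMC_LLAMA_7B` is a standard LLaMA-architecture causal LM loadable with `transformers` (and that enough memory is available for a 7B model):

```python
# Hedged sketch: load the same checkpoint locally and generate a continuation.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("chaoyi-wu/PMC_LLAMA_7B")
model = LlamaForCausalLM.from_pretrained("chaoyi-wu/PMC_LLAMA_7B")

prompt = "The most common cause of community-acquired pneumonia is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```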
diff --git a/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_ffhq.sh b/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_ffhq.sh
deleted file mode 100644
index a1b79cb0f3f710eed21a978c3a1489ca830bb7f8..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_ffhq.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/FFHQ_val"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in test
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-ffhq \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/AmmarHuggingFaces/intro-to-hugging-face/README.md b/spaces/AmmarHuggingFaces/intro-to-hugging-face/README.md
deleted file mode 100644
index 0a5e743cdd8995849250813522aacc2a3d04d3e8..0000000000000000000000000000000000000000
--- a/spaces/AmmarHuggingFaces/intro-to-hugging-face/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: Intro To Hugging Face
-emoji: 🏢
-colorFrom: red
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/__init__.py
deleted file mode 100644
index 7013e8cf7ed660e50bb984226c95052792979b12..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://nvlabs.github.io/stylegan2/license.html
-
-from . import autosummary
-from . import network
-from . import optimizer
-from . import tfutil
-from . import custom_ops
-
-from .tfutil import *
-from .network import Network
-
-from .optimizer import Optimizer
-
-from .custom_ops import get_plugin
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/dance_diffusion/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/dance_diffusion/__init__.py
deleted file mode 100644
index 55d7f8ff9807083a10c844f7003cf0696d8258a3..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/dance_diffusion/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .pipeline_dance_diffusion import DanceDiffusionPipeline
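For reference, a hedged usage sketch for the pipeline re-exported above; the `harmonai/maestro-150k` checkpoint name, audio length, and step count are illustrative assumptions, not taken from this repo:

```python
# Hedged sketch: generate a short audio clip with DanceDiffusionPipeline.
import scipy.io.wavfile
from diffusers import DanceDiffusionPipeline

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")
output = pipe(audio_length_in_s=4.0, num_inference_steps=100)

audio = output.audios[0]  # numpy array of shape (channels, samples)
scipy.io.wavfile.write("sample.wav", rate=pipe.unet.config.sample_rate, data=audio.T)
```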
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_check_dummies.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_check_dummies.py
deleted file mode 100644
index 52a75d7b02e85f70cb347afb1429ca8beb942d21..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_check_dummies.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-import sys
-import unittest
-
-
-git_repo_path = os.path.abspath(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))
-sys.path.append(os.path.join(git_repo_path, "utils"))
-
-import check_dummies # noqa: E402
-from check_dummies import create_dummy_files, create_dummy_object, find_backend, read_init # noqa: E402
-
-
-# Align TRANSFORMERS_PATH in check_dummies with the current path
-check_dummies.PATH_TO_DIFFUSERS = os.path.join(git_repo_path, "src", "diffusers")
-
-
-class CheckDummiesTester(unittest.TestCase):
- def test_find_backend(self):
- simple_backend = find_backend(" if not is_torch_available():")
- self.assertEqual(simple_backend, "torch")
-
- # backend_with_underscore = find_backend(" if not is_tensorflow_text_available():")
- # self.assertEqual(backend_with_underscore, "tensorflow_text")
-
- double_backend = find_backend(" if not (is_torch_available() and is_transformers_available()):")
- self.assertEqual(double_backend, "torch_and_transformers")
-
- # double_backend_with_underscore = find_backend(
- # " if not (is_sentencepiece_available() and is_tensorflow_text_available()):"
- # )
- # self.assertEqual(double_backend_with_underscore, "sentencepiece_and_tensorflow_text")
-
- triple_backend = find_backend(
- " if not (is_torch_available() and is_transformers_available() and is_onnx_available()):"
- )
- self.assertEqual(triple_backend, "torch_and_transformers_and_onnx")
-
- def test_read_init(self):
- objects = read_init()
- # We don't assert on the exact list of keys to allow for smooth grow of backend-specific objects
- self.assertIn("torch", objects)
- self.assertIn("torch_and_transformers", objects)
- self.assertIn("flax_and_transformers", objects)
- self.assertIn("torch_and_transformers_and_onnx", objects)
-
- # Likewise, we can't assert on the exact content of a key
- self.assertIn("UNet2DModel", objects["torch"])
- self.assertIn("FlaxUNet2DConditionModel", objects["flax"])
- self.assertIn("StableDiffusionPipeline", objects["torch_and_transformers"])
- self.assertIn("FlaxStableDiffusionPipeline", objects["flax_and_transformers"])
- self.assertIn("LMSDiscreteScheduler", objects["torch_and_scipy"])
- self.assertIn("OnnxStableDiffusionPipeline", objects["torch_and_transformers_and_onnx"])
-
- def test_create_dummy_object(self):
- dummy_constant = create_dummy_object("CONSTANT", "'torch'")
- self.assertEqual(dummy_constant, "\nCONSTANT = None\n")
-
- dummy_function = create_dummy_object("function", "'torch'")
- self.assertEqual(
- dummy_function, "\ndef function(*args, **kwargs):\n requires_backends(function, 'torch')\n"
- )
-
- expected_dummy_class = """
-class FakeClass(metaclass=DummyObject):
- _backends = 'torch'
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, 'torch')
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, 'torch')
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, 'torch')
-"""
- dummy_class = create_dummy_object("FakeClass", "'torch'")
- self.assertEqual(dummy_class, expected_dummy_class)
-
- def test_create_dummy_files(self):
- expected_dummy_pytorch_file = """# This file is autogenerated by the command `make fix-copies`, do not edit.
-from ..utils import DummyObject, requires_backends
-
-
-CONSTANT = None
-
-
-def function(*args, **kwargs):
- requires_backends(function, ["torch"])
-
-
-class FakeClass(metaclass=DummyObject):
- _backends = ["torch"]
-
- def __init__(self, *args, **kwargs):
- requires_backends(self, ["torch"])
-
- @classmethod
- def from_config(cls, *args, **kwargs):
- requires_backends(cls, ["torch"])
-
- @classmethod
- def from_pretrained(cls, *args, **kwargs):
- requires_backends(cls, ["torch"])
-"""
- dummy_files = create_dummy_files({"torch": ["CONSTANT", "function", "FakeClass"]})
- self.assertEqual(dummy_files["torch"], expected_dummy_pytorch_file)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_hub_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_hub_utils.py
deleted file mode 100644
index e8b8ea3a2fd9b114ff184291e7ec73928ba885d7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_hub_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import unittest
-from pathlib import Path
-from tempfile import TemporaryDirectory
-from unittest.mock import Mock, patch
-
-import diffusers.utils.hub_utils
-
-
-class CreateModelCardTest(unittest.TestCase):
- @patch("diffusers.utils.hub_utils.get_full_repo_name")
- def test_create_model_card(self, repo_name_mock: Mock) -> None:
- repo_name_mock.return_value = "full_repo_name"
- with TemporaryDirectory() as tmpdir:
- # Dummy args values
- args = Mock()
- args.output_dir = tmpdir
- args.local_rank = 0
- args.hub_token = "hub_token"
- args.dataset_name = "dataset_name"
- args.learning_rate = 0.01
- args.train_batch_size = 100000
- args.eval_batch_size = 10000
- args.gradient_accumulation_steps = 0.01
- args.adam_beta1 = 0.02
- args.adam_beta2 = 0.03
- args.adam_weight_decay = 0.0005
- args.adam_epsilon = 0.000001
- args.lr_scheduler = 1
- args.lr_warmup_steps = 10
- args.ema_inv_gamma = 0.001
- args.ema_power = 0.1
- args.ema_max_decay = 0.2
- args.mixed_precision = True
-
- # Model card mush be rendered and saved
- diffusers.utils.hub_utils.create_model_card(args, model_name="model_name")
- self.assertTrue((Path(tmpdir) / "README.md").is_file())
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-wpp.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-wpp.css
deleted file mode 100644
index da9f172f434530c3df77d6b937ebae1a3868a29d..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-wpp.css
+++ /dev/null
@@ -1,55 +0,0 @@
-.message {
- padding-bottom: 25px;
- font-size: 15px;
- font-family: 'Noto Sans', Helvetica, Arial, sans-serif;
- line-height: 1.428571429;
-}
-
-.text-you {
- background-color: #d9fdd3;
- border-radius: 15px;
- padding: 10px;
- padding-top: 5px;
- float: right;
-}
-
-.text-bot {
- background-color: #f2f2f2;
- border-radius: 15px;
- padding: 10px;
- padding-top: 5px;
-}
-
-.dark .text-you {
- background-color: #005c4b;
- color: #111b21;
-}
-
-.dark .text-bot {
- background-color: #1f2937;
- color: #111b21;
-}
-
-.text-bot p, .text-you p {
- margin-top: 5px;
-}
-
-.message-body img {
- max-width: 300px;
- max-height: 300px;
- border-radius: 20px;
-}
-
-.message-body p {
- margin-bottom: 0 !important;
- font-size: 15px !important;
- line-height: 1.428571429 !important;
-}
-
-.dark .message-body p em {
- color: rgb(138, 138, 138) !important;
-}
-
-.message-body p em {
- color: rgb(110, 110, 110) !important;
-}
\ No newline at end of file
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/server.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/server.py
deleted file mode 100644
index f07df378b2c65b22506633238261e96aea815313..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/server.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import os
-import warnings
-
-import modules.one_click_installer_check
-from modules.block_requests import OpenMonkeyPatch, RequestBlocker
-from modules.logging_colors import logger
-
-os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'
-os.environ['BITSANDBYTES_NOWELCOME'] = '1'
-warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated')
-
-with RequestBlocker():
- import gradio as gr
-
-import matplotlib
-
-matplotlib.use('Agg') # This fixes LaTeX rendering on some systems
-
-import json
-import os
-import sys
-import time
-from functools import partial
-from pathlib import Path
-from threading import Lock
-
-import yaml
-
-import modules.extensions as extensions_module
-from modules import (
- chat,
- shared,
- training,
- ui,
- ui_chat,
- ui_default,
- ui_file_saving,
- ui_model_menu,
- ui_notebook,
- ui_parameters,
- ui_session,
- utils
-)
-from modules.extensions import apply_extensions
-from modules.LoRA import add_lora_to_model
-from modules.models import load_model
-from modules.models_settings import (
- get_fallback_settings,
- get_model_metadata,
- update_model_parameters
-)
-from modules.utils import gradio
-
-
-def create_interface():
-
- title = 'Text generation web UI'
-
- # Password authentication
- auth = []
- if shared.args.gradio_auth:
- auth.extend(x.strip() for x in shared.args.gradio_auth.strip('"').replace('\n', '').split(',') if x.strip())
- if shared.args.gradio_auth_path:
- with open(shared.args.gradio_auth_path, 'r', encoding="utf8") as file:
- auth.extend(x.strip() for line in file for x in line.split(',') if x.strip())
- auth = [tuple(cred.split(':')) for cred in auth]
-
- # Import the extensions and execute their setup() functions
- if shared.args.extensions is not None and len(shared.args.extensions) > 0:
- extensions_module.load_extensions()
-
- # Force some events to be triggered on page load
- shared.persistent_interface_state.update({
- 'loader': shared.args.loader or 'Transformers',
- 'mode': shared.settings['mode'],
- 'character_menu': shared.args.character or shared.settings['character'],
- 'instruction_template': shared.settings['instruction_template'],
- 'prompt_menu-default': shared.settings['prompt-default'],
- 'prompt_menu-notebook': shared.settings['prompt-notebook'],
- 'filter_by_loader': shared.args.loader or 'All'
- })
-
- if Path("cache/pfp_character.png").exists():
- Path("cache/pfp_character.png").unlink()
-
- # css/js strings
- css = ui.css
- js = ui.js
- css += apply_extensions('css')
- js += apply_extensions('js')
-
- # Interface state elements
- shared.input_elements = ui.list_interface_input_elements()
-
- with gr.Blocks(css=css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']:
-
- # Interface state
- shared.gradio['interface_state'] = gr.State({k: None for k in shared.input_elements})
-
- # Audio notification
- if Path("notification.mp3").exists():
- shared.gradio['audio_notification'] = gr.Audio(interactive=False, value="notification.mp3", elem_id="audio_notification", visible=False)
-
- # Floating menus for saving/deleting files
- ui_file_saving.create_ui()
-
- # Temporary clipboard for saving files
- shared.gradio['temporary_text'] = gr.Textbox(visible=False)
-
- # Text Generation tab
- ui_chat.create_ui()
- ui_default.create_ui()
- ui_notebook.create_ui()
-
- ui_parameters.create_ui(shared.settings['preset']) # Parameters tab
- ui_model_menu.create_ui() # Model tab
- training.create_ui() # Training tab
- ui_session.create_ui() # Session tab
-
- # Generation events
- ui_chat.create_event_handlers()
- ui_default.create_event_handlers()
- ui_notebook.create_event_handlers()
-
- # Other events
- ui_file_saving.create_event_handlers()
- ui_parameters.create_event_handlers()
- ui_model_menu.create_event_handlers()
-
- # Interface launch events
- if shared.settings['dark_theme']:
- shared.gradio['interface'].load(lambda: None, None, None, _js="() => document.getElementsByTagName('body')[0].classList.add('dark')")
-
- shared.gradio['interface'].load(lambda: None, None, None, _js=f"() => {{{js}}}")
- shared.gradio['interface'].load(None, gradio('show_controls'), None, _js=f'(x) => {{{ui.show_controls_js}; toggle_controls(x)}}')
- shared.gradio['interface'].load(partial(ui.apply_interface_values, {}, use_persistent=True), None, gradio(ui.list_interface_input_elements()), show_progress=False)
- shared.gradio['interface'].load(chat.redraw_html, gradio(ui_chat.reload_arr), gradio('display'))
-
- extensions_module.create_extensions_tabs() # Extensions tabs
- extensions_module.create_extensions_block() # Extensions block
-
- # Launch the interface
- shared.gradio['interface'].queue(concurrency_count=64)
- with OpenMonkeyPatch():
- shared.gradio['interface'].launch(
- prevent_thread_lock=True,
- share=shared.args.share,
- server_name=None if not shared.args.listen else (shared.args.listen_host or '0.0.0.0'),
- server_port=shared.args.listen_port,
- inbrowser=shared.args.auto_launch,
- auth=auth or None,
- ssl_verify=False if (shared.args.ssl_keyfile or shared.args.ssl_certfile) else True,
- ssl_keyfile=shared.args.ssl_keyfile,
- ssl_certfile=shared.args.ssl_certfile
- )
-
-
-if __name__ == "__main__":
-
- # Load custom settings
- settings_file = None
- if shared.args.settings is not None and Path(shared.args.settings).exists():
- settings_file = Path(shared.args.settings)
- elif Path('settings.yaml').exists():
- settings_file = Path('settings.yaml')
- elif Path('settings.json').exists():
- settings_file = Path('settings.json')
-
- if settings_file is not None:
- logger.info(f"Loading settings from {settings_file}...")
- file_contents = open(settings_file, 'r', encoding='utf-8').read()
-        new_settings = json.loads(file_contents) if settings_file.suffix == ".json" else yaml.safe_load(file_contents)
- shared.settings.update(new_settings)
-
- # Fallback settings for models
- shared.model_config['.*'] = get_fallback_settings()
- shared.model_config.move_to_end('.*', last=False) # Move to the beginning
-
- # Activate the extensions listed on settings.yaml
- extensions_module.available_extensions = utils.get_available_extensions()
- for extension in shared.settings['default_extensions']:
- shared.args.extensions = shared.args.extensions or []
- if extension not in shared.args.extensions:
- shared.args.extensions.append(extension)
-
- available_models = utils.get_available_models()
-
- # Model defined through --model
- if shared.args.model is not None:
- shared.model_name = shared.args.model
-
- # Select the model from a command-line menu
- elif shared.args.model_menu:
- if len(available_models) == 0:
- logger.error('No models are available! Please download at least one.')
- sys.exit(0)
- else:
- print('The following models are available:\n')
- for i, model in enumerate(available_models):
- print(f'{i+1}. {model}')
-
- print(f'\nWhich one do you want to load? 1-{len(available_models)}\n')
- i = int(input()) - 1
- print()
-
- shared.model_name = available_models[i]
-
- # If any model has been selected, load it
- if shared.model_name != 'None':
- p = Path(shared.model_name)
- if p.exists():
- model_name = p.parts[-1]
- shared.model_name = model_name
- else:
- model_name = shared.model_name
-
- model_settings = get_model_metadata(model_name)
- shared.settings.update({k: v for k, v in model_settings.items() if k in shared.settings}) # hijacking the interface defaults
- update_model_parameters(model_settings, initial=True) # hijacking the command-line arguments
-
- # Load the model
- shared.model, shared.tokenizer = load_model(model_name)
- if shared.args.lora:
- add_lora_to_model(shared.args.lora)
-
- shared.generation_lock = Lock()
-
- # Launch the web UI
- create_interface()
- while True:
- time.sleep(0.5)
- if shared.need_restart:
- shared.need_restart = False
- time.sleep(0.5)
- shared.gradio['interface'].close()
- time.sleep(0.5)
- create_interface()
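One detail worth noting in `create_interface()` above is the credential format accepted by `--gradio-auth`: a comma-separated list of `user:password` pairs, optionally wrapped in quotes. A standalone sketch of that parsing, with placeholder credentials:

```python
# Standalone sketch of the --gradio-auth parsing used in create_interface().
raw = 'user1:pass1, user2:pass2'   # placeholder credentials

auth = []
auth.extend(x.strip() for x in raw.strip('"').replace('\n', '').split(',') if x.strip())
auth = [tuple(cred.split(':')) for cred in auth]

assert auth == [('user1', 'pass1'), ('user2', 'pass2')]
```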
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py
deleted file mode 100644
index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/transformer.py
+++ /dev/null
@@ -1,595 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import warnings
-
-import torch
-import torch.nn as nn
-
-from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning
-from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer
-from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential
-from annotator.uniformer.mmcv.utils import build_from_cfg
-from .drop import build_dropout
-from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING,
- TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE)
-
-# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file
-try:
- from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401
- warnings.warn(
- ImportWarning(
- '``MultiScaleDeformableAttention`` has been moved to '
- '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501
- '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501
- 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501
- ))
-
-except ImportError:
- warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from '
- '``mmcv.ops.multi_scale_deform_attn``, '
- 'You should install ``mmcv-full`` if you need this module. ')
-
-
-def build_positional_encoding(cfg, default_args=None):
- """Builder for Position Encoding."""
- return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args)
-
-
-def build_attention(cfg, default_args=None):
- """Builder for attention."""
- return build_from_cfg(cfg, ATTENTION, default_args)
-
-
-def build_feedforward_network(cfg, default_args=None):
- """Builder for feed-forward network (FFN)."""
- return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args)
-
-
-def build_transformer_layer(cfg, default_args=None):
- """Builder for transformer layer."""
- return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args)
-
-
-def build_transformer_layer_sequence(cfg, default_args=None):
- """Builder for transformer encoder and transformer decoder."""
- return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args)
-
-
-@ATTENTION.register_module()
-class MultiheadAttention(BaseModule):
- """A wrapper for ``torch.nn.MultiheadAttention``.
-
- This module implements MultiheadAttention with identity connection,
- and positional encoding is also passed as input.
-
- Args:
- embed_dims (int): The embedding dimension.
- num_heads (int): Parallel attention heads.
- attn_drop (float): A Dropout layer on attn_output_weights.
- Default: 0.0.
- proj_drop (float): A Dropout layer after `nn.MultiheadAttention`.
- Default: 0.0.
- dropout_layer (obj:`ConfigDict`): The dropout_layer used
- when adding the shortcut.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- batch_first (bool): When it is True, Key, Query and Value are shape of
- (batch, n, embed_dim), otherwise (n, batch, embed_dim).
- Default to False.
- """
-
- def __init__(self,
- embed_dims,
- num_heads,
- attn_drop=0.,
- proj_drop=0.,
- dropout_layer=dict(type='Dropout', drop_prob=0.),
- init_cfg=None,
- batch_first=False,
- **kwargs):
- super(MultiheadAttention, self).__init__(init_cfg)
- if 'dropout' in kwargs:
- warnings.warn('The arguments `dropout` in MultiheadAttention '
- 'has been deprecated, now you can separately '
- 'set `attn_drop`(float), proj_drop(float), '
- 'and `dropout_layer`(dict) ')
- attn_drop = kwargs['dropout']
- dropout_layer['drop_prob'] = kwargs.pop('dropout')
-
- self.embed_dims = embed_dims
- self.num_heads = num_heads
- self.batch_first = batch_first
-
- self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop,
- **kwargs)
-
- self.proj_drop = nn.Dropout(proj_drop)
- self.dropout_layer = build_dropout(
- dropout_layer) if dropout_layer else nn.Identity()
-
- @deprecated_api_warning({'residual': 'identity'},
- cls_name='MultiheadAttention')
- def forward(self,
- query,
- key=None,
- value=None,
- identity=None,
- query_pos=None,
- key_pos=None,
- attn_mask=None,
- key_padding_mask=None,
- **kwargs):
- """Forward function for `MultiheadAttention`.
-
- **kwargs allow passing a more general data flow when combining
- with other operations in `transformerlayer`.
-
- Args:
- query (Tensor): The input query with shape [num_queries, bs,
- embed_dims] if self.batch_first is False, else
- [bs, num_queries embed_dims].
- key (Tensor): The key tensor with shape [num_keys, bs,
- embed_dims] if self.batch_first is False, else
- [bs, num_keys, embed_dims] .
- If None, the ``query`` will be used. Defaults to None.
- value (Tensor): The value tensor with same shape as `key`.
- Same in `nn.MultiheadAttention.forward`. Defaults to None.
- If None, the `key` will be used.
- identity (Tensor): This tensor, with the same shape as x,
- will be used for the identity link.
- If None, `x` will be used. Defaults to None.
- query_pos (Tensor): The positional encoding for query, with
- the same shape as `x`. If not None, it will
- be added to `x` before forward function. Defaults to None.
- key_pos (Tensor): The positional encoding for `key`, with the
- same shape as `key`. Defaults to None. If not None, it will
- be added to `key` before forward function. If None, and
- `query_pos` has the same shape as `key`, then `query_pos`
- will be used for `key_pos`. Defaults to None.
- attn_mask (Tensor): ByteTensor mask with shape [num_queries,
- num_keys]. Same in `nn.MultiheadAttention.forward`.
- Defaults to None.
- key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys].
- Defaults to None.
-
- Returns:
- Tensor: forwarded results with shape
- [num_queries, bs, embed_dims]
- if self.batch_first is False, else
- [bs, num_queries embed_dims].
- """
-
- if key is None:
- key = query
- if value is None:
- value = key
- if identity is None:
- identity = query
- if key_pos is None:
- if query_pos is not None:
- # use query_pos if key_pos is not available
- if query_pos.shape == key.shape:
- key_pos = query_pos
- else:
- warnings.warn(f'position encoding of key is'
- f'missing in {self.__class__.__name__}.')
- if query_pos is not None:
- query = query + query_pos
- if key_pos is not None:
- key = key + key_pos
-
- # Because the dataflow('key', 'query', 'value') of
- # ``torch.nn.MultiheadAttention`` is (num_query, batch,
- # embed_dims), We should adjust the shape of dataflow from
- # batch_first (batch, num_query, embed_dims) to num_query_first
- # (num_query ,batch, embed_dims), and recover ``attn_output``
- # from num_query_first to batch_first.
- if self.batch_first:
- query = query.transpose(0, 1)
- key = key.transpose(0, 1)
- value = value.transpose(0, 1)
-
- out = self.attn(
- query=query,
- key=key,
- value=value,
- attn_mask=attn_mask,
- key_padding_mask=key_padding_mask)[0]
-
- if self.batch_first:
- out = out.transpose(0, 1)
-
- return identity + self.dropout_layer(self.proj_drop(out))
-
-
-@FEEDFORWARD_NETWORK.register_module()
-class FFN(BaseModule):
- """Implements feed-forward networks (FFNs) with identity connection.
-
- Args:
- embed_dims (int): The feature dimension. Same as
- `MultiheadAttention`. Defaults: 256.
- feedforward_channels (int): The hidden dimension of FFNs.
- Defaults: 1024.
- num_fcs (int, optional): The number of fully-connected layers in
- FFNs. Default: 2.
- act_cfg (dict, optional): The activation config for FFNs.
- Default: dict(type='ReLU')
- ffn_drop (float, optional): Probability of an element to be
- zeroed in FFN. Default 0.0.
- add_identity (bool, optional): Whether to add the
- identity connection. Default: `True`.
- dropout_layer (obj:`ConfigDict`): The dropout_layer used
- when adding the shortcut.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- """
-
- @deprecated_api_warning(
- {
- 'dropout': 'ffn_drop',
- 'add_residual': 'add_identity'
- },
- cls_name='FFN')
- def __init__(self,
- embed_dims=256,
- feedforward_channels=1024,
- num_fcs=2,
- act_cfg=dict(type='ReLU', inplace=True),
- ffn_drop=0.,
- dropout_layer=None,
- add_identity=True,
- init_cfg=None,
- **kwargs):
- super(FFN, self).__init__(init_cfg)
- assert num_fcs >= 2, 'num_fcs should be no less ' \
- f'than 2. got {num_fcs}.'
- self.embed_dims = embed_dims
- self.feedforward_channels = feedforward_channels
- self.num_fcs = num_fcs
- self.act_cfg = act_cfg
- self.activate = build_activation_layer(act_cfg)
-
- layers = []
- in_channels = embed_dims
- for _ in range(num_fcs - 1):
- layers.append(
- Sequential(
- Linear(in_channels, feedforward_channels), self.activate,
- nn.Dropout(ffn_drop)))
- in_channels = feedforward_channels
- layers.append(Linear(feedforward_channels, embed_dims))
- layers.append(nn.Dropout(ffn_drop))
- self.layers = Sequential(*layers)
- self.dropout_layer = build_dropout(
- dropout_layer) if dropout_layer else torch.nn.Identity()
- self.add_identity = add_identity
-
- @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN')
- def forward(self, x, identity=None):
- """Forward function for `FFN`.
-
-        The function adds x to the output tensor if `identity` is None.
- """
- out = self.layers(x)
- if not self.add_identity:
- return self.dropout_layer(out)
- if identity is None:
- identity = x
- return identity + self.dropout_layer(out)
-
-
-@TRANSFORMER_LAYER.register_module()
-class BaseTransformerLayer(BaseModule):
- """Base `TransformerLayer` for vision transformer.
-
- It can be built from `mmcv.ConfigDict` and support more flexible
- customization, for example, using any number of `FFN or LN ` and
- use different kinds of `attention` by specifying a list of `ConfigDict`
- named `attn_cfgs`. It is worth mentioning that it supports `prenorm`
-    when you specify `norm` as the first element of `operation_order`.
-    More details about the `prenorm`: `On Layer Normalization in the
-    Transformer Architecture <https://arxiv.org/abs/2002.04745>`_ .
-
- Args:
- attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )):
- Configs for `self_attention` or `cross_attention` modules,
- The order of the configs in the list should be consistent with
- corresponding attentions in operation_order.
- If it is a dict, all of the attention modules in operation_order
- will be built with this config. Default: None.
- ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )):
- Configs for FFN, The order of the configs in the list should be
- consistent with corresponding ffn in operation_order.
- If it is a dict, all of the attention modules in operation_order
- will be built with this config.
- operation_order (tuple[str]): The execution order of operation
- in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm').
-            Supports `prenorm` when you specify the first element as `norm`.
- Default:None.
- norm_cfg (dict): Config dict for normalization layer.
- Default: dict(type='LN').
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- batch_first (bool): Key, Query and Value are shape
- of (batch, n, embed_dim)
- or (n, batch, embed_dim). Default to False.
- """
-
- def __init__(self,
- attn_cfgs=None,
- ffn_cfgs=dict(
- type='FFN',
- embed_dims=256,
- feedforward_channels=1024,
- num_fcs=2,
- ffn_drop=0.,
- act_cfg=dict(type='ReLU', inplace=True),
- ),
- operation_order=None,
- norm_cfg=dict(type='LN'),
- init_cfg=None,
- batch_first=False,
- **kwargs):
-
- deprecated_args = dict(
- feedforward_channels='feedforward_channels',
- ffn_dropout='ffn_drop',
- ffn_num_fcs='num_fcs')
- for ori_name, new_name in deprecated_args.items():
- if ori_name in kwargs:
- warnings.warn(
- f'The arguments `{ori_name}` in BaseTransformerLayer '
- f'has been deprecated, now you should set `{new_name}` '
- f'and other FFN related arguments '
- f'to a dict named `ffn_cfgs`. ')
- ffn_cfgs[new_name] = kwargs[ori_name]
-
- super(BaseTransformerLayer, self).__init__(init_cfg)
-
- self.batch_first = batch_first
-
- assert set(operation_order) & set(
- ['self_attn', 'norm', 'ffn', 'cross_attn']) == \
- set(operation_order), f'The operation_order of' \
- f' {self.__class__.__name__} should ' \
- f'contains all four operation type ' \
- f"{['self_attn', 'norm', 'ffn', 'cross_attn']}"
-
- num_attn = operation_order.count('self_attn') + operation_order.count(
- 'cross_attn')
- if isinstance(attn_cfgs, dict):
- attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)]
- else:
- assert num_attn == len(attn_cfgs), f'The length ' \
- f'of attn_cfg {num_attn} is ' \
- f'not consistent with the number of attention' \
- f'in operation_order {operation_order}.'
-
- self.num_attn = num_attn
- self.operation_order = operation_order
- self.norm_cfg = norm_cfg
- self.pre_norm = operation_order[0] == 'norm'
- self.attentions = ModuleList()
-
- index = 0
- for operation_name in operation_order:
- if operation_name in ['self_attn', 'cross_attn']:
- if 'batch_first' in attn_cfgs[index]:
- assert self.batch_first == attn_cfgs[index]['batch_first']
- else:
- attn_cfgs[index]['batch_first'] = self.batch_first
- attention = build_attention(attn_cfgs[index])
- # Some custom attentions used as `self_attn`
- # or `cross_attn` can have different behavior.
- attention.operation_name = operation_name
- self.attentions.append(attention)
- index += 1
-
- self.embed_dims = self.attentions[0].embed_dims
-
- self.ffns = ModuleList()
- num_ffns = operation_order.count('ffn')
- if isinstance(ffn_cfgs, dict):
- ffn_cfgs = ConfigDict(ffn_cfgs)
- if isinstance(ffn_cfgs, dict):
- ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)]
- assert len(ffn_cfgs) == num_ffns
- for ffn_index in range(num_ffns):
- if 'embed_dims' not in ffn_cfgs[ffn_index]:
-                ffn_cfgs[ffn_index]['embed_dims'] = self.embed_dims
- else:
- assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims
- self.ffns.append(
- build_feedforward_network(ffn_cfgs[ffn_index],
- dict(type='FFN')))
-
- self.norms = ModuleList()
- num_norms = operation_order.count('norm')
- for _ in range(num_norms):
- self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1])
-
- def forward(self,
- query,
- key=None,
- value=None,
- query_pos=None,
- key_pos=None,
- attn_masks=None,
- query_key_padding_mask=None,
- key_padding_mask=None,
- **kwargs):
- """Forward function for `TransformerDecoderLayer`.
-
- **kwargs contains some specific arguments of attentions.
-
- Args:
- query (Tensor): The input query with shape
- [num_queries, bs, embed_dims] if
- self.batch_first is False, else
- [bs, num_queries embed_dims].
- key (Tensor): The key tensor with shape [num_keys, bs,
- embed_dims] if self.batch_first is False, else
- [bs, num_keys, embed_dims] .
- value (Tensor): The value tensor with same shape as `key`.
- query_pos (Tensor): The positional encoding for `query`.
- Default: None.
- key_pos (Tensor): The positional encoding for `key`.
- Default: None.
- attn_masks (List[Tensor] | None): 2D Tensor used in
- calculation of corresponding attention. The length of
- it should equal to the number of `attention` in
- `operation_order`. Default: None.
- query_key_padding_mask (Tensor): ByteTensor for `query`, with
- shape [bs, num_queries]. Only used in `self_attn` layer.
- Defaults to None.
-            key_padding_mask (Tensor): ByteTensor for `key`, with
- shape [bs, num_keys]. Default: None.
-
- Returns:
- Tensor: forwarded results with shape [num_queries, bs, embed_dims].
- """
-
- norm_index = 0
- attn_index = 0
- ffn_index = 0
- identity = query
- if attn_masks is None:
- attn_masks = [None for _ in range(self.num_attn)]
- elif isinstance(attn_masks, torch.Tensor):
- attn_masks = [
- copy.deepcopy(attn_masks) for _ in range(self.num_attn)
- ]
- warnings.warn(f'Use same attn_mask in all attentions in '
- f'{self.__class__.__name__} ')
- else:
- assert len(attn_masks) == self.num_attn, f'The length of ' \
- f'attn_masks {len(attn_masks)} must be equal ' \
- f'to the number of attention in ' \
- f'operation_order {self.num_attn}'
-
- for layer in self.operation_order:
- if layer == 'self_attn':
- temp_key = temp_value = query
- query = self.attentions[attn_index](
- query,
- temp_key,
- temp_value,
- identity if self.pre_norm else None,
- query_pos=query_pos,
- key_pos=query_pos,
- attn_mask=attn_masks[attn_index],
- key_padding_mask=query_key_padding_mask,
- **kwargs)
- attn_index += 1
- identity = query
-
- elif layer == 'norm':
- query = self.norms[norm_index](query)
- norm_index += 1
-
- elif layer == 'cross_attn':
- query = self.attentions[attn_index](
- query,
- key,
- value,
- identity if self.pre_norm else None,
- query_pos=query_pos,
- key_pos=key_pos,
- attn_mask=attn_masks[attn_index],
- key_padding_mask=key_padding_mask,
- **kwargs)
- attn_index += 1
- identity = query
-
- elif layer == 'ffn':
- query = self.ffns[ffn_index](
- query, identity if self.pre_norm else None)
- ffn_index += 1
-
- return query
-
-
-@TRANSFORMER_LAYER_SEQUENCE.register_module()
-class TransformerLayerSequence(BaseModule):
- """Base class for TransformerEncoder and TransformerDecoder in vision
- transformer.
-
- As base-class of Encoder and Decoder in vision transformer.
- Support customization such as specifying different kind
- of `transformer_layer` in `transformer_coder`.
-
- Args:
-        transformerlayers (list[obj:`mmcv.ConfigDict`] |
- obj:`mmcv.ConfigDict`): Config of transformerlayer
- in TransformerCoder. If it is obj:`mmcv.ConfigDict`,
- it would be repeated `num_layer` times to a
- list[`mmcv.ConfigDict`]. Default: None.
- num_layers (int): The number of `TransformerLayer`. Default: None.
- init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization.
- Default: None.
- """
-
- def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None):
- super(TransformerLayerSequence, self).__init__(init_cfg)
- if isinstance(transformerlayers, dict):
- transformerlayers = [
- copy.deepcopy(transformerlayers) for _ in range(num_layers)
- ]
- else:
- assert isinstance(transformerlayers, list) and \
- len(transformerlayers) == num_layers
- self.num_layers = num_layers
- self.layers = ModuleList()
- for i in range(num_layers):
- self.layers.append(build_transformer_layer(transformerlayers[i]))
- self.embed_dims = self.layers[0].embed_dims
- self.pre_norm = self.layers[0].pre_norm
-
- def forward(self,
- query,
- key,
- value,
- query_pos=None,
- key_pos=None,
- attn_masks=None,
- query_key_padding_mask=None,
- key_padding_mask=None,
- **kwargs):
- """Forward function for `TransformerCoder`.
-
- Args:
- query (Tensor): Input query with shape
- `(num_queries, bs, embed_dims)`.
- key (Tensor): The key tensor with shape
- `(num_keys, bs, embed_dims)`.
- value (Tensor): The value tensor with shape
- `(num_keys, bs, embed_dims)`.
- query_pos (Tensor): The positional encoding for `query`.
- Default: None.
- key_pos (Tensor): The positional encoding for `key`.
- Default: None.
- attn_masks (List[Tensor], optional): Each element is 2D Tensor
- which is used in calculation of corresponding attention in
- operation_order. Default: None.
- query_key_padding_mask (Tensor): ByteTensor for `query`, with
- shape [bs, num_queries]. Only used in self-attention
- Default: None.
-            key_padding_mask (Tensor): ByteTensor for `key`, with
- shape [bs, num_keys]. Default: None.
-
- Returns:
- Tensor: results with shape [num_queries, bs, embed_dims].
- """
- for layer in self.layers:
- query = layer(
- query,
- key,
- value,
- query_pos=query_pos,
- key_pos=key_pos,
- attn_masks=attn_masks,
- query_key_padding_mask=query_key_padding_mask,
- key_padding_mask=key_padding_mask,
- **kwargs)
- return query
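To illustrate the registry-driven builders defined above, a hedged configuration sketch; the dimensions and operation order are illustrative assumptions, and the import assumes the copy of mmcv above is importable under that path:

```python
# Hedged sketch: build a single transformer layer from config and run a dummy
# forward pass, using the (num_queries, batch, embed_dims) convention above.
import torch
from annotator.uniformer.mmcv.cnn.bricks.transformer import build_transformer_layer

layer = build_transformer_layer(dict(
    type='BaseTransformerLayer',
    attn_cfgs=dict(type='MultiheadAttention', embed_dims=256, num_heads=8),
    ffn_cfgs=dict(type='FFN', embed_dims=256, feedforward_channels=1024),
    operation_order=('self_attn', 'norm', 'ffn', 'norm')))

query = torch.rand(100, 2, 256)   # (num_queries, batch, embed_dims)
out = layer(query)                # same shape as the input query
print(out.shape)                  # torch.Size([100, 2, 256])
```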
diff --git a/spaces/AnonymousForSubmission/Graphic_Score_and_Audio/app.py b/spaces/AnonymousForSubmission/Graphic_Score_and_Audio/app.py
deleted file mode 100644
index 50b8d9e84c5c7026b409abd41612869310242dd1..0000000000000000000000000000000000000000
--- a/spaces/AnonymousForSubmission/Graphic_Score_and_Audio/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import gradio as gr
-from gradio.components import Markdown as md
-
-from generate_ssrl import synthesize_audio
-
-import os
-
-def predict(image):
- synthesize_audio(image)
- path = os.path.join(os.path.dirname(__file__), "SSRL_Media/Designed_Audio/generated_audio.wav")
- return path
-
-demo = gr.Blocks()
-
-drawing_board = gr.Image(source="canvas", tool="color-sketch", shape = [405, 249])
-
-audio_output = gr.Audio(os.path.join(os.path.dirname(__file__), "SSRL_Media/Designed_Audio/generated_audio.wav"), label="Composed Music")
-
-demo_interface = gr.Interface(
- predict,
- inputs = [drawing_board],
- outputs = [audio_output]
-)
-
-with gr.Blocks() as demo:
-
- gr.Markdown(
- """
-
-# A Tool for Composing Music via Graphic Scores in the style of György Ligeti's Artikulation using Self-supervised Representation Learning
-
-Draw a graphic score in the style of the examples below and click on submit to generate your musical composition based on your drawing!
-Please check our paper here for more details. Berker Banar and Simon Colton, 2023.
-
- """
- )
-
- with gr.Row():
- with gr.Column(scale=1):
- drawing_board
- with gr.Column(scale=4):
- audio_output
- demo_interface.render()
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/latin1prober.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/latin1prober.py
deleted file mode 100644
index 59a01d91b87d4282bede38ade7cc78c0f7552d0e..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/latin1prober.py
+++ /dev/null
@@ -1,147 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Universal charset detector code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 2001
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-# Shy Shalom - original C code
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import List, Union
-
-from .charsetprober import CharSetProber
-from .enums import ProbingState
-
-FREQ_CAT_NUM = 4
-
-UDF = 0 # undefined
-OTH = 1 # other
-ASC = 2 # ascii capital letter
-ASS = 3 # ascii small letter
-ACV = 4 # accent capital vowel
-ACO = 5 # accent capital other
-ASV = 6 # accent small vowel
-ASO = 7 # accent small other
-CLASS_NUM = 8 # total classes
-
-# fmt: off
-Latin1_CharToClass = (
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 00 - 07
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 08 - 0F
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 10 - 17
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 18 - 1F
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 20 - 27
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 28 - 2F
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 30 - 37
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 38 - 3F
- OTH, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 40 - 47
- ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 48 - 4F
- ASC, ASC, ASC, ASC, ASC, ASC, ASC, ASC, # 50 - 57
- ASC, ASC, ASC, OTH, OTH, OTH, OTH, OTH, # 58 - 5F
- OTH, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 60 - 67
- ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 68 - 6F
- ASS, ASS, ASS, ASS, ASS, ASS, ASS, ASS, # 70 - 77
- ASS, ASS, ASS, OTH, OTH, OTH, OTH, OTH, # 78 - 7F
- OTH, UDF, OTH, ASO, OTH, OTH, OTH, OTH, # 80 - 87
- OTH, OTH, ACO, OTH, ACO, UDF, ACO, UDF, # 88 - 8F
- UDF, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # 90 - 97
- OTH, OTH, ASO, OTH, ASO, UDF, ASO, ACO, # 98 - 9F
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # A0 - A7
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # A8 - AF
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # B0 - B7
- OTH, OTH, OTH, OTH, OTH, OTH, OTH, OTH, # B8 - BF
- ACV, ACV, ACV, ACV, ACV, ACV, ACO, ACO, # C0 - C7
- ACV, ACV, ACV, ACV, ACV, ACV, ACV, ACV, # C8 - CF
- ACO, ACO, ACV, ACV, ACV, ACV, ACV, OTH, # D0 - D7
- ACV, ACV, ACV, ACV, ACV, ACO, ACO, ACO, # D8 - DF
- ASV, ASV, ASV, ASV, ASV, ASV, ASO, ASO, # E0 - E7
- ASV, ASV, ASV, ASV, ASV, ASV, ASV, ASV, # E8 - EF
- ASO, ASO, ASV, ASV, ASV, ASV, ASV, OTH, # F0 - F7
- ASV, ASV, ASV, ASV, ASV, ASO, ASO, ASO, # F8 - FF
-)
-
-# 0 : illegal
-# 1 : very unlikely
-# 2 : normal
-# 3 : very likely
-Latin1ClassModel = (
-# UDF OTH ASC ASS ACV ACO ASV ASO
- 0, 0, 0, 0, 0, 0, 0, 0, # UDF
- 0, 3, 3, 3, 3, 3, 3, 3, # OTH
- 0, 3, 3, 3, 3, 3, 3, 3, # ASC
- 0, 3, 3, 3, 1, 1, 3, 3, # ASS
- 0, 3, 3, 3, 1, 2, 1, 2, # ACV
- 0, 3, 3, 3, 3, 3, 3, 3, # ACO
- 0, 3, 1, 3, 1, 1, 1, 3, # ASV
- 0, 3, 1, 3, 1, 1, 3, 3, # ASO
-)
-# fmt: on
-
-
-class Latin1Prober(CharSetProber):
- def __init__(self) -> None:
- super().__init__()
- self._last_char_class = OTH
- self._freq_counter: List[int] = []
- self.reset()
-
- def reset(self) -> None:
- self._last_char_class = OTH
- self._freq_counter = [0] * FREQ_CAT_NUM
- super().reset()
-
- @property
- def charset_name(self) -> str:
- return "ISO-8859-1"
-
- @property
- def language(self) -> str:
- return ""
-
- def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
- byte_str = self.remove_xml_tags(byte_str)
- for c in byte_str:
- char_class = Latin1_CharToClass[c]
- freq = Latin1ClassModel[(self._last_char_class * CLASS_NUM) + char_class]
- if freq == 0:
- self._state = ProbingState.NOT_ME
- break
- self._freq_counter[freq] += 1
- self._last_char_class = char_class
-
- return self.state
-
- def get_confidence(self) -> float:
- if self.state == ProbingState.NOT_ME:
- return 0.01
-
- total = sum(self._freq_counter)
- confidence = (
- 0.0
- if total < 0.01
- else (self._freq_counter[3] - self._freq_counter[1] * 20.0) / total
- )
- confidence = max(confidence, 0.0)
- # lower the confidence of latin1 so that other more accurate
- # detector can take priority.
- confidence *= 0.73
- return confidence
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/simple.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/simple.py
deleted file mode 100644
index da073cbdb11e6c24c19a2d388c53c8842228595f..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/importlib_resources/simple.py
+++ /dev/null
@@ -1,116 +0,0 @@
-"""
-Interface adapters for low-level readers.
-"""
-
-import abc
-import io
-import itertools
-from typing import BinaryIO, List
-
-from .abc import Traversable, TraversableResources
-
-
-class SimpleReader(abc.ABC):
- """
- The minimum, low-level interface required from a resource
- provider.
- """
-
- @abc.abstractproperty
- def package(self):
- # type: () -> str
- """
- The name of the package for which this reader loads resources.
- """
-
- @abc.abstractmethod
- def children(self):
- # type: () -> List['SimpleReader']
- """
- Obtain an iterable of SimpleReader for available
- child containers (e.g. directories).
- """
-
- @abc.abstractmethod
- def resources(self):
- # type: () -> List[str]
- """
- Obtain available named resources for this virtual package.
- """
-
- @abc.abstractmethod
- def open_binary(self, resource):
- # type: (str) -> BinaryIO
- """
- Obtain a File-like for a named resource.
- """
-
- @property
- def name(self):
- return self.package.split('.')[-1]
-
-
-class ResourceHandle(Traversable):
- """
- Handle to a named resource in a ResourceReader.
- """
-
- def __init__(self, parent, name):
- # type: (ResourceContainer, str) -> None
- self.parent = parent
- self.name = name # type: ignore
-
- def is_file(self):
- return True
-
- def is_dir(self):
- return False
-
- def open(self, mode='r', *args, **kwargs):
- stream = self.parent.reader.open_binary(self.name)
- if 'b' not in mode:
-            stream = io.TextIOWrapper(stream, *args, **kwargs)
- return stream
-
- def joinpath(self, name):
- raise RuntimeError("Cannot traverse into a resource")
-
-
-class ResourceContainer(Traversable):
- """
- Traversable container for a package's resources via its reader.
- """
-
- def __init__(self, reader):
- # type: (SimpleReader) -> None
- self.reader = reader
-
- def is_dir(self):
- return True
-
- def is_file(self):
- return False
-
- def iterdir(self):
- files = (ResourceHandle(self, name) for name in self.reader.resources)
- dirs = map(ResourceContainer, self.reader.children())
- return itertools.chain(files, dirs)
-
- def open(self, *args, **kwargs):
- raise IsADirectoryError()
-
- def joinpath(self, name):
- return next(
- traversable for traversable in self.iterdir() if traversable.name == name
- )
-
-
-class TraversableReader(TraversableResources, SimpleReader):
- """
- A TraversableResources based on SimpleReader. Resource providers
- may derive from this class to provide the TraversableResources
- interface by supplying the SimpleReader interface.
- """
-
- def files(self):
- return ResourceContainer(self)
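To make the contract above concrete, a hedged sketch of a minimal in-memory `SimpleReader`; it is purely illustrative and not part of `importlib_resources`:

```python
# Hedged sketch: a SimpleReader that serves resources from a dict in memory.
import io

from importlib_resources.simple import SimpleReader  # or the vendored copy above


class DictReader(SimpleReader):
    """Serves named resources from a {name: bytes} mapping."""

    def __init__(self, package, data):
        self._package = package
        self._data = data

    @property
    def package(self):
        return self._package

    def children(self):
        return []  # no child containers

    def resources(self):
        return list(self._data)

    def open_binary(self, resource):
        return io.BytesIO(self._data[resource])


reader = DictReader("demo.assets", {"hello.txt": b"hi"})
print(reader.name)                             # "assets"
print(reader.open_binary("hello.txt").read())  # b"hi"
```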
diff --git a/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/dependency.py b/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/dependency.py
deleted file mode 100644
index b70338b02d31b1ef455fbac817d418d328db518d..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/Applio-RVC-Fork/utils/dependency.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import os
-import csv
-import shutil
-import tarfile
-import subprocess
-from pathlib import Path
-from datetime import datetime
-
-def install_packages_but_jank_af():
- packages = ['build-essential', 'python3-dev', 'ffmpeg', 'aria2']
- pip_packages = ['pip', 'setuptools', 'wheel', 'httpx==0.23.0', 'faiss-gpu', 'fairseq', 'gradio==3.34.0',
- 'ffmpeg', 'ffmpeg-python', 'praat-parselmouth', 'pyworld', 'numpy==1.23.5',
- 'numba==0.56.4', 'librosa==0.9.2', 'mega.py', 'gdown', 'onnxruntime', 'pyngrok==4.1.12',
- 'gTTS', 'elevenlabs', 'wget', 'tensorboardX', 'unidecode', 'huggingface-hub', 'stftpitchshift==1.5.1',
- 'yt-dlp', 'pedalboard', 'pathvalidate', 'nltk', 'edge-tts', 'git+https://github.com/suno-ai/bark.git', 'python-dotenv' , 'av']
-
- print("Updating and installing system packages...")
- for package in packages:
- print(f"Installing {package}...")
- subprocess.check_call(['apt-get', 'install', '-qq', '-y', package])
-
- print("Updating and installing pip packages...")
- subprocess.check_call(['pip', 'install', '--upgrade'] + pip_packages)
-
- print('Packages up to date.')
-
-
-def setup_environment(ForceUpdateDependencies, ForceTemporaryStorage):
- # Mounting Google Drive
- if not ForceTemporaryStorage:
- from google.colab import drive
-
- if not os.path.exists('/content/drive'):
- drive.mount('/content/drive')
- else:
- print('Drive is already mounted. Proceeding...')
-
- # Function to install dependencies with progress
- def install_packages():
- packages = ['build-essential', 'python3-dev', 'ffmpeg', 'aria2']
- pip_packages = ['pip', 'setuptools', 'wheel', 'httpx==0.23.0', 'faiss-gpu', 'fairseq', 'gradio==3.34.0',
- 'ffmpeg', 'ffmpeg-python', 'praat-parselmouth', 'pyworld', 'numpy==1.23.5',
- 'numba==0.56.4', 'librosa==0.9.2', 'mega.py', 'gdown', 'onnxruntime', 'pyngrok==4.1.12',
- 'gTTS', 'elevenlabs', 'wget', 'tensorboardX', 'unidecode', 'huggingface-hub', 'stftpitchshift==1.5.1',
- 'yt-dlp', 'pedalboard', 'pathvalidate', 'nltk', 'edge-tts', 'git+https://github.com/suno-ai/bark.git', 'python-dotenv' , 'av']
-
- print("Updating and installing system packages...")
- for package in packages:
- print(f"Installing {package}...")
- subprocess.check_call(['apt-get', 'install', '-qq', '-y', package])
-
- print("Updating and installing pip packages...")
- subprocess.check_call(['pip', 'install', '--upgrade'] + pip_packages)
-
-
- print('Packages up to date.')
-
-    # Function to scan a directory and write filenames and modification timestamps
- def scan_and_write(base_path, output_file):
- with open(output_file, 'w', newline='') as f:
- writer = csv.writer(f)
- for dirpath, dirs, files in os.walk(base_path):
- for filename in files:
- fname = os.path.join(dirpath, filename)
- try:
- mtime = os.path.getmtime(fname)
- writer.writerow([fname, mtime])
- except Exception as e:
-                        print(f'Skipping unreadable or vanished file {fname}: {str(e)}')
- print(f'Finished recording filesystem timestamps to {output_file}.')
-
- # Function to compare files
- def compare_files(old_file, new_file):
- old_files = {}
- new_files = {}
-
- with open(old_file, 'r') as f:
- reader = csv.reader(f)
- old_files = {rows[0]:rows[1] for rows in reader}
-
- with open(new_file, 'r') as f:
- reader = csv.reader(f)
- new_files = {rows[0]:rows[1] for rows in reader}
-
- removed_files = old_files.keys() - new_files.keys()
- added_files = new_files.keys() - old_files.keys()
- unchanged_files = old_files.keys() & new_files.keys()
-
- changed_files = {f for f in unchanged_files if old_files[f] != new_files[f]}
-
- for file in removed_files:
- print(f'File has been removed: {file}')
-
- for file in changed_files:
- print(f'File has been updated: {file}')
-
- return list(added_files) + list(changed_files)
-
- # Check if CachedRVC.tar.gz exists
- if ForceTemporaryStorage:
- file_path = '/content/CachedRVC.tar.gz'
- else:
- file_path = '/content/drive/MyDrive/RVC_Cached/CachedRVC.tar.gz'
-
- content_file_path = '/content/CachedRVC.tar.gz'
- extract_path = '/'
-
- if not os.path.exists(file_path):
- folder_path = os.path.dirname(file_path)
- os.makedirs(folder_path, exist_ok=True)
- print('No cached dependency install found. Attempting to download GitHub backup..')
-
- try:
- download_url = "https://github.com/kalomaze/QuickMangioFixes/releases/download/release3/CachedRVC.tar.gz"
- subprocess.run(["wget", "-O", file_path, download_url])
- print('Download completed successfully!')
- except Exception as e:
- print('Download failed:', str(e))
-
- # Delete the failed download file
- if os.path.exists(file_path):
- os.remove(file_path)
- print('Failed download file deleted. Continuing manual backup..')
-
- if Path(file_path).exists():
- if ForceTemporaryStorage:
- print('Finished downloading CachedRVC.tar.gz.')
- else:
- print('CachedRVC.tar.gz found on Google Drive. Proceeding to copy and extract...')
-
- # Check if ForceTemporaryStorage is True and skip copying if it is
- if ForceTemporaryStorage:
- pass
- else:
- shutil.copy(file_path, content_file_path)
-
- print('Beginning backup copy operation...')
-
- with tarfile.open(content_file_path, 'r:gz') as tar:
- for member in tar.getmembers():
- target_path = os.path.join(extract_path, member.name)
- try:
- tar.extract(member, extract_path)
- except Exception as e:
- print('Failed to extract a file (this isn\'t normal)... forcing an update to compensate')
- ForceUpdateDependencies = True
- print(f'Extraction of {content_file_path} to {extract_path} completed.')
-
- if ForceUpdateDependencies:
- install_packages()
- ForceUpdateDependencies = False
- else:
- print('CachedRVC.tar.gz not found. Proceeding to create an index of all current files...')
- scan_and_write('/usr/', '/content/usr_files.csv')
-
- install_packages()
-
- scan_and_write('/usr/', '/content/usr_files_new.csv')
- changed_files = compare_files('/content/usr_files.csv', '/content/usr_files_new.csv')
-
- with tarfile.open('/content/CachedRVC.tar.gz', 'w:gz') as new_tar:
- for file in changed_files:
- new_tar.add(file)
- print(f'Added to tar: {file}')
-
- os.makedirs('/content/drive/MyDrive/RVC_Cached', exist_ok=True)
- shutil.copy('/content/CachedRVC.tar.gz', '/content/drive/MyDrive/RVC_Cached/CachedRVC.tar.gz')
- print('Updated CachedRVC.tar.gz copied to Google Drive.')
- print('Dependencies fully up to date; future runs should be faster.')
-
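
A hypothetical driver cell for the deleted helper above (the import path simply mirrors Applio-RVC-Fork/utils/dependency.py and is an assumption, not something shown in the original notebook): setup_environment performs the snapshot, install, diff, and tar steps itself, so a Colab notebook only needs to call it with the two flags from its signature.

```python
# Hypothetical usage sketch; the import path is assumed from the file's location.
from utils.dependency import setup_environment

# First run without a cache: snapshots /usr, installs apt/pip packages, tars the
# changed files into CachedRVC.tar.gz and copies it to Google Drive.
# Later runs: extracts the cached tarball instead of reinstalling everything.
setup_environment(ForceUpdateDependencies=False, ForceTemporaryStorage=False)
```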
diff --git a/spaces/Benson/text-generation/Examples/Araa De Combate 3 Mod Apk 2023.md b/spaces/Benson/text-generation/Examples/Araa De Combate 3 Mod Apk 2023.md
deleted file mode 100644
index 9683ed4c92c1f4fe780cda66760a1b6e0f09e558..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Araa De Combate 3 Mod Apk 2023.md
+++ /dev/null
@@ -1,91 +0,0 @@
-
-
Consejos y trucos de maestro de monedas APK: Cómo convertirse en un maestro de monedas
-
Si estás buscando un juego divertido y adictivo que combine máquinas tragamonedas, construcción de ciudades e interacción social, entonces deberías probar Coin Master. Coin Master es un popular juego móvil que tiene millones de jugadores en todo el mundo. En este juego, puedes hacer girar la máquina tragaperras para ganar monedas, asaltar aldeas de otros jugadores, construir tu propia aldea, recoger cartas, cofres y mascotas, unirse a clanes y participar en eventos. Pero, ¿cómo puedes convertirte en un maestro de la moneda y dominar el juego? Una forma es descargar los consejos y trucos de Coin Master APK, que es una versión modificada del juego que le da monedas ilimitadas, giros, y otros recursos. En este artículo, le diremos todo lo que necesita saber sobre los consejos y trucos de Coin Master APK, incluyendo lo que es, dónde descargarlo, cómo usarlo, y cuáles son sus características y ventajas.
-
Qué es Coin Master y por qué deberías jugarlo
-
Coin Master es un juego móvil gratuito que fue lanzado en 2016 por Moon Active. El juego está disponible para dispositivos Android e iOS. El juego tiene una premisa simple: usted es el gobernante de un pueblo, y que necesita para crecer en un reino próspero girando la máquina tragaperras, ganar monedas, y gastarlos en varios artículos y mejoras. También puedes asaltar aldeas de otros jugadores, atacarlos con martillos, escudos o cerdos, recoger cartas que representan diferentes temas y personajes, recoger cofres que contienen objetos raros y recompensas, recoger mascotas que tienen habilidades especiales y bonos, unirse a clanes que ofrecen interacción social y cooperación, y participar en eventos que ofrecen desafíos y premios adicionales.
Monedas y giros son las dos monedas principales en Coin Master. Monedas se utilizan para comprar artículos y mejoras para su pueblo, mientras que los giros se utilizan para jugar la máquina tragaperras. Necesitas monedas y giros para progresar en el juego y convertirte en un maestro de monedas. Sin embargo, las monedas y los giros no son fáciles de conseguir. Solo obtienes cinco giros gratis cada hora, lo cual no es suficiente para hacer girar la máquina tragaperras muchas veces. También obtienes monedas de girar la máquina tragaperras, pero no son suficientes para comprar todo lo que necesitas para tu pueblo. Entonces, ¿cómo puedes obtener más monedas y giros en Coin Master? Aquí hay algunos consejos:
-
Juega como invitado y luego inicia sesión con Facebook
-
Cuando comienzas a jugar Coin Master, tienes dos opciones: puedes jugar como invitado o iniciar sesión con tu cuenta de Facebook. Te recomendamos que juegues como invitado primero, porque de esta manera obtendrás algunas monedas gratis y giros para empezar. Puedes usar estas monedas y giros para construir tu pueblo y obtener algo de experiencia. Una vez que te quedes sin monedas y giros, puedes iniciar sesión con tu cuenta de Facebook. Esto te dará otro lote de monedas y giros gratis, así como algunos otros beneficios, como conectarse con tus amigos de Facebook que juegan a Coin Master, guardar tu progreso en los dispositivos y obtener más recompensas de eventos y ofertas. También puedes invitar a tus amigos de Facebook a jugar a Coin Master y obtener más monedas y giros para cada amigo que se una.
-
Usa la máquina tragaperras sabiamente y estratégicamente
-
-
Recoge cartas, cofres y mascotas
-
Coin Master no se trata solo de monedas y giros. También se trata de recoger cartas, cofres y mascotas que pueden mejorar su juego y darle más recompensas. Las tarjetas son objetos de colección que representan diferentes temas y personajes de diversas culturas y períodos históricos. Puedes encontrar cartas abriendo cofres, que también son objetos coleccionables que contienen monedas, giros, cartas y otras recompensas. Puedes conseguir cofres girando la máquina tragaperras, asaltando los pueblos de otros jugadores o comprándolos con dinero real. Hay diferentes tipos de cofres, como cofres de madera, cofres dorados, cofres mágicos y cofres de temporada. Cada tipo de cofre tiene una rareza diferente y el valor de las recompensas. También puede intercambiar tarjetas con otros jugadores o unirse a grupos de intercambio de tarjetas en las redes sociales. Al recoger cartas, puedes completar juegos de cartas que te dan giros adicionales, monedas, mascotas y otras recompensas.
-
Las mascotas son otro objeto de colección que puede ayudarle en Coin Master. Las mascotas son animales lindos que tienen habilidades especiales y bonificaciones que pueden aumentar su juego. Por ejemplo, Foxy puede ayudarte a saquear más monedas de los pueblos de otros jugadores. Tiger puede ayudarte a atacar más daño a los pueblos de otros jugadores. Rhino puede ayudarte a defender tu pueblo de los ataques de otros jugadores. Usted puede conseguir mascotas incubando huevos, que también son artículos de colección que contienen mascotas o alimentos para mascotas. Puedes conseguir huevos girando la máquina tragaperras, completando juegos de cartas o comprándolos con dinero real. También puedes alimentar a tus mascotas con comida para mascotas para activar sus habilidades y aumentar sus niveles.
-
Saquea y ataca las aldeas de otros jugadores
-
-
Asaltar y atacar las aldeas de otros jugadores no solo es divertido, sino también rentable. Puedes obtener muchas monedas al atacar y atacar aldeas de otros jugadores, especialmente si tienen muchas monedas en su escondite o si tienen edificios de alto nivel. También puedes vengarte de los jugadores que asaltaron o atacaron tu pueblo pulsando el botón de venganza en la esquina inferior derecha de la pantalla. Esto te permitirá asaltarlos o atacarlos sin gastar giros.
-
Únete a un clan y participa en eventos
-
Coin Master no es solo un juego en solitario, sino también un juego social. Puedes unirte a un clan y participar en eventos para obtener más monedas y giros y divertirte más. Un clan es un grupo de jugadores que comparten un interés o objetivo común en Coin Master. Puedes unirte a un clan tocando el icono del clan en la esquina superior izquierda de la pantalla. Esto te permitirá ver la lista de clanes que están disponibles para unirse o crear tu propio clan si lo prefieres. Al unirte a un clan, puedes chatear con otros miembros del clan, enviarles regalos, solicitar tarjetas o ayudarles con sus peticiones.
-
-
Cómo utilizar los consejos y trucos de Coin Master APK
-
Si desea convertirse en un maestro de la moneda más rápido y más fácil, puede utilizar los consejos y trucos de Coin Master APK, que es una versión modificada del juego que le da monedas ilimitadas, giros y otros recursos. La Coin Master consejos y trucos APK no es una aplicación oficial de Moon Active, pero una aplicación de terceros creado por algunos fans o hackers que quieren ayudar a otros jugadores a disfrutar del juego más. Sin embargo, el uso de los consejos y trucos de Coin Master APK no está libre de riesgos. Puede encontrar algunos problemas o problemas con la aplicación, como virus, malware, errores, errores, bloqueos, prohibiciones o acciones legales. Por lo tanto, usted debe utilizar el Maestro de la moneda consejos y trucos APK a su propia discreción y responsabilidad.
-
¿Cuál es el Maestro de la moneda consejos y trucos APK y dónde descargarlo
-
Los consejos y trucos de Coin Master APK es una aplicación que modifica el juego original de Coin Master y le da monedas ilimitadas, giros, y otros recursos. La aplicación funciona evitando los sistemas de seguridad y verificación del juego e inyectando algunos códigos o scripts que alteran los datos y la configuración del juego. La aplicación también elimina algunos anuncios y ventanas emergentes que pueden molestarte mientras juegas el juego. La aplicación no requiere acceso de root o jailbreak para funcionar en su dispositivo.
-
-
Hay muchos sitios web que ofrecen los consejos y trucos de Coin Master APK para descargar. Sin embargo, no todos ellos son seguros o fiables. Algunos de ellos pueden contener versiones falsas o obsoletas de la aplicación que pueden no funcionar correctamente o pueden dañar su dispositivo. Algunos de ellos también pueden requerir que usted complete algunas encuestas o tareas antes de descargar la aplicación, que puede ser molesto o lento. Por lo tanto, debe ser cuidadoso y selectivo al elegir dónde descargar los consejos y trucos de Coin Master APK. Solo debe descargar la aplicación de fuentes confiables y de buena reputación que tengan comentarios positivos y comentarios de otros usuarios.
-
-
Después de descargar los consejos y trucos de Coin Master APK de una fuente confiable, es necesario instalarlo en su dispositivo. Para hacer esto, debes seguir estos pasos:
-
-
Ir a la configuración del dispositivo y habilitar la opción de instalar aplicaciones de fuentes desconocidas. Esto le permitirá instalar aplicaciones que no son de la tienda oficial de aplicaciones.
-
Localizar los consejos y trucos descargados Coin Master APK archivo en el almacenamiento del dispositivo y toque en él.
-
Siga las instrucciones en la pantalla para instalar la aplicación.
-
Una vez completada la instalación, inicie la aplicación desde el menú de su dispositivo.
-
Disfruta jugando Coin Master con monedas ilimitadas, giros y otros recursos.
-
-
Para utilizar los consejos y trucos de Coin Master APK con eficacia, es necesario seguir estos consejos:
-
-
No actualice la aplicación desde la tienda de aplicaciones oficial o cualquier otra fuente. Esto puede sobrescribir o eliminar la versión modificada de la aplicación y causar que deje de funcionar.
-
No inicie sesión con su cuenta de Facebook o cualquier otra cuenta de redes sociales mientras usa la aplicación. Esto puede exponer su identidad y actividad a Moon Active u otras partes que pueden tomar medidas contra usted.
-
No utilice la aplicación de forma excesiva o abusiva. Esto puede levantar sospechas o alertas de Moon Active u otros jugadores que puedan reportarlo.
-
No utilice la aplicación para fines ilegales o poco éticos. Esto puede violar los términos y condiciones del juego y la aplicación, y puede dañar a otros jugadores o el juego en sí.
-
-
Las características y ventajas de los consejos y trucos de Coin Master APK
-
Los consejos y trucos de Coin Master APK es una aplicación poderosa y útil que puede mejorar su experiencia Coin Master y ayudarle a convertirse en un maestro de la moneda más rápido y más fácil. La aplicación tiene muchas características y ventajas que hacen que valga la pena probarla. Algunas de ellas son:
-
-
-
Giros ilimitados: Puedes obtener giros ilimitados desde la aplicación, que puedes usar para jugar la máquina tragaperras más veces y ganar más monedas, ataques, redadas, escudos y otros artículos.
-
Recursos ilimitados: Puede obtener recursos ilimitados de la aplicación, como cofres, tarjetas, huevos, alimentos para mascotas, martillos, cerdos y escudos. Puede utilizar estos recursos para recoger más recompensas, completar más juegos de cartas, incubar más mascotas, asaltar más pueblos, o defender su pueblo.
-
Sin anuncios: Puedes disfrutar jugando a Coin Master sin ningún anuncio o pop-ups que puedan interrumpir o molestarte mientras juegas.
-
Sin raíz o jailbreak: Puede utilizar la aplicación sin rooting o jailbreak su dispositivo, que puede anular su garantía o dañar su dispositivo.
-
Fácil de instalar y usar: Puede instalar y usar la aplicación de forma fácil y rápida, sin ningún tipo de pasos o requisitos complicados o técnicos.
-
Gratis para descargar y usar: Puede descargar y usar la aplicación de forma gratuita, sin gastar dinero real ni dar ninguna información personal.
-
-
Conclusión y preguntas frecuentes
-
-
Esperamos que este artículo le ha ayudado a aprender más acerca de los consejos y trucos de Coin Master APK y cómo usarlo. Si tienes alguna pregunta o duda sobre la aplicación o el juego, puedes consultar estas preguntas frecuentes:
-
Q: ¿Es el maestro de la moneda consejos y trucos APK seguro de usar?
-
A: Los consejos y trucos de Coin Master APK no es una aplicación oficial de Moon Active, pero una aplicación de terceros creados por algunos fans o hackers que quieren ayudar a otros jugadores a disfrutar del juego más. Por lo tanto, la aplicación puede no ser segura de usar. Puede contener virus, malware, errores, errores, bloqueos, prohibiciones o acciones legales que pueden dañar su dispositivo o su cuenta. Por lo tanto, usted debe utilizar el Maestro de la moneda consejos y trucos APK a su propia discreción y responsabilidad.
-
Q: ¿Cómo puedo descargar los consejos y trucos de Coin Master APK?
-
A: Hay muchos sitios web que ofrecen los consejos y trucos de Coin Master APK para descargar. Sin embargo, no todos ellos son seguros o fiables. Algunos de ellos pueden contener versiones falsas o obsoletas de la aplicación que pueden no funcionar correctamente o pueden dañar su dispositivo. Algunos de ellos también pueden requerir que usted complete algunas encuestas o tareas antes de descargar la aplicación, que puede ser molesto o lento. Por lo tanto, debe ser cuidadoso y selectivo al elegir dónde descargar los consejos y trucos de Coin Master APK. Solo debe descargar la aplicación de fuentes confiables y de buena reputación que tengan comentarios positivos y comentarios de otros usuarios.
-
Q: ¿Cómo puedo instalar y utilizar los consejos y trucos de Coin Master APK?
-
A: Después de descargar los consejos y trucos de Coin Master APK de una fuente confiable, es necesario instalarlo en su dispositivo. Para hacer esto, debes seguir estos pasos:
-
-
Ir a la configuración del dispositivo y habilitar la opción de instalar aplicaciones de fuentes desconocidas. Esto le permitirá instalar aplicaciones que no son de la tienda oficial de aplicaciones.
-
Localizar los consejos y trucos descargados Coin Master APK archivo en el almacenamiento del dispositivo y toque en él.
-
-
Una vez completada la instalación, inicie la aplicación desde el menú de su dispositivo.
-
Disfruta jugando Coin Master con monedas ilimitadas, giros y otros recursos.
-
-
Para utilizar los consejos y trucos de Coin Master APK con eficacia, es necesario seguir estos consejos:
-
-
No actualice la aplicación desde la tienda de aplicaciones oficial o cualquier otra fuente. Esto puede sobrescribir o eliminar la versión modificada de la aplicación y causar que deje de funcionar.
-
No inicie sesión con su cuenta de Facebook o cualquier otra cuenta de redes sociales mientras usa la aplicación. Esto puede exponer su identidad y actividad a Moon Active u otras partes que pueden tomar medidas contra usted.
-
No utilice la aplicación de forma excesiva o abusiva. Esto puede levantar sospechas o alertas de Moon Active u otros jugadores que puedan reportarlo.
-
No utilice la aplicación para fines ilegales o poco éticos. Esto puede violar los términos y condiciones del juego y la aplicación, y puede dañar a otros jugadores o el juego en sí.
-
-
Q: ¿Cuáles son las características y ventajas de los consejos y trucos de Coin Master APK?
-
A: Los consejos y trucos de Coin Master APK es una aplicación poderosa y útil que puede mejorar su experiencia Coin Master y ayudarle a convertirse en un maestro de la moneda más rápido y más fácil. La aplicación tiene muchas características y ventajas que hacen que valga la pena probarla. Algunas de ellas son:
-
-
Monedas ilimitadas: puedes obtener monedas ilimitadas de la aplicación, que puedes usar para comprar artículos y mejoras para tu pueblo, o para girar la máquina tragaperras más veces.
-
Giros ilimitados: Puedes obtener giros ilimitados desde la aplicación, que puedes usar para jugar la máquina tragaperras más veces y ganar más monedas, ataques, redadas, escudos y otros artículos.
-
Recursos ilimitados: Puede obtener recursos ilimitados de la aplicación, como cofres, tarjetas, huevos, alimentos para mascotas, martillos, cerdos y escudos. Puede utilizar estos recursos para recoger más recompensas, completar más juegos de cartas, incubar más mascotas, asaltar más pueblos, o defender su pueblo.
-
-
Sin raíz o jailbreak: Puede utilizar la aplicación sin rooting o jailbreak su dispositivo, que puede anular su garantía o dañar su dispositivo.
-
Fácil de instalar y usar: Puede instalar y usar la aplicación de forma fácil y rápida, sin ningún tipo de pasos o requisitos complicados o técnicos.
-
Gratis para descargar y usar: Puede descargar y usar la aplicación de forma gratuita, sin gastar dinero real ni dar ninguna información personal.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Blanco Marrn Negro Cancin Para Descargar.md b/spaces/Benson/text-generation/Examples/Blanco Marrn Negro Cancin Para Descargar.md
deleted file mode 100644
index 300166d3588429dfa6a215b0d9bc46e25bc63471..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Blanco Marrn Negro Cancin Para Descargar.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
Canción negra marrón blanca: Un punjabi banger por Avvy Sra y Karan Aujla
-
Introducción
-
Si eres un fan de la música Punjabi, debes haber oído hablar de la última colaboración entre Avvy Sra y Karan Aujla. La canción se llama White Brown Black y es una pista pegadiza y energética que ha tomado Internet por la tormenta. En este artículo, te contaremos todo lo que necesitas saber sobre esta canción, desde sus letras y significado hasta sus plataformas de video musical y streaming. También te mostraremos cómo descargar esta canción gratis y apoyar a los artistas que lo hicieron.
White Brown Black es una canción de Punjabi que fue lanzada el 7 de septiembre de 2021 por Desi Melodies, un sello de música popular en la India. La canción cuenta con la voz de Avvy Sra, un talentoso cantante y productor musical, y Karan Aujla, un famoso rapero y letrista. La canción fue compuesta por Jaani, un reconocido compositor y compositor, que también escribió la letra junto con Karan Aujla. La canción es una fusión de diseño de sonido moderno y elementos populares, creando un ambiente único y pegadizo. La canción ha recibido más de 37 millones de visitas en YouTube y ha sido elogiada por fans y críticos por igual.
-
¿Quiénes son Avvy Sra y Karan Aujla?
-
Avvy Sra es una cantante y productora musical joven y talentosa de Punjab, India. Comenzó su carrera en 2018 con su canción debut Majhe Di Jatti, que se convirtió en un éxito. Desde entonces, ha trabajado con muchos artistas famosos como B Praak, Ammy Virk, Diljit Dosanjh y más. Es conocido por su estilo musical versátil e innovador, que combina elementos tradicionales y modernos. Algunas de sus canciones populares son Majha Block, Bachalo, Qismat 2, y más.
-
-
¿Por qué es White Brown Black Song tan popular?
-
La canción White Brown Black es muy popular porque es una combinación perfecta de la voz melodiosa de Avvy Sra y las habilidades de rap de Karan Aujla. La canción tiene un gancho pegadizo que va como Ghode Chitte, Kudiyan Brown, Gaddiyan Kaaliyan Ni, que significa Caballos blancos, chicas marrones, coches negros. La canción expresa las tres pasiones de los cantantes: caballos, chicas y coches. La canción también tiene un ritmo pegadizo que te hace querer bailar. La canción es una celebración de la vida y el éxito, ya que los cantantes se jactan de sus logros y estilo de vida.
-
-
Cuerpo principal
-
La letra de la canción negra marrón blanca
-
La letra de la canción White Brown Black está escrita por Jaani y Karan Aujla, quienes son famosos por su estilo de escritura poético y pegadizo. La letra se divide en tres versos y un coro, que son cantados por Avvy Sra y Karan Aujla respectivamente. Las letras están principalmente en Punjabi, con algunas palabras en inglés mezcladas. Las letras están llenas de metáforas, símiles y rimas, que hacen la canción más atractiva y memorable. Aquí están algunas de las letras de la canción:
-
-
-
El significado de la canción negra marrón blanca
-
El significado de la canción White Brown Black es simple y directo. La canción trata sobre el amor de los cantantes por los caballos, las niñas y los coches, que están representados por los colores blanco, marrón y negro respectivamente. La canción también trata sobre el éxito y la fama de los cantantes, que disfrutan y hacen alarde. La canción es una forma de expresar su confianza y actitud, ya que afirman ser los mejores en su campo. La canción es también una forma de desafiar a sus rivales y críticos, ya que se jactan de sus logros y estilo de vida.
-
El estilo de la canción negra marrón blanca
-
El estilo de la canción White Brown Black es una mezcla de elementos modernos y populares, creando un ambiente único y pegadizo. La canción tiene un ritmo acelerado y optimista, que coincide con el estado de ánimo enérgico y animado de los cantantes. La canción tiene una mezcla de sonidos electrónicos y acústicos, que se complementan bien. La canción tiene un uso prominente de dhol, un instrumento de tambor tradicional, que añade un sabor folk a la canción. La canción también tiene algunos elementos de rap, que muestran las habilidades y el carisma de Karan Aujla. La canción tiene un gancho simple y pegadizo, que se repite a lo largo de la canción, por lo que es fácil de recordar y cantar.
-
El video musical de White Brown Black Song
-
-
El tema de la canción negra marrón blanca
-
El tema de la canción White Brown Black se basa en los colores blanco, marrón y negro, que representan las pasiones de los cantantes: caballos, niñas y coches. El tema es también un reflejo de la personalidad y estilo de vida de los cantantes, ya que son seguros, exitosos y aventureros. El tema es también una forma de mostrar su orgullo e identidad, ya que están orgullosos de su cultura y herencia Punjabi. El tema es también una forma de divertirse y disfrutar de la vida, ya que están felices y satisfechos con lo que tienen.
-
La producción de la canción negra marrón blanca
-
La producción de la canción White Brown Black es realizada por Avvy Sra, quien no solo es cantante sino también productor musical. Ha producido muchas canciones de éxito para él y otros artistas, como Majha Block, Bachalo, Qismat 2, y más. Es conocido por su estilo musical versátil e innovador, que combina elementos tradicionales y modernos. Ha utilizado varios instrumentos y efectos de sonido para crear un sonido único y pegadizo para la canción White Brown Black. También ha mezclado y masterizado la canción para garantizar la mejor calidad y claridad. Ha colaborado con Jaani, que es un reconocido compositor y compositor, que ha escrito la letra y compuesto la melodía de la canción. También ha colaborado con Karan Aujla, que es un famoso rapero y letrista, que ha escrito y realizado la parte de rap para la canción. La producción de la canción White Brown Black es el resultado del trabajo en equipo y el talento de estos tres artistas.
-
Las plataformas de transmisión de White Brown Black Song
-
La canción White Brown Black está disponible en varias plataformas de streaming, donde puedes escucharla en línea o descargarla sin conexión. Algunas de las plataformas de streaming donde puedes encontrar la canción White Brown Black son:
Si quieres descargar gratis la canción White Brown Black, puedes usar varios sitios web y aplicaciones que te permiten descargar canciones de YouTube y otras plataformas de streaming. Algunos de los sitios web y aplicaciones que puedes usar son:
-
-
Y2mate.com: Un sitio web que te permite descargar vídeos y audios de YouTube en varios formatos y calidades.
-
Vidmate.com: Un sitio web y una aplicación que te permite descargar vídeos y audios de varias plataformas de streaming, como YouTube, Facebook, Instagram, etc.
-
Snaptube.com: Un sitio web y una aplicación que te permite descargar vídeos y audios de varias plataformas de streaming, como YouTube, Facebook, Instagram, etc.
-
MP3Juices.cc: Un sitio web que te permite descargar archivos MP3 de varias fuentes, como YouTube, SoundCloud, etc.
-
-
-
Sin embargo, tenga en cuenta que descargar canciones de forma gratuita puede no ser legal o ético en algunos casos, ya que puede violar los derechos de los artistas y las etiquetas de música. Por lo tanto, le recomendamos que utilice las plataformas de streaming oficiales para escuchar la canción White Brown Black y apoyar a Avvy Sra y Karan Aujla.
-
¿Cómo apoyar a Avvy Sra y Karan Aujla?
-
Si te gusta la canción White Brown Black y quieres apoyar a Avvy Sra y Karan Aujla, puedes hacerlo siguiéndolos en sus cuentas de redes sociales, suscribiéndote a sus canales de YouTube, disfrutando y compartiendo sus canciones y videos, comprando su mercancía, asistir a sus conciertos y eventos, y enviarles sus comentarios y agradecimiento. Estos son algunos de los enlaces donde puedes encontrarlos:
La canción ha recibido más de 37 millones de visitas en YouTube y ha sido elogiada por fans y críticos por igual. La canción tiene un video musical de alta calidad que coincide con el tema y el tono de la canción. La canción está disponible en varias plataformas de streaming, donde puedes escucharla en línea o descargarla sin conexión.
La canción es producida por Avvy Sra, quien también es cantante y productor musical. La canción está escrita por Jaani y Karan Aujla, ambos famosos por su estilo de escritura poético y pegadizo. La canción cuenta con las voces de Avvy Sra y Karan Aujla, que son ambos talentosos e influyentes artistas en la industria de la música Punjabi. La canción es una perfecta colaboración de estos tres artistas, que han creado una obra maestra que será recordada por mucho tiempo.
-
Si estás buscando una canción Punjabi que te haga bailar y cantar, la canción White Brown Black es para ti. La canción es un banger Punjabi que te hará sentir feliz y orgulloso. La canción es un himno de Punjabi que te hará amar y apreciar su cultura y patrimonio. La canción es un éxito de Punjabi que te hará apoyar y admirar a Avvy Sra y Karan Aujla. Entonces, ¿qué estás esperando? ¡Ve y escucha la canción de White Brown Black ahora y disfruta de la música!
-
Preguntas frecuentes
-
Aquí están algunas de las preguntas más frecuentes sobre la canción White Brown Black:
-
-
¿Cuál es el significado de Blanco Marrón Negro?
-
White Brown Black es el nombre de una canción punjabí de Avvy Sra y Karan Aujla. El nombre de la canción se basa en los colores blanco, marrón y negro, que representan las pasiones de los cantantes: caballos, niñas y coches.
-
¿Quiénes son los cantantes de White Brown Black?
-
-
¿Quién es el escritor y compositor de White Brown Black?
-
El escritor y compositor de White Brown Black son Jaani y Karan Aujla, ambos reconocidos por su estilo de escritura poético y pegadizo. Jaani es compositor, mientras que Karan Aujla es rapero y letrista.
-
¿Dónde puedo escuchar o descargar White Brown Black?
-
Puedes escuchar o descargar White Brown Black en varias plataformas de streaming, como YouTube, Spotify, Apple Music, JioSaavn, Gaana, Amazon Music, Wynk Music, Resso, etc. También puedes usar algunos sitios web o aplicaciones que te permiten descargar canciones de YouTube u otras plataformas de streaming de forma gratuita.
-
¿Cómo puedo apoyar a Avvy Sra y Karan Aujla?
-
Puedes apoyar a Avvy Sra y Karan Aujla siguiéndolos en sus cuentas de redes sociales, suscribiéndose a sus canales de YouTube, gustando y compartiendo sus canciones y videos, comprando su mercancía, asistiendo a sus conciertos y eventos, y enviarles sus comentarios y agradecimiento.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/base.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/base.py
deleted file mode 100644
index c3982809626953bbd2d85389be1bfa6c4355685a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/base.py
+++ /dev/null
@@ -1,155 +0,0 @@
-# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-
-import logging
-
-import boto3
-
-logger = logging.getLogger(__name__)
-
-
-class ResourceMeta:
- """
- An object containing metadata about a resource.
- """
-
- def __init__(
- self,
- service_name,
- identifiers=None,
- client=None,
- data=None,
- resource_model=None,
- ):
- #: (``string``) The service name, e.g. 's3'
- self.service_name = service_name
-
- if identifiers is None:
- identifiers = []
- #: (``list``) List of identifier names
- self.identifiers = identifiers
-
- #: (:py:class:`~botocore.client.BaseClient`) Low-level Botocore client
- self.client = client
- #: (``dict``) Loaded resource data attributes
- self.data = data
-
- # The resource model for that resource
- self.resource_model = resource_model
-
- def __repr__(self):
- return 'ResourceMeta(\'{}\', identifiers={})'.format(
- self.service_name, self.identifiers
- )
-
- def __eq__(self, other):
- # Two metas are equal if their components are all equal
- if other.__class__.__name__ != self.__class__.__name__:
- return False
-
- return self.__dict__ == other.__dict__
-
- def copy(self):
- """
- Create a copy of this metadata object.
- """
- params = self.__dict__.copy()
- service_name = params.pop('service_name')
- return ResourceMeta(service_name, **params)
-
-
-class ServiceResource:
- """
- A base class for resources.
-
- :type client: botocore.client
- :param client: A low-level Botocore client instance
- """
-
- meta = None
- """
- Stores metadata about this resource instance, such as the
- ``service_name``, the low-level ``client`` and any cached ``data``
- from when the instance was hydrated. For example::
-
- # Get a low-level client from a resource instance
- client = resource.meta.client
- response = client.operation(Param='foo')
-
- # Print the resource instance's service short name
- print(resource.meta.service_name)
-
- See :py:class:`ResourceMeta` for more information.
- """
-
- def __init__(self, *args, **kwargs):
- # Always work on a copy of meta, otherwise we would affect other
- # instances of the same subclass.
- self.meta = self.meta.copy()
-
- # Create a default client if none was passed
- if kwargs.get('client') is not None:
- self.meta.client = kwargs.get('client')
- else:
- self.meta.client = boto3.client(self.meta.service_name)
-
- # Allow setting identifiers as positional arguments in the order
- # in which they were defined in the ResourceJSON.
- for i, value in enumerate(args):
- setattr(self, '_' + self.meta.identifiers[i], value)
-
- # Allow setting identifiers via keyword arguments. Here we need
- # extra logic to ignore other keyword arguments like ``client``.
- for name, value in kwargs.items():
- if name == 'client':
- continue
-
- if name not in self.meta.identifiers:
- raise ValueError(f'Unknown keyword argument: {name}')
-
- setattr(self, '_' + name, value)
-
- # Validate that all identifiers have been set.
- for identifier in self.meta.identifiers:
- if getattr(self, identifier) is None:
- raise ValueError(f'Required parameter {identifier} not set')
-
- def __repr__(self):
- identifiers = []
- for identifier in self.meta.identifiers:
- identifiers.append(
- f'{identifier}={repr(getattr(self, identifier))}'
- )
- return "{}({})".format(
- self.__class__.__name__,
- ', '.join(identifiers),
- )
-
- def __eq__(self, other):
- # Should be instances of the same resource class
- if other.__class__.__name__ != self.__class__.__name__:
- return False
-
- # Each of the identifiers should have the same value in both
- # instances, e.g. two buckets need the same name to be equal.
- for identifier in self.meta.identifiers:
- if getattr(self, identifier) != getattr(other, identifier):
- return False
-
- return True
-
- def __hash__(self):
- identifiers = []
- for identifier in self.meta.identifiers:
- identifiers.append(getattr(self, identifier))
- return hash((self.__class__.__name__, tuple(identifiers)))
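
For a concrete view of the identifier and meta plumbing above, here is a short sketch against the public boto3 S3 resource (the bucket name is a placeholder and AWS credentials are assumed to be configured):

```python
# Sketch using the public boto3 API; 'my-example-bucket' is a placeholder and
# credentials/region configuration are assumed to be in place.
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-example-bucket')   # identifier passed positionally, as in __init__ above

print(bucket.name)                        # identifier attribute set by __init__
print(bucket.meta.service_name)           # 's3', from ResourceMeta
response = bucket.meta.client.head_bucket(Bucket=bucket.name)  # low-level client via meta
```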
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py
deleted file mode 100644
index cb9fc820cb352aa6e92705aab4f55cbc2eff96bc..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/importlib_resources/_compat.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# flake8: noqa
-
-import abc
-import sys
-import pathlib
-from contextlib import suppress
-
-if sys.version_info >= (3, 10):
- from zipfile import Path as ZipPath # type: ignore
-else:
- from ..zipp import Path as ZipPath # type: ignore
-
-
-try:
- from typing import runtime_checkable # type: ignore
-except ImportError:
-
- def runtime_checkable(cls): # type: ignore
- return cls
-
-
-try:
- from typing import Protocol # type: ignore
-except ImportError:
- Protocol = abc.ABC # type: ignore
-
-
-class TraversableResourcesLoader:
- """
- Adapt loaders to provide TraversableResources and other
- compatibility.
-
- Used primarily for Python 3.9 and earlier where the native
- loaders do not yet implement TraversableResources.
- """
-
- def __init__(self, spec):
- self.spec = spec
-
- @property
- def path(self):
- return self.spec.origin
-
- def get_resource_reader(self, name):
- from . import readers, _adapters
-
- def _zip_reader(spec):
- with suppress(AttributeError):
- return readers.ZipReader(spec.loader, spec.name)
-
- def _namespace_reader(spec):
- with suppress(AttributeError, ValueError):
- return readers.NamespaceReader(spec.submodule_search_locations)
-
- def _available_reader(spec):
- with suppress(AttributeError):
- return spec.loader.get_resource_reader(spec.name)
-
- def _native_reader(spec):
- reader = _available_reader(spec)
- return reader if hasattr(reader, 'files') else None
-
- def _file_reader(spec):
- try:
- path = pathlib.Path(self.path)
- except TypeError:
- return None
- if path.exists():
- return readers.FileReader(self)
-
- return (
- # native reader if it supplies 'files'
- _native_reader(self.spec)
- or
- # local ZipReader if a zip module
- _zip_reader(self.spec)
- or
- # local NamespaceReader if a namespace module
- _namespace_reader(self.spec)
- or
- # local FileReader
- _file_reader(self.spec)
- # fallback - adapt the spec ResourceReader to TraversableReader
- or _adapters.CompatibilityFiles(self.spec)
- )
-
-
-def wrap_spec(package):
- """
- Construct a package spec with traversable compatibility
- on the spec/loader/reader.
-
- Supersedes _adapters.wrap_spec to use TraversableResourcesLoader
- from above for older Python compatibility (<3.10).
- """
- from . import _adapters
-
- return _adapters.SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader)
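
A minimal sketch of what the deleted compatibility shim provides on interpreters older than 3.10: wrap_spec adapts a package spec so its loader always returns a reader that supports files(). The import path assumes the vendored location from this diff, and 'email' is just a convenient stdlib package used for illustration.

```python
# Sketch under the assumption that the vendored module is importable as below.
import email
from pkg_resources._vendor.importlib_resources._compat import wrap_spec

spec = wrap_spec(email)                              # SpecLoaderAdapter around email.__spec__
reader = spec.loader.get_resource_reader(spec.name)  # TraversableResourcesLoader picks a reader
print(reader.files())                                # a Traversable for the package directory
```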
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/more_itertools/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/more_itertools/__init__.py
deleted file mode 100644
index ea38bef1f661e62d577b3c2207386d901d851c72..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/more_itertools/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .more import * # noqa
-from .recipes import * # noqa
-
-__version__ = '8.12.0'
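
A quick sanity check of the star re-exports in the deleted __init__ above, using only the public more-itertools API (chunked comes from the star-imported more submodule):

```python
# Uses only the public more-itertools API re-exported by the __init__ above.
import more_itertools

print(more_itertools.__version__)                 # '8.12.0' for this vendored copy
print(list(more_itertools.chunked(range(6), 2)))  # [[0, 1], [2, 3], [4, 5]]
```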
diff --git a/spaces/CVPR/GFPGAN-example/gfpgan/data/ffhq_degradation_dataset.py b/spaces/CVPR/GFPGAN-example/gfpgan/data/ffhq_degradation_dataset.py
deleted file mode 100644
index 64e5755e1211f171cb2a883d47e8d253061f90aa..0000000000000000000000000000000000000000
--- a/spaces/CVPR/GFPGAN-example/gfpgan/data/ffhq_degradation_dataset.py
+++ /dev/null
@@ -1,230 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os.path as osp
-import torch
-import torch.utils.data as data
-from basicsr.data import degradations as degradations
-from basicsr.data.data_util import paths_from_folder
-from basicsr.data.transforms import augment
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-from torchvision.transforms.functional import (adjust_brightness, adjust_contrast, adjust_hue, adjust_saturation,
- normalize)
-
-
-@DATASET_REGISTRY.register()
-class FFHQDegradationDataset(data.Dataset):
- """FFHQ dataset for GFPGAN.
-
-    It reads high-resolution images and then generates low-quality (LQ) images on the fly.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- io_backend (dict): IO backend type and other kwarg.
- mean (list | tuple): Image mean.
- std (list | tuple): Image std.
- use_hflip (bool): Whether to horizontally flip.
- Please see more options in the codes.
- """
-
- def __init__(self, opt):
- super(FFHQDegradationDataset, self).__init__()
- self.opt = opt
- # file client (io backend)
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
-
- self.gt_folder = opt['dataroot_gt']
- self.mean = opt['mean']
- self.std = opt['std']
- self.out_size = opt['out_size']
-
- self.crop_components = opt.get('crop_components', False) # facial components
- self.eye_enlarge_ratio = opt.get('eye_enlarge_ratio', 1) # whether enlarge eye regions
-
- if self.crop_components:
- # load component list from a pre-process pth files
- self.components_list = torch.load(opt.get('component_path'))
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = self.gt_folder
- if not self.gt_folder.endswith('.lmdb'):
- raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
- with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
- self.paths = [line.split('.')[0] for line in fin]
- else:
- # disk backend: scan file list from a folder
- self.paths = paths_from_folder(self.gt_folder)
-
- # degradation configurations
- self.blur_kernel_size = opt['blur_kernel_size']
- self.kernel_list = opt['kernel_list']
- self.kernel_prob = opt['kernel_prob']
- self.blur_sigma = opt['blur_sigma']
- self.downsample_range = opt['downsample_range']
- self.noise_range = opt['noise_range']
- self.jpeg_range = opt['jpeg_range']
-
- # color jitter
- self.color_jitter_prob = opt.get('color_jitter_prob')
- self.color_jitter_pt_prob = opt.get('color_jitter_pt_prob')
- self.color_jitter_shift = opt.get('color_jitter_shift', 20)
- # to gray
- self.gray_prob = opt.get('gray_prob')
-
- logger = get_root_logger()
- logger.info(f'Blur: blur_kernel_size {self.blur_kernel_size}, sigma: [{", ".join(map(str, self.blur_sigma))}]')
- logger.info(f'Downsample: downsample_range [{", ".join(map(str, self.downsample_range))}]')
- logger.info(f'Noise: [{", ".join(map(str, self.noise_range))}]')
- logger.info(f'JPEG compression: [{", ".join(map(str, self.jpeg_range))}]')
-
- if self.color_jitter_prob is not None:
- logger.info(f'Use random color jitter. Prob: {self.color_jitter_prob}, shift: {self.color_jitter_shift}')
- if self.gray_prob is not None:
- logger.info(f'Use random gray. Prob: {self.gray_prob}')
- self.color_jitter_shift /= 255.
-
- @staticmethod
- def color_jitter(img, shift):
- """jitter color: randomly jitter the RGB values, in numpy formats"""
- jitter_val = np.random.uniform(-shift, shift, 3).astype(np.float32)
- img = img + jitter_val
- img = np.clip(img, 0, 1)
- return img
-
- @staticmethod
- def color_jitter_pt(img, brightness, contrast, saturation, hue):
- """jitter color: randomly jitter the brightness, contrast, saturation, and hue, in torch Tensor formats"""
- fn_idx = torch.randperm(4)
- for fn_id in fn_idx:
- if fn_id == 0 and brightness is not None:
- brightness_factor = torch.tensor(1.0).uniform_(brightness[0], brightness[1]).item()
- img = adjust_brightness(img, brightness_factor)
-
- if fn_id == 1 and contrast is not None:
- contrast_factor = torch.tensor(1.0).uniform_(contrast[0], contrast[1]).item()
- img = adjust_contrast(img, contrast_factor)
-
- if fn_id == 2 and saturation is not None:
- saturation_factor = torch.tensor(1.0).uniform_(saturation[0], saturation[1]).item()
- img = adjust_saturation(img, saturation_factor)
-
- if fn_id == 3 and hue is not None:
- hue_factor = torch.tensor(1.0).uniform_(hue[0], hue[1]).item()
- img = adjust_hue(img, hue_factor)
- return img
-
- def get_component_coordinates(self, index, status):
- """Get facial component (left_eye, right_eye, mouth) coordinates from a pre-loaded pth file"""
- components_bbox = self.components_list[f'{index:08d}']
- if status[0]: # hflip
- # exchange right and left eye
- tmp = components_bbox['left_eye']
- components_bbox['left_eye'] = components_bbox['right_eye']
- components_bbox['right_eye'] = tmp
- # modify the width coordinate
- components_bbox['left_eye'][0] = self.out_size - components_bbox['left_eye'][0]
- components_bbox['right_eye'][0] = self.out_size - components_bbox['right_eye'][0]
- components_bbox['mouth'][0] = self.out_size - components_bbox['mouth'][0]
-
- # get coordinates
- locations = []
- for part in ['left_eye', 'right_eye', 'mouth']:
- mean = components_bbox[part][0:2]
- half_len = components_bbox[part][2]
- if 'eye' in part:
- half_len *= self.eye_enlarge_ratio
- loc = np.hstack((mean - half_len + 1, mean + half_len))
- loc = torch.from_numpy(loc).float()
- locations.append(loc)
- return locations
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # load gt image
- # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
- gt_path = self.paths[index]
- img_bytes = self.file_client.get(gt_path)
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # random horizontal flip
- img_gt, status = augment(img_gt, hflip=self.opt['use_hflip'], rotation=False, return_status=True)
- h, w, _ = img_gt.shape
-
- # get facial component coordinates
- if self.crop_components:
- locations = self.get_component_coordinates(index, status)
- loc_left_eye, loc_right_eye, loc_mouth = locations
-
- # ------------------------ generate lq image ------------------------ #
- # blur
- kernel = degradations.random_mixed_kernels(
- self.kernel_list,
- self.kernel_prob,
- self.blur_kernel_size,
- self.blur_sigma,
- self.blur_sigma, [-math.pi, math.pi],
- noise_range=None)
- img_lq = cv2.filter2D(img_gt, -1, kernel)
- # downsample
- scale = np.random.uniform(self.downsample_range[0], self.downsample_range[1])
- img_lq = cv2.resize(img_lq, (int(w // scale), int(h // scale)), interpolation=cv2.INTER_LINEAR)
- # noise
- if self.noise_range is not None:
- img_lq = degradations.random_add_gaussian_noise(img_lq, self.noise_range)
- # jpeg compression
- if self.jpeg_range is not None:
- img_lq = degradations.random_add_jpg_compression(img_lq, self.jpeg_range)
-
- # resize to original size
- img_lq = cv2.resize(img_lq, (w, h), interpolation=cv2.INTER_LINEAR)
-
- # random color jitter (only for lq)
- if self.color_jitter_prob is not None and (np.random.uniform() < self.color_jitter_prob):
- img_lq = self.color_jitter(img_lq, self.color_jitter_shift)
- # random to gray (only for lq)
- if self.gray_prob and np.random.uniform() < self.gray_prob:
- img_lq = cv2.cvtColor(img_lq, cv2.COLOR_BGR2GRAY)
- img_lq = np.tile(img_lq[:, :, None], [1, 1, 3])
- if self.opt.get('gt_gray'): # whether convert GT to gray images
- img_gt = cv2.cvtColor(img_gt, cv2.COLOR_BGR2GRAY)
- img_gt = np.tile(img_gt[:, :, None], [1, 1, 3]) # repeat the color channels
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True)
-
- # random color jitter (pytorch version) (only for lq)
- if self.color_jitter_pt_prob is not None and (np.random.uniform() < self.color_jitter_pt_prob):
- brightness = self.opt.get('brightness', (0.5, 1.5))
- contrast = self.opt.get('contrast', (0.5, 1.5))
- saturation = self.opt.get('saturation', (0, 1.5))
- hue = self.opt.get('hue', (-0.1, 0.1))
- img_lq = self.color_jitter_pt(img_lq, brightness, contrast, saturation, hue)
-
- # round and clip
- img_lq = torch.clamp((img_lq * 255.0).round(), 0, 255) / 255.
-
- # normalize
- normalize(img_gt, self.mean, self.std, inplace=True)
- normalize(img_lq, self.mean, self.std, inplace=True)
-
- if self.crop_components:
- return_dict = {
- 'lq': img_lq,
- 'gt': img_gt,
- 'gt_path': gt_path,
- 'loc_left_eye': loc_left_eye,
- 'loc_right_eye': loc_right_eye,
- 'loc_mouth': loc_mouth
- }
- return return_dict
- else:
- return {'lq': img_lq, 'gt': img_gt, 'gt_path': gt_path}
-
- def __len__(self):
- return len(self.paths)
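
To illustrate how the options consumed above fit together, a hedged configuration sketch follows; every key matches an opt lookup in the deleted class, but the paths and numeric ranges are placeholders rather than GFPGAN's released training settings.

```python
# Illustrative opt dict; each key corresponds to an opt[...] access in the class above.
# Paths and numeric ranges are placeholders, not GFPGAN's published configuration.
from gfpgan.data.ffhq_degradation_dataset import FFHQDegradationDataset

opt = dict(
    dataroot_gt='datasets/ffhq/ffhq_512',   # folder (or .lmdb) of high-quality images
    io_backend=dict(type='disk'),
    mean=[0.5, 0.5, 0.5],
    std=[0.5, 0.5, 0.5],
    out_size=512,
    use_hflip=True,
    blur_kernel_size=41,
    kernel_list=['iso', 'aniso'],
    kernel_prob=[0.5, 0.5],
    blur_sigma=[0.1, 10],
    downsample_range=[0.8, 8],
    noise_range=[0, 20],
    jpeg_range=[60, 100],
    color_jitter_prob=0.3,
    gray_prob=0.01,
)

dataset = FFHQDegradationDataset(opt)
sample = dataset[0]   # {'lq': ..., 'gt': ..., 'gt_path': ...} when crop_components is off
```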
diff --git a/spaces/CVPR/regionclip-demo/detectron2/evaluation/flickr30k_evaluation.py b/spaces/CVPR/regionclip-demo/detectron2/evaluation/flickr30k_evaluation.py
deleted file mode 100644
index 244f813773267c4a1a9257af86feab4212c0a85f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/evaluation/flickr30k_evaluation.py
+++ /dev/null
@@ -1,299 +0,0 @@
-import logging
-import numpy as np
-import os
-from collections import OrderedDict
-from detectron2.config import global_cfg as cfg
-import torch
-from fvcore.common.file_io import PathManager
-from detectron2.structures.boxes import pairwise_iou
-
-from detectron2.utils.comm import all_gather, is_main_process, synchronize
-import pickle
-from .evaluator import DatasetEvaluator
-import json
-from detectron2.structures import Boxes
-import html
-import ftfy
-import regex as re
-
-PATTN = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-def whitespace_clean(text):
- text = re.sub(r'\s+', ' ', text)
- text = text.strip()
- return text
-
-
-class FLICKR30KEvaluator(DatasetEvaluator):
-
- """
-    Evaluate phrase grounding (recall of grounded boxes at IoU 0.5) on Flickr30K Entities
- """
-
- def __init__(self, dataset_name, distributed=True, output_dir=None):
- """
- Args:
- dataset_name (str): name of the dataset to be evaluated.
- distributed (True): if True, will collect results from all ranks for evaluation.
- Otherwise, will evaluate the results in the current process.
- output_dir (str): an output directory to dump results.
- """
- self._dataset_name = dataset_name
- self._distributed = distributed
- self._output_dir = output_dir
-
- self._cpu_device = torch.device("cpu")
- self._logger = logging.getLogger(__name__)
- self.gt_boxes = json.load(open("/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/flickr30k_processed/bounding_boxes_test.json"))
- self.gt_sents = json.load(open("/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/flickr30k_processed/sentences_test.json"))
-
- def reset(self):
- self._predictions = {}
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a model.
- It is a list of dicts. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
-            outputs: the outputs of a model: a tuple (match_scores, processed_results), where
-                match_scores holds region-to-token matching scores and processed_results
-                contains the predicted proposal boxes for the image.
- """
- assert len(inputs) == 1 # batch = 1 during inference
- dataset_name, img_id, (img_height, img_width), all_str2id_links = inputs[0][-1]
- img_id = img_id.split('/')[-1]
- match_scores, processed_results = outputs
- match_scores = match_scores.to(self._cpu_device)
- pred_boxes = processed_results[0]['instances'].proposal_boxes.to(self._cpu_device)
-
- self._predictions.update({img_id: [img_height, img_width, all_str2id_links, match_scores, pred_boxes]})
-
- def merge_gt_boxes(self, box_anno):
- gt_boxes = []
- phrase_ids = []
- scene_box_ids = box_anno['scene']
- for k, v in box_anno['boxes'].items():
- if k in scene_box_ids: # important: remove scene boxes, otherwise the number of each phrase type cannot match paper
- continue
- phrase_ids.append(k)
- if len(v) == 1:
- gt_boxes.append(v[0])
- else:
- # when a phrase respond to multiple regions, we take the union of them as paper given
- v = np.array(v)
- box = [v[:, 0].min(), v[:, 1].min(), v[:, 2].max(), v[:, 3].max()]
- gt_boxes.append(box)
- gt_boxes = np.array(gt_boxes)
- return phrase_ids, gt_boxes
-
- def find_ground_box(self, match_scores, all_str2id_links, sentences, gt_phrase_ids):
- """ Given matching matrix between region feats and token feats, find the box that grounds a phrase
- """
- num_box = match_scores.size(0)
- num_cap = int(match_scores.size(1) / 77)
- all_phrase_score = []
- all_phrase_ids = []
- for i in range(num_cap): # per sentence
- this_score = match_scores[:, i*77:(i+1)*77] # [#boxes, 77]
- input_ids = [iitem for item in all_str2id_links[i] for iitem in item[1]]
- input_tokens = [item[0] for item in all_str2id_links[i]]
- phrases = sentences[i]['phrases']
- for j, phrase in enumerate(phrases): # per phrase
- if phrase['phrase_id'] not in gt_phrase_ids: # no gt box for this phrase, skip
- continue
- # locate the word
- words = whitespace_clean(basic_clean(phrase['phrase'])).lower() # phrase['phrase'].lower().replace("-"," ").split()
- words = re.findall(PATTN, words)
- first_word_index = None # phrase['first_word_index']
- for idx in range(len(input_tokens) - len(words) + 1): # search start word of this phrase
- if input_tokens[idx : idx + len(words)] == words: # NOTE: key step for alignment btw model prediction and annotation
- first_word_index = idx
- break
- if first_word_index is None:
- print("Fail to find phrase [{}] in input tokens [{}]".format(words, input_tokens))
- start_wd_ind = first_word_index
- end_wd_ind = first_word_index + len(words)
- if len(words) != len(phrase['phrase'].split()):
- pass # print('tokens: {} <--> phrase: {}'.format(words, phrase['phrase']))
- # locate the token
- start_tk_ind = 0
- for k_i, k in enumerate(range(0, start_wd_ind)):
- start_tk_ind += len(all_str2id_links[i][k][1])
- token_cnt = 0
- for k_i, k in enumerate(range(start_wd_ind, end_wd_ind)):
- if all_str2id_links[i][k][0] != words[k_i]:
- print("Word not matched: {} in model output but {} in annotation".format(all_str2id_links[i][k][0], words[k_i]))
- else:
- token_cnt += len(all_str2id_links[i][k][1]) # ith sentence, kth word, and its tokens
- end_tk_ind = start_tk_ind + token_cnt
- # sanity check
- phrase_ids1 = [iitem for item in all_str2id_links[i][start_wd_ind:end_wd_ind] for iitem in item[1]] # way 1: use word index to accumulate token ids in a phrase
- phrase_ids2 = input_ids[start_tk_ind:end_tk_ind] # way 2: use token index to directly index token ids in a phrase
- if phrase_ids1 != phrase_ids2:
- print("Santity check: {} from word {} in token".format(phrase_ids1, phrase_ids2))
- # index similarity score
- phrase_score = this_score[:, start_tk_ind:end_tk_ind]
- phrase_score = phrase_score.mean(dim=1) # phrase_score.max(dim=1)[0] #
- all_phrase_score.append(phrase_score)
- all_phrase_ids.append(phrase['phrase_id'])
- phrase_score_tensor = torch.cat(all_phrase_score)
- phrase_score_tensor = phrase_score_tensor.view(len(all_phrase_ids), num_box) # NOTE: this should be [#phrases, #object proposals]
-
- return phrase_score_tensor, all_phrase_ids
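-
- # Illustrative sketch (hypothetical tensors, not part of the original code): the grounding
- # decision above reduces to pooling the phrase's token columns and taking the argmax over proposals:
- #   this_score = torch.rand(8, 77)                 # [#boxes, #tokens] for one caption
- #   phrase_score = this_score[:, 3:6].mean(dim=1)  # tokens 3..5 form the phrase
- #   best_proposal = int(phrase_score.argmax())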
-
- def evaluate(self):
- """
- Evaluates phrase grounding accuracy (recall at IoU >= 0.5).
- """
-
- if self._distributed:
- synchronize()
-
- self._predictions = all_gather(self._predictions)
-
- if not is_main_process():
- return
-
- all_prediction = {}
- for p in self._predictions:
- all_prediction.update(p)
- else:
- all_prediction = self._predictions
-
- if len(all_prediction) < 30: # resume inference results
- save_path = "/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/flickr30k_processed/grounding_results/grounding_{}_imgs.npy".format(1000)
- all_prediction = np.load(save_path, allow_pickle=True).tolist()
- self._logger.info('Resume from {}'.format(save_path))
- else: # new run
- save_path = "/home/v-yiwuzhong/projects/azureblobs/vyiwuzhong_phillytools/flickr30k_processed/grounding_results/grounding_{}_imgs.npy".format(len(all_prediction))
- np.save(save_path, all_prediction)
- self._logger.info('Save results to {}'.format(save_path))
- self._logger.info('Got {} images!'.format(len(all_prediction)))
-
- image_unique_ids = list(all_prediction.keys())
- image_evaled = []
-
- total_num = 0
- recall_num = 0
- num_type = {}
- recall_type = {}
- acc_type = {}
- recall_topk_num = {5:0, 10:0}
- point_recall_num = 0
- EVAL_THRESH = 0.5
- type_cnts = {}
-
- for img_sent_id in image_unique_ids:
- if img_sent_id not in self.gt_boxes:
- continue
- else:
- image_evaled.append(img_sent_id)
- # results from model
- result = all_prediction[img_sent_id]
- phrase_ids = None
- phrase_types = [] # phrase type: each phrase belongs to a coarse object concept
- pred_boxes = None # an object proposal selected by model for each phrase
- img_height, img_width, all_str2id_links = result[0], result[1], result[2] # all_str2id_links: each word and its tokenized ids
- match_scores = result[3] # matching score [#object proposals, #tokens]
- precomp_boxes = result[4] # object proposals from offline module
- # annotation from dataset
- sentences = self.gt_sents[img_sent_id]
- box_anno = self.gt_boxes[img_sent_id]
- # sanity check and box merging
- assert box_anno['height'] == img_height and box_anno['width'] == img_width
- gt_phrase_ids, gt_boxes = self.merge_gt_boxes(box_anno) # merged if multiple boxes for the same phrase
- if len(gt_phrase_ids) == 0: # no gt box for this image
- continue
- for sent_item in sentences:
- for phrase_item in sent_item['phrases']:
- if phrase_item['phrase_id'] in gt_phrase_ids:
- phrase_types.append(phrase_item['phrase_type'])
-
- # merge similarity scores from token level to phrase level, and find the box that grounds the phrase
- phrase_score_tensor, all_phrase_ids = self.find_ground_box(match_scores, all_str2id_links, sentences, gt_phrase_ids)
- pred_boxes_ind = torch.argmax(phrase_score_tensor, dim=1)
- pred_boxes = precomp_boxes[pred_boxes_ind]
- pred_similarity = phrase_score_tensor # .t() # pred_similarity: matching score [#phrases, #object proposals]
-
- # get single target/gt box for each phrase
- # 1. any gt box that can be matched as target
- # refer to (https://github.com/BigRedT/info-ground/blob/22ae6d6ec8b38df473e73034fc895ebf97d39897/exp/ground/eval_flickr_phrase_loc.py#L90)
- phrase_boxes = [box_anno['boxes'][p_id] for p_id in all_phrase_ids]
- targets = []
- for pr_b, pd_b in zip(phrase_boxes, pred_boxes):
- matched = False
- for single_b in pr_b:
- this_iou = pairwise_iou(Boxes(torch.from_numpy(np.array([single_b])).float()), Boxes(pd_b.view(1,-1)))
- if (this_iou >= EVAL_THRESH).sum() > 0:
- targets.append(single_b)
- matched = True
- break
- if not matched:
- targets.append(single_b)
- targets = Boxes(torch.from_numpy(np.array(targets)).float())
- # 2. union box as target
- # target_ind = np.array([gt_phrase_ids.index(p_id) for p_id in all_phrase_ids])
- # targets = gt_boxes[target_ind] # ground-truth boxes for each phrase in each sentence
- # targets = Boxes(torch.from_numpy(targets).float())
- assert len(phrase_types) == len(targets)
-
- # single predicted box for each phrase
- ious = pairwise_iou(targets, pred_boxes) # note: this call may move target_boxes to CUDA
- iou = ious.numpy().diagonal()
- total_num += iou.shape[0]
- recall_num += int((iou >= EVAL_THRESH).sum()) # 0.5
-
- # pointing-game metric: count a hit if the predicted box center falls inside the target box (optional)
- pred_boxes_tensor = pred_boxes.tensor
- pred_center = (pred_boxes_tensor[:, :2] + pred_boxes_tensor[:, 2:]) / 2.0
- pred_center = pred_center.repeat(1, 2) ## x_c, y_c, x_c, y_c
- targets_tensor = targets.tensor
- fall_tensor = targets_tensor - pred_center
- fall_tensor = (fall_tensor[:, :2] <= 0).float().sum(1) + (fall_tensor[:, 2:] >= 0).float().sum(1)
- point_recall_num += (fall_tensor == 4).float().numpy().sum()
-
- # detailed accuracy across different phrase types
- for pid, p_type in enumerate(phrase_types):
- p_type = p_type[0]
- num_type[p_type] = num_type.setdefault(p_type, 0) + 1
- recall_type[p_type] = recall_type.setdefault(p_type, 0) + (iou[pid] >= EVAL_THRESH)
-
- # top-k recall: count a hit if any of the k highest-scoring proposals matches the target
- ious_top = pairwise_iou(targets, precomp_boxes).cpu()
- for k in [5, 10]:
- top_k = torch.topk(pred_similarity, k=k, dim=1)[0][:, [-1]]
- pred_similarity_topk = (pred_similarity >= top_k).float()
- ious_top_k = (ious_top * pred_similarity_topk).numpy()
- recall_topk_num[k] += int(((ious_top_k >= EVAL_THRESH).sum(1) > 0).sum())
-
- acc = recall_num / total_num
- acc_top5 = recall_topk_num[5] / total_num
- acc_top10 = recall_topk_num[10] / total_num
- point_acc = point_recall_num / total_num
-
- # details about each coarse type of phrase
- for p_type, type_num in num_type.items():
- acc_type[p_type] = recall_type[p_type] / type_num
-
- # if self._output_dir:
- # PathManager.mkdirs(self._output_dir)
- # file_path = os.path.join(self._output_dir, "prediction_{}.pkl".format(str(acc).replace('.', '_')[:6]))
- # with PathManager.open(file_path, "wb") as f:
- # pickle.dump(all_prediction, f)
-
- del all_prediction
- self._logger.info('Evaluated {} image-sentence pairs; per-type accuracy: {}'.format(len(image_evaled), acc_type))
- self._logger.info('Evaluate Pointing Accuracy: PointAcc:{}'.format(point_acc))
- results = OrderedDict({"acc": acc, "acc_top5": acc_top5, "acc_top10": acc_top10})
- self._logger.info(results)
- self._logger.info(num_type)
- return results
\ No newline at end of file
diff --git a/spaces/CVPR/regionclip-demo/detectron2/export/flatten.py b/spaces/CVPR/regionclip-demo/detectron2/export/flatten.py
deleted file mode 100644
index 5d229719b56bf3b57727f3751dbb9af1b6d173f1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/export/flatten.py
+++ /dev/null
@@ -1,327 +0,0 @@
-import collections
-from dataclasses import dataclass
-from typing import Callable, List, Optional, Tuple
-import torch
-from torch import nn
-
-from detectron2.structures import Boxes, Instances, ROIMasks
-from detectron2.utils.registry import _convert_target_to_string, locate
-
-from .torchscript_patch import patch_builtin_len
-
-
-@dataclass
-class Schema:
- """
- A Schema defines how to flatten a possibly hierarchical object into a tuple of
- primitive objects, so it can be used as inputs/outputs of PyTorch's tracing.
-
- PyTorch does not support tracing a function that produces rich output
- structures (e.g. dict, Instances, Boxes). To trace such a function, we
- flatten the rich object into a tuple of tensors, and return this tuple of tensors
- instead. Meanwhile, we also need to know how to "rebuild" the original object
- from the flattened results, so we can evaluate the flattened results.
- A Schema defines how to flatten an object, and while flattening it, it records
- necessary schemas so that the object can be rebuilt using the flattened outputs.
-
- The flattened object and the schema object are returned by the ``.flatten`` classmethod.
- Then the original object can be rebuilt with the ``__call__`` method of schema.
-
- A Schema is a dataclass that can be serialized easily.
- """
-
- # inspired by FetchMapper in tensorflow/python/client/session.py
-
- @classmethod
- def flatten(cls, obj):
- raise NotImplementedError
-
- def __call__(self, values):
- raise NotImplementedError
-
- @staticmethod
- def _concat(values):
- ret = ()
- sizes = []
- for v in values:
- assert isinstance(v, tuple), "Flattened results must be a tuple"
- ret = ret + v
- sizes.append(len(v))
- return ret, sizes
-
- @staticmethod
- def _split(values, sizes):
- if len(sizes):
- expected_len = sum(sizes)
- assert (
- len(values) == expected_len
- ), f"Values has length {len(values)} but expect length {expected_len}."
- ret = []
- for k in range(len(sizes)):
- begin, end = sum(sizes[:k]), sum(sizes[: k + 1])
- ret.append(values[begin:end])
- return ret
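-
- # Illustrative example (not in the original): _concat([(1, 2), (3,)]) == ((1, 2, 3), [2, 1])
- # and _split((1, 2, 3), [2, 1]) == [(1, 2), (3,)]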
-
-
-@dataclass
-class ListSchema(Schema):
- schemas: List[Schema] # the schemas that define how to flatten each element in the list
- sizes: List[int] # the flattened length of each element
-
- def __call__(self, values):
- values = self._split(values, self.sizes)
- if len(values) != len(self.schemas):
- raise ValueError(
- f"Values has length {len(values)} but schemas " f"has length {len(self.schemas)}!"
- )
- values = [m(v) for m, v in zip(self.schemas, values)]
- return list(values)
-
- @classmethod
- def flatten(cls, obj):
- res = [flatten_to_tuple(k) for k in obj]
- values, sizes = cls._concat([k[0] for k in res])
- return values, cls([k[1] for k in res], sizes)
-
-
-@dataclass
-class TupleSchema(ListSchema):
- def __call__(self, values):
- return tuple(super().__call__(values))
-
-
-@dataclass
-class IdentitySchema(Schema):
- def __call__(self, values):
- return values[0]
-
- @classmethod
- def flatten(cls, obj):
- return (obj,), cls()
-
-
-@dataclass
-class DictSchema(ListSchema):
- keys: List[str]
-
- def __call__(self, values):
- values = super().__call__(values)
- return dict(zip(self.keys, values))
-
- @classmethod
- def flatten(cls, obj):
- for k in obj.keys():
- if not isinstance(k, str):
- raise KeyError("Only support flattening dictionaries if keys are str.")
- keys = sorted(obj.keys())
- values = [obj[k] for k in keys]
- ret, schema = ListSchema.flatten(values)
- return ret, cls(schema.schemas, schema.sizes, keys)
-
-
-@dataclass
-class InstancesSchema(DictSchema):
- def __call__(self, values):
- image_size, fields = values[-1], values[:-1]
- fields = super().__call__(fields)
- return Instances(image_size, **fields)
-
- @classmethod
- def flatten(cls, obj):
- ret, schema = super().flatten(obj.get_fields())
- size = obj.image_size
- if not isinstance(size, torch.Tensor):
- size = torch.tensor(size)
- return ret + (size,), schema
-
-
-@dataclass
-class TensorWrapSchema(Schema):
- """
- For classes that are simple wrappers of tensors, e.g.
- Boxes, RotatedBoxes, BitMasks
- """
-
- class_name: str
-
- def __call__(self, values):
- return locate(self.class_name)(values[0])
-
- @classmethod
- def flatten(cls, obj):
- return (obj.tensor,), cls(_convert_target_to_string(type(obj)))
-
-
-# if more custom structures needed in the future, can allow
-# passing in extra schemas for custom types
-def flatten_to_tuple(obj):
- """
- Flatten an object so it can be used for PyTorch tracing.
- Also returns how to rebuild the original object from the flattened outputs.
-
- Returns:
- res (tuple): the flattened results that can be used as tracing outputs
- schema: an object with a ``__call__`` method such that ``schema(res) == obj``.
- It is a pure dataclass that can be serialized.
- """
- schemas = [
- ((str, bytes), IdentitySchema),
- (list, ListSchema),
- (tuple, TupleSchema),
- (collections.abc.Mapping, DictSchema),
- (Instances, InstancesSchema),
- ((Boxes, ROIMasks), TensorWrapSchema),
- ]
- for klass, schema in schemas:
- if isinstance(obj, klass):
- F = schema
- break
- else:
- F = IdentitySchema
-
- return F.flatten(obj)
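-
-# Illustrative usage sketch (not part of the original module): flattening a dict of
-# tensors and rebuilding it from the flattened tuple:
-#   flat, schema = flatten_to_tuple({"b": torch.ones(3), "a": torch.zeros(2)})
-#   # flat == (tensor_a, tensor_b) with keys sorted; schema(flat) rebuilds the dict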
-
-
-class TracingAdapter(nn.Module):
- """
- A model may take rich input/output formats (e.g. dict or custom classes),
- but `torch.jit.trace` requires a tuple of tensors as input/output.
- This adapter flattens input/output format of a model so it becomes traceable.
-
- It also records the necessary schema to rebuild model's inputs/outputs from flattened
- inputs/outputs.
-
- Example:
- ::
- outputs = model(inputs) # inputs/outputs may be rich structure
- adapter = TracingAdapter(model, inputs)
-
- # can now trace the model, with adapter.flattened_inputs, or another
- # tuple of tensors with the same length and meaning
- traced = torch.jit.trace(adapter, adapter.flattened_inputs)
-
- # traced model can only produce flattened outputs (tuple of tensors)
- flattened_outputs = traced(*adapter.flattened_inputs)
- # adapter knows the schema to convert it back (new_outputs == outputs)
- new_outputs = adapter.outputs_schema(flattened_outputs)
- """
-
- flattened_inputs: Tuple[torch.Tensor] = None
- """
- Flattened version of inputs given to this class's constructor.
- """
-
- inputs_schema: Schema = None
- """
- Schema of the inputs given to this class's constructor.
- """
-
- outputs_schema: Schema = None
- """
- Schema of the output produced by calling the given model with inputs.
- """
-
- def __init__(
- self,
- model: nn.Module,
- inputs,
- inference_func: Optional[Callable] = None,
- allow_non_tensor: bool = False,
- ):
- """
- Args:
- model: an nn.Module
- inputs: An input argument or a tuple of input arguments used to call model.
- After flattening, it has to only consist of tensors.
- inference_func: a callable that takes (model, *inputs), calls the
- model with inputs, and return outputs. By default it
- is ``lambda model, *inputs: model(*inputs)``. Can be overridden
- if you need to call the model differently.
- allow_non_tensor: allow inputs/outputs to contain non-tensor objects.
- This option will filter out non-tensor objects to make the
- model traceable, but ``inputs_schema``/``outputs_schema`` cannot be
- used anymore because inputs/outputs cannot be rebuilt from pure tensors.
- This is useful when you're only interested in the single trace of
- execution (e.g. for flop count), but not interested in
- generalizing the traced graph to new inputs.
- """
- super().__init__()
- if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)):
- model = model.module
- self.model = model
- if not isinstance(inputs, tuple):
- inputs = (inputs,)
- self.inputs = inputs
- self.allow_non_tensor = allow_non_tensor
-
- if inference_func is None:
- inference_func = lambda model, *inputs: model(*inputs) # noqa
- self.inference_func = inference_func
-
- self.flattened_inputs, self.inputs_schema = flatten_to_tuple(inputs)
-
- if all(isinstance(x, torch.Tensor) for x in self.flattened_inputs):
- return
- if self.allow_non_tensor:
- self.flattened_inputs = tuple(
- [x for x in self.flattened_inputs if isinstance(x, torch.Tensor)]
- )
- self.inputs_schema = None
- else:
- for input in self.flattened_inputs:
- if not isinstance(input, torch.Tensor):
- raise ValueError(
- "Inputs for tracing must only contain tensors. "
- f"Got a {type(input)} instead."
- )
-
- def forward(self, *args: torch.Tensor):
- with torch.no_grad(), patch_builtin_len():
- if self.inputs_schema is not None:
- inputs_orig_format = self.inputs_schema(args)
- else:
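- # Illustrative usage sketch (hypothetical UFO path; not part of this module):
- #   glyphs = GlyphSet("MyFont.ufo/glyphs", expectContentsFile=True)
- #   g = glyphs["A"]              # simple Glyph object via the dict interface
- #   glyphs.readGlyph("A", g)     # or read attributes into an arbitrary object
-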
- if args != self.flattened_inputs:
- raise ValueError(
- "TracingAdapter does not contain valid inputs_schema."
- " So it cannot generalize to other inputs and must be"
- " traced with `.flattened_inputs`."
- )
- inputs_orig_format = self.inputs
-
- outputs = self.inference_func(self.model, *inputs_orig_format)
- flattened_outputs, schema = flatten_to_tuple(outputs)
-
- flattened_output_tensors = tuple(
- [x for x in flattened_outputs if isinstance(x, torch.Tensor)]
- )
- if len(flattened_output_tensors) < len(flattened_outputs):
- if self.allow_non_tensor:
- flattened_outputs = flattened_output_tensors
- self.outputs_schema = None
- else:
- raise ValueError(
- "Model cannot be traced because some model outputs "
- "cannot flatten to tensors."
- )
- else: # schema is valid
- if self.outputs_schema is None:
- self.outputs_schema = schema
- else:
- assert self.outputs_schema == schema, (
- "Model should always return outputs with the same "
- "structure so it can be traced!"
- )
- return flattened_outputs
-
- def _create_wrapper(self, traced_model):
- """
- Return a function that has an input/output interface the same as the
- original model, but it calls the given traced model under the hood.
- """
-
- def forward(*args):
- flattened_inputs, _ = flatten_to_tuple(args)
- flattened_outputs = traced_model(*flattened_inputs)
- return self.outputs_schema(flattened_outputs)
-
- return forward
diff --git a/spaces/ChatGPT-GAIA/GAIA-GPT/README.md b/spaces/ChatGPT-GAIA/GAIA-GPT/README.md
deleted file mode 100644
index 63d60bb8bf3a0f83b7f0bb1800373a68fe6a9f9d..0000000000000000000000000000000000000000
--- a/spaces/ChatGPT-GAIA/GAIA-GPT/README.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: 🔍ChatGPT Episodic and Semantic Generator🏊
-emoji: 🌟GPT🔍
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: awacke1/ChatGPT-SOP
----
-## ChatGPT Datasets 📚
-- WebText
-- Common Crawl
-- BooksCorpus
-- English Wikipedia
-- Toronto Books Corpus
-- OpenWebText
-## ChatGPT Datasets - Details 📚
-- **WebText:** A dataset of web pages crawled from domains on the Alexa top 5,000 list. This dataset was used to pretrain GPT-2.
- - [WebText: A Large-Scale Unsupervised Text Corpus by Radford et al.](https://paperswithcode.com/dataset/webtext)
-- **Common Crawl:** A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3.
- - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/common-crawl) by Brown et al.
-- **BooksCorpus:** A dataset of over 11,000 books from a variety of genres.
- - [Scalable Methods for 8 Billion Token Language Modeling](https://paperswithcode.com/dataset/bookcorpus) by Zhu et al.
-- **English Wikipedia:** A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017.
- - [Improving Language Understanding by Generative Pre-Training](https://huggingface.co/spaces/awacke1/WikipediaUltimateAISearch?logs=build) Space for Wikipedia Search
-- **Toronto Books Corpus:** A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto.
- - [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://paperswithcode.com/dataset/bookcorpus) by Schwenk and Douze.
-- **OpenWebText:** A dataset of web pages that were filtered to remove content that was likely to be low-quality or spammy. This dataset was used to pretrain GPT-3.
- - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/openwebtext) by Brown et al.
\ No newline at end of file
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/retinanet/retinanet.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/retinanet/retinanet.py
deleted file mode 100644
index 1599b29b2e9bbb626b31d652022fbbd034bf5e30..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/retinanet/retinanet.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from .inference import make_retinanet_postprocessor
-from .loss import make_retinanet_loss_evaluator
-from ..anchor_generator import make_anchor_generator_retinanet
-
-from maskrcnn_benchmark.modeling.box_coder import BoxCoder
-
-
-class RetinaNetHead(torch.nn.Module):
- """
- Adds a RetinaNet head with classification and regression subnetworks
- """
-
- def __init__(self, cfg, in_channels):
- """
- Arguments:
- cfg: the configuration object (the number of anchors and classes are derived from it)
- in_channels (int): number of channels of the input feature maps
- """
- super(RetinaNetHead, self).__init__()
- # TODO: Implement the sigmoid version first.
- num_classes = cfg.MODEL.RETINANET.NUM_CLASSES - 1
- num_anchors = len(cfg.MODEL.RETINANET.ASPECT_RATIOS) \
- * cfg.MODEL.RETINANET.SCALES_PER_OCTAVE
-
- cls_tower = []
- bbox_tower = []
- for i in range(cfg.MODEL.RETINANET.NUM_CONVS):
- cls_tower.append(
- nn.Conv2d(
- in_channels,
- in_channels,
- kernel_size=3,
- stride=1,
- padding=1
- )
- )
- cls_tower.append(nn.ReLU())
- bbox_tower.append(
- nn.Conv2d(
- in_channels,
- in_channels,
- kernel_size=3,
- stride=1,
- padding=1
- )
- )
- bbox_tower.append(nn.ReLU())
-
- self.add_module('cls_tower', nn.Sequential(*cls_tower))
- self.add_module('bbox_tower', nn.Sequential(*bbox_tower))
- self.cls_logits = nn.Conv2d(
- in_channels, num_anchors * num_classes, kernel_size=3, stride=1,
- padding=1
- )
- self.bbox_pred = nn.Conv2d(
- in_channels, num_anchors * 4, kernel_size=3, stride=1,
- padding=1
- )
-
- # Initialization
- for modules in [self.cls_tower, self.bbox_tower, self.cls_logits,
- self.bbox_pred]:
- for l in modules.modules():
- if isinstance(l, nn.Conv2d):
- torch.nn.init.normal_(l.weight, std=0.01)
- torch.nn.init.constant_(l.bias, 0)
-
-
- # retinanet_bias_init
- prior_prob = cfg.MODEL.RETINANET.PRIOR_PROB
- bias_value = -math.log((1 - prior_prob) / prior_prob)
- torch.nn.init.constant_(self.cls_logits.bias, bias_value)
-
- def forward(self, x):
- logits = []
- bbox_reg = []
- for feature in x:
- logits.append(self.cls_logits(self.cls_tower(feature)))
- bbox_reg.append(self.bbox_pred(self.bbox_tower(feature)))
- return logits, bbox_reg
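-
- # For a feature map of shape [N, C, H, W], each cls_logits entry has shape
- # [N, num_anchors * num_classes, H, W] and each bbox_pred entry [N, num_anchors * 4, H, W].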
-
-
-class RetinaNetModule(torch.nn.Module):
- """
- Module for RetinaNet computation. Takes feature maps from the backbone and
- computes RetinaNet outputs and losses. Only tested with FPN for now.
- """
-
- def __init__(self, cfg, in_channels):
- super(RetinaNetModule, self).__init__()
-
- self.cfg = cfg.clone()
-
- anchor_generator = make_anchor_generator_retinanet(cfg)
- head = RetinaNetHead(cfg, in_channels)
- box_coder = BoxCoder(weights=(10., 10., 5., 5.))
-
- box_selector_test = make_retinanet_postprocessor(cfg, box_coder, is_train=False)
-
- loss_evaluator = make_retinanet_loss_evaluator(cfg, box_coder)
-
- self.anchor_generator = anchor_generator
- self.head = head
- self.box_selector_test = box_selector_test
- self.loss_evaluator = loss_evaluator
-
- def forward(self, images, features, targets=None):
- """
- Arguments:
- images (ImageList): images for which we want to compute the predictions
- features (list[Tensor]): features computed from the images that are
- used for computing the predictions. Each tensor in the list
- corresponds to a different feature level
- targets (list[BoxList]): ground-truth boxes present in the image (optional)
-
- Returns:
- boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per
- image.
- losses (dict[Tensor]): the losses for the model during training. During
- testing, it is an empty dict.
- """
- box_cls, box_regression = self.head(features)
- anchors = self.anchor_generator(images, features)
-
- if self.training:
- return self._forward_train(anchors, box_cls, box_regression, targets)
- else:
- return self._forward_test(anchors, box_cls, box_regression)
-
- def _forward_train(self, anchors, box_cls, box_regression, targets):
-
- loss_box_cls, loss_box_reg = self.loss_evaluator(
- anchors, box_cls, box_regression, targets
- )
- losses = {
- "loss_retina_cls": loss_box_cls,
- "loss_retina_reg": loss_box_reg,
- }
- return anchors, losses
-
- def _forward_test(self, anchors, box_cls, box_regression):
- boxes = self.box_selector_test(anchors, box_cls, box_regression)
- return boxes, {}
-
-
-def build_retinanet(cfg, in_channels):
- return RetinaNetModule(cfg, in_channels)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_sockets.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_sockets.py
deleted file mode 100644
index e6970bee2701e1d9391abb376e52a4d1a8ec7b68..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_sockets.py
+++ /dev/null
@@ -1,607 +0,0 @@
-from __future__ import annotations
-
-import socket
-import ssl
-import sys
-from ipaddress import IPv6Address, ip_address
-from os import PathLike, chmod
-from pathlib import Path
-from socket import AddressFamily, SocketKind
-from typing import Awaitable, List, Tuple, cast, overload
-
-from .. import to_thread
-from ..abc import (
- ConnectedUDPSocket,
- IPAddressType,
- IPSockAddrType,
- SocketListener,
- SocketStream,
- UDPSocket,
- UNIXSocketStream,
-)
-from ..streams.stapled import MultiListener
-from ..streams.tls import TLSStream
-from ._eventloop import get_asynclib
-from ._resources import aclose_forcefully
-from ._synchronization import Event
-from ._tasks import create_task_group, move_on_after
-
-if sys.version_info >= (3, 8):
- from typing import Literal
-else:
- from typing_extensions import Literal
-
-IPPROTO_IPV6 = getattr(socket, "IPPROTO_IPV6", 41) # https://bugs.python.org/issue29515
-
-GetAddrInfoReturnType = List[
- Tuple[AddressFamily, SocketKind, int, str, Tuple[str, int]]
-]
-AnyIPAddressFamily = Literal[
- AddressFamily.AF_UNSPEC, AddressFamily.AF_INET, AddressFamily.AF_INET6
-]
-IPAddressFamily = Literal[AddressFamily.AF_INET, AddressFamily.AF_INET6]
-
-
-# tls_hostname given
-@overload
-async def connect_tcp(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- local_host: IPAddressType | None = ...,
- ssl_context: ssl.SSLContext | None = ...,
- tls_standard_compatible: bool = ...,
- tls_hostname: str,
- happy_eyeballs_delay: float = ...,
-) -> TLSStream:
- ...
-
-
-# ssl_context given
-@overload
-async def connect_tcp(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- local_host: IPAddressType | None = ...,
- ssl_context: ssl.SSLContext,
- tls_standard_compatible: bool = ...,
- tls_hostname: str | None = ...,
- happy_eyeballs_delay: float = ...,
-) -> TLSStream:
- ...
-
-
-# tls=True
-@overload
-async def connect_tcp(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- local_host: IPAddressType | None = ...,
- tls: Literal[True],
- ssl_context: ssl.SSLContext | None = ...,
- tls_standard_compatible: bool = ...,
- tls_hostname: str | None = ...,
- happy_eyeballs_delay: float = ...,
-) -> TLSStream:
- ...
-
-
-# tls=False
-@overload
-async def connect_tcp(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- local_host: IPAddressType | None = ...,
- tls: Literal[False],
- ssl_context: ssl.SSLContext | None = ...,
- tls_standard_compatible: bool = ...,
- tls_hostname: str | None = ...,
- happy_eyeballs_delay: float = ...,
-) -> SocketStream:
- ...
-
-
-# No TLS arguments
-@overload
-async def connect_tcp(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- local_host: IPAddressType | None = ...,
- happy_eyeballs_delay: float = ...,
-) -> SocketStream:
- ...
-
-
-async def connect_tcp(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- local_host: IPAddressType | None = None,
- tls: bool = False,
- ssl_context: ssl.SSLContext | None = None,
- tls_standard_compatible: bool = True,
- tls_hostname: str | None = None,
- happy_eyeballs_delay: float = 0.25,
-) -> SocketStream | TLSStream:
- """
- Connect to a host using the TCP protocol.
-
- This function implements the stateless version of the Happy Eyeballs algorithm (RFC
- 6555). If ``remote_host`` is a host name that resolves to multiple IP addresses,
- each one is tried until one connection attempt succeeds. If the first attempt does
- not connect within 250 milliseconds, a second attempt is started using the next
- address in the list, and so on. On IPv6 enabled systems, an IPv6 address (if
- available) is tried first.
-
- When the connection has been established, a TLS handshake will be done if either
- ``ssl_context`` or ``tls_hostname`` is not ``None``, or if ``tls`` is ``True``.
-
- :param remote_host: the IP address or host name to connect to
- :param remote_port: port on the target host to connect to
- :param local_host: the interface address or name to bind the socket to before connecting
- :param tls: ``True`` to do a TLS handshake with the connected stream and return a
- :class:`~anyio.streams.tls.TLSStream` instead
- :param ssl_context: the SSL context object to use (if omitted, a default context is created)
- :param tls_standard_compatible: If ``True``, performs the TLS shutdown handshake before closing
- the stream and requires that the server does this as well. Otherwise,
- :exc:`~ssl.SSLEOFError` may be raised during reads from the stream.
- Some protocols, such as HTTP, require this option to be ``False``.
- See :meth:`~ssl.SSLContext.wrap_socket` for details.
- :param tls_hostname: host name to check the server certificate against (defaults to the value
- of ``remote_host``)
- :param happy_eyeballs_delay: delay (in seconds) before starting the next connection attempt
- :return: a socket stream object if no TLS handshake was done, otherwise a TLS stream
- :raises OSError: if the connection attempt fails
-
- """
- # Placed here due to https://github.com/python/mypy/issues/7057
- connected_stream: SocketStream | None = None
-
- async def try_connect(remote_host: str, event: Event) -> None:
- nonlocal connected_stream
- try:
- stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
- except OSError as exc:
- oserrors.append(exc)
- return
- else:
- if connected_stream is None:
- connected_stream = stream
- tg.cancel_scope.cancel()
- else:
- await stream.aclose()
- finally:
- event.set()
-
- asynclib = get_asynclib()
- local_address: IPSockAddrType | None = None
- family = socket.AF_UNSPEC
- if local_host:
- gai_res = await getaddrinfo(str(local_host), None)
- family, *_, local_address = gai_res[0]
-
- target_host = str(remote_host)
- try:
- addr_obj = ip_address(remote_host)
- except ValueError:
- # getaddrinfo() will raise an exception if name resolution fails
- gai_res = await getaddrinfo(
- target_host, remote_port, family=family, type=socket.SOCK_STREAM
- )
-
- # Organize the list so that the first address is an IPv6 address (if available) and the
- # second one is an IPv4 address. The rest can be in whatever order.
- v6_found = v4_found = False
- target_addrs: list[tuple[socket.AddressFamily, str]] = []
- for af, *rest, sa in gai_res:
- if af == socket.AF_INET6 and not v6_found:
- v6_found = True
- target_addrs.insert(0, (af, sa[0]))
- elif af == socket.AF_INET and not v4_found and v6_found:
- v4_found = True
- target_addrs.insert(1, (af, sa[0]))
- else:
- target_addrs.append((af, sa[0]))
- else:
- if isinstance(addr_obj, IPv6Address):
- target_addrs = [(socket.AF_INET6, addr_obj.compressed)]
- else:
- target_addrs = [(socket.AF_INET, addr_obj.compressed)]
-
- oserrors: list[OSError] = []
- async with create_task_group() as tg:
- for i, (af, addr) in enumerate(target_addrs):
- event = Event()
- tg.start_soon(try_connect, addr, event)
- with move_on_after(happy_eyeballs_delay):
- await event.wait()
-
- if connected_stream is None:
- cause = oserrors[0] if len(oserrors) == 1 else asynclib.ExceptionGroup(oserrors)
- raise OSError("All connection attempts failed") from cause
-
- if tls or tls_hostname or ssl_context:
- try:
- return await TLSStream.wrap(
- connected_stream,
- server_side=False,
- hostname=tls_hostname or str(remote_host),
- ssl_context=ssl_context,
- standard_compatible=tls_standard_compatible,
- )
- except BaseException:
- await aclose_forcefully(connected_stream)
- raise
-
- return connected_stream
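-
-# Illustrative usage sketch (hypothetical host; not part of this module):
-#   async def main():
-#       stream = await connect_tcp("example.com", 443, tls=True)
-#       await stream.aclose()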
-
-
-async def connect_unix(path: str | PathLike[str]) -> UNIXSocketStream:
- """
- Connect to the given UNIX socket.
-
- Not available on Windows.
-
- :param path: path to the socket
- :return: a socket stream object
-
- """
- path = str(Path(path))
- return await get_asynclib().connect_unix(path)
-
-
-async def create_tcp_listener(
- *,
- local_host: IPAddressType | None = None,
- local_port: int = 0,
- family: AnyIPAddressFamily = socket.AddressFamily.AF_UNSPEC,
- backlog: int = 65536,
- reuse_port: bool = False,
-) -> MultiListener[SocketStream]:
- """
- Create a TCP socket listener.
-
- :param local_port: port number to listen on
- :param local_host: IP address of the interface to listen on. If omitted, listen on
- all IPv4 and IPv6 interfaces. To listen on all interfaces on a specific address
- family, use ``0.0.0.0`` for IPv4 or ``::`` for IPv6.
- :param family: address family (used if ``local_host`` was omitted)
- :param backlog: maximum number of queued incoming connections (up to a maximum of
- 2**16, or 65536)
- :param reuse_port: ``True`` to allow multiple sockets to bind to the same
- address/port (not supported on Windows)
- :return: a list of listener objects
-
- """
- asynclib = get_asynclib()
- backlog = min(backlog, 65536)
- local_host = str(local_host) if local_host is not None else None
- gai_res = await getaddrinfo(
- local_host, # type: ignore[arg-type]
- local_port,
- family=family,
- type=socket.SocketKind.SOCK_STREAM if sys.platform == "win32" else 0,
- flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG,
- )
- listeners: list[SocketListener] = []
- try:
- # The set() is here to work around a glibc bug:
- # https://sourceware.org/bugzilla/show_bug.cgi?id=14969
- sockaddr: tuple[str, int] | tuple[str, int, int, int]
- for fam, kind, *_, sockaddr in sorted(set(gai_res)):
- # Workaround for an uvloop bug where we don't get the correct scope ID for
- # IPv6 link-local addresses when passing type=socket.SOCK_STREAM to
- # getaddrinfo(): https://github.com/MagicStack/uvloop/issues/539
- if sys.platform != "win32" and kind is not SocketKind.SOCK_STREAM:
- continue
-
- raw_socket = socket.socket(fam)
- raw_socket.setblocking(False)
-
- # For Windows, enable exclusive address use. For others, enable address reuse.
- if sys.platform == "win32":
- raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
- else:
- raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
-
- if reuse_port:
- raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
-
- # If only IPv6 was requested, disable dual stack operation
- if fam == socket.AF_INET6:
- raw_socket.setsockopt(IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
-
- # Workaround for #554
- if "%" in sockaddr[0]:
- addr, scope_id = sockaddr[0].split("%", 1)
- sockaddr = (addr, sockaddr[1], 0, int(scope_id))
-
- raw_socket.bind(sockaddr)
- raw_socket.listen(backlog)
- listener = asynclib.TCPSocketListener(raw_socket)
- listeners.append(listener)
- except BaseException:
- for listener in listeners:
- await listener.aclose()
-
- raise
-
- return MultiListener(listeners)
-
-
-async def create_unix_listener(
- path: str | PathLike[str],
- *,
- mode: int | None = None,
- backlog: int = 65536,
-) -> SocketListener:
- """
- Create a UNIX socket listener.
-
- Not available on Windows.
-
- :param path: path of the socket
- :param mode: permissions to set on the socket
- :param backlog: maximum number of queued incoming connections (up to a maximum of 2**16, or
- 65536)
- :return: a listener object
-
- .. versionchanged:: 3.0
- If a socket already exists on the file system in the given path, it will be removed first.
-
- """
- path_str = str(path)
- path = Path(path)
- if path.is_socket():
- path.unlink()
-
- backlog = min(backlog, 65536)
- raw_socket = socket.socket(socket.AF_UNIX)
- raw_socket.setblocking(False)
- try:
- await to_thread.run_sync(raw_socket.bind, path_str, cancellable=True)
- if mode is not None:
- await to_thread.run_sync(chmod, path_str, mode, cancellable=True)
-
- raw_socket.listen(backlog)
- return get_asynclib().UNIXSocketListener(raw_socket)
- except BaseException:
- raw_socket.close()
- raise
-
-
-async def create_udp_socket(
- family: AnyIPAddressFamily = AddressFamily.AF_UNSPEC,
- *,
- local_host: IPAddressType | None = None,
- local_port: int = 0,
- reuse_port: bool = False,
-) -> UDPSocket:
- """
- Create a UDP socket.
-
- If ``local_port`` has been given, the socket will be bound to this port on the local
- machine, making this socket suitable for providing UDP based services.
-
- :param family: address family (``AF_INET`` or ``AF_INET6``) – automatically determined from
- ``local_host`` if omitted
- :param local_host: IP address or host name of the local interface to bind to
- :param local_port: local port to bind to
- :param reuse_port: ``True`` to allow multiple sockets to bind to the same address/port
- (not supported on Windows)
- :return: a UDP socket
-
- """
- if family is AddressFamily.AF_UNSPEC and not local_host:
- raise ValueError('Either "family" or "local_host" must be given')
-
- if local_host:
- gai_res = await getaddrinfo(
- str(local_host),
- local_port,
- family=family,
- type=socket.SOCK_DGRAM,
- flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG,
- )
- family = cast(AnyIPAddressFamily, gai_res[0][0])
- local_address = gai_res[0][-1]
- elif family is AddressFamily.AF_INET6:
- local_address = ("::", 0)
- else:
- local_address = ("0.0.0.0", 0)
-
- return await get_asynclib().create_udp_socket(
- family, local_address, None, reuse_port
- )
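-
-# Illustrative usage sketch (hypothetical port; not part of this module):
-#   async def main():
-#       async with await create_udp_socket(local_host="127.0.0.1") as udp:
-#           await udp.sendto(b"ping", "127.0.0.1", 9999)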
-
-
-async def create_connected_udp_socket(
- remote_host: IPAddressType,
- remote_port: int,
- *,
- family: AnyIPAddressFamily = AddressFamily.AF_UNSPEC,
- local_host: IPAddressType | None = None,
- local_port: int = 0,
- reuse_port: bool = False,
-) -> ConnectedUDPSocket:
- """
- Create a connected UDP socket.
-
- Connected UDP sockets can only communicate with the specified remote host/port, and any packets
- sent from other sources are dropped.
-
- :param remote_host: remote host to set as the default target
- :param remote_port: port on the remote host to set as the default target
- :param family: address family (``AF_INET`` or ``AF_INET6``) – automatically determined from
- ``local_host`` or ``remote_host`` if omitted
- :param local_host: IP address or host name of the local interface to bind to
- :param local_port: local port to bind to
- :param reuse_port: ``True`` to allow multiple sockets to bind to the same address/port
- (not supported on Windows)
- :return: a connected UDP socket
-
- """
- local_address = None
- if local_host:
- gai_res = await getaddrinfo(
- str(local_host),
- local_port,
- family=family,
- type=socket.SOCK_DGRAM,
- flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG,
- )
- family = cast(AnyIPAddressFamily, gai_res[0][0])
- local_address = gai_res[0][-1]
-
- gai_res = await getaddrinfo(
- str(remote_host), remote_port, family=family, type=socket.SOCK_DGRAM
- )
- family = cast(AnyIPAddressFamily, gai_res[0][0])
- remote_address = gai_res[0][-1]
-
- return await get_asynclib().create_udp_socket(
- family, local_address, remote_address, reuse_port
- )
-
-
-async def getaddrinfo(
- host: bytearray | bytes | str,
- port: str | int | None,
- *,
- family: int | AddressFamily = 0,
- type: int | SocketKind = 0,
- proto: int = 0,
- flags: int = 0,
-) -> GetAddrInfoReturnType:
- """
- Look up a numeric IP address given a host name.
-
- Internationalized domain names are translated according to the (non-transitional) IDNA 2008
- standard.
-
- .. note:: 4-tuple IPv6 socket addresses are automatically converted to 2-tuples of
- (host, port), unlike what :func:`socket.getaddrinfo` does.
-
- :param host: host name
- :param port: port number
- :param family: socket family (``AF_INET``, ...)
- :param type: socket type (``SOCK_STREAM``, ...)
- :param proto: protocol number
- :param flags: flags to pass to upstream ``getaddrinfo()``
- :return: list of tuples containing (family, type, proto, canonname, sockaddr)
-
- .. seealso:: :func:`socket.getaddrinfo`
-
- """
- # Handle unicode hostnames
- if isinstance(host, str):
- try:
- encoded_host = host.encode("ascii")
- except UnicodeEncodeError:
- import idna
-
- encoded_host = idna.encode(host, uts46=True)
- else:
- encoded_host = host
-
- gai_res = await get_asynclib().getaddrinfo(
- encoded_host, port, family=family, type=type, proto=proto, flags=flags
- )
- return [
- (family, type, proto, canonname, convert_ipv6_sockaddr(sockaddr))
- for family, type, proto, canonname, sockaddr in gai_res
- ]
-
-
-def getnameinfo(sockaddr: IPSockAddrType, flags: int = 0) -> Awaitable[tuple[str, str]]:
- """
- Look up the host name of an IP address.
-
- :param sockaddr: socket address (e.g. (ipaddress, port) for IPv4)
- :param flags: flags to pass to upstream ``getnameinfo()``
- :return: a tuple of (host name, service name)
-
- .. seealso:: :func:`socket.getnameinfo`
-
- """
- return get_asynclib().getnameinfo(sockaddr, flags)
-
-
-def wait_socket_readable(sock: socket.socket) -> Awaitable[None]:
- """
- Wait until the given socket has data to be read.
-
- This does **NOT** work on Windows when using the asyncio backend with a proactor event loop
- (default on py3.8+).
-
- .. warning:: Only use this on raw sockets that have not been wrapped by any higher level
- constructs like socket streams!
-
- :param sock: a socket object
- :raises ~anyio.ClosedResourceError: if the socket was closed while waiting for the
- socket to become readable
- :raises ~anyio.BusyResourceError: if another task is already waiting for the socket
- to become readable
-
- """
- return get_asynclib().wait_socket_readable(sock)
-
-
-def wait_socket_writable(sock: socket.socket) -> Awaitable[None]:
- """
- Wait until the given socket can be written to.
-
- This does **NOT** work on Windows when using the asyncio backend with a proactor event loop
- (default on py3.8+).
-
- .. warning:: Only use this on raw sockets that have not been wrapped by any higher level
- constructs like socket streams!
-
- :param sock: a socket object
- :raises ~anyio.ClosedResourceError: if the socket was closed while waiting for the
- socket to become writable
- :raises ~anyio.BusyResourceError: if another task is already waiting for the socket
- to become writable
-
- """
- return get_asynclib().wait_socket_writable(sock)
-
-
-#
-# Private API
-#
-
-
-def convert_ipv6_sockaddr(
- sockaddr: tuple[str, int, int, int] | tuple[str, int]
-) -> tuple[str, int]:
- """
- Convert a 4-tuple IPv6 socket address to a 2-tuple (address, port) format.
-
- If the scope ID is nonzero, it is added to the address, separated with ``%``.
- Otherwise the flow id and scope id are simply cut off from the tuple.
- Any other kinds of socket addresses are returned as-is.
-
- :param sockaddr: the result of :meth:`~socket.socket.getsockname`
- :return: the converted socket address
-
- """
- # This is more complicated than it should be because of MyPy
- if isinstance(sockaddr, tuple) and len(sockaddr) == 4:
- host, port, flowinfo, scope_id = cast(Tuple[str, int, int, int], sockaddr)
- if scope_id:
- # PyPy (as of v7.3.11) leaves the interface name in the result, so
- # we discard it and only get the scope ID from the end
- # (https://foss.heptapod.net/pypy/pypy/-/issues/3938)
- host = host.split("%")[0]
-
- # Add scope_id to the address
- return f"{host}%{scope_id}", port
- else:
- return host, port
- else:
- return cast(Tuple[str, int], sockaddr)
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/glifLib.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/glifLib.py
deleted file mode 100644
index 6dee9db302f51525b69d3d28fcd704be8cce2212..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/glifLib.py
+++ /dev/null
@@ -1,2017 +0,0 @@
-"""
-glifLib.py -- Generic module for reading and writing the .glif format.
-
-More info about the .glif format (GLyphInterchangeFormat) can be found here:
-
- http://unifiedfontobject.org
-
-The main class in this module is GlyphSet. It manages a set of .glif files
-in a folder. It offers two ways to read glyph data, and one way to write
-glyph data. See the class doc string for details.
-"""
-
-from __future__ import annotations
-
-import logging
-import enum
-from warnings import warn
-from collections import OrderedDict
-import fs
-import fs.base
-import fs.errors
-import fs.osfs
-import fs.path
-from fontTools.misc.textTools import tobytes
-from fontTools.misc import plistlib
-from fontTools.pens.pointPen import AbstractPointPen, PointToSegmentPen
-from fontTools.ufoLib.errors import GlifLibError
-from fontTools.ufoLib.filenames import userNameToFileName
-from fontTools.ufoLib.validators import (
- genericTypeValidator,
- colorValidator,
- guidelinesValidator,
- anchorsValidator,
- identifierValidator,
- imageValidator,
- glyphLibValidator,
-)
-from fontTools.misc import etree
-from fontTools.ufoLib import _UFOBaseIO, UFOFormatVersion
-from fontTools.ufoLib.utils import numberTypes, _VersionTupleEnumMixin
-
-
-__all__ = [
- "GlyphSet",
- "GlifLibError",
- "readGlyphFromString",
- "writeGlyphToString",
- "glyphNameToFileName",
-]
-
-logger = logging.getLogger(__name__)
-
-
-# ---------
-# Constants
-# ---------
-
-CONTENTS_FILENAME = "contents.plist"
-LAYERINFO_FILENAME = "layerinfo.plist"
-
-
-class GLIFFormatVersion(tuple, _VersionTupleEnumMixin, enum.Enum):
- FORMAT_1_0 = (1, 0)
- FORMAT_2_0 = (2, 0)
-
- @classmethod
- def default(cls, ufoFormatVersion=None):
- if ufoFormatVersion is not None:
- return max(cls.supported_versions(ufoFormatVersion))
- return super().default()
-
- @classmethod
- def supported_versions(cls, ufoFormatVersion=None):
- if ufoFormatVersion is None:
- # if ufo format unspecified, return all the supported GLIF formats
- return super().supported_versions()
- # else only return the GLIF formats supported by the given UFO format
- versions = {cls.FORMAT_1_0}
- if ufoFormatVersion >= UFOFormatVersion.FORMAT_3_0:
- versions.add(cls.FORMAT_2_0)
- return frozenset(versions)
-
-
-# workaround for py3.11, see https://github.com/fonttools/fonttools/pull/2655
-GLIFFormatVersion.__str__ = _VersionTupleEnumMixin.__str__
-
-
-# ------------
-# Simple Glyph
-# ------------
-
-
-class Glyph:
-
- """
- Minimal glyph object. It has no glyph attributes until either
- the draw() or the drawPoints() method has been called.
- """
-
- def __init__(self, glyphName, glyphSet):
- self.glyphName = glyphName
- self.glyphSet = glyphSet
-
- def draw(self, pen, outputImpliedClosingLine=False):
- """
- Draw this glyph onto a *FontTools* Pen.
- """
- pointPen = PointToSegmentPen(
- pen, outputImpliedClosingLine=outputImpliedClosingLine
- )
- self.drawPoints(pointPen)
-
- def drawPoints(self, pointPen):
- """
- Draw this glyph onto a PointPen.
- """
- self.glyphSet.readGlyph(self.glyphName, self, pointPen)
-
-
-# ---------
-# Glyph Set
-# ---------
-
-
-class GlyphSet(_UFOBaseIO):
-
- """
- GlyphSet manages a set of .glif files inside one directory.
-
- GlyphSet's constructor takes a path to an existing directory as its
- first argument. Reading glyph data can either be done through the
- readGlyph() method, or by using GlyphSet's dictionary interface, where
- the keys are glyph names and the values are (very) simple glyph objects.
-
- To write a glyph to the glyph set, you use the writeGlyph() method.
- The simple glyph objects returned through the dict interface do not
- support writing, they are just a convenient way to get at the glyph data.
- """
-
- glyphClass = Glyph
-
- def __init__(
- self,
- path,
- glyphNameToFileNameFunc=None,
- ufoFormatVersion=None,
- validateRead=True,
- validateWrite=True,
- expectContentsFile=False,
- ):
- """
- 'path' should be a path (string) to an existing local directory, or
- an instance of the fs.base.FS class.
-
- The optional 'glyphNameToFileNameFunc' argument must be a callback
- function that takes two arguments: a glyph name and a list of all
- existing filenames (if any exist). It should return a file name
- (including the .glif extension). The glyphNameToFileName function
- is called whenever a file name is created for a given glyph name.
-
- ``validateRead`` will validate read operations. Its default is ``True``.
- ``validateWrite`` will validate write operations. Its default is ``True``.
- ``expectContentsFile`` will raise a GlifLibError if a contents.plist file is
- not found on the glyph set file system. This should be set to ``True`` if you
- are reading an existing UFO and ``False`` if you create a fresh glyph set.
- """
- try:
- ufoFormatVersion = UFOFormatVersion(ufoFormatVersion)
- except ValueError as e:
- from fontTools.ufoLib.errors import UnsupportedUFOFormat
-
- raise UnsupportedUFOFormat(
- f"Unsupported UFO format: {ufoFormatVersion!r}"
- ) from e
-
- if hasattr(path, "__fspath__"): # support os.PathLike objects
- path = path.__fspath__()
-
- if isinstance(path, str):
- try:
- filesystem = fs.osfs.OSFS(path)
- except fs.errors.CreateFailed:
- raise GlifLibError("No glyphs directory '%s'" % path)
- self._shouldClose = True
- elif isinstance(path, fs.base.FS):
- filesystem = path
- try:
- filesystem.check()
- except fs.errors.FilesystemClosed:
- raise GlifLibError("the filesystem '%s' is closed" % filesystem)
- self._shouldClose = False
- else:
- raise TypeError(
- "Expected a path string or fs object, found %s" % type(path).__name__
- )
- try:
- path = filesystem.getsyspath("/")
- except fs.errors.NoSysPath:
- # network or in-memory FS may not map to the local one
- path = str(filesystem)
- # 'dirName' is kept for backward compatibility only, but it's DEPRECATED
- # as it's not guaranteed that it maps to an existing OSFS directory.
- # Client could use the FS api via the `self.fs` attribute instead.
- self.dirName = fs.path.parts(path)[-1]
- self.fs = filesystem
- # if glyphSet contains no 'contents.plist', we consider it empty
- self._havePreviousFile = filesystem.exists(CONTENTS_FILENAME)
- if expectContentsFile and not self._havePreviousFile:
- raise GlifLibError(f"{CONTENTS_FILENAME} is missing.")
- # attribute kept for backward compatibility
- self.ufoFormatVersion = ufoFormatVersion.major
- self.ufoFormatVersionTuple = ufoFormatVersion
- if glyphNameToFileNameFunc is None:
- glyphNameToFileNameFunc = glyphNameToFileName
- self.glyphNameToFileName = glyphNameToFileNameFunc
- self._validateRead = validateRead
- self._validateWrite = validateWrite
- self._existingFileNames: set[str] | None = None
- self._reverseContents = None
-
- self.rebuildContents()
-
- def rebuildContents(self, validateRead=None):
- """
- Rebuild the contents dict by loading contents.plist.
-
- ``validateRead`` will validate the data, by default it is set to the
- class's ``validateRead`` value, can be overridden.
- """
- if validateRead is None:
- validateRead = self._validateRead
- contents = self._getPlist(CONTENTS_FILENAME, {})
- # validate the contents
- if validateRead:
- invalidFormat = False
- if not isinstance(contents, dict):
- invalidFormat = True
- else:
- for name, fileName in contents.items():
- if not isinstance(name, str):
- invalidFormat = True
- if not isinstance(fileName, str):
- invalidFormat = True
- elif not self.fs.exists(fileName):
- raise GlifLibError(
- "%s references a file that does not exist: %s"
- % (CONTENTS_FILENAME, fileName)
- )
- if invalidFormat:
- raise GlifLibError("%s is not properly formatted" % CONTENTS_FILENAME)
- self.contents = contents
- self._existingFileNames = None
- self._reverseContents = None
-
- def getReverseContents(self):
- """
- Return a reversed dict of self.contents, mapping file names to
- glyph names. This is primarily an aid for custom glyph name to file
- name schemes that want to make sure they don't generate duplicate
- file names. The file names are converted to lowercase so we can
- reliably check for duplicates that only differ in case, which is
- important for case-insensitive file systems.
- """
- if self._reverseContents is None:
- d = {}
- for k, v in self.contents.items():
- d[v.lower()] = k
- self._reverseContents = d
- return self._reverseContents
-
- def writeContents(self):
- """
- Write the contents.plist file out to disk. Call this method when
- you're done writing glyphs.
- """
- self._writePlist(CONTENTS_FILENAME, self.contents)
-
- # layer info
-
- def readLayerInfo(self, info, validateRead=None):
- """
- ``validateRead`` will validate the data, by default it is set to the
- class's ``validateRead`` value, can be overridden.
- """
- if validateRead is None:
- validateRead = self._validateRead
- infoDict = self._getPlist(LAYERINFO_FILENAME, {})
- if validateRead:
- if not isinstance(infoDict, dict):
- raise GlifLibError("layerinfo.plist is not properly formatted.")
- infoDict = validateLayerInfoVersion3Data(infoDict)
- # populate the object
- for attr, value in infoDict.items():
- try:
- setattr(info, attr, value)
- except AttributeError:
- raise GlifLibError(
- "The supplied layer info object does not support setting a necessary attribute (%s)."
- % attr
- )
-
- def writeLayerInfo(self, info, validateWrite=None):
- """
- ``validateWrite`` will validate the data, by default it is set to the
- class's ``validateWrite`` value, can be overridden.
- """
- if validateWrite is None:
- validateWrite = self._validateWrite
- if self.ufoFormatVersionTuple.major < 3:
- raise GlifLibError(
- "layerinfo.plist is not allowed in UFO %d."
- % self.ufoFormatVersionTuple.major
- )
- # gather data
- infoData = {}
- for attr in layerInfoVersion3ValueData.keys():
- if hasattr(info, attr):
- try:
- value = getattr(info, attr)
- except AttributeError:
- raise GlifLibError(
- "The supplied info object does not support getting a necessary attribute (%s)."
- % attr
- )
- if value is None or (attr == "lib" and not value):
- continue
- infoData[attr] = value
- if infoData:
- # validate
- if validateWrite:
- infoData = validateLayerInfoVersion3Data(infoData)
- # write file
- self._writePlist(LAYERINFO_FILENAME, infoData)
- elif self._havePreviousFile and self.fs.exists(LAYERINFO_FILENAME):
- # data empty, remove existing file
- self.fs.remove(LAYERINFO_FILENAME)
-
- def getGLIF(self, glyphName):
- """
- Get the raw GLIF text for a given glyph name. This only works
- for GLIF files that are already on disk.
-
- This method is useful in situations when the raw XML needs to be
- read from a glyph set for a particular glyph before fully parsing
- it into an object structure via the readGlyph method.
-
- Raises KeyError if 'glyphName' is not in contents.plist, or
- GlifLibError if the file associated with it can't be found.
- """
- fileName = self.contents[glyphName]
- try:
- return self.fs.readbytes(fileName)
- except fs.errors.ResourceNotFound:
- raise GlifLibError(
- "The file '%s' associated with glyph '%s' in contents.plist "
- "does not exist on %s" % (fileName, glyphName, self.fs)
- )
-
- def getGLIFModificationTime(self, glyphName):
- """
- Returns the modification time for the GLIF file with 'glyphName', as
- a floating point number giving the number of seconds since the epoch.
- Return None if the associated file does not exist or the underlying
- filesystem does not support getting modified times.
- Raises KeyError if the glyphName is not in contents.plist.
- """
- fileName = self.contents[glyphName]
- return self.getFileModificationTime(fileName)
-
- # reading/writing API
-
- def readGlyph(self, glyphName, glyphObject=None, pointPen=None, validate=None):
- """
- Read a .glif file for 'glyphName' from the glyph set. The
- 'glyphObject' argument can be any kind of object (even None);
- the readGlyph() method will attempt to set the following
- attributes on it:
-
- width
- the advance width of the glyph
- height
- the advance height of the glyph
- unicodes
- a list of unicode values for this glyph
- note
- a string
- lib
- a dictionary containing custom data
- image
- a dictionary containing image data
- guidelines
- a list of guideline data dictionaries
- anchors
- a list of anchor data dictionaries
-
- All attributes are optional, in two ways:
-
- 1) An attribute *won't* be set if the .glif file doesn't
- contain data for it. 'glyphObject' will have to deal
- with default values itself.
- 2) If setting the attribute fails with an AttributeError
- (for example if the 'glyphObject' attribute is read-
- only), readGlyph() will not propagate that exception,
- but ignore that attribute.
-
- To retrieve outline information, you need to pass an object
- conforming to the PointPen protocol as the 'pointPen' argument.
- This argument may be None if you don't need the outline data.
-
- readGlyph() will raise KeyError if the glyph is not present in
- the glyph set.
-
- ``validate`` will validate the data, by default it is set to the
- class's ``validateRead`` value, can be overridden.
- """
- if validate is None:
- validate = self._validateRead
- text = self.getGLIF(glyphName)
- try:
- tree = _glifTreeFromString(text)
- formatVersions = GLIFFormatVersion.supported_versions(
- self.ufoFormatVersionTuple
- )
- _readGlyphFromTree(
- tree,
- glyphObject,
- pointPen,
- formatVersions=formatVersions,
- validate=validate,
- )
- except GlifLibError as glifLibError:
- # Re-raise with a note that gives extra context, describing where
- # the error occurred.
- fileName = self.contents[glyphName]
- try:
- glifLocation = f"'{self.fs.getsyspath(fileName)}'"
- except fs.errors.NoSysPath:
- # Network or in-memory FS may not map to a local path, so use
- # the best string representation we have.
- glifLocation = f"'{fileName}' from '{str(self.fs)}'"
-
- glifLibError._add_note(
- f"The issue is in glyph '{glyphName}', located in {glifLocation}."
- )
- raise
-
- def writeGlyph(
- self,
- glyphName,
- glyphObject=None,
- drawPointsFunc=None,
- formatVersion=None,
- validate=None,
- ):
- """
- Write a .glif file for 'glyphName' to the glyph set. The
- 'glyphObject' argument can be any kind of object (even None);
- the writeGlyph() method will attempt to get the following
- attributes from it:
-
- width
- the advance width of the glyph
- height
- the advance height of the glyph
- unicodes
- a list of unicode values for this glyph
- note
- a string
- lib
- a dictionary containing custom data
- image
- a dictionary containing image data
- guidelines
- a list of guideline data dictionaries
- anchors
- a list of anchor data dictionaries
-
- All attributes are optional: if 'glyphObject' doesn't
- have the attribute, it will simply be skipped.
-
- To write outline data to the .glif file, writeGlyph() needs
- a function (any callable object actually) that will take one
- argument: an object that conforms to the PointPen protocol.
- The function will be called by writeGlyph(); it has to call the
- proper PointPen methods to transfer the outline to the .glif file.
-
- The GLIF format version will be chosen based on the ufoFormatVersion
- passed during the creation of this object. If a particular format
- version is desired, it can be passed with the formatVersion argument.
- The formatVersion argument accepts either a tuple of integers for
- (major, minor), or a single integer for the major digit only (with
- minor digit implied as 0).
-
- An UnsupportedGLIFFormat exception is raised if the requested GLIF
- formatVersion is not supported.
-
-        ``validate`` will validate the data. By default it is set to the
-        class's ``validateWrite`` value, but it can be overridden here.
- """
- if formatVersion is None:
- formatVersion = GLIFFormatVersion.default(self.ufoFormatVersionTuple)
- else:
- try:
- formatVersion = GLIFFormatVersion(formatVersion)
- except ValueError as e:
- from fontTools.ufoLib.errors import UnsupportedGLIFFormat
-
- raise UnsupportedGLIFFormat(
- f"Unsupported GLIF format version: {formatVersion!r}"
- ) from e
- if formatVersion not in GLIFFormatVersion.supported_versions(
- self.ufoFormatVersionTuple
- ):
- from fontTools.ufoLib.errors import UnsupportedGLIFFormat
-
- raise UnsupportedGLIFFormat(
- f"Unsupported GLIF format version ({formatVersion!s}) "
- f"for UFO format version {self.ufoFormatVersionTuple!s}."
- )
- if validate is None:
- validate = self._validateWrite
- fileName = self.contents.get(glyphName)
- if fileName is None:
- if self._existingFileNames is None:
- self._existingFileNames = {
- fileName.lower() for fileName in self.contents.values()
- }
- fileName = self.glyphNameToFileName(glyphName, self._existingFileNames)
- self.contents[glyphName] = fileName
- self._existingFileNames.add(fileName.lower())
- if self._reverseContents is not None:
- self._reverseContents[fileName.lower()] = glyphName
- data = _writeGlyphToBytes(
- glyphName,
- glyphObject,
- drawPointsFunc,
- formatVersion=formatVersion,
- validate=validate,
- )
- if (
- self._havePreviousFile
- and self.fs.exists(fileName)
- and data == self.fs.readbytes(fileName)
- ):
- return
- self.fs.writebytes(fileName, data)
-
- def deleteGlyph(self, glyphName):
- """Permanently delete the glyph from the glyph set on disk. Will
- raise KeyError if the glyph is not present in the glyph set.
- """
- fileName = self.contents[glyphName]
- self.fs.remove(fileName)
- if self._existingFileNames is not None:
- self._existingFileNames.remove(fileName.lower())
- if self._reverseContents is not None:
- del self._reverseContents[fileName.lower()]
- del self.contents[glyphName]
-
- # dict-like support
-
- def keys(self):
- return list(self.contents.keys())
-
- def has_key(self, glyphName):
- return glyphName in self.contents
-
- __contains__ = has_key
-
- def __len__(self):
- return len(self.contents)
-
- def __getitem__(self, glyphName):
- if glyphName not in self.contents:
- raise KeyError(glyphName)
- return self.glyphClass(glyphName, self)
-
- # quickly fetch unicode values
-
- def getUnicodes(self, glyphNames=None):
- """
- Return a dictionary that maps glyph names to lists containing
- the unicode value[s] for that glyph, if any. This parses the .glif
- files partially, so it is a lot faster than parsing all files completely.
- By default this checks all glyphs, but a subset can be passed with glyphNames.
- """
- unicodes = {}
- if glyphNames is None:
- glyphNames = self.contents.keys()
- for glyphName in glyphNames:
- text = self.getGLIF(glyphName)
- unicodes[glyphName] = _fetchUnicodes(text)
- return unicodes
-
- def getComponentReferences(self, glyphNames=None):
- """
- Return a dictionary that maps glyph names to lists containing the
- base glyph name of components in the glyph. This parses the .glif
- files partially, so it is a lot faster than parsing all files completely.
- By default this checks all glyphs, but a subset can be passed with glyphNames.
- """
- components = {}
- if glyphNames is None:
- glyphNames = self.contents.keys()
- for glyphName in glyphNames:
- text = self.getGLIF(glyphName)
- components[glyphName] = _fetchComponentBases(text)
- return components
-
- def getImageReferences(self, glyphNames=None):
- """
- Return a dictionary that maps glyph names to the file name of the image
- referenced by the glyph. This parses the .glif files partially, so it is a
- lot faster than parsing all files completely.
- By default this checks all glyphs, but a subset can be passed with glyphNames.
- """
- images = {}
- if glyphNames is None:
- glyphNames = self.contents.keys()
- for glyphName in glyphNames:
- text = self.getGLIF(glyphName)
- images[glyphName] = _fetchImageFileName(text)
- return images
-
- def close(self):
- if self._shouldClose:
- self.fs.close()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, exc_tb):
- self.close()
-
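-# A minimal usage sketch of the glyph set class above (GlyphSet, as in
-# fontTools.ufoLib.glifLib). The UFO path and the glyph name "A" are
-# placeholders: readGlyph() fills a plain object and replays the outline
-# into a point pen, getUnicodes() does a fast partial parse, and
-# writeGlyph() serializes the glyph back to disk.
-def _exampleGlyphSetUsage(glyphsPath="MyFont.ufo/glyphs"):
-    from fontTools.pens.recordingPen import RecordingPointPen
-
-    class _Glyph:
-        pass
-
-    glyphSet = GlyphSet(glyphsPath)
-    glyph = _Glyph()
-    pen = RecordingPointPen()
-    glyphSet.readGlyph("A", glyphObject=glyph, pointPen=pen)
-    unicodes = glyphSet.getUnicodes(["A"])
-    glyphSet.writeGlyph("A", glyphObject=glyph, drawPointsFunc=pen.replay)
-    return glyph, unicodes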
-
-# -----------------------
-# Glyph Name to File Name
-# -----------------------
-
-
-def glyphNameToFileName(glyphName, existingFileNames):
- """
- Wrapper around the userNameToFileName function in filenames.py
-
- Note that existingFileNames should be a set for large glyphsets
- or performance will suffer.
- """
- if existingFileNames is None:
- existingFileNames = set()
- return userNameToFileName(glyphName, existing=existingFileNames, suffix=".glif")
-
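-# A small sketch of the helper above. Under the UFO 3 naming rules
-# implemented by userNameToFileName, capital letters gain a trailing
-# underscore, so "A.alt" is expected to map to "A_.alt.glif". The set of
-# existing (lower-cased) file names is consulted to avoid clashes on
-# case-insensitive file systems.
-def _exampleGlyphNameToFileName():
-    return glyphNameToFileName("A.alt", set())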
-
-# -----------------------
-# GLIF To and From String
-# -----------------------
-
-
-def readGlyphFromString(
- aString,
- glyphObject=None,
- pointPen=None,
- formatVersions=None,
- validate=True,
-):
- """
- Read .glif data from a string into a glyph object.
-
- The 'glyphObject' argument can be any kind of object (even None);
- the readGlyphFromString() method will attempt to set the following
- attributes on it:
-
- width
- the advance width of the glyph
- height
- the advance height of the glyph
- unicodes
- a list of unicode values for this glyph
- note
- a string
- lib
- a dictionary containing custom data
- image
- a dictionary containing image data
- guidelines
- a list of guideline data dictionaries
- anchors
- a list of anchor data dictionaries
-
- All attributes are optional, in two ways:
-
- 1) An attribute *won't* be set if the .glif file doesn't
- contain data for it. 'glyphObject' will have to deal
- with default values itself.
- 2) If setting the attribute fails with an AttributeError
- (for example if the 'glyphObject' attribute is read-
- only), readGlyphFromString() will not propagate that
- exception, but ignore that attribute.
-
- To retrieve outline information, you need to pass an object
- conforming to the PointPen protocol as the 'pointPen' argument.
- This argument may be None if you don't need the outline data.
-
-    The optional formatVersions argument defines the GLIF format versions
-    that are allowed to be read.
-    The type is Optional[Iterable[Union[Tuple[int, int], int]]]. It can contain
- either integers (for the major versions to be allowed, with minor
- digits defaulting to 0), or tuples of integers to specify both
- (major, minor) versions.
- By default when formatVersions is None all the GLIF format versions
- currently defined are allowed to be read.
-
- ``validate`` will validate the read data. It is set to ``True`` by default.
- """
- tree = _glifTreeFromString(aString)
-
- if formatVersions is None:
- validFormatVersions = GLIFFormatVersion.supported_versions()
- else:
- validFormatVersions, invalidFormatVersions = set(), set()
- for v in formatVersions:
- try:
- formatVersion = GLIFFormatVersion(v)
- except ValueError:
- invalidFormatVersions.add(v)
- else:
- validFormatVersions.add(formatVersion)
- if not validFormatVersions:
- raise ValueError(
- "None of the requested GLIF formatVersions are supported: "
- f"{formatVersions!r}"
- )
-
- _readGlyphFromTree(
- tree,
- glyphObject,
- pointPen,
- formatVersions=validFormatVersions,
- validate=validate,
- )
-
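-# A minimal sketch of readGlyphFromString(). The GLIF snippet is made up
-# for illustration; attributes land on the supplied object and the outline
-# is replayed through the point pen.
-def _exampleReadGlyphFromString():
-    from fontTools.pens.recordingPen import RecordingPointPen
-
-    glif = """<?xml version="1.0" encoding="UTF-8"?>
-<glyph name="A" format="2">
-  <advance width="500"/>
-  <unicode hex="0041"/>
-  <outline>
-    <contour>
-      <point x="0" y="0" type="line"/>
-      <point x="500" y="0" type="line"/>
-      <point x="250" y="700" type="line"/>
-    </contour>
-  </outline>
-</glyph>
-"""
-
-    class _Glyph:
-        pass
-
-    glyph = _Glyph()
-    pen = RecordingPointPen()
-    readGlyphFromString(glif, glyphObject=glyph, pointPen=pen)
-    return glyph.width, glyph.unicodes, pen.value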
-
-def _writeGlyphToBytes(
- glyphName,
- glyphObject=None,
- drawPointsFunc=None,
- writer=None,
- formatVersion=None,
- validate=True,
-):
- """Return .glif data for a glyph as a UTF-8 encoded bytes string."""
- try:
- formatVersion = GLIFFormatVersion(formatVersion)
-    except ValueError as e:
- from fontTools.ufoLib.errors import UnsupportedGLIFFormat
-
- raise UnsupportedGLIFFormat(
-            f"Unsupported GLIF format version: {formatVersion!r}"
-        ) from e
- # start
- if validate and not isinstance(glyphName, str):
- raise GlifLibError("The glyph name is not properly formatted.")
- if validate and len(glyphName) == 0:
- raise GlifLibError("The glyph name is empty.")
- glyphAttrs = OrderedDict(
- [("name", glyphName), ("format", repr(formatVersion.major))]
- )
- if formatVersion.minor != 0:
- glyphAttrs["formatMinor"] = repr(formatVersion.minor)
- root = etree.Element("glyph", glyphAttrs)
- identifiers = set()
- # advance
- _writeAdvance(glyphObject, root, validate)
- # unicodes
- if getattr(glyphObject, "unicodes", None):
- _writeUnicodes(glyphObject, root, validate)
- # note
- if getattr(glyphObject, "note", None):
- _writeNote(glyphObject, root, validate)
- # image
- if formatVersion.major >= 2 and getattr(glyphObject, "image", None):
- _writeImage(glyphObject, root, validate)
- # guidelines
- if formatVersion.major >= 2 and getattr(glyphObject, "guidelines", None):
- _writeGuidelines(glyphObject, root, identifiers, validate)
- # anchors
- anchors = getattr(glyphObject, "anchors", None)
- if formatVersion.major >= 2 and anchors:
- _writeAnchors(glyphObject, root, identifiers, validate)
- # outline
- if drawPointsFunc is not None:
- outline = etree.SubElement(root, "outline")
- pen = GLIFPointPen(outline, identifiers=identifiers, validate=validate)
- drawPointsFunc(pen)
- if formatVersion.major == 1 and anchors:
- _writeAnchorsFormat1(pen, anchors, validate)
- # prevent lxml from writing self-closing tags
- if not len(outline):
- outline.text = "\n "
- # lib
- if getattr(glyphObject, "lib", None):
- _writeLib(glyphObject, root, validate)
- # return the text
- data = etree.tostring(
- root, encoding="UTF-8", xml_declaration=True, pretty_print=True
- )
- return data
-
-
-def writeGlyphToString(
- glyphName,
- glyphObject=None,
- drawPointsFunc=None,
- formatVersion=None,
- validate=True,
-):
- """
- Return .glif data for a glyph as a string. The XML declaration's
- encoding is always set to "UTF-8".
- The 'glyphObject' argument can be any kind of object (even None);
- the writeGlyphToString() method will attempt to get the following
- attributes from it:
-
- width
- the advance width of the glyph
- height
- the advance height of the glyph
- unicodes
- a list of unicode values for this glyph
- note
- a string
- lib
- a dictionary containing custom data
- image
- a dictionary containing image data
- guidelines
- a list of guideline data dictionaries
- anchors
- a list of anchor data dictionaries
-
- All attributes are optional: if 'glyphObject' doesn't
- have the attribute, it will simply be skipped.
-
- To write outline data to the .glif file, writeGlyphToString() needs
- a function (any callable object actually) that will take one
- argument: an object that conforms to the PointPen protocol.
- The function will be called by writeGlyphToString(); it has to call the
- proper PointPen methods to transfer the outline to the .glif file.
-
- The GLIF format version can be specified with the formatVersion argument.
- This accepts either a tuple of integers for (major, minor), or a single
- integer for the major digit only (with minor digit implied as 0).
-    By default, when formatVersion is None, the latest GLIF format version will
- be used; currently it's 2.0, which is equivalent to formatVersion=(2, 0).
-
-    An UnsupportedGLIFFormat exception is raised if the requested GLIF
- formatVersion is not supported.
-
- ``validate`` will validate the written data. It is set to ``True`` by default.
- """
- data = _writeGlyphToBytes(
- glyphName,
- glyphObject=glyphObject,
- drawPointsFunc=drawPointsFunc,
- formatVersion=formatVersion,
- validate=validate,
- )
- return data.decode("utf-8")
-
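-# A short sketch of writeGlyphToString(). The glyph holder and the
-# drawPointsFunc below are illustrative; any object exposing the attributes
-# listed above and any callable that drives a PointPen will do.
-def _exampleWriteGlyphToString():
-    class _Glyph:
-        width = 500
-        unicodes = [0x0041]
-
-    def _drawPoints(pen):
-        pen.beginPath()
-        pen.addPoint((0, 0), segmentType="line")
-        pen.addPoint((500, 0), segmentType="line")
-        pen.addPoint((250, 700), segmentType="line")
-        pen.endPath()
-
-    return writeGlyphToString("A", glyphObject=_Glyph(), drawPointsFunc=_drawPoints)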
-
-def _writeAdvance(glyphObject, element, validate):
- width = getattr(glyphObject, "width", None)
- if width is not None:
- if validate and not isinstance(width, numberTypes):
- raise GlifLibError("width attribute must be int or float")
- if width == 0:
- width = None
- height = getattr(glyphObject, "height", None)
- if height is not None:
- if validate and not isinstance(height, numberTypes):
- raise GlifLibError("height attribute must be int or float")
- if height == 0:
- height = None
- if width is not None and height is not None:
- etree.SubElement(
- element,
- "advance",
- OrderedDict([("height", repr(height)), ("width", repr(width))]),
- )
- elif width is not None:
- etree.SubElement(element, "advance", dict(width=repr(width)))
- elif height is not None:
- etree.SubElement(element, "advance", dict(height=repr(height)))
-
-
-def _writeUnicodes(glyphObject, element, validate):
- unicodes = getattr(glyphObject, "unicodes", None)
- if validate and isinstance(unicodes, int):
- unicodes = [unicodes]
- seen = set()
- for code in unicodes:
- if validate and not isinstance(code, int):
- raise GlifLibError("unicode values must be int")
- if code in seen:
- continue
- seen.add(code)
- hexCode = "%04X" % code
- etree.SubElement(element, "unicode", dict(hex=hexCode))
-
-
-def _writeNote(glyphObject, element, validate):
- note = getattr(glyphObject, "note", None)
- if validate and not isinstance(note, str):
- raise GlifLibError("note attribute must be str")
- note = note.strip()
- note = "\n" + note + "\n"
- etree.SubElement(element, "note").text = note
-
-
-def _writeImage(glyphObject, element, validate):
- image = getattr(glyphObject, "image", None)
- if validate and not imageValidator(image):
- raise GlifLibError(
- "image attribute must be a dict or dict-like object with the proper structure."
- )
- attrs = OrderedDict([("fileName", image["fileName"])])
- for attr, default in _transformationInfo:
- value = image.get(attr, default)
- if value != default:
- attrs[attr] = repr(value)
- color = image.get("color")
- if color is not None:
- attrs["color"] = color
- etree.SubElement(element, "image", attrs)
-
-
-def _writeGuidelines(glyphObject, element, identifiers, validate):
- guidelines = getattr(glyphObject, "guidelines", [])
- if validate and not guidelinesValidator(guidelines):
- raise GlifLibError("guidelines attribute does not have the proper structure.")
- for guideline in guidelines:
- attrs = OrderedDict()
- x = guideline.get("x")
- if x is not None:
- attrs["x"] = repr(x)
- y = guideline.get("y")
- if y is not None:
- attrs["y"] = repr(y)
- angle = guideline.get("angle")
- if angle is not None:
- attrs["angle"] = repr(angle)
- name = guideline.get("name")
- if name is not None:
- attrs["name"] = name
- color = guideline.get("color")
- if color is not None:
- attrs["color"] = color
- identifier = guideline.get("identifier")
- if identifier is not None:
- if validate and identifier in identifiers:
- raise GlifLibError("identifier used more than once: %s" % identifier)
- attrs["identifier"] = identifier
- identifiers.add(identifier)
- etree.SubElement(element, "guideline", attrs)
-
-
-def _writeAnchorsFormat1(pen, anchors, validate):
- if validate and not anchorsValidator(anchors):
- raise GlifLibError("anchors attribute does not have the proper structure.")
- for anchor in anchors:
- attrs = {}
- x = anchor["x"]
- attrs["x"] = repr(x)
- y = anchor["y"]
- attrs["y"] = repr(y)
- name = anchor.get("name")
- if name is not None:
- attrs["name"] = name
- pen.beginPath()
- pen.addPoint((x, y), segmentType="move", name=name)
- pen.endPath()
-
-
-def _writeAnchors(glyphObject, element, identifiers, validate):
- anchors = getattr(glyphObject, "anchors", [])
- if validate and not anchorsValidator(anchors):
- raise GlifLibError("anchors attribute does not have the proper structure.")
- for anchor in anchors:
- attrs = OrderedDict()
- x = anchor["x"]
- attrs["x"] = repr(x)
- y = anchor["y"]
- attrs["y"] = repr(y)
- name = anchor.get("name")
- if name is not None:
- attrs["name"] = name
- color = anchor.get("color")
- if color is not None:
- attrs["color"] = color
- identifier = anchor.get("identifier")
- if identifier is not None:
- if validate and identifier in identifiers:
- raise GlifLibError("identifier used more than once: %s" % identifier)
- attrs["identifier"] = identifier
- identifiers.add(identifier)
- etree.SubElement(element, "anchor", attrs)
-
-
-def _writeLib(glyphObject, element, validate):
- lib = getattr(glyphObject, "lib", None)
- if not lib:
- # don't write empty lib
- return
- if validate:
- valid, message = glyphLibValidator(lib)
- if not valid:
- raise GlifLibError(message)
- if not isinstance(lib, dict):
- lib = dict(lib)
- # plist inside GLIF begins with 2 levels of indentation
- e = plistlib.totree(lib, indent_level=2)
- etree.SubElement(element, "lib").append(e)
-
-
-# -----------------------
-# layerinfo.plist Support
-# -----------------------
-
-layerInfoVersion3ValueData = {
- "color": dict(type=str, valueValidator=colorValidator),
- "lib": dict(type=dict, valueValidator=genericTypeValidator),
-}
-
-
-def validateLayerInfoVersion3ValueForAttribute(attr, value):
- """
-    This performs very basic validation of the value for the given attribute
-    following the UFO 3 layerinfo.plist specification. The results
-    of this should not be interpreted as *correct* for the font
- that they are part of. This merely indicates that the value
- is of the proper type and, where the specification defines
- a set range of possible values for an attribute, that the
- value is in the accepted range.
- """
- if attr not in layerInfoVersion3ValueData:
- return False
- dataValidationDict = layerInfoVersion3ValueData[attr]
- valueType = dataValidationDict.get("type")
- validator = dataValidationDict.get("valueValidator")
- valueOptions = dataValidationDict.get("valueOptions")
- # have specific options for the validator
- if valueOptions is not None:
- isValidValue = validator(value, valueOptions)
- # no specific options
- else:
- if validator == genericTypeValidator:
- isValidValue = validator(value, valueType)
- else:
- isValidValue = validator(value)
- return isValidValue
-
-
-def validateLayerInfoVersion3Data(infoData):
- """
-    This performs very basic validation of the values in infoData
-    following the UFO 3 layerinfo.plist specification. The results
-    of this should not be interpreted as *correct* for the font
- that they are part of. This merely indicates that the values
- are of the proper type and, where the specification defines
- a set range of possible values for an attribute, that the
- value is in the accepted range.
- """
- for attr, value in infoData.items():
- if attr not in layerInfoVersion3ValueData:
- raise GlifLibError("Unknown attribute %s." % attr)
- isValidValue = validateLayerInfoVersion3ValueForAttribute(attr, value)
- if not isValidValue:
- raise GlifLibError(f"Invalid value for attribute {attr} ({value!r}).")
- return infoData
-
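-# A brief sketch of the validator above: "color" must be a UFO color string
-# (four comma-separated numbers in the 0-1 range) and "lib" must be a dict.
-# The values used here are illustrative.
-def _exampleValidateLayerInfo():
-    good = {"color": "1,0,0,1", "lib": {"com.example.key": True}}
-    validateLayerInfoVersion3Data(good)  # returns the data unchanged
-    try:
-        validateLayerInfoVersion3Data({"color": 42})
-    except GlifLibError:
-        pass  # a non-string color value is rejected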
-
-# -----------------
-# GLIF Tree Support
-# -----------------
-
-
-def _glifTreeFromFile(aFile):
- if etree._have_lxml:
- tree = etree.parse(aFile, parser=etree.XMLParser(remove_comments=True))
- else:
- tree = etree.parse(aFile)
- root = tree.getroot()
- if root.tag != "glyph":
- raise GlifLibError("The GLIF is not properly formatted.")
- if root.text and root.text.strip() != "":
- raise GlifLibError("Invalid GLIF structure.")
- return root
-
-
-def _glifTreeFromString(aString):
- data = tobytes(aString, encoding="utf-8")
- try:
- if etree._have_lxml:
- root = etree.fromstring(data, parser=etree.XMLParser(remove_comments=True))
- else:
- root = etree.fromstring(data)
- except Exception as etree_exception:
- raise GlifLibError("GLIF contains invalid XML.") from etree_exception
-
- if root.tag != "glyph":
- raise GlifLibError("The GLIF is not properly formatted.")
- if root.text and root.text.strip() != "":
- raise GlifLibError("Invalid GLIF structure.")
- return root
-
-
-def _readGlyphFromTree(
- tree,
- glyphObject=None,
- pointPen=None,
- formatVersions=GLIFFormatVersion.supported_versions(),
- validate=True,
-):
- # check the format version
- formatVersionMajor = tree.get("format")
- if validate and formatVersionMajor is None:
- raise GlifLibError("Unspecified format version in GLIF.")
- formatVersionMinor = tree.get("formatMinor", 0)
- try:
- formatVersion = GLIFFormatVersion(
- (int(formatVersionMajor), int(formatVersionMinor))
- )
- except ValueError as e:
- msg = "Unsupported GLIF format: %s.%s" % (
- formatVersionMajor,
- formatVersionMinor,
- )
- if validate:
- from fontTools.ufoLib.errors import UnsupportedGLIFFormat
-
- raise UnsupportedGLIFFormat(msg) from e
- # warn but continue using the latest supported format
- formatVersion = GLIFFormatVersion.default()
- logger.warning(
- "%s. Assuming the latest supported version (%s). "
- "Some data may be skipped or parsed incorrectly.",
- msg,
- formatVersion,
- )
-
- if validate and formatVersion not in formatVersions:
- raise GlifLibError(f"Forbidden GLIF format version: {formatVersion!s}")
-
- try:
- readGlyphFromTree = _READ_GLYPH_FROM_TREE_FUNCS[formatVersion]
- except KeyError:
- raise NotImplementedError(formatVersion)
-
- readGlyphFromTree(
- tree=tree,
- glyphObject=glyphObject,
- pointPen=pointPen,
- validate=validate,
- formatMinor=formatVersion.minor,
- )
-
-
-def _readGlyphFromTreeFormat1(
- tree, glyphObject=None, pointPen=None, validate=None, **kwargs
-):
- # get the name
- _readName(glyphObject, tree, validate)
- # populate the sub elements
- unicodes = []
- haveSeenAdvance = haveSeenOutline = haveSeenLib = haveSeenNote = False
- for element in tree:
- if element.tag == "outline":
- if validate:
- if haveSeenOutline:
- raise GlifLibError("The outline element occurs more than once.")
- if element.attrib:
- raise GlifLibError(
- "The outline element contains unknown attributes."
- )
- if element.text and element.text.strip() != "":
- raise GlifLibError("Invalid outline structure.")
- haveSeenOutline = True
- buildOutlineFormat1(glyphObject, pointPen, element, validate)
- elif glyphObject is None:
- continue
- elif element.tag == "advance":
- if validate and haveSeenAdvance:
- raise GlifLibError("The advance element occurs more than once.")
- haveSeenAdvance = True
- _readAdvance(glyphObject, element)
- elif element.tag == "unicode":
- try:
- v = element.get("hex")
- v = int(v, 16)
- if v not in unicodes:
- unicodes.append(v)
- except ValueError:
- raise GlifLibError(
- "Illegal value for hex attribute of unicode element."
- )
- elif element.tag == "note":
- if validate and haveSeenNote:
- raise GlifLibError("The note element occurs more than once.")
- haveSeenNote = True
- _readNote(glyphObject, element)
- elif element.tag == "lib":
- if validate and haveSeenLib:
- raise GlifLibError("The lib element occurs more than once.")
- haveSeenLib = True
- _readLib(glyphObject, element, validate)
- else:
- raise GlifLibError("Unknown element in GLIF: %s" % element)
- # set the collected unicodes
- if unicodes:
- _relaxedSetattr(glyphObject, "unicodes", unicodes)
-
-
-def _readGlyphFromTreeFormat2(
- tree, glyphObject=None, pointPen=None, validate=None, formatMinor=0
-):
- # get the name
- _readName(glyphObject, tree, validate)
- # populate the sub elements
- unicodes = []
- guidelines = []
- anchors = []
- haveSeenAdvance = (
- haveSeenImage
- ) = haveSeenOutline = haveSeenLib = haveSeenNote = False
- identifiers = set()
- for element in tree:
- if element.tag == "outline":
- if validate:
- if haveSeenOutline:
- raise GlifLibError("The outline element occurs more than once.")
- if element.attrib:
- raise GlifLibError(
- "The outline element contains unknown attributes."
- )
- if element.text and element.text.strip() != "":
- raise GlifLibError("Invalid outline structure.")
- haveSeenOutline = True
- if pointPen is not None:
- buildOutlineFormat2(
- glyphObject, pointPen, element, identifiers, validate
- )
- elif glyphObject is None:
- continue
- elif element.tag == "advance":
- if validate and haveSeenAdvance:
- raise GlifLibError("The advance element occurs more than once.")
- haveSeenAdvance = True
- _readAdvance(glyphObject, element)
- elif element.tag == "unicode":
- try:
- v = element.get("hex")
- v = int(v, 16)
- if v not in unicodes:
- unicodes.append(v)
- except ValueError:
- raise GlifLibError(
- "Illegal value for hex attribute of unicode element."
- )
- elif element.tag == "guideline":
- if validate and len(element):
- raise GlifLibError("Unknown children in guideline element.")
- attrib = dict(element.attrib)
- for attr in ("x", "y", "angle"):
- if attr in attrib:
- attrib[attr] = _number(attrib[attr])
- guidelines.append(attrib)
- elif element.tag == "anchor":
- if validate and len(element):
- raise GlifLibError("Unknown children in anchor element.")
- attrib = dict(element.attrib)
- for attr in ("x", "y"):
- if attr in element.attrib:
- attrib[attr] = _number(attrib[attr])
- anchors.append(attrib)
- elif element.tag == "image":
- if validate:
- if haveSeenImage:
- raise GlifLibError("The image element occurs more than once.")
- if len(element):
- raise GlifLibError("Unknown children in image element.")
- haveSeenImage = True
- _readImage(glyphObject, element, validate)
- elif element.tag == "note":
- if validate and haveSeenNote:
- raise GlifLibError("The note element occurs more than once.")
- haveSeenNote = True
- _readNote(glyphObject, element)
- elif element.tag == "lib":
- if validate and haveSeenLib:
- raise GlifLibError("The lib element occurs more than once.")
- haveSeenLib = True
- _readLib(glyphObject, element, validate)
- else:
- raise GlifLibError("Unknown element in GLIF: %s" % element)
- # set the collected unicodes
- if unicodes:
- _relaxedSetattr(glyphObject, "unicodes", unicodes)
- # set the collected guidelines
- if guidelines:
- if validate and not guidelinesValidator(guidelines, identifiers):
- raise GlifLibError("The guidelines are improperly formatted.")
- _relaxedSetattr(glyphObject, "guidelines", guidelines)
- # set the collected anchors
- if anchors:
- if validate and not anchorsValidator(anchors, identifiers):
- raise GlifLibError("The anchors are improperly formatted.")
- _relaxedSetattr(glyphObject, "anchors", anchors)
-
-
-_READ_GLYPH_FROM_TREE_FUNCS = {
- GLIFFormatVersion.FORMAT_1_0: _readGlyphFromTreeFormat1,
- GLIFFormatVersion.FORMAT_2_0: _readGlyphFromTreeFormat2,
-}
-
-
-def _readName(glyphObject, root, validate):
- glyphName = root.get("name")
- if validate and not glyphName:
- raise GlifLibError("Empty glyph name in GLIF.")
- if glyphName and glyphObject is not None:
- _relaxedSetattr(glyphObject, "name", glyphName)
-
-
-def _readAdvance(glyphObject, advance):
- width = _number(advance.get("width", 0))
- _relaxedSetattr(glyphObject, "width", width)
- height = _number(advance.get("height", 0))
- _relaxedSetattr(glyphObject, "height", height)
-
-
-def _readNote(glyphObject, note):
- lines = note.text.split("\n")
- note = "\n".join(line.strip() for line in lines if line.strip())
- _relaxedSetattr(glyphObject, "note", note)
-
-
-def _readLib(glyphObject, lib, validate):
- assert len(lib) == 1
- child = lib[0]
- plist = plistlib.fromtree(child)
- if validate:
- valid, message = glyphLibValidator(plist)
- if not valid:
- raise GlifLibError(message)
- _relaxedSetattr(glyphObject, "lib", plist)
-
-
-def _readImage(glyphObject, image, validate):
- imageData = dict(image.attrib)
- for attr, default in _transformationInfo:
- value = imageData.get(attr, default)
- imageData[attr] = _number(value)
- if validate and not imageValidator(imageData):
- raise GlifLibError("The image element is not properly formatted.")
- _relaxedSetattr(glyphObject, "image", imageData)
-
-
-# ----------------
-# GLIF to PointPen
-# ----------------
-
-contourAttributesFormat2 = {"identifier"}
-componentAttributesFormat1 = {
- "base",
- "xScale",
- "xyScale",
- "yxScale",
- "yScale",
- "xOffset",
- "yOffset",
-}
-componentAttributesFormat2 = componentAttributesFormat1 | {"identifier"}
-pointAttributesFormat1 = {"x", "y", "type", "smooth", "name"}
-pointAttributesFormat2 = pointAttributesFormat1 | {"identifier"}
-pointSmoothOptions = {"no", "yes"}
-pointTypeOptions = {"move", "line", "offcurve", "curve", "qcurve"}
-
-# format 1
-
-
-def buildOutlineFormat1(glyphObject, pen, outline, validate):
- anchors = []
- for element in outline:
- if element.tag == "contour":
- if len(element) == 1:
- point = element[0]
- if point.tag == "point":
- anchor = _buildAnchorFormat1(point, validate)
- if anchor is not None:
- anchors.append(anchor)
- continue
- if pen is not None:
- _buildOutlineContourFormat1(pen, element, validate)
- elif element.tag == "component":
- if pen is not None:
- _buildOutlineComponentFormat1(pen, element, validate)
- else:
- raise GlifLibError("Unknown element in outline element: %s" % element)
- if glyphObject is not None and anchors:
- if validate and not anchorsValidator(anchors):
- raise GlifLibError("GLIF 1 anchors are not properly formatted.")
- _relaxedSetattr(glyphObject, "anchors", anchors)
-
-
-def _buildAnchorFormat1(point, validate):
- if point.get("type") != "move":
- return None
- name = point.get("name")
- if name is None:
- return None
- x = point.get("x")
- y = point.get("y")
- if validate and x is None:
- raise GlifLibError("Required x attribute is missing in point element.")
- if validate and y is None:
- raise GlifLibError("Required y attribute is missing in point element.")
- x = _number(x)
- y = _number(y)
- anchor = dict(x=x, y=y, name=name)
- return anchor
-
-
-def _buildOutlineContourFormat1(pen, contour, validate):
- if validate and contour.attrib:
- raise GlifLibError("Unknown attributes in contour element.")
- pen.beginPath()
- if len(contour):
- massaged = _validateAndMassagePointStructures(
- contour,
- pointAttributesFormat1,
- openContourOffCurveLeniency=True,
- validate=validate,
- )
- _buildOutlinePointsFormat1(pen, massaged)
- pen.endPath()
-
-
-def _buildOutlinePointsFormat1(pen, contour):
- for point in contour:
- x = point["x"]
- y = point["y"]
- segmentType = point["segmentType"]
- smooth = point["smooth"]
- name = point["name"]
- pen.addPoint((x, y), segmentType=segmentType, smooth=smooth, name=name)
-
-
-def _buildOutlineComponentFormat1(pen, component, validate):
- if validate:
- if len(component):
- raise GlifLibError("Unknown child elements of component element.")
- for attr in component.attrib.keys():
- if attr not in componentAttributesFormat1:
- raise GlifLibError("Unknown attribute in component element: %s" % attr)
- baseGlyphName = component.get("base")
- if validate and baseGlyphName is None:
- raise GlifLibError("The base attribute is not defined in the component.")
- transformation = []
- for attr, default in _transformationInfo:
- value = component.get(attr)
- if value is None:
- value = default
- else:
- value = _number(value)
- transformation.append(value)
- pen.addComponent(baseGlyphName, tuple(transformation))
-
-
-# format 2
-
-
-def buildOutlineFormat2(glyphObject, pen, outline, identifiers, validate):
- for element in outline:
- if element.tag == "contour":
- _buildOutlineContourFormat2(pen, element, identifiers, validate)
- elif element.tag == "component":
- _buildOutlineComponentFormat2(pen, element, identifiers, validate)
- else:
- raise GlifLibError("Unknown element in outline element: %s" % element.tag)
-
-
-def _buildOutlineContourFormat2(pen, contour, identifiers, validate):
- if validate:
- for attr in contour.attrib.keys():
- if attr not in contourAttributesFormat2:
- raise GlifLibError("Unknown attribute in contour element: %s" % attr)
- identifier = contour.get("identifier")
- if identifier is not None:
- if validate:
- if identifier in identifiers:
- raise GlifLibError(
- "The identifier %s is used more than once." % identifier
- )
- if not identifierValidator(identifier):
- raise GlifLibError(
- "The contour identifier %s is not valid." % identifier
- )
- identifiers.add(identifier)
- try:
- pen.beginPath(identifier=identifier)
- except TypeError:
- pen.beginPath()
- warn(
- "The beginPath method needs an identifier kwarg. The contour's identifier value has been discarded.",
- DeprecationWarning,
- )
- if len(contour):
- massaged = _validateAndMassagePointStructures(
- contour, pointAttributesFormat2, validate=validate
- )
- _buildOutlinePointsFormat2(pen, massaged, identifiers, validate)
- pen.endPath()
-
-
-def _buildOutlinePointsFormat2(pen, contour, identifiers, validate):
- for point in contour:
- x = point["x"]
- y = point["y"]
- segmentType = point["segmentType"]
- smooth = point["smooth"]
- name = point["name"]
- identifier = point.get("identifier")
- if identifier is not None:
- if validate:
- if identifier in identifiers:
- raise GlifLibError(
- "The identifier %s is used more than once." % identifier
- )
- if not identifierValidator(identifier):
- raise GlifLibError("The identifier %s is not valid." % identifier)
- identifiers.add(identifier)
- try:
- pen.addPoint(
- (x, y),
- segmentType=segmentType,
- smooth=smooth,
- name=name,
- identifier=identifier,
- )
- except TypeError:
- pen.addPoint((x, y), segmentType=segmentType, smooth=smooth, name=name)
- warn(
- "The addPoint method needs an identifier kwarg. The point's identifier value has been discarded.",
- DeprecationWarning,
- )
-
-
-def _buildOutlineComponentFormat2(pen, component, identifiers, validate):
- if validate:
- if len(component):
- raise GlifLibError("Unknown child elements of component element.")
- for attr in component.attrib.keys():
- if attr not in componentAttributesFormat2:
- raise GlifLibError("Unknown attribute in component element: %s" % attr)
- baseGlyphName = component.get("base")
- if validate and baseGlyphName is None:
- raise GlifLibError("The base attribute is not defined in the component.")
- transformation = []
- for attr, default in _transformationInfo:
- value = component.get(attr)
- if value is None:
- value = default
- else:
- value = _number(value)
- transformation.append(value)
- identifier = component.get("identifier")
- if identifier is not None:
- if validate:
- if identifier in identifiers:
- raise GlifLibError(
- "The identifier %s is used more than once." % identifier
- )
- if validate and not identifierValidator(identifier):
- raise GlifLibError("The identifier %s is not valid." % identifier)
- identifiers.add(identifier)
- try:
- pen.addComponent(baseGlyphName, tuple(transformation), identifier=identifier)
- except TypeError:
- pen.addComponent(baseGlyphName, tuple(transformation))
- warn(
- "The addComponent method needs an identifier kwarg. The component's identifier value has been discarded.",
- DeprecationWarning,
- )
-
-
-# all formats
-
-
-def _validateAndMassagePointStructures(
- contour, pointAttributes, openContourOffCurveLeniency=False, validate=True
-):
- if not len(contour):
- return
- # store some data for later validation
- lastOnCurvePoint = None
- haveOffCurvePoint = False
- # validate and massage the individual point elements
- massaged = []
- for index, element in enumerate(contour):
-        # not a point element
- if element.tag != "point":
- raise GlifLibError(
- "Unknown child element (%s) of contour element." % element.tag
- )
- point = dict(element.attrib)
- massaged.append(point)
- if validate:
- # unknown attributes
- for attr in point.keys():
- if attr not in pointAttributes:
- raise GlifLibError("Unknown attribute in point element: %s" % attr)
- # search for unknown children
- if len(element):
- raise GlifLibError("Unknown child elements in point element.")
- # x and y are required
- for attr in ("x", "y"):
- try:
- point[attr] = _number(point[attr])
- except KeyError as e:
- raise GlifLibError(
- f"Required {attr} attribute is missing in point element."
- ) from e
- # segment type
- pointType = point.pop("type", "offcurve")
- if validate and pointType not in pointTypeOptions:
- raise GlifLibError("Unknown point type: %s" % pointType)
- if pointType == "offcurve":
- pointType = None
- point["segmentType"] = pointType
- if pointType is None:
- haveOffCurvePoint = True
- else:
- lastOnCurvePoint = index
- # move can only occur as the first point
- if validate and pointType == "move" and index != 0:
- raise GlifLibError(
- "A move point occurs after the first point in the contour."
- )
- # smooth is optional
- smooth = point.get("smooth", "no")
- if validate and smooth is not None:
- if smooth not in pointSmoothOptions:
- raise GlifLibError("Unknown point smooth value: %s" % smooth)
- smooth = smooth == "yes"
- point["smooth"] = smooth
- # smooth can only be applied to curve and qcurve
- if validate and smooth and pointType is None:
- raise GlifLibError("smooth attribute set in an offcurve point.")
- # name is optional
- if "name" not in element.attrib:
- point["name"] = None
- if openContourOffCurveLeniency:
- # remove offcurves that precede a move. this is technically illegal,
- # but we let it slide because there are fonts out there in the wild like this.
- if massaged[0]["segmentType"] == "move":
- count = 0
- for point in reversed(massaged):
- if point["segmentType"] is None:
- count += 1
- else:
- break
- if count:
- massaged = massaged[:-count]
- # validate the off-curves in the segments
- if validate and haveOffCurvePoint and lastOnCurvePoint is not None:
- # we only care about how many offCurves there are before an onCurve
- # filter out the trailing offCurves
- offCurvesCount = len(massaged) - 1 - lastOnCurvePoint
- for point in massaged:
- segmentType = point["segmentType"]
- if segmentType is None:
- offCurvesCount += 1
- else:
- if offCurvesCount:
- # move and line can't be preceded by off-curves
- if segmentType == "move":
- # this will have been filtered out already
- raise GlifLibError("move can not have an offcurve.")
- elif segmentType == "line":
- raise GlifLibError("line can not have an offcurve.")
- elif segmentType == "curve":
- if offCurvesCount > 2:
- raise GlifLibError("Too many offcurves defined for curve.")
- elif segmentType == "qcurve":
- pass
- else:
- # unknown segment type. it'll be caught later.
- pass
- offCurvesCount = 0
- return massaged
-
-
-# ---------------------
-# Misc Helper Functions
-# ---------------------
-
-
-def _relaxedSetattr(object, attr, value):
- try:
- setattr(object, attr, value)
- except AttributeError:
- pass
-
-
-def _number(s):
- """
- Given a numeric string, return an integer or a float, whichever
- the string indicates. _number("1") will return the integer 1,
- _number("1.0") will return the float 1.0.
-
- >>> _number("1")
- 1
- >>> _number("1.0")
- 1.0
- >>> _number("a") # doctest: +IGNORE_EXCEPTION_DETAIL
- Traceback (most recent call last):
- ...
- GlifLibError: Could not convert a to an int or float.
- """
- try:
- n = int(s)
- return n
- except ValueError:
- pass
- try:
- n = float(s)
- return n
- except ValueError:
- raise GlifLibError("Could not convert %s to an int or float." % s)
-
-
-# --------------------
-# Rapid Value Fetching
-# --------------------
-
-# base
-
-
-class _DoneParsing(Exception):
- pass
-
-
-class _BaseParser:
- def __init__(self):
- self._elementStack = []
-
- def parse(self, text):
- from xml.parsers.expat import ParserCreate
-
- parser = ParserCreate()
- parser.StartElementHandler = self.startElementHandler
- parser.EndElementHandler = self.endElementHandler
- parser.Parse(text)
-
- def startElementHandler(self, name, attrs):
- self._elementStack.append(name)
-
- def endElementHandler(self, name):
- other = self._elementStack.pop(-1)
- assert other == name
-
-
-# unicodes
-
-
-def _fetchUnicodes(glif):
- """
- Get a list of unicodes listed in glif.
- """
- parser = _FetchUnicodesParser()
- parser.parse(glif)
- return parser.unicodes
-
-
-class _FetchUnicodesParser(_BaseParser):
- def __init__(self):
- self.unicodes = []
- super().__init__()
-
- def startElementHandler(self, name, attrs):
- if (
- name == "unicode"
- and self._elementStack
- and self._elementStack[-1] == "glyph"
- ):
- value = attrs.get("hex")
- if value is not None:
- try:
- value = int(value, 16)
- if value not in self.unicodes:
- self.unicodes.append(value)
- except ValueError:
- pass
- super().startElementHandler(name, attrs)
-
-
-# image
-
-
-def _fetchImageFileName(glif):
- """
- The image file name (if any) from glif.
- """
- parser = _FetchImageFileNameParser()
- try:
- parser.parse(glif)
- except _DoneParsing:
- pass
- return parser.fileName
-
-
-class _FetchImageFileNameParser(_BaseParser):
- def __init__(self):
- self.fileName = None
- super().__init__()
-
- def startElementHandler(self, name, attrs):
- if name == "image" and self._elementStack and self._elementStack[-1] == "glyph":
- self.fileName = attrs.get("fileName")
- raise _DoneParsing
- super().startElementHandler(name, attrs)
-
-
-# component references
-
-
-def _fetchComponentBases(glif):
- """
- Get a list of component base glyphs listed in glif.
- """
- parser = _FetchComponentBasesParser()
- try:
- parser.parse(glif)
- except _DoneParsing:
- pass
- return list(parser.bases)
-
-
-class _FetchComponentBasesParser(_BaseParser):
- def __init__(self):
- self.bases = []
- super().__init__()
-
- def startElementHandler(self, name, attrs):
- if (
- name == "component"
- and self._elementStack
- and self._elementStack[-1] == "outline"
- ):
- base = attrs.get("base")
- if base is not None:
- self.bases.append(base)
- super().startElementHandler(name, attrs)
-
- def endElementHandler(self, name):
- if name == "outline":
- raise _DoneParsing
- super().endElementHandler(name)
-
-
-# --------------
-# GLIF Point Pen
-# --------------
-
-_transformationInfo = [
- # field name, default value
- ("xScale", 1),
- ("xyScale", 0),
- ("yxScale", 0),
- ("yScale", 1),
- ("xOffset", 0),
- ("yOffset", 0),
-]
-
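-# The six fields above follow the (xScale, xyScale, yxScale, yScale,
-# xOffset, yOffset) order of a 2x3 affine transformation; the defaults
-# form the identity transform. A small illustrative check:
-def _exampleIdentityTransformation():
-    defaults = tuple(default for _, default in _transformationInfo)
-    assert defaults == (1, 0, 0, 1, 0, 0)
-    return defaults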
-
-class GLIFPointPen(AbstractPointPen):
-
- """
- Helper class using the PointPen protocol to write the
-    outline part of .glif files.
- """
-
- def __init__(self, element, formatVersion=None, identifiers=None, validate=True):
- if identifiers is None:
- identifiers = set()
- self.formatVersion = GLIFFormatVersion(formatVersion)
- self.identifiers = identifiers
- self.outline = element
- self.contour = None
- self.prevOffCurveCount = 0
- self.prevPointTypes = []
- self.validate = validate
-
- def beginPath(self, identifier=None, **kwargs):
- attrs = OrderedDict()
- if identifier is not None and self.formatVersion.major >= 2:
- if self.validate:
- if identifier in self.identifiers:
- raise GlifLibError(
- "identifier used more than once: %s" % identifier
- )
- if not identifierValidator(identifier):
- raise GlifLibError(
- "identifier not formatted properly: %s" % identifier
- )
- attrs["identifier"] = identifier
- self.identifiers.add(identifier)
- self.contour = etree.SubElement(self.outline, "contour", attrs)
- self.prevOffCurveCount = 0
-
- def endPath(self):
- if self.prevPointTypes and self.prevPointTypes[0] == "move":
- if self.validate and self.prevPointTypes[-1] == "offcurve":
- raise GlifLibError("open contour has loose offcurve point")
- # prevent lxml from writing self-closing tags
- if not len(self.contour):
- self.contour.text = "\n "
- self.contour = None
- self.prevPointType = None
- self.prevOffCurveCount = 0
- self.prevPointTypes = []
-
- def addPoint(
- self, pt, segmentType=None, smooth=None, name=None, identifier=None, **kwargs
- ):
- attrs = OrderedDict()
- # coordinates
- if pt is not None:
- if self.validate:
- for coord in pt:
- if not isinstance(coord, numberTypes):
- raise GlifLibError("coordinates must be int or float")
- attrs["x"] = repr(pt[0])
- attrs["y"] = repr(pt[1])
- # segment type
- if segmentType == "offcurve":
- segmentType = None
- if self.validate:
- if segmentType == "move" and self.prevPointTypes:
- raise GlifLibError(
- "move occurs after a point has already been added to the contour."
- )
- if (
- segmentType in ("move", "line")
- and self.prevPointTypes
- and self.prevPointTypes[-1] == "offcurve"
- ):
- raise GlifLibError("offcurve occurs before %s point." % segmentType)
- if segmentType == "curve" and self.prevOffCurveCount > 2:
- raise GlifLibError("too many offcurve points before curve point.")
- if segmentType is not None:
- attrs["type"] = segmentType
- else:
- segmentType = "offcurve"
- if segmentType == "offcurve":
- self.prevOffCurveCount += 1
- else:
- self.prevOffCurveCount = 0
- self.prevPointTypes.append(segmentType)
- # smooth
- if smooth:
- if self.validate and segmentType == "offcurve":
- raise GlifLibError("can't set smooth in an offcurve point.")
- attrs["smooth"] = "yes"
- # name
- if name is not None:
- attrs["name"] = name
- # identifier
- if identifier is not None and self.formatVersion.major >= 2:
- if self.validate:
- if identifier in self.identifiers:
- raise GlifLibError(
- "identifier used more than once: %s" % identifier
- )
- if not identifierValidator(identifier):
- raise GlifLibError(
- "identifier not formatted properly: %s" % identifier
- )
- attrs["identifier"] = identifier
- self.identifiers.add(identifier)
- etree.SubElement(self.contour, "point", attrs)
-
- def addComponent(self, glyphName, transformation, identifier=None, **kwargs):
- attrs = OrderedDict([("base", glyphName)])
- for (attr, default), value in zip(_transformationInfo, transformation):
- if self.validate and not isinstance(value, numberTypes):
- raise GlifLibError("transformation values must be int or float")
- if value != default:
- attrs[attr] = repr(value)
- if identifier is not None and self.formatVersion.major >= 2:
- if self.validate:
- if identifier in self.identifiers:
- raise GlifLibError(
- "identifier used more than once: %s" % identifier
- )
- if self.validate and not identifierValidator(identifier):
- raise GlifLibError(
- "identifier not formatted properly: %s" % identifier
- )
- attrs["identifier"] = identifier
- self.identifiers.add(identifier)
- etree.SubElement(self.outline, "component", attrs)
-
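-# A compact sketch of driving GLIFPointPen directly: build an outline
-# element containing an open contour plus a component reference. The glyph
-# data is illustrative; the pen defaults to the latest GLIF format version.
-def _exampleGLIFPointPen():
-    outline = etree.Element("outline")
-    pen = GLIFPointPen(outline)
-    pen.beginPath()
-    pen.addPoint((0, 0), segmentType="move")
-    pen.addPoint((100, 0), segmentType="line")
-    pen.endPath()
-    pen.addComponent("B", (1, 0, 0, 1, 10, 20))
-    return etree.tostring(outline)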
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
diff --git a/spaces/DamianMH/Mlove/README.md b/spaces/DamianMH/Mlove/README.md
deleted file mode 100644
index 69a0b8c483e45022ed2c78320d0c838518ff2991..0000000000000000000000000000000000000000
--- a/spaces/DamianMH/Mlove/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Mlove
-emoji: 🏢
-colorFrom: pink
-colorTo: pink
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Daniel-Saeedi/sent-debias/README.md b/spaces/Daniel-Saeedi/sent-debias/README.md
deleted file mode 100644
index 4ff2f05b3ddca74eac479cbd504cb4ca600a2d16..0000000000000000000000000000000000000000
--- a/spaces/Daniel-Saeedi/sent-debias/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Sent Debias
-emoji: 🌖
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Detomo/ai-comic-generation/postcss.config.js b/spaces/Detomo/ai-comic-generation/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
diff --git a/spaces/DonDoesStuff/streamusic/README.md b/spaces/DonDoesStuff/streamusic/README.md
deleted file mode 100644
index 4adb97a1367107d63cc67a5486bfae7144f966eb..0000000000000000000000000000000000000000
--- a/spaces/DonDoesStuff/streamusic/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Streamusic
-emoji: 🌖
-colorFrom: yellow
-colorTo: green
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/optimizer.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/optimizer.py
deleted file mode 100644
index cd130a8b5ca8e1af555365620fd01104a3be13ce..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/dnnlib/tflib/optimizer.py
+++ /dev/null
@@ -1,389 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://nvlabs.github.io/stylegan2/license.html
-
-"""Helper wrapper for a Tensorflow optimizer."""
-
-import numpy as np
-import tensorflow as tf
-
-from collections import OrderedDict
-from typing import List, Union
-
-from . import autosummary
-from . import tfutil
-from .. import util
-
-from .tfutil import TfExpression, TfExpressionEx
-
-try:
- # TensorFlow 1.13
- from tensorflow.python.ops import nccl_ops
-except ImportError:
- # Older TensorFlow versions
- import tensorflow.contrib.nccl as nccl_ops
-
-
-class Optimizer:
-    """A wrapper for tf.train.Optimizer.
-
- Automatically takes care of:
- - Gradient averaging for multi-GPU training.
- - Gradient accumulation for arbitrarily large minibatches.
- - Dynamic loss scaling and typecasts for FP16 training.
- - Ignoring corrupted gradients that contain NaNs/Infs.
- - Reporting statistics.
- - Well-chosen default settings.
- """
-
- def __init__(self,
- # Name string that will appear in TensorFlow graph.
- name: str = "Train",
- # Underlying optimizer class.
- tf_optimizer: str = "tf.train.AdamOptimizer",
- # Learning rate. Can vary over time.
- learning_rate: TfExpressionEx = 0.001,
- # Treat N consecutive minibatches as one by accumulating gradients.
- minibatch_multiplier: TfExpressionEx = None,
- # Share internal state with a previously created optimizer?
- share: "Optimizer" = None,
- # Enable dynamic loss scaling for robust mixed-precision training?
- use_loss_scaling: bool = False,
- # Log2 of initial loss scaling factor.
- loss_scaling_init: float = 64.0,
- # Log2 of per-minibatch loss scaling increment when there is no overflow.
- loss_scaling_inc: float = 0.0005,
- # Log2 of per-minibatch loss scaling decrement when there is an overflow.
- loss_scaling_dec: float = 1.0,
- # Report fine-grained memory usage statistics in TensorBoard?
- report_mem_usage: bool = False,
- **kwargs):
-
- # Public fields.
- self.name = name
- self.learning_rate = learning_rate
- self.minibatch_multiplier = minibatch_multiplier
- self.id = self.name.replace("/", ".")
- self.scope = tf.get_default_graph().unique_name(self.id)
- self.optimizer_class = util.get_obj_by_name(tf_optimizer)
- self.optimizer_kwargs = dict(kwargs)
- self.use_loss_scaling = use_loss_scaling
- self.loss_scaling_init = loss_scaling_init
- self.loss_scaling_inc = loss_scaling_inc
- self.loss_scaling_dec = loss_scaling_dec
-
- # Private fields.
- self._updates_applied = False
- self._devices = OrderedDict() # device_name => EasyDict()
- self._shared_optimizers = OrderedDict() # device_name => optimizer_class
- self._gradient_shapes = None # [shape, ...]
- self._report_mem_usage = report_mem_usage
-
- # Validate arguments.
- assert callable(self.optimizer_class)
-
- # Share internal state if requested.
- if share is not None:
- assert isinstance(share, Optimizer)
- assert self.optimizer_class is share.optimizer_class
- assert self.learning_rate is share.learning_rate
- assert self.optimizer_kwargs == share.optimizer_kwargs
- self._shared_optimizers = share._shared_optimizers # pylint: disable=protected-access
-
- def _get_device(self, device_name: str):
- """Get internal state for the given TensorFlow device."""
- tfutil.assert_tf_initialized()
- if device_name in self._devices:
- return self._devices[device_name]
-
- # Initialize fields.
- device = util.EasyDict()
- device.name = device_name
- device.optimizer = None # Underlying optimizer: optimizer_class
- device.loss_scaling_var = None # Log2 of loss scaling: tf.Variable
- # Raw gradients: var => [grad, ...]
- device.grad_raw = OrderedDict()
- device.grad_clean = OrderedDict() # Clean gradients: var => grad
- # Accumulation sums: var => tf.Variable
- device.grad_acc_vars = OrderedDict()
- device.grad_acc_count = None # Accumulation counter: tf.Variable
- device.grad_acc = OrderedDict() # Accumulated gradients: var => grad
-
- # Setup TensorFlow objects.
- with tfutil.absolute_name_scope(self.scope + "/Devices"), tf.device(device_name), tf.control_dependencies(None):
- if device_name not in self._shared_optimizers:
- optimizer_name = self.scope.replace(
- "/", "_") + "_opt%d" % len(self._shared_optimizers)
- self._shared_optimizers[device_name] = self.optimizer_class(
- name=optimizer_name, learning_rate=self.learning_rate, **self.optimizer_kwargs)
- device.optimizer = self._shared_optimizers[device_name]
- if self.use_loss_scaling:
- device.loss_scaling_var = tf.Variable(np.float32(
- self.loss_scaling_init), trainable=False, name="loss_scaling_var")
-
- # Register device.
- self._devices[device_name] = device
- return device
-
- def register_gradients(self, loss: TfExpression, trainable_vars: Union[List, dict]) -> None:
- """Register the gradients of the given loss function with respect to the given variables.
- Intended to be called once per GPU."""
- tfutil.assert_tf_initialized()
- assert not self._updates_applied
- device = self._get_device(loss.device)
-
- # Validate trainables.
- if isinstance(trainable_vars, dict):
- # allow passing in Network.trainables as vars
- trainable_vars = list(trainable_vars.values())
- assert isinstance(trainable_vars, list) and len(trainable_vars) >= 1
- assert all(tfutil.is_tf_expression(expr)
- for expr in trainable_vars + [loss])
- assert all(var.device == device.name for var in trainable_vars)
-
- # Validate shapes.
- if self._gradient_shapes is None:
- self._gradient_shapes = [var.shape.as_list()
- for var in trainable_vars]
- assert len(trainable_vars) == len(self._gradient_shapes)
- assert all(var.shape.as_list() == var_shape for var,
- var_shape in zip(trainable_vars, self._gradient_shapes))
-
- # Report memory usage if requested.
- deps = []
- if self._report_mem_usage:
- self._report_mem_usage = False
- try:
- with tf.name_scope(self.id + '_mem'), tf.device(device.name), tf.control_dependencies([loss]):
- deps.append(autosummary.autosummary(
- self.id + "/mem_usage_gb", tf.contrib.memory_stats.BytesInUse() / 2**30))
- except tf.errors.NotFoundError:
- pass
-
- # Compute gradients.
- with tf.name_scope(self.id + "_grad"), tf.device(device.name), tf.control_dependencies(deps):
- loss = self.apply_loss_scaling(tf.cast(loss, tf.float32))
- gate = tf.train.Optimizer.GATE_NONE # disable gating to reduce memory usage
- grad_list = device.optimizer.compute_gradients(
- loss=loss, var_list=trainable_vars, gate_gradients=gate)
-
- # Register gradients.
- for grad, var in grad_list:
- if var not in device.grad_raw:
- device.grad_raw[var] = []
- device.grad_raw[var].append(grad)
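-
-    # Typical single-GPU workflow, shown here as an illustrative comment
-    # sketch (the loss tensor and network object are placeholders):
-    #
-    #     opt = Optimizer(name="TrainG", learning_rate=0.001)
-    #     opt.register_gradients(loss, network.trainables)   # once per GPU
-    #     train_op = opt.apply_updates()
-    #     tf.get_default_session().run(train_op, feed_dict)  # each minibatch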
-
- def apply_updates(self, allow_no_op: bool = False) -> tf.Operation:
- """Construct training op to update the registered variables based on their gradients."""
- tfutil.assert_tf_initialized()
- assert not self._updates_applied
- self._updates_applied = True
- all_ops = []
-
- # Check for no-op.
- if allow_no_op and len(self._devices) == 0:
- with tfutil.absolute_name_scope(self.scope):
- return tf.no_op(name='TrainingOp')
-
- # Clean up gradients.
- for device_idx, device in enumerate(self._devices.values()):
- with tfutil.absolute_name_scope(self.scope + "/Clean%d" % device_idx), tf.device(device.name):
- for var, grad in device.grad_raw.items():
-
- # Filter out disconnected gradients and convert to float32.
- grad = [g for g in grad if g is not None]
- grad = [tf.cast(g, tf.float32) for g in grad]
-
- # Sum within the device.
- if len(grad) == 0:
- grad = tf.zeros(var.shape) # No gradients => zero.
- elif len(grad) == 1:
- # Single gradient => use as is.
- grad = grad[0]
- else:
- # Multiple gradients => sum.
- grad = tf.add_n(grad)
-
- # Scale as needed.
-                    scale = 1.0 / len(device.grad_raw[var]) / len(self._devices)
- scale = tf.constant(scale, dtype=tf.float32, name="scale")
- if self.minibatch_multiplier is not None:
- scale /= tf.cast(self.minibatch_multiplier, tf.float32)
- scale = self.undo_loss_scaling(scale)
- device.grad_clean[var] = grad * scale
-
- # Sum gradients across devices.
- if len(self._devices) > 1:
- with tfutil.absolute_name_scope(self.scope + "/Broadcast"), tf.device(None):
- for all_vars in zip(*[device.grad_clean.keys() for device in self._devices.values()]):
- # NCCL does not support zero-sized tensors.
- if len(all_vars) > 0 and all(dim > 0 for dim in all_vars[0].shape.as_list()):
- all_grads = [device.grad_clean[var] for device, var in zip(
- self._devices.values(), all_vars)]
- all_grads = nccl_ops.all_sum(all_grads)
- for device, var, grad in zip(self._devices.values(), all_vars, all_grads):
- device.grad_clean[var] = grad
-
- # Apply updates separately on each device.
- for device_idx, device in enumerate(self._devices.values()):
- with tfutil.absolute_name_scope(self.scope + "/Apply%d" % device_idx), tf.device(device.name):
- # pylint: disable=cell-var-from-loop
-
- # Accumulate gradients over time.
- if self.minibatch_multiplier is None:
- acc_ok = tf.constant(True, name='acc_ok')
- device.grad_acc = OrderedDict(device.grad_clean)
- else:
- # Create variables.
- with tf.control_dependencies(None):
- for var in device.grad_clean.keys():
- device.grad_acc_vars[var] = tf.Variable(
- tf.zeros(var.shape), trainable=False, name="grad_acc_var")
- device.grad_acc_count = tf.Variable(
- tf.zeros([]), trainable=False, name="grad_acc_count")
-
- # Track counter.
- count_cur = device.grad_acc_count + 1.0
- def count_inc_op(): return tf.assign(device.grad_acc_count, count_cur)
- def count_reset_op(): return tf.assign(device.grad_acc_count, tf.zeros([]))
- acc_ok = (count_cur >= tf.cast(
- self.minibatch_multiplier, tf.float32))
- all_ops.append(
- tf.cond(acc_ok, count_reset_op, count_inc_op))
-
- # Track gradients.
- for var, grad in device.grad_clean.items():
- acc_var = device.grad_acc_vars[var]
- acc_cur = acc_var + grad
- device.grad_acc[var] = acc_cur
- with tf.control_dependencies([acc_cur]):
- def acc_inc_op(): return tf.assign(acc_var, acc_cur)
- def acc_reset_op(): return tf.assign(acc_var, tf.zeros(var.shape))
- all_ops.append(
- tf.cond(acc_ok, acc_reset_op, acc_inc_op))
-
- # No overflow => apply gradients.
- all_ok = tf.reduce_all(tf.stack(
- [acc_ok] + [tf.reduce_all(tf.is_finite(g)) for g in device.grad_acc.values()]))
-
- def apply_op(): return device.optimizer.apply_gradients(
- [(tf.cast(grad, var.dtype), var) for var, grad in device.grad_acc.items()])
- all_ops.append(tf.cond(all_ok, apply_op, tf.no_op))
-
- # Adjust loss scaling.
- if self.use_loss_scaling:
- def ls_inc_op(): return tf.assign_add(
- device.loss_scaling_var, self.loss_scaling_inc)
- def ls_dec_op(): return tf.assign_sub(
- device.loss_scaling_var, self.loss_scaling_dec)
-
- def ls_update_op(): return tf.group(tf.cond(all_ok, ls_inc_op, ls_dec_op))
- all_ops.append(tf.cond(acc_ok, ls_update_op, tf.no_op))
-
- # Last device => report statistics.
- if device_idx == len(self._devices) - 1:
- all_ops.append(autosummary.autosummary(
- self.id + "/learning_rate", self.learning_rate))
- all_ops.append(autosummary.autosummary(
- self.id + "/overflow_frequency", tf.where(all_ok, 0, 1), condition=acc_ok))
- if self.use_loss_scaling:
- all_ops.append(autosummary.autosummary(
- self.id + "/loss_scaling_log2", device.loss_scaling_var))
-
- # Initialize variables.
- self.reset_optimizer_state()
- if self.use_loss_scaling:
- tfutil.init_uninitialized_vars(
- [device.loss_scaling_var for device in self._devices.values()])
- if self.minibatch_multiplier is not None:
- tfutil.run([var.initializer for device in self._devices.values() for var in list(
- device.grad_acc_vars.values()) + [device.grad_acc_count]])
-
- # Group everything into a single op.
- with tfutil.absolute_name_scope(self.scope):
- return tf.group(*all_ops, name="TrainingOp")
-
- def reset_optimizer_state(self) -> None:
- """Reset internal state of the underlying optimizer."""
- tfutil.assert_tf_initialized()
- tfutil.run([var.initializer for device in self._devices.values()
- for var in device.optimizer.variables()])
-
- def get_loss_scaling_var(self, device: str) -> Union[tf.Variable, None]:
- """Get or create variable representing log2 of the current dynamic loss scaling factor."""
- return self._get_device(device).loss_scaling_var
-
- def apply_loss_scaling(self, value: TfExpression) -> TfExpression:
- """Apply dynamic loss scaling for the given expression."""
- assert tfutil.is_tf_expression(value)
- if not self.use_loss_scaling:
- return value
- return value * tfutil.exp2(self.get_loss_scaling_var(value.device))
-
- def undo_loss_scaling(self, value: TfExpression) -> TfExpression:
- """Undo the effect of dynamic loss scaling for the given expression."""
- assert tfutil.is_tf_expression(value)
- if not self.use_loss_scaling:
- return value
- return value * tfutil.exp2(-self.get_loss_scaling_var(value.device)) # pylint: disable=invalid-unary-operand-type
-
-
-class SimpleAdam:
- """Simplified version of tf.train.AdamOptimizer that behaves identically when used with dnnlib.tflib.Optimizer."""
-
- def __init__(self, name="Adam", learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
- self.name = name
- self.learning_rate = learning_rate
- self.beta1 = beta1
- self.beta2 = beta2
- self.epsilon = epsilon
- self.all_state_vars = []
-
- def variables(self):
- return self.all_state_vars
-
- def compute_gradients(self, loss, var_list, gate_gradients=tf.train.Optimizer.GATE_NONE):
- assert gate_gradients == tf.train.Optimizer.GATE_NONE
- return list(zip(tf.gradients(loss, var_list), var_list))
-
- def apply_gradients(self, grads_and_vars):
- with tf.name_scope(self.name):
- state_vars = []
- update_ops = []
-
- # Adjust learning rate to deal with startup bias.
- with tf.control_dependencies(None):
- b1pow_var = tf.Variable(
- dtype=tf.float32, initial_value=1, trainable=False)
- b2pow_var = tf.Variable(
- dtype=tf.float32, initial_value=1, trainable=False)
- state_vars += [b1pow_var, b2pow_var]
- b1pow_new = b1pow_var * self.beta1
- b2pow_new = b2pow_var * self.beta2
- update_ops += [tf.assign(b1pow_var, b1pow_new),
- tf.assign(b2pow_var, b2pow_new)]
- lr_new = self.learning_rate * \
- tf.sqrt(1 - b2pow_new) / (1 - b1pow_new)
-
- # Construct ops to update each variable.
- for grad, var in grads_and_vars:
- with tf.control_dependencies(None):
- m_var = tf.Variable(
- dtype=tf.float32, initial_value=tf.zeros_like(var), trainable=False)
- v_var = tf.Variable(
- dtype=tf.float32, initial_value=tf.zeros_like(var), trainable=False)
- state_vars += [m_var, v_var]
- m_new = self.beta1 * m_var + (1 - self.beta1) * grad
- v_new = self.beta2 * v_var + (1 - self.beta2) * tf.square(grad)
- var_delta = lr_new * m_new / (tf.sqrt(v_new) + self.epsilon)
-                update_ops += [tf.assign(m_var, m_new),
-                               tf.assign(v_var, v_new),
-                               tf.assign_sub(var, var_delta)]
-
- # Group everything together.
- self.all_state_vars += state_vars
- return tf.group(*update_ops)
diff --git a/spaces/ECCV2022/Screen_Image_Demoireing/app.py b/spaces/ECCV2022/Screen_Image_Demoireing/app.py
deleted file mode 100644
index 6630d3442a86270a6257dcfc1cbf5a5f18ea881a..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/Screen_Image_Demoireing/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import gradio as gr
-from model.nets import my_model
-import torch
-import cv2
-import torch.utils.data as data
-import torchvision.transforms as transforms
-import PIL
-from PIL import Image
-from PIL import ImageFile
-import math
-import os
-import torch.nn.functional as F
-
-os.environ["CUDA_VISIBLE_DEVICES"] = "1"
-device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-model1 = my_model(en_feature_num=48,
- en_inter_num=32,
- de_feature_num=64,
- de_inter_num=32,
- sam_number=1,
- ).to(device)
-
-load_path1 = "./mix.pth"
-model_state_dict1 = torch.load(load_path1, map_location=device)
-model1.load_state_dict(model_state_dict1)
-
-
-def default_toTensor(img):
- t_list = [transforms.ToTensor()]
- composed_transform = transforms.Compose(t_list)
- return composed_transform(img)
-
-def predict1(img):
- in_img = transforms.ToTensor()(img).to(device).unsqueeze(0)
- b, c, h, w = in_img.size()
- # pad image such that the resolution is a multiple of 32
- w_pad = (math.ceil(w / 32) * 32 - w) // 2
- w_odd_pad = w_pad
- h_pad = (math.ceil(h / 32) * 32 - h) // 2
- h_odd_pad = h_pad
-
- if w % 2 == 1:
- w_odd_pad += 1
- if h % 2 == 1:
- h_odd_pad += 1
-
- in_img = img_pad(in_img, w_pad=w_pad, h_pad=h_pad, w_odd_pad=w_odd_pad, h_odd_pad=h_odd_pad)
- with torch.no_grad():
- out_1, out_2, out_3 = model1(in_img)
- if h_pad != 0:
- out_1 = out_1[:, :, h_pad:-h_odd_pad, :]
- if w_pad != 0:
- out_1 = out_1[:, :, :, w_pad:-w_odd_pad]
- out_1 = out_1.squeeze(0)
- out_1 = PIL.Image.fromarray(torch.clamp(out_1 * 255, min=0, max=255
- ).byte().permute(1, 2, 0).cpu().numpy())
-
- return out_1
-
-def img_pad(x, w_pad, h_pad, w_odd_pad, h_odd_pad):
- '''
- Here the padding values are determined by the average r,g,b values across the training set
- in FHDMi dataset. For the evaluation on the UHDM, you can also try the commented lines where
- the mean values are calculated from UHDM training set, yielding similar performance.
- '''
- x1 = F.pad(x[:, 0:1, ...], (w_pad, w_odd_pad, h_pad, h_odd_pad), value=0.3827)
- x2 = F.pad(x[:, 1:2, ...], (w_pad, w_odd_pad, h_pad, h_odd_pad), value=0.4141)
- x3 = F.pad(x[:, 2:3, ...], (w_pad, w_odd_pad, h_pad, h_odd_pad), value=0.3912)
-
- y = torch.cat([x1, x2, x3], dim=1)
-
- return y
-
-
-title = "Clean Your Moire Images!"
-description = " The model was trained to remove the moire patterns from your captured screen images! Notably, this model can tackle \
-images up to 4K resolution, which covers most modern mobile phones. \
- \
-(Note: it may take about 80 s per 4K image (e.g., iPhone's resolution: 4032x3024) since this demo runs on the CPU. The model runs \
-in about 17 ms per standard 4K image on an NVIDIA 3090 GPU.) \
- \
-The best way to test the demo is to use your mobile phone to capture a screen image, which may produce moire patterns. \
-You can scan the [QR code](https://github.com/CVMI-Lab/UHDM/blob/main/figures/QR.jpg) to try it on your mobile phone. "
-
-article = "Check out the [ECCV 2022 paper](https://arxiv.org/abs/2207.09935) and the \
-    [official training code](https://github.com/CVMI-Lab/UHDM) which the demo is based on."
-
-
-iface1 = gr.Interface(fn=predict1,
- inputs=gr.inputs.Image(type="pil"),
-                     outputs=gr.outputs.Image(type="pil"),
- examples=['001.jpg',
- '002.jpg',
- '005.jpg'],
- title = title,
- description = description,
- article = article
- )
-
-
-iface1.launch()
\ No newline at end of file
diff --git a/spaces/ECCV2022/bytetrack/tools/convert_mot20_to_coco.py b/spaces/ECCV2022/bytetrack/tools/convert_mot20_to_coco.py
deleted file mode 100644
index 67bd9b55b94dc8511b8542d0391d73681238c8b7..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tools/convert_mot20_to_coco.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import os
-import numpy as np
-import json
-import cv2
-
-
-# Use the same script for MOT16
-DATA_PATH = 'datasets/MOT20'
-OUT_PATH = os.path.join(DATA_PATH, 'annotations')
-SPLITS = ['train_half', 'val_half', 'train', 'test'] # --> split training data to train_half and val_half.
-HALF_VIDEO = True
-CREATE_SPLITTED_ANN = True
-CREATE_SPLITTED_DET = True
-
-
-if __name__ == '__main__':
-
- if not os.path.exists(OUT_PATH):
- os.makedirs(OUT_PATH)
-
- for split in SPLITS:
- if split == "test":
- data_path = os.path.join(DATA_PATH, 'test')
- else:
- data_path = os.path.join(DATA_PATH, 'train')
- out_path = os.path.join(OUT_PATH, '{}.json'.format(split))
- out = {'images': [], 'annotations': [], 'videos': [],
- 'categories': [{'id': 1, 'name': 'pedestrian'}]}
- seqs = os.listdir(data_path)
- image_cnt = 0
- ann_cnt = 0
- video_cnt = 0
- tid_curr = 0
- tid_last = -1
- for seq in sorted(seqs):
- if '.DS_Store' in seq:
- continue
- video_cnt += 1 # video sequence number.
- out['videos'].append({'id': video_cnt, 'file_name': seq})
- seq_path = os.path.join(data_path, seq)
- img_path = os.path.join(seq_path, 'img1')
- ann_path = os.path.join(seq_path, 'gt/gt.txt')
- images = os.listdir(img_path)
- num_images = len([image for image in images if 'jpg' in image]) # half and half
-
- if HALF_VIDEO and ('half' in split):
- image_range = [0, num_images // 2] if 'train' in split else \
- [num_images // 2 + 1, num_images - 1]
- else:
- image_range = [0, num_images - 1]
-
- for i in range(num_images):
- if i < image_range[0] or i > image_range[1]:
- continue
- img = cv2.imread(os.path.join(data_path, '{}/img1/{:06d}.jpg'.format(seq, i + 1)))
- height, width = img.shape[:2]
- image_info = {'file_name': '{}/img1/{:06d}.jpg'.format(seq, i + 1), # image name.
- 'id': image_cnt + i + 1, # image number in the entire training set.
- 'frame_id': i + 1 - image_range[0], # image number in the video sequence, starting from 1.
- 'prev_image_id': image_cnt + i if i > 0 else -1, # image number in the entire training set.
- 'next_image_id': image_cnt + i + 2 if i < num_images - 1 else -1,
- 'video_id': video_cnt,
- 'height': height, 'width': width}
- out['images'].append(image_info)
- print('{}: {} images'.format(seq, num_images))
- if split != 'test':
- det_path = os.path.join(seq_path, 'det/det.txt')
- anns = np.loadtxt(ann_path, dtype=np.float32, delimiter=',')
- dets = np.loadtxt(det_path, dtype=np.float32, delimiter=',')
- if CREATE_SPLITTED_ANN and ('half' in split):
- anns_out = np.array([anns[i] for i in range(anns.shape[0])
- if int(anns[i][0]) - 1 >= image_range[0] and
- int(anns[i][0]) - 1 <= image_range[1]], np.float32)
- anns_out[:, 0] -= image_range[0]
- gt_out = os.path.join(seq_path, 'gt/gt_{}.txt'.format(split))
- fout = open(gt_out, 'w')
- for o in anns_out:
- fout.write('{:d},{:d},{:d},{:d},{:d},{:d},{:d},{:d},{:.6f}\n'.format(
- int(o[0]), int(o[1]), int(o[2]), int(o[3]), int(o[4]), int(o[5]),
- int(o[6]), int(o[7]), o[8]))
- fout.close()
- if CREATE_SPLITTED_DET and ('half' in split):
- dets_out = np.array([dets[i] for i in range(dets.shape[0])
- if int(dets[i][0]) - 1 >= image_range[0] and
- int(dets[i][0]) - 1 <= image_range[1]], np.float32)
- dets_out[:, 0] -= image_range[0]
- det_out = os.path.join(seq_path, 'det/det_{}.txt'.format(split))
- dout = open(det_out, 'w')
- for o in dets_out:
- dout.write('{:d},{:d},{:.1f},{:.1f},{:.1f},{:.1f},{:.6f}\n'.format(
- int(o[0]), int(o[1]), float(o[2]), float(o[3]), float(o[4]), float(o[5]),
- float(o[6])))
- dout.close()
-
- print('{} ann images'.format(int(anns[:, 0].max())))
- for i in range(anns.shape[0]):
- frame_id = int(anns[i][0])
- if frame_id - 1 < image_range[0] or frame_id - 1 > image_range[1]:
- continue
- track_id = int(anns[i][1])
- cat_id = int(anns[i][7])
- ann_cnt += 1
- if not ('15' in DATA_PATH):
- #if not (float(anns[i][8]) >= 0.25): # visibility.
- #continue
- if not (int(anns[i][6]) == 1): # whether ignore.
- continue
- if int(anns[i][7]) in [3, 4, 5, 6, 9, 10, 11]: # Non-person
- continue
- if int(anns[i][7]) in [2, 7, 8, 12]: # Ignored person
- #category_id = -1
- continue
- else:
- category_id = 1 # pedestrian(non-static)
- if not track_id == tid_last:
- tid_curr += 1
- tid_last = track_id
- else:
- category_id = 1
- ann = {'id': ann_cnt,
- 'category_id': category_id,
- 'image_id': image_cnt + frame_id,
- 'track_id': tid_curr,
- 'bbox': anns[i][2:6].tolist(),
- 'conf': float(anns[i][6]),
- 'iscrowd': 0,
- 'area': float(anns[i][4] * anns[i][5])}
- out['annotations'].append(ann)
- image_cnt += num_images
- print(tid_curr, tid_last)
- print('loaded {} for {} images and {} samples'.format(split, len(out['images']), len(out['annotations'])))
- json.dump(out, open(out_path, 'w'))
\ No newline at end of file
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/transformer/mingpt.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/transformer/mingpt.py
deleted file mode 100644
index d14b7b68117f4b9f297b2929397cd4f55089334c..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/modules/transformer/mingpt.py
+++ /dev/null
@@ -1,415 +0,0 @@
-"""
-taken from: https://github.com/karpathy/minGPT/
-GPT model:
-- the initial stem consists of a combination of token encoding and a positional encoding
-- the meat of it is a uniform sequence of Transformer blocks
- - each Transformer is a sequential combination of a 1-hidden-layer MLP block and a self-attention block
- - all blocks feed into a central residual pathway similar to resnets
-- the final decoder is a linear projection into a vanilla Softmax classifier
-"""
-
-import math
-import logging
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-from transformers import top_k_top_p_filtering
-
-logger = logging.getLogger(__name__)
-
-
-class GPTConfig:
- """ base GPT config, params common to all GPT versions """
- embd_pdrop = 0.1
- resid_pdrop = 0.1
- attn_pdrop = 0.1
-
- def __init__(self, vocab_size, block_size, **kwargs):
- self.vocab_size = vocab_size
- self.block_size = block_size
- for k,v in kwargs.items():
- setattr(self, k, v)
-
-
-class GPT1Config(GPTConfig):
- """ GPT-1 like network roughly 125M params """
- n_layer = 12
- n_head = 12
- n_embd = 768
-
-
-class CausalSelfAttention(nn.Module):
- """
- A vanilla multi-head masked self-attention layer with a projection at the end.
- It is possible to use torch.nn.MultiheadAttention here but I am including an
- explicit implementation here to show that there is nothing too scary here.
- """
-
- def __init__(self, config):
- super().__init__()
- assert config.n_embd % config.n_head == 0
- # key, query, value projections for all heads
- self.key = nn.Linear(config.n_embd, config.n_embd)
- self.query = nn.Linear(config.n_embd, config.n_embd)
- self.value = nn.Linear(config.n_embd, config.n_embd)
- # regularization
- self.attn_drop = nn.Dropout(config.attn_pdrop)
- self.resid_drop = nn.Dropout(config.resid_pdrop)
- # output projection
- self.proj = nn.Linear(config.n_embd, config.n_embd)
- # causal mask to ensure that attention is only applied to the left in the input sequence
- mask = torch.tril(torch.ones(config.block_size,
- config.block_size))
- if hasattr(config, "n_unmasked"):
- mask[:config.n_unmasked, :config.n_unmasked] = 1
- self.register_buffer("mask", mask.view(1, 1, config.block_size, config.block_size))
- self.n_head = config.n_head
-
- def forward(self, x, layer_past=None):
- B, T, C = x.size()
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
-
- present = torch.stack((k, v))
- if layer_past is not None:
- past_key, past_value = layer_past
- k = torch.cat((past_key, k), dim=-2)
- v = torch.cat((past_value, v), dim=-2)
-
- # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
- att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
- if layer_past is None:
- att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
-
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
- y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y))
- return y, present # TODO: check that this does not break anything
-
-
-class Block(nn.Module):
- """ an unassuming Transformer block """
- def __init__(self, config):
- super().__init__()
- self.ln1 = nn.LayerNorm(config.n_embd)
- self.ln2 = nn.LayerNorm(config.n_embd)
- self.attn = CausalSelfAttention(config)
- self.mlp = nn.Sequential(
- nn.Linear(config.n_embd, 4 * config.n_embd),
- nn.GELU(), # nice
- nn.Linear(4 * config.n_embd, config.n_embd),
- nn.Dropout(config.resid_pdrop),
- )
-
- def forward(self, x, layer_past=None, return_present=False):
- # TODO: check that training still works
- if return_present: assert not self.training
- # layer past: tuple of length two with B, nh, T, hs
- attn, present = self.attn(self.ln1(x), layer_past=layer_past)
-
- x = x + attn
- x = x + self.mlp(self.ln2(x))
- if layer_past is not None or return_present:
- return x, present
- return x
-
-
-class GPT(nn.Module):
- """ the full GPT language model, with a context size of block_size """
- def __init__(self, vocab_size, block_size, n_layer=12, n_head=8, n_embd=256,
- embd_pdrop=0., resid_pdrop=0., attn_pdrop=0., n_unmasked=0):
- super().__init__()
- config = GPTConfig(vocab_size=vocab_size, block_size=block_size,
- embd_pdrop=embd_pdrop, resid_pdrop=resid_pdrop, attn_pdrop=attn_pdrop,
- n_layer=n_layer, n_head=n_head, n_embd=n_embd,
- n_unmasked=n_unmasked)
- # input embedding stem
- self.tok_emb = nn.Embedding(config.vocab_size, config.n_embd)
- self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
- self.drop = nn.Dropout(config.embd_pdrop)
- # transformer
- self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
- # decoder head
- self.ln_f = nn.LayerNorm(config.n_embd)
- self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
- self.block_size = config.block_size
- self.apply(self._init_weights)
- self.config = config
- logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- def forward(self, idx, embeddings=None, targets=None):
- # forward the GPT model
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
-
- if embeddings is not None: # prepend explicit embeddings
- token_embeddings = torch.cat((embeddings, token_embeddings), dim=1)
-
- t = token_embeddings.shape[1]
- assert t <= self.block_size, "Cannot forward, model block size is exhausted."
- position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
- x = self.drop(token_embeddings + position_embeddings)
- x = self.blocks(x)
- x = self.ln_f(x)
- logits = self.head(x)
-
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss
-
- def forward_with_past(self, idx, embeddings=None, targets=None, past=None, past_length=None):
- # inference only
- assert not self.training
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
- if embeddings is not None: # prepend explicit embeddings
- token_embeddings = torch.cat((embeddings, token_embeddings), dim=1)
-
- if past is not None:
- assert past_length is not None
- past = torch.cat(past, dim=-2) # n_layer, 2, b, nh, len_past, dim_head
- past_shape = list(past.shape)
- expected_shape = [self.config.n_layer, 2, idx.shape[0], self.config.n_head, past_length, self.config.n_embd//self.config.n_head]
- assert past_shape == expected_shape, f"{past_shape} =/= {expected_shape}"
- position_embeddings = self.pos_emb[:, past_length, :] # each position maps to a (learnable) vector
- else:
- position_embeddings = self.pos_emb[:, :token_embeddings.shape[1], :]
-
- x = self.drop(token_embeddings + position_embeddings)
- presents = [] # accumulate over layers
- for i, block in enumerate(self.blocks):
- x, present = block(x, layer_past=past[i, ...] if past is not None else None, return_present=True)
- presents.append(present)
-
- x = self.ln_f(x)
- logits = self.head(x)
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss, torch.stack(presents) # _, _, n_layer, 2, b, nh, 1, dim_head
-
-
-class DummyGPT(nn.Module):
- # for debugging
- def __init__(self, add_value=1):
- super().__init__()
- self.add_value = add_value
-
- def forward(self, idx):
- return idx + self.add_value, None
-
-
-class CodeGPT(nn.Module):
- """Takes in semi-embeddings"""
- def __init__(self, vocab_size, block_size, in_channels, n_layer=12, n_head=8, n_embd=256,
- embd_pdrop=0., resid_pdrop=0., attn_pdrop=0., n_unmasked=0):
- super().__init__()
- config = GPTConfig(vocab_size=vocab_size, block_size=block_size,
- embd_pdrop=embd_pdrop, resid_pdrop=resid_pdrop, attn_pdrop=attn_pdrop,
- n_layer=n_layer, n_head=n_head, n_embd=n_embd,
- n_unmasked=n_unmasked)
- # input embedding stem
- self.tok_emb = nn.Linear(in_channels, config.n_embd)
- self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
- self.drop = nn.Dropout(config.embd_pdrop)
- # transformer
- self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
- # decoder head
- self.ln_f = nn.LayerNorm(config.n_embd)
- self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
- self.block_size = config.block_size
- self.apply(self._init_weights)
- self.config = config
- logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- def forward(self, idx, embeddings=None, targets=None):
- # forward the GPT model
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
-
- if embeddings is not None: # prepend explicit embeddings
- token_embeddings = torch.cat((embeddings, token_embeddings), dim=1)
-
- t = token_embeddings.shape[1]
- assert t <= self.block_size, "Cannot forward, model block size is exhausted."
- position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
- x = self.drop(token_embeddings + position_embeddings)
- x = self.blocks(x)
-        x = self.ln_f(x)  # final layer norm; ln_f is the attribute defined in __init__
- logits = self.head(x)
-
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss
-
-
-
-#### sampling utils
-
-def top_k_logits(logits, k):
- v, ix = torch.topk(logits, k)
- out = logits.clone()
- out[out < v[:, [-1]]] = -float('Inf')
- return out
-
-@torch.no_grad()
-def sample(model, x, steps, temperature=1.0, sample=False, top_k=None):
- """
- take a conditioning sequence of indices in x (of shape (b,t)) and predict the next token in
- the sequence, feeding the predictions back into the model each time. Clearly the sampling
- has quadratic complexity unlike an RNN that is only linear, and has a finite context window
- of block_size, unlike an RNN that has an infinite context window.
- """
- block_size = model.get_block_size()
- model.eval()
- for k in range(steps):
- x_cond = x if x.size(1) <= block_size else x[:, -block_size:] # crop context if needed
- logits, _ = model(x_cond)
- # pluck the logits at the final step and scale by temperature
- logits = logits[:, -1, :] / temperature
- # optionally crop probabilities to only the top k options
- if top_k is not None:
- logits = top_k_logits(logits, top_k)
- # apply softmax to convert to probabilities
- probs = F.softmax(logits, dim=-1)
- # sample from the distribution or take the most likely
- if sample:
- ix = torch.multinomial(probs, num_samples=1)
- else:
- _, ix = torch.topk(probs, k=1, dim=-1)
- # append to the sequence and continue
- x = torch.cat((x, ix), dim=1)
-
- return x
-
-
-@torch.no_grad()
-def sample_with_past(x, model, steps, temperature=1., sample_logits=True,
- top_k=None, top_p=None, callback=None):
- # x is conditioning
- sample = x
- cond_len = x.shape[1]
- past = None
- for n in range(steps):
- if callback is not None:
- callback(n)
- logits, _, present = model.forward_with_past(x, past=past, past_length=(n+cond_len-1))
- if past is None:
- past = [present]
- else:
- past.append(present)
- logits = logits[:, -1, :] / temperature
- if top_k is not None:
- logits = top_k_top_p_filtering(logits, top_k=top_k, top_p=top_p)
-
- probs = F.softmax(logits, dim=-1)
- if not sample_logits:
- _, x = torch.topk(probs, k=1, dim=-1)
- else:
- x = torch.multinomial(probs, num_samples=1)
- # append to the sequence and continue
- sample = torch.cat((sample, x), dim=1)
- del past
- sample = sample[:, cond_len:] # cut conditioning off
- return sample
-
-
-#### clustering utils
-
-class KMeans(nn.Module):
- def __init__(self, ncluster=512, nc=3, niter=10):
- super().__init__()
- self.ncluster = ncluster
- self.nc = nc
- self.niter = niter
- self.shape = (3,32,32)
- self.register_buffer("C", torch.zeros(self.ncluster,nc))
- self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8))
-
- def is_initialized(self):
- return self.initialized.item() == 1
-
- @torch.no_grad()
- def initialize(self, x):
- N, D = x.shape
- assert D == self.nc, D
- c = x[torch.randperm(N)[:self.ncluster]] # init clusters at random
- for i in range(self.niter):
- # assign all pixels to the closest codebook element
- a = ((x[:, None, :] - c[None, :, :])**2).sum(-1).argmin(1)
- # move each codebook element to be the mean of the pixels that assigned to it
- c = torch.stack([x[a==k].mean(0) for k in range(self.ncluster)])
- # re-assign any poorly positioned codebook elements
- nanix = torch.any(torch.isnan(c), dim=1)
- ndead = nanix.sum().item()
- print('done step %d/%d, re-initialized %d dead clusters' % (i+1, self.niter, ndead))
- c[nanix] = x[torch.randperm(N)[:ndead]] # re-init dead clusters
-
- self.C.copy_(c)
- self.initialized.fill_(1)
-
-
- def forward(self, x, reverse=False, shape=None):
- if not reverse:
- # flatten
- bs,c,h,w = x.shape
- assert c == self.nc
- x = x.reshape(bs,c,h*w,1)
- C = self.C.permute(1,0)
- C = C.reshape(1,c,1,self.ncluster)
- a = ((x-C)**2).sum(1).argmin(-1) # bs, h*w indices
- return a
- else:
- # flatten
- bs, HW = x.shape
- """
- c = self.C.reshape( 1, self.nc, 1, self.ncluster)
- c = c[bs*[0],:,:,:]
- c = c[:,:,HW*[0],:]
- x = x.reshape(bs, 1, HW, 1)
- x = x[:,3*[0],:,:]
- x = torch.gather(c, dim=3, index=x)
- """
- x = self.C[x]
- x = x.permute(0,2,1)
- shape = shape if shape is not None else self.shape
- x = x.reshape(bs, *shape)
-
- return x
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets.py
deleted file mode 100644
index db4c5e339f7a96cd24ed1cbbf88c4f35d5031309..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers
-from . import spec_utils
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/EsoCode/text-generation-webui/docs/FlexGen.md b/spaces/EsoCode/text-generation-webui/docs/FlexGen.md
deleted file mode 100644
index 931cc36f274cda2e629589e1968475aa9c2fa578..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/docs/FlexGen.md
+++ /dev/null
@@ -1,64 +0,0 @@
->FlexGen is a high-throughput generation engine for running large language models with limited GPU memory (e.g., a 16GB T4 GPU or a 24GB RTX3090 gaming card!).
-
-https://github.com/FMInference/FlexGen
-
-## Installation
-
-No additional installation steps are necessary. FlexGen is in the `requirements.txt` file for this project.
-
-## Converting a model
-
-FlexGen only works with the OPT model, and it needs to be converted to numpy format before starting the web UI:
-
-```
-python convert-to-flexgen.py models/opt-1.3b/
-```
-
-The output will be saved to `models/opt-1.3b-np/`.
-
-## Usage
-
-The basic command is the following:
-
-```
-python server.py --model opt-1.3b --loader flexgen
-```
-
-For large models, the RAM usage may be too high and your computer may freeze. If that happens, you can try this:
-
-```
-python server.py --model opt-1.3b --loader flexgen --compress-weight
-```
-
-With this second command, I was able to run both OPT-6.7b and OPT-13B with **2GB VRAM**, and the speed was good in both cases.
-
-You can also manually set the offload strategy with
-
-```
-python server.py --model opt-1.3b --loader flexgen --percent 0 100 100 0 100 0
-```
-
-where the six numbers after `--percent` are:
-
-```
-the percentage of weight on GPU
-the percentage of weight on CPU
-the percentage of attention cache on GPU
-the percentage of attention cache on CPU
-the percentage of activations on GPU
-the percentage of activations on CPU
-```
-
-You should typically only change the first two numbers. If their sum is less than 100, the remaining layers will be offloaded to the disk, by default into the `text-generation-webui/cache` folder.
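-
-For example, to keep 50% of the weights on the GPU, 30% on the CPU, and let the remaining 20% spill to disk (while keeping the attention cache and activations on the GPU), a command along these lines should work. Treat the exact split as an illustrative starting point rather than a tuned setting:
-
-```
-python server.py --model opt-1.3b --loader flexgen --percent 50 30 100 0 100 0
-```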
-
-## Performance
-
-In my experiments with OPT-30B using an RTX 3090 on Linux, I obtained these results:
-
-* `--loader flexgen --compress-weight --percent 0 100 100 0 100 0`: 0.99 seconds per token.
-* `--loader flexgen --compress-weight --percent 100 0 100 0 100 0`: 0.765 seconds per token.
-
-## Limitations
-
-* Only works with the OPT models.
-* Only two generation parameters are available: `temperature` and `do_sample`.
\ No newline at end of file
diff --git a/spaces/Ezi/Licences_check/app.py b/spaces/Ezi/Licences_check/app.py
deleted file mode 100644
index 3fe9a0703bbdc3398e040d51ee37507d6dba212f..0000000000000000000000000000000000000000
--- a/spaces/Ezi/Licences_check/app.py
+++ /dev/null
@@ -1,261 +0,0 @@
-"""
-# Add file restriction in upload area - Done
-# Fix summary hanging issue
-# Custom highlight function
-# Slider for number of keywords - with re-running
-# Validate input - check for empty input - first run vs later runs,
-# in later runs, state variable will not be empty, so need to consider that case
-"""
-
-import streamlit as st
-from streamlit_tags import st_tags
-# import annotated_text as at
-from annotated_text import annotated_text
-
-from io import StringIO
-import pandas as pd
-
-import read_extract
-import re
-import threading
-import time
-import os
-
-def check(datatype, task, field, summ=True):
- """
- Function that calls the necessary internal functions for summary and keyword extraction
- """
- with result_placeholder:
- with st.spinner('Please wait while we process the keywords:'):
- if summ:
- st.session_state['summariser'] = read_extract.AbstractiveSummarizer()
- st.session_state['summ_thread'] = threading.Thread(target=st.session_state['summariser'].bart, args=(st.session_state['lic_txt'],))
- st.session_state['summ_thread'].start()
-
- res = read_extract.get_keywords(datatype, task, field, st.session_state['pos_text'], st.session_state['neg_text'])
-
-
- st.session_state['p_keywords'], st.session_state['n_keywords'], st.session_state['contrd'], st.session_state['hl_text'] = res
-
-
-
-def display(ph, title, content, highlight=False):
-
- """
- Helper function to display the different contents based on the button presses
- """
- with ph.container():
- st.markdown('### '+title)
- # st.markdown('---')
- if highlight:
- annotated_text(*content)
- # highlight_cont(content)
- else:
- st.markdown(content)
-
-def output():
-
- """
- Function to display the final output after extracting Summary and keywords
- """
- if st.session_state['p_keywords'] or st.session_state['n_keywords']:
- with result_placeholder.container():
- st.markdown('---')
- msg_placeholder = st.empty()
- col1, col2, col3, col4 = st.columns(4)
- content_placeholder = st.empty()
-
- with msg_placeholder:
- if not st.session_state.contrd:
- st.success("### Congrats!! There are no contradictions to the license.")
- else:
- st.error("### The chosen usage contradicts with the license.")
- with col4:
- st.session_state['op4'] = st.button(label='View Usage Contradictions', type='secondary')#,
- # on_click=display,
- # args=(content_placeholder, 'Contradictions', st.session_state['hl_text'], True))
- if st.session_state['op4']:
- display(content_placeholder, 'Contradictions', st.session_state['hl_text'], True)
- with col1:
- if st.session_state['summ']:
- st.session_state['op1'] = st.button(label='View License Summary', type='secondary')#,
- # on_click=display,
- # args=(content_placeholder, 'License Summary', st.session_state['summary']))
- if st.session_state['op1']:
- if (st.session_state['summ_thread']):
-                # The summariser thread is still running => the summary is not ready yet.
-                if st.session_state['summ_thread'].is_alive():
-                    with content_placeholder:
-                        st.markdown('The summary will take longer to generate. Please retry after some time.')
- st.session_state['op1'] = st.button(label='Retry')
- else:
- st.session_state['summ_thread'] = None
- st.session_state['summary'] = st.session_state['summariser'].summary
- display(content_placeholder, 'License Summary', st.session_state['summary'])
-
- with col2:
- st.session_state['op2'] = st.button(label='View Permitted Usage Tags', type='secondary')#,
- # on_click=display,
- # args=(content_placeholder, 'Permitted Usage Tags', ', '.join(st.session_state['p_keywords'])))
- if st.session_state['op2']:
- res = ', '.join(st.session_state['p_keywords']) if st.session_state['p_keywords'] else 'No permissions allowed'
- display(content_placeholder, 'Permitted Usage Tags', res)
-
- with col3:
- st.session_state['op3'] = st.button(label='View Restriction Tags', type='secondary')#,
- # on_click=display,
- # args=(content_placeholder, 'Restriction Tags', ', '.join(st.session_state['n_keywords'])))
- if st.session_state['op3']:
- res = ', '.join(st.session_state['n_keywords']) if st.session_state['n_keywords'] else 'No usage restrictions, free to use'
- display(content_placeholder, 'Restriction Tags', res)
-
- st.button(label='Check New Use Case', type='primary')#, on_click=reset, use_container_width=True)
-
-def read_license(spdx):
- """
-    Function to read the processed and unprocessed license texts.
-    Licenses_split.csv contains the parts of the data manually separated into Dos and Don'ts,
-    to generate permitted (pos) and restricted (neg) use tags.
-
- Argument:
- spdx: id used to identify the licenses. The license files are named with the spdx id for easier reading and access
-
- Returns / Added to session state:
- pos_text: The part of the License containing information for permitted use
- neg_text: The part of the License containing information about usage restrictions
- lic_txt: The full license text
- """
- try:
- file_name = spdx+'.txt'
- loc = os.path.join(os.getcwd(), 'Licenses', file_name)
- f = open(loc, encoding="utf-8")
- st.session_state.lic_txt = f.read()
- except FileNotFoundError:
- msg = 'Error: {} not found in {}'.format(file_name, os.path.dirname(loc))
- msg += """
-
- Please make sure the Licenses folder is in the same directory as app.py"""
- st.error(msg)
- reset()
-
- try:
- loc1 = os.path.join(os.getcwd(), 'Licenses_split.csv')
- loc2 = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'Licenses_split.csv')
-
- loc = loc1 if os.path.exists(loc1) else loc2
- sheet = pd.read_csv(loc, index_col='spdx')
-
- sheet = sheet.fillna('')
- st.session_state.pos_text = sheet.loc[spdx,'Dos']
- st.session_state.neg_text = sheet.loc[spdx,'Donts']
-
- except FileNotFoundError:
- msg = 'Error: {} not found in {}'.format('Licenses_split.csv', os.path.dirname(loc))
- msg += """
-
- Please make sure the Licenses_split.csv file is in the same directory as app.py"""
- st.error(msg)
-
- reset()
-
-
-
-
-def get_input():
-
- """
- Function to get all the necessary input for the program
- """
- st.title("License Permissions Checker")
- st.markdown("This mini-app extracts keywords from Software License texts to check valid usage of the models")
-
- # Input Fields
-
- asset_type = st.selectbox(label='Select the type of asset being used with the License:',
- options=['Data', 'Model', 'Source Code', 'Model Derivatives'],
- index=1, key="asset")
- task = st.selectbox(label='Select the task to be performed:',
- options=['Text Classification', 'Token Classification', 'Table Question Answering',
- 'Question Answering', 'Zero-Shot Classification', 'Translation', 'Summarization',
- 'Conversational', 'Feature Extraction', 'Text Generation', 'Text2Text Generation',
- 'Fill-Mask', 'Sentence Similarity', 'Text-to-Speech', 'Automatic Speech Recognition',
- 'Audio-to-Audio', 'Audio Classification', 'Voice Activity Detection',
- 'Depth Estimation', 'Image Classification', 'Object Detection', 'Image Segmentation',
- 'Text-to-Image', 'Image-to-Text', 'Image-to-Image', 'Unconditional Image Generation',
- 'Video Classification', 'Reinforcement Learning', 'Robotics', 'Tabular Classification',
- 'Tabular Regression', 'Text-to-Video', 'Visual Question Answering', 'Document Question Answering',
- 'Zero-Shot Image Classification', 'Graph Machine Learning'],
- index=0, key="task")
-
- field = st_tags(label='Select the associated field (Eg. Medical, Research, Commercial, Non commercial, etc.):',
- suggestions=['Medical Conditions', 'Research', 'Commercial Use', 'Non Commercial Use', 'Criminal Likelihood Prediction', 'Synthesize Media', 'Insurance claims prediction'],
- maxtags = 1, text='', key='field')
- if field:
- field = field[0]
-
- lic_names = ['OpenRAIL-S', 'BigScience RAIL License v1.0', 'CreativeML OpenRAIL-M', 'BigScience BLOOM RAIL 1.0', 'Academic Free License v3.0', 'Apache license 2.0', 'Artistic license 2.0', 'Boost Software License 1.0', 'BSD 1-clause', 'BSD 2-clause "Simplified" license', 'BSD 3-clause "New" or "Revised" license', 'BSD 3-clause Clear license', 'Computational Use of Data Agreement', 'Creative Commons Zero v1.0 Universal', 'Creative Commons Attribution 4.0', 'Creative Commons Attribution Share Alike 4.0', 'Creative Commons Attribution Non Commercial 4.0', 'Creative Commons Attribution No Derivatives 4.0', 'Creative Commons Attribution Non Commercial No Derivatives 4.0', 'Creative Commons Attribution Non Commercial Share Alike 4.0', 'Educational Community License v2.0', 'Eclipse Public License 1.0', 'Eclipse Public License 2.0', 'European Union Public License 1.2', 'GNU Affero General Public License v3.0', 'GNU Free Documentation License family', 'GNU General Public License v2.0', 'GNU General Public License v3.0', 'GNU Lesser General Public License v2.1', 'GNU Lesser General Public License v3.0', 'ISC', 'LaTeX Project Public License v1.3c', 'Microsoft Public License', 'MIT', 'Mozilla Public License 2.0', 'Open Data Commons License Attribution family', 'Open Database License family', 'Open Rail++-M License', 'Open Software License 3.0', 'PostgreSQL License', 'University of Illinois/NCSA Open Source License', 'The Unlicense', 'zLib License', 'Open Data Commons Public Domain Dedication and License']
- lic_ids = ['openrail-s', 'bigscience-bloom-rail-1.0', 'creativeml-openrail-m', 'bigscience-bloom-rail-1.0', 'afl-3.0', 'apache-2.0', 'artistic-2.0', 'bsl-1.0', 'bsd-1-clause', 'bsd-2-clause', 'bsd-3-clause', 'bsd-3-clause-clear', 'c-uda', 'cc0-1.0', 'cc-by-4.0', 'cc-by-sa-4.0', 'cc-by-nc-4.0', 'cc-by-nd-4.0', 'cc-by-nc-nd-4.0', 'cc-by-nc-sa-4.0', 'ecl-2.0', 'epl-1.0', 'epl-2.0', 'eupl-1.2', 'agpl-3.0', 'gfdl', 'gpl-2.0', 'gpl-3.0', 'lgpl-2.1', 'lgpl-3.0', 'isc', 'lppl-1.3c', 'ms-pl', 'mit', 'mpl-2.0', 'odc-by', 'odbl', 'openrail++', 'osl-3.0', 'postgresql', 'ncsa', 'unlicense', 'zlib', 'pddl']
-
- lic_ind = st.selectbox(label='Select the License to check against:',
- options=list(range(len(lic_ids))),
- index=0, key='license',
- format_func=lambda opt:lic_names[opt])
- spdx = lic_ids[lic_ind]
-
- read_license(spdx)
-
- summ = st.checkbox(label='Generate Summary', value=True, key='summ')
-
- if st.session_state.lic_txt and (st.session_state.pos_text or st.session_state.neg_text):
- submit = st.button(label='Check permissions',
- type = 'secondary',
- on_click=check,
- args=(asset_type, task, field, summ), )
-
-
-def reset():
- """
- Function to reset all state variables to check a new use case
- """
- st.session_state['pos_text'] = ""
- st.session_state['neg_text'] = ""
- st.session_state['lic_txt'] = ""
- st.session_state['summary'] = ""
- st.session_state['p_keywords'] = []
- st.session_state['n_keywords'] = []
- st.session_state['contrd'] = False
- st.session_state['hl_text'] = ""
- for op in ['op1', 'op2', 'op3', 'op4']:
- st.session_state[op] = False
- st.session_state['summ_thread'] = None
-
-
-
-
-
-st.set_page_config(page_title="License Usage Checker")
-
-for key in ['pos_text', 'neg_text', 'lic_txt', 'summary', 'hl_text']:
- if key not in st.session_state:
- st.session_state[key] = ""
-
-if "p_keywords" not in st.session_state:
- st.session_state['p_keywords'] = []
-
-if "n_keywords" not in st.session_state:
- st.session_state['n_keywords'] = []
-
-if "contrd" not in st.session_state:
- st.session_state['contrd'] = False
-
-for op in ['op1', 'op2', 'op3', 'op4']:
- if op not in st.session_state:
- st.session_state[op] = False
-
-if "summ_thread" not in st.session_state:
- st.session_state['summ_thread'] = None
-
-get_input()
-
-result_placeholder = st.empty()
-
-output()
\ No newline at end of file
diff --git a/spaces/FFusion/FFusion.AI-beta-Playground/README.md b/spaces/FFusion/FFusion.AI-beta-Playground/README.md
deleted file mode 100644
index 4dba8ae84f5009b76d6de62f7e2234917fce6287..0000000000000000000000000000000000000000
--- a/spaces/FFusion/FFusion.AI-beta-Playground/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: FFusion.AI -beta- Playground
-emoji: 😻
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
\ No newline at end of file
diff --git a/spaces/Farazquraishi/pendora/retinaface/anchor.py b/spaces/Farazquraishi/pendora/retinaface/anchor.py
deleted file mode 100644
index bac3a361582b839e5a0de0659b408bdd2420db67..0000000000000000000000000000000000000000
--- a/spaces/Farazquraishi/pendora/retinaface/anchor.py
+++ /dev/null
@@ -1,296 +0,0 @@
-"""Anchor utils modified from https://github.com/biubug6/Pytorch_Retinaface"""
-import math
-import tensorflow as tf
-import numpy as np
-from itertools import product as product
-
-
-###############################################################################
-# Tensorflow / Numpy Priors #
-###############################################################################
-def prior_box(image_sizes, min_sizes, steps, clip=False):
- """prior box"""
- feature_maps = [
- [math.ceil(image_sizes[0] / step), math.ceil(image_sizes[1] / step)]
- for step in steps]
-
- anchors = []
- for k, f in enumerate(feature_maps):
- for i, j in product(range(f[0]), range(f[1])):
- for min_size in min_sizes[k]:
- s_kx = min_size / image_sizes[1]
- s_ky = min_size / image_sizes[0]
- cx = (j + 0.5) * steps[k] / image_sizes[1]
- cy = (i + 0.5) * steps[k] / image_sizes[0]
- anchors += [cx, cy, s_kx, s_ky]
-
- output = np.asarray(anchors).reshape([-1, 4])
-
- if clip:
- output = np.clip(output, 0, 1)
-
- return output
-
-
-def prior_box_tf(image_sizes, min_sizes, steps, clip=False):
- """prior box"""
- image_sizes = tf.cast(tf.convert_to_tensor(image_sizes), tf.float32)
- feature_maps = tf.math.ceil(
- tf.reshape(image_sizes, [1, 2]) /
- tf.reshape(tf.cast(steps, tf.float32), [-1, 1]))
-
- anchors = []
- for k in range(len(min_sizes)):
- grid_x, grid_y = _meshgrid_tf(tf.range(feature_maps[k][1]),
- tf.range(feature_maps[k][0]))
- cx = (grid_x + 0.5) * steps[k] / image_sizes[1]
- cy = (grid_y + 0.5) * steps[k] / image_sizes[0]
- cxcy = tf.stack([cx, cy], axis=-1)
- cxcy = tf.reshape(cxcy, [-1, 2])
- cxcy = tf.repeat(cxcy, repeats=tf.shape(min_sizes[k])[0], axis=0)
-
- sx = min_sizes[k] / image_sizes[1]
- sy = min_sizes[k] / image_sizes[0]
- sxsy = tf.stack([sx, sy], 1)
- sxsy = tf.repeat(sxsy[tf.newaxis],
- repeats=tf.shape(grid_x)[0] * tf.shape(grid_x)[1],
- axis=0)
- sxsy = tf.reshape(sxsy, [-1, 2])
-
- anchors.append(tf.concat([cxcy, sxsy], 1))
-
- output = tf.concat(anchors, axis=0)
-
- if clip:
- output = tf.clip_by_value(output, 0, 1)
-
- return output
-
-
-def _meshgrid_tf(x, y):
- """ workaround solution of the tf.meshgrid() issue:
- https://github.com/tensorflow/tensorflow/issues/34470"""
- grid_shape = [tf.shape(y)[0], tf.shape(x)[0]]
- grid_x = tf.broadcast_to(tf.reshape(x, [1, -1]), grid_shape)
- grid_y = tf.broadcast_to(tf.reshape(y, [-1, 1]), grid_shape)
- return grid_x, grid_y
-
-
-###############################################################################
-# Tensorflow Encoding #
-###############################################################################
-def encode_tf(labels, priors, match_thresh, ignore_thresh,
- variances=[0.1, 0.2]):
- """tensorflow encoding"""
- assert ignore_thresh <= match_thresh
- priors = tf.cast(priors, tf.float32)
- bbox = labels[:, :4]
- landm = labels[:, 4:-1]
- landm_valid = labels[:, -1] # 1: with landm, 0: w/o landm.
-
- # jaccard index
- overlaps = _jaccard(bbox, _point_form(priors))
-
- # (Bipartite Matching)
- # [num_objects] best prior for each ground truth
- best_prior_overlap, best_prior_idx = tf.math.top_k(overlaps, k=1)
- best_prior_overlap = best_prior_overlap[:, 0]
- best_prior_idx = best_prior_idx[:, 0]
-
- # [num_priors] best ground truth for each prior
- overlaps_t = tf.transpose(overlaps)
- best_truth_overlap, best_truth_idx = tf.math.top_k(overlaps_t, k=1)
- best_truth_overlap = best_truth_overlap[:, 0]
- best_truth_idx = best_truth_idx[:, 0]
-
- # ensure best prior
- def _loop_body(i, bt_idx, bt_overlap):
- bp_mask = tf.one_hot(best_prior_idx[i], tf.shape(bt_idx)[0])
- bp_mask_int = tf.cast(bp_mask, tf.int32)
- new_bt_idx = bt_idx * (1 - bp_mask_int) + bp_mask_int * i
- bp_mask_float = tf.cast(bp_mask, tf.float32)
- new_bt_overlap = bt_overlap * (1 - bp_mask_float) + bp_mask_float * 2
- return tf.cond(best_prior_overlap[i] > match_thresh,
- lambda: (i + 1, new_bt_idx, new_bt_overlap),
- lambda: (i + 1, bt_idx, bt_overlap))
- _, best_truth_idx, best_truth_overlap = tf.while_loop(
- lambda i, bt_idx, bt_overlap: tf.less(i, tf.shape(best_prior_idx)[0]),
- _loop_body, [tf.constant(0), best_truth_idx, best_truth_overlap])
-
- matches_bbox = tf.gather(bbox, best_truth_idx) # [num_priors, 4]
- matches_landm = tf.gather(landm, best_truth_idx) # [num_priors, 10]
- matches_landm_v = tf.gather(landm_valid, best_truth_idx) # [num_priors]
-
- loc_t = _encode_bbox(matches_bbox, priors, variances)
- landm_t = _encode_landm(matches_landm, priors, variances)
- landm_valid_t = tf.cast(matches_landm_v > 0, tf.float32)
- conf_t = tf.cast(best_truth_overlap > match_thresh, tf.float32)
- conf_t = tf.where(
- tf.logical_and(best_truth_overlap < match_thresh,
- best_truth_overlap > ignore_thresh),
- tf.ones_like(conf_t) * -1, conf_t) # 1: pos, 0: neg, -1: ignore
-
- return tf.concat([loc_t, landm_t, landm_valid_t[..., tf.newaxis],
- conf_t[..., tf.newaxis]], axis=1)
-
-
-def _encode_bbox(matched, priors, variances):
- """Encode the variances from the priorbox layers into the ground truth
- boxes we have matched (based on jaccard overlap) with the prior boxes.
- Args:
- matched: (tensor) Coords of ground truth for each prior in point-form
- Shape: [num_priors, 4].
- priors: (tensor) Prior boxes in center-offset form
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- encoded boxes (tensor), Shape: [num_priors, 4]
- """
-
- # dist b/t match center and prior's center
- g_cxcy = (matched[:, :2] + matched[:, 2:]) / 2 - priors[:, :2]
- # encode variance
- g_cxcy /= (variances[0] * priors[:, 2:])
- # match wh / prior wh
- g_wh = (matched[:, 2:] - matched[:, :2]) / priors[:, 2:]
- g_wh = tf.math.log(g_wh) / variances[1]
- # return target for smooth_l1_loss
- return tf.concat([g_cxcy, g_wh], 1) # [num_priors,4]
-
-
-def _encode_landm(matched, priors, variances):
- """Encode the variances from the priorbox layers into the ground truth
- boxes we have matched (based on jaccard overlap) with the prior boxes.
- Args:
- matched: (tensor) Coords of ground truth for each prior in point-form
- Shape: [num_priors, 10].
- priors: (tensor) Prior boxes in center-offset form
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- encoded landm (tensor), Shape: [num_priors, 10]
- """
-
- # dist b/t match center and prior's center
- matched = tf.reshape(matched, [tf.shape(matched)[0], 5, 2])
- priors = tf.broadcast_to(
- tf.expand_dims(priors, 1), [tf.shape(matched)[0], 5, 4])
- g_cxcy = matched[:, :, :2] - priors[:, :, :2]
- # encode variance
- g_cxcy /= (variances[0] * priors[:, :, 2:])
- # g_cxcy /= priors[:, :, 2:]
- g_cxcy = tf.reshape(g_cxcy, [tf.shape(g_cxcy)[0], -1])
- # return target for smooth_l1_loss
- return g_cxcy
-
-
-def _point_form(boxes):
- """ Convert prior_boxes to (xmin, ymin, xmax, ymax)
- representation for comparison to point form ground truth data.
- Args:
- boxes: (tensor) center-size default boxes from priorbox layers.
- Return:
- boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.
- """
- return tf.concat((boxes[:, :2] - boxes[:, 2:] / 2,
- boxes[:, :2] + boxes[:, 2:] / 2), axis=1)
-
-
-def _intersect(box_a, box_b):
- """ We resize both tensors to [A,B,2]:
- [A,2] -> [A,1,2] -> [A,B,2]
- [B,2] -> [1,B,2] -> [A,B,2]
- Then we compute the area of intersect between box_a and box_b.
- Args:
- box_a: (tensor) bounding boxes, Shape: [A,4].
- box_b: (tensor) bounding boxes, Shape: [B,4].
- Return:
- (tensor) intersection area, Shape: [A,B].
- """
- A = tf.shape(box_a)[0]
- B = tf.shape(box_b)[0]
- max_xy = tf.minimum(
- tf.broadcast_to(tf.expand_dims(box_a[:, 2:], 1), [A, B, 2]),
- tf.broadcast_to(tf.expand_dims(box_b[:, 2:], 0), [A, B, 2]))
- min_xy = tf.maximum(
- tf.broadcast_to(tf.expand_dims(box_a[:, :2], 1), [A, B, 2]),
- tf.broadcast_to(tf.expand_dims(box_b[:, :2], 0), [A, B, 2]))
- inter = tf.maximum((max_xy - min_xy), tf.zeros_like(max_xy - min_xy))
- return inter[:, :, 0] * inter[:, :, 1]
-
-
-def _jaccard(box_a, box_b):
- """Compute the jaccard overlap of two sets of boxes. The jaccard overlap
- is simply the intersection over union of two boxes. Here we operate on
- ground truth boxes and default boxes.
- E.g.:
- A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)
- Args:
- box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]
- box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]
- Return:
- jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]
- """
- inter = _intersect(box_a, box_b)
- area_a = tf.broadcast_to(
- tf.expand_dims(
- (box_a[:, 2] - box_a[:, 0]) * (box_a[:, 3] - box_a[:, 1]), 1),
- tf.shape(inter)) # [A,B]
- area_b = tf.broadcast_to(
- tf.expand_dims(
- (box_b[:, 2] - box_b[:, 0]) * (box_b[:, 3] - box_b[:, 1]), 0),
- tf.shape(inter)) # [A,B]
- union = area_a + area_b - inter
- return inter / union # [A,B]
-
-
-###############################################################################
-# Tensorflow Decoding #
-###############################################################################
-def decode_tf(labels, priors, variances=[0.1, 0.2]):
- """tensorflow decoding"""
- bbox = _decode_bbox(labels[:, :4], priors, variances)
- landm = _decode_landm(labels[:, 4:14], priors, variances)
- landm_valid = labels[:, 14][:, tf.newaxis]
- conf = labels[:, 15][:, tf.newaxis]
-
- return tf.concat([bbox, landm, landm_valid, conf], axis=1)
-
-
-def _decode_bbox(pre, priors, variances=[0.1, 0.2]):
- """Decode locations from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- pre (tensor): location predictions for loc layers,
- Shape: [num_priors,4]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded bounding box predictions
- """
- centers = priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:]
- sides = priors[:, 2:] * tf.math.exp(pre[:, 2:] * variances[1])
-
- return tf.concat([centers - sides / 2, centers + sides / 2], axis=1)
-
-
-def _decode_landm(pre, priors, variances=[0.1, 0.2]):
- """Decode landm from predictions using priors to undo
- the encoding we did for offset regression at train time.
- Args:
- pre (tensor): landm predictions for loc layers,
- Shape: [num_priors,10]
- priors (tensor): Prior boxes in center-offset form.
- Shape: [num_priors,4].
- variances: (list[float]) Variances of priorboxes
- Return:
- decoded landm predictions
- """
- landms = tf.concat(
- [priors[:, :2] + pre[:, :2] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 2:4] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 4:6] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 6:8] * variances[0] * priors[:, 2:],
- priors[:, :2] + pre[:, 8:10] * variances[0] * priors[:, 2:]], axis=1)
- return landms
\ No newline at end of file
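Annotation: the deleted `_encode_bbox`/`_decode_bbox` pair above implements the usual SSD-style center/size offset encoding. The following is a minimal NumPy sketch with made-up prior and ground-truth values (not taken from the repo), only to show that decoding inverts the encoding:

```python
import numpy as np

variances = [0.1, 0.2]
prior = np.array([[0.5, 0.5, 0.2, 0.2]])    # center-offset form: cx, cy, w, h
gt = np.array([[0.45, 0.45, 0.65, 0.70]])   # point form: xmin, ymin, xmax, ymax

# encode: offset of the GT center and log-ratio of the sizes, scaled by the variances
g_cxcy = ((gt[:, :2] + gt[:, 2:]) / 2 - prior[:, :2]) / (variances[0] * prior[:, 2:])
g_wh = np.log((gt[:, 2:] - gt[:, :2]) / prior[:, 2:]) / variances[1]
encoded = np.concatenate([g_cxcy, g_wh], axis=1)

# decode: invert the encoding and convert back to point form
centers = prior[:, :2] + encoded[:, :2] * variances[0] * prior[:, 2:]
sides = prior[:, 2:] * np.exp(encoded[:, 2:] * variances[1])
decoded = np.concatenate([centers - sides / 2, centers + sides / 2], axis=1)

assert np.allclose(decoded, gt)  # the round trip recovers the ground-truth box
```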
diff --git a/spaces/Ferion/image-matting-app/ppmatting/core/val_ml.py b/spaces/Ferion/image-matting-app/ppmatting/core/val_ml.py
deleted file mode 100644
index 77628925bec1fa08a4a24de685355cc71157db92..0000000000000000000000000000000000000000
--- a/spaces/Ferion/image-matting-app/ppmatting/core/val_ml.py
+++ /dev/null
@@ -1,162 +0,0 @@
-# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-
-import cv2
-import numpy as np
-import time
-import paddle
-import paddle.nn.functional as F
-from paddleseg.utils import TimeAverager, calculate_eta, logger, progbar
-
-from ppmatting.metrics import metric
-from pymatting.util.util import load_image, save_image, stack_images
-from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml
-
-np.set_printoptions(suppress=True)
-
-
-def save_alpha_pred(alpha, path):
- """
-    The value of alpha should be in the range [0, 255]; shape should be [h, w]
- """
- dirname = os.path.dirname(path)
- if not os.path.exists(dirname):
- os.makedirs(dirname)
-
- alpha = (alpha).astype('uint8')
- cv2.imwrite(path, alpha)
-
-
-def reverse_transform(alpha, trans_info):
- """recover pred to origin shape"""
- for item in trans_info[::-1]:
- if item[0][0] == 'resize':
- h, w = item[1][0].numpy()[0], item[1][1].numpy()[0]
- alpha = cv2.resize(alpha, dsize=(w, h))
- elif item[0][0] == 'padding':
- h, w = item[1][0].numpy()[0], item[1][1].numpy()[0]
- alpha = alpha[0:h, 0:w]
- else:
- raise Exception("Unexpected info '{}' in im_info".format(item[0]))
- return alpha
-
-
-def evaluate_ml(model,
- eval_dataset,
- num_workers=0,
- print_detail=True,
- save_dir='output/results',
- save_results=True):
-
- loader = paddle.io.DataLoader(
- eval_dataset,
- batch_size=1,
- drop_last=False,
- num_workers=num_workers,
- return_list=True, )
-
- total_iters = len(loader)
- mse_metric = metric.MSE()
- sad_metric = metric.SAD()
- grad_metric = metric.Grad()
- conn_metric = metric.Conn()
-
- if print_detail:
- logger.info("Start evaluating (total_samples: {}, total_iters: {})...".
- format(len(eval_dataset), total_iters))
- progbar_val = progbar.Progbar(target=total_iters, verbose=1)
- reader_cost_averager = TimeAverager()
- batch_cost_averager = TimeAverager()
- batch_start = time.time()
-
- img_name = ''
- i = 0
- ignore_cnt = 0
- for iter, data in enumerate(loader):
-
- reader_cost_averager.record(time.time() - batch_start)
-
- image_rgb_chw = data['img'].numpy()[0]
- image_rgb_hwc = np.transpose(image_rgb_chw, (1, 2, 0))
- trimap = data['trimap'].numpy().squeeze() / 255.0
- image = image_rgb_hwc * 0.5 + 0.5 # reverse normalize (x/255 - mean) / std
-
- is_fg = trimap >= 0.9
- is_bg = trimap <= 0.1
-
- if is_fg.sum() == 0 or is_bg.sum() == 0:
- ignore_cnt += 1
-            logger.info('Skipping sample {}: trimap has no pure foreground or background'.format(iter))
- continue
-
- alpha_pred = model(image, trimap)
-
- alpha_pred = reverse_transform(alpha_pred, data['trans_info'])
-
- alpha_gt = data['alpha'].numpy().squeeze() * 255
-
- trimap = data['ori_trimap'].numpy().squeeze()
-
- alpha_pred = np.round(alpha_pred * 255)
- mse = mse_metric.update(alpha_pred, alpha_gt, trimap)
- sad = sad_metric.update(alpha_pred, alpha_gt, trimap)
- grad = grad_metric.update(alpha_pred, alpha_gt, trimap)
- conn = conn_metric.update(alpha_pred, alpha_gt, trimap)
-
- if sad > 1000:
- print(data['img_name'][0])
-
- if save_results:
- alpha_pred_one = alpha_pred
- alpha_pred_one[trimap == 255] = 255
- alpha_pred_one[trimap == 0] = 0
-
- save_name = data['img_name'][0]
- name, ext = os.path.splitext(save_name)
- if save_name == img_name:
- save_name = name + '_' + str(i) + ext
- i += 1
- else:
- img_name = save_name
- save_name = name + '_' + str(0) + ext
- i = 1
- save_alpha_pred(alpha_pred_one, os.path.join(save_dir, save_name))
-
- batch_cost_averager.record(
- time.time() - batch_start, num_samples=len(alpha_gt))
- batch_cost = batch_cost_averager.get_average()
- reader_cost = reader_cost_averager.get_average()
-
- if print_detail:
- progbar_val.update(iter + 1,
- [('SAD', sad), ('MSE', mse), ('Grad', grad),
- ('Conn', conn), ('batch_cost', batch_cost),
- ('reader cost', reader_cost)])
-
- reader_cost_averager.reset()
- batch_cost_averager.reset()
- batch_start = time.time()
-
- mse = mse_metric.evaluate()
- sad = sad_metric.evaluate()
- grad = grad_metric.evaluate()
- conn = conn_metric.evaluate()
-
- logger.info('[EVAL] SAD: {:.4f}, MSE: {:.4f}, Grad: {:.4f}, Conn: {:.4f}'.
- format(sad, mse, grad, conn))
-    logger.info('Ignored samples (no pure fg/bg in trimap): {}'.format(ignore_cnt))
-
- return sad, mse, grad, conn
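Annotation: the SAD/MSE/Grad/Conn metrics above come from `ppmatting.metrics`. Assuming they follow the common matting convention of scoring predictions only inside the trimap's unknown region, a standalone sketch of SAD and MSE would look roughly like this (the 128 "unknown" marker and the division by 1000 are assumptions for illustration, not taken from the deleted code):

```python
import numpy as np

def sad_mse_unknown(alpha_pred, alpha_gt, trimap):
    # assumption: trimap uses 0 = background, 128 = unknown, 255 = foreground
    unknown = trimap == 128
    diff = (alpha_pred[unknown] - alpha_gt[unknown]) / 255.0
    sad = np.abs(diff).sum() / 1000.0   # SAD is commonly reported in thousands
    mse = (diff ** 2).mean() if diff.size else 0.0
    return sad, mse
```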
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L9_Onnx.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L9_Onnx.py
deleted file mode 100644
index fae2b928252801795b038f51451b234e007f6f03..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L9_Onnx.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import onnxruntime
-import torch
-
-class ContentVec256L9_Onnx(SpeechEncoder):
-    def __init__(self, vec_path="pretrain/vec-256-layer-9.onnx", device=None):
- print("load model(s) from {}".format(vec_path))
- self.hidden_dim = 256
- if device is None:
- self.dev = torch.device("cpu")
- else:
- self.dev = torch.device(device)
- if device == 'cpu' or device == torch.device("cpu") or device is None:
- providers = ['CPUExecutionProvider']
- elif device == 'cuda' or device == torch.device("cuda"):
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def encoder(self, wav):
- feats = wav
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- feats = feats.unsqueeze(0).cpu().detach().numpy()
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)
- return torch.tensor(logits[0]).transpose(1, 2).to(self.dev)
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/parser.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/parser.py
deleted file mode 100644
index 4e8a19cf976e3c6dfe411da64b8dce3e9a4548e0..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/demucs/parser.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-from pathlib import Path
-
-
-def get_parser():
- parser = argparse.ArgumentParser("demucs", description="Train and evaluate Demucs.")
- default_raw = None
- default_musdb = None
- if 'DEMUCS_RAW' in os.environ:
- default_raw = Path(os.environ['DEMUCS_RAW'])
- if 'DEMUCS_MUSDB' in os.environ:
- default_musdb = Path(os.environ['DEMUCS_MUSDB'])
- parser.add_argument(
- "--raw",
- type=Path,
- default=default_raw,
- help="Path to raw audio, can be faster, see python3 -m demucs.raw to extract.")
- parser.add_argument("--no_raw", action="store_const", const=None, dest="raw")
- parser.add_argument("-m",
- "--musdb",
- type=Path,
- default=default_musdb,
- help="Path to musdb root")
- parser.add_argument("--is_wav", action="store_true",
- help="Indicate that the MusDB dataset is in wav format (i.e. MusDB-HQ).")
- parser.add_argument("--metadata", type=Path, default=Path("metadata/"),
- help="Folder where metadata information is stored.")
- parser.add_argument("--wav", type=Path,
- help="Path to a wav dataset. This should contain a 'train' and a 'valid' "
- "subfolder.")
- parser.add_argument("--samplerate", type=int, default=44100)
- parser.add_argument("--audio_channels", type=int, default=2)
- parser.add_argument("--samples",
- default=44100 * 10,
- type=int,
- help="number of samples to feed in")
- parser.add_argument("--data_stride",
- default=44100,
- type=int,
- help="Stride for chunks, shorter = longer epochs")
- parser.add_argument("-w", "--workers", default=10, type=int, help="Loader workers")
- parser.add_argument("--eval_workers", default=2, type=int, help="Final evaluation workers")
- parser.add_argument("-d",
- "--device",
- help="Device to train on, default is cuda if available else cpu")
- parser.add_argument("--eval_cpu", action="store_true", help="Eval on test will be run on cpu.")
- parser.add_argument("--dummy", help="Dummy parameter, useful to create a new checkpoint file")
- parser.add_argument("--test", help="Just run the test pipeline + one validation. "
- "This should be a filename relative to the models/ folder.")
- parser.add_argument("--test_pretrained", help="Just run the test pipeline + one validation, "
- "on a pretrained model. ")
-
- parser.add_argument("--rank", default=0, type=int)
- parser.add_argument("--world_size", default=1, type=int)
- parser.add_argument("--master")
-
- parser.add_argument("--checkpoints",
- type=Path,
- default=Path("checkpoints"),
- help="Folder where to store checkpoints etc")
- parser.add_argument("--evals",
- type=Path,
- default=Path("evals"),
- help="Folder where to store evals and waveforms")
- parser.add_argument("--save",
- action="store_true",
- help="Save estimated for the test set waveforms")
- parser.add_argument("--logs",
- type=Path,
- default=Path("logs"),
- help="Folder where to store logs")
- parser.add_argument("--models",
- type=Path,
- default=Path("models"),
- help="Folder where to store trained models")
- parser.add_argument("-R",
- "--restart",
- action='store_true',
- help='Restart training, ignoring previous run')
-
- parser.add_argument("--seed", type=int, default=42)
- parser.add_argument("-e", "--epochs", type=int, default=180, help="Number of epochs")
- parser.add_argument("-r",
- "--repeat",
- type=int,
- default=2,
- help="Repeat the train set, longer epochs")
- parser.add_argument("-b", "--batch_size", type=int, default=64)
- parser.add_argument("--lr", type=float, default=3e-4)
- parser.add_argument("--mse", action="store_true", help="Use MSE instead of L1")
- parser.add_argument("--init", help="Initialize from a pre-trained model.")
-
- # Augmentation options
- parser.add_argument("--no_augment",
- action="store_false",
- dest="augment",
- default=True,
- help="No basic data augmentation.")
- parser.add_argument("--repitch", type=float, default=0.2,
- help="Probability to do tempo/pitch change")
- parser.add_argument("--max_tempo", type=float, default=12,
- help="Maximum relative tempo change in %% when using repitch.")
-
- parser.add_argument("--remix_group_size",
- type=int,
- default=4,
- help="Shuffle sources using group of this size. Useful to somewhat "
- "replicate multi-gpu training "
- "on less GPUs.")
- parser.add_argument("--shifts",
- type=int,
- default=10,
- help="Number of random shifts used for the shift trick.")
- parser.add_argument("--overlap",
- type=float,
- default=0.25,
- help="Overlap when --split_valid is passed.")
-
- # See model.py for doc
- parser.add_argument("--growth",
- type=float,
- default=2.,
- help="Number of channels between two layers will increase by this factor")
- parser.add_argument("--depth",
- type=int,
- default=6,
- help="Number of layers for the encoder and decoder")
- parser.add_argument("--lstm_layers", type=int, default=2, help="Number of layers for the LSTM")
- parser.add_argument("--channels",
- type=int,
- default=64,
- help="Number of channels for the first encoder layer")
- parser.add_argument("--kernel_size",
- type=int,
- default=8,
- help="Kernel size for the (transposed) convolutions")
- parser.add_argument("--conv_stride",
- type=int,
- default=4,
- help="Stride for the (transposed) convolutions")
- parser.add_argument("--context",
- type=int,
- default=3,
- help="Context size for the decoder convolutions "
- "before the transposed convolutions")
- parser.add_argument("--rescale",
- type=float,
- default=0.1,
- help="Initial weight rescale reference")
- parser.add_argument("--no_resample", action="store_false",
- default=True, dest="resample",
- help="No Resampling of the input/output x2")
- parser.add_argument("--no_glu",
- action="store_false",
- default=True,
- dest="glu",
- help="Replace all GLUs by ReLUs")
- parser.add_argument("--no_rewrite",
- action="store_false",
- default=True,
- dest="rewrite",
- help="No 1x1 rewrite convolutions")
- parser.add_argument("--normalize", action="store_true")
- parser.add_argument("--no_norm_wav", action="store_false", dest='norm_wav', default=True)
-
- # Tasnet options
- parser.add_argument("--tasnet", action="store_true")
- parser.add_argument("--split_valid",
- action="store_true",
- help="Predict chunks by chunks for valid and test. Required for tasnet")
- parser.add_argument("--X", type=int, default=8)
-
- # Other options
- parser.add_argument("--show",
- action="store_true",
- help="Show model architecture, size and exit")
- parser.add_argument("--save_model", action="store_true",
- help="Skip traning, just save final model "
- "for the current checkpoint value.")
- parser.add_argument("--save_state",
- help="Skip training, just save state "
- "for the current checkpoint value. You should "
- "provide a model name as argument.")
-
- # Quantization options
- parser.add_argument("--q-min-size", type=float, default=1,
- help="Only quantize layers over this size (in MB)")
- parser.add_argument(
- "--qat", type=int, help="If provided, use QAT training with that many bits.")
-
- parser.add_argument("--diffq", type=float, default=0)
- parser.add_argument(
- "--ms-target", type=float, default=162,
- help="Model size target in MB, when using DiffQ. Best model will be kept "
- "only if it is smaller than this target.")
-
- return parser
-
-
-def get_name(parser, args):
- """
- Return the name of an experiment given the args. Some parameters are ignored,
- for instance --workers, as they do not impact the final result.
- """
- ignore_args = set([
- "checkpoints",
- "deterministic",
- "eval",
- "evals",
- "eval_cpu",
- "eval_workers",
- "logs",
- "master",
- "rank",
- "restart",
- "save",
- "save_model",
- "save_state",
- "show",
- "workers",
- "world_size",
- ])
- parts = []
- name_args = dict(args.__dict__)
- for name, value in name_args.items():
- if name in ignore_args:
- continue
- if value != parser.get_default(name):
- if isinstance(value, Path):
- parts.append(f"{name}={value.name}")
- else:
- parts.append(f"{name}={value}")
- if parts:
- name = " ".join(parts)
- else:
- name = "default"
- return name
diff --git a/spaces/Funbi/Chat2/app.py b/spaces/Funbi/Chat2/app.py
deleted file mode 100644
index 3277a028e3cd943201e608e54a0b193b1a14dae9..0000000000000000000000000000000000000000
--- a/spaces/Funbi/Chat2/app.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import random
-import gradio as gr
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/facebook/blenderbot-3B"
-headers = {"Authorization": "Bearer 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"}
-
-def query(payload):
- response = requests.post(API_URL, headers=headers, json=payload)
- return response.json()
-
-
-def chat(message):
- past_user=["what is your name?"]
- generated=["I am Sade, Funbi's AI chatbot"]
- message = message.lower()
- if message.startswith("what is your name"):
- response = random.choice(["I am Sade an AI chatbot made by Funbi,how are you?","Sade, an AI chatbot made by Funbi, feel free to ask me anything"])
- elif "your name" in message:
- response = random.choice(["I am Sade an AI chatbot made by Funbi,how are you?","Sade, an AI chatbot made by Funbi, feel free to ask me anything"])
- elif "who are you" in message:
- response = random.choice(["I am Sade an AI chatbot made by Funbi,how are you?","Sade, an AI chatbot made by Funbi, feel free to ask me anything"])
- else:
- response = query({"inputs": {"past_user_inputs":past_user,"generated_responses":generated,"text":message},})
- response = response['generated_text']
- past_user.append(message)
- generated.append(response)
- #history.append((message, response))
- return response
-
-demo = gr.Interface(
- chat,
- inputs="text",
- outputs="text",
- title="Chatbot",
- description="This is chatbot made by using a pre-train model by Facebook called blender and I then primed it with a little extra information",
-
-)
-demo.launch()
diff --git a/spaces/GeorgeOrville/bingo/src/pages/api/image.ts b/spaces/GeorgeOrville/bingo/src/pages/api/image.ts
deleted file mode 100644
index fbc0c8def432ba212d27347471670d3b6202463d..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/pages/api/image.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-import { createImage } from '@/lib/bots/bing/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const { prompt, id } = req.query
- if (!prompt) {
- return res.json({
- result: {
- value: 'Image',
- message: 'No Prompt'
- }
- })
- }
- try {
- const headers = createHeaders(req.cookies, {
- IMAGE_BING_COOKIE: process.env.IMAGE_BING_COOKIE
- }, 'image')
-
- debug('headers', headers)
- const response = await createImage(String(prompt), String(id), {
- ...headers,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- })
- res.writeHead(200, {
- 'Content-Type': 'text/plain; charset=UTF-8',
- })
- return res.end(response)
- } catch (e) {
- return res.json({
- result: {
- value: 'Error',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/samples/README.md b/spaces/GipAdonimus/Real-Time-Voice-Cloning/samples/README.md
deleted file mode 100644
index 1a392d86e42f72e83954619f563f4881da327236..0000000000000000000000000000000000000000
--- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/samples/README.md
+++ /dev/null
@@ -1,22 +0,0 @@
-The audio files in this folder are provided for toolbox testing and
-benchmarking purposes. These are the same reference utterances
-used by the SV2TTS authors to generate the audio samples located at:
-https://google.github.io/tacotron/publications/speaker_adaptation/index.html
-
-The `p240_00000.mp3` and `p260_00000.mp3` files are compressed
-versions of audios from the VCTK corpus available at:
-https://datashare.is.ed.ac.uk/handle/10283/3443
-VCTK.txt contains the copyright notices and licensing information.
-
-The `1320_00000.mp3`, `3575_00000.mp3`, `6829_00000.mp3`
-and `8230_00000.mp3` files are compressed versions of audios
-from the LibriSpeech dataset available at: https://openslr.org/12
-For these files, the following notice applies:
-```
-LibriSpeech (c) 2014 by Vassil Panayotov
-
-LibriSpeech ASR corpus is licensed under a
-Creative Commons Attribution 4.0 International License.
-
-See <http://creativecommons.org/licenses/by/4.0/>.
-```
diff --git a/spaces/Gmq-x/gpt-academic/Dockerfile b/spaces/Gmq-x/gpt-academic/Dockerfile
deleted file mode 100644
index da5053dbc7fc0accbd7b10fab87ca72feced8fe8..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/Dockerfile
+++ /dev/null
@@ -1,20 +0,0 @@
-# This Dockerfile is for building an environment without local models; to use local models such as ChatGLM, see docs/Dockerfile+ChatGLM
-# How to build: edit `config.py` first, then run: docker build -t gpt-academic .
-# How to run: docker run --rm -it --net=host gpt-academic
-FROM python:3.11
-
-RUN echo '[global]' > /etc/pip.conf && \
- echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
- echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
-
-
-WORKDIR /gpt
-COPY requirements.txt .
-RUN pip3 install -r requirements.txt
-
-COPY . .
-
-# Optional step: warm up the modules
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-CMD ["python3", "-u", "main.py"]
diff --git a/spaces/Godrose0728/sound-link/commons.py b/spaces/Godrose0728/sound-link/commons.py
deleted file mode 100644
index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000
--- a/spaces/Godrose0728/sound-link/commons.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
- return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
- return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
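Annotation: a tiny usage sketch of the `generate_path` helper defined above, with toy durations; each of the `t_x` inputs is expanded into a contiguous, monotonic block of output frames.

```python
import torch

duration = torch.tensor([[[2., 1., 3.]]])  # [b, 1, t_x]: 3 inputs lasting 2, 1 and 3 frames
mask = torch.ones(1, 1, 6, 3)              # [b, 1, t_y, t_x] with t_y = duration.sum()
path = generate_path(duration, mask)       # [b, 1, t_y, t_x]
print(path[0, 0].T)
# tensor([[1., 1., 0., 0., 0., 0.],
#         [0., 0., 1., 0., 0., 0.],
#         [0., 0., 0., 1., 1., 1.]])
```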
diff --git a/spaces/Gradio-Blocks/pubmed-abstract-retriever/nltkmodule.py b/spaces/Gradio-Blocks/pubmed-abstract-retriever/nltkmodule.py
deleted file mode 100644
index 156e12862eada3d3dad70601d930d67c19ad1dd1..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/pubmed-abstract-retriever/nltkmodule.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import nltk
-nltk.download('wordnet')
-nltk.download('punkt')
-nltk.download('stopwords')
-nltk.download('averaged_perceptron_tagger')
-nltk.download('maxent_ne_chunker')
-nltk.download('words')
-nltk.download('brown')
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py
deleted file mode 100644
index fb2f2d1e13b8c97dbf5f785dadebcccf874ff7be..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py
+++ /dev/null
@@ -1,37 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
- type='FasterRCNN',
- pretrained='torchvision://resnet50',
- rpn_head=dict(
- type='RPNHead',
- anchor_generator=dict(
- type='LegacyAnchorGenerator',
- center_offset=0.5,
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
- roi_head=dict(
- type='StandardRoIHead',
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(
- type='RoIAlign',
- output_size=7,
- sampling_ratio=2,
- aligned=False),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=dict(
- bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'),
- loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn_proposal=dict(max_per_img=2000),
- rcnn=dict(assigner=dict(match_low_quality=True))))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 9888120f65b045df1c7d4d05fb010373abf82ccf..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './gcnet_r50-d8_512x512_160k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py
deleted file mode 100644
index 7615a7c19a3f19635b71801a55e4544be4d215b5..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,12 +0,0 @@
-_base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k.py'
-model = dict(
- pretrained='mmcls://mobilenet_v2',
- backbone=dict(
- _delete_=True,
- type='MobileNetV2',
- widen_factor=1.,
- strides=(1, 2, 2, 1, 1, 1, 1),
- dilations=(1, 1, 1, 2, 2, 4, 4),
- out_indices=(1, 2, 4, 6)),
- decode_head=dict(in_channels=320, c1_in_channels=24),
- auxiliary_head=dict(in_channels=96))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/segmentors/base.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/segmentors/base.py
deleted file mode 100644
index 7b53757537a129fb92795caa7f661738c7252fb4..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/segmentors/base.py
+++ /dev/null
@@ -1,273 +0,0 @@
-import logging
-import warnings
-from abc import ABCMeta, abstractmethod
-from collections import OrderedDict
-
-import mmcv
-import numpy as np
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-from mmcv.runner import auto_fp16
-
-
-class BaseSegmentor(nn.Module):
- """Base class for segmentors."""
-
- __metaclass__ = ABCMeta
-
- def __init__(self):
- super(BaseSegmentor, self).__init__()
- self.fp16_enabled = False
-
- @property
- def with_neck(self):
- """bool: whether the segmentor has neck"""
- return hasattr(self, 'neck') and self.neck is not None
-
- @property
- def with_auxiliary_head(self):
- """bool: whether the segmentor has auxiliary head"""
- return hasattr(self,
- 'auxiliary_head') and self.auxiliary_head is not None
-
- @property
- def with_decode_head(self):
- """bool: whether the segmentor has decode head"""
- return hasattr(self, 'decode_head') and self.decode_head is not None
-
- @abstractmethod
- def extract_feat(self, imgs):
- """Placeholder for extract features from images."""
- pass
-
- @abstractmethod
- def encode_decode(self, img, img_metas):
- """Placeholder for encode images with backbone and decode into a
- semantic segmentation map of the same size as input."""
- pass
-
- @abstractmethod
- def forward_train(self, imgs, img_metas, **kwargs):
- """Placeholder for Forward function for training."""
- pass
-
- @abstractmethod
- def simple_test(self, img, img_meta, **kwargs):
- """Placeholder for single image test."""
- pass
-
- @abstractmethod
- def aug_test(self, imgs, img_metas, **kwargs):
- """Placeholder for augmentation test."""
- pass
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in segmentor.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if pretrained is not None:
- logger = logging.getLogger()
- logger.info(f'load model from: {pretrained}')
-
- def forward_test(self, imgs, img_metas, **kwargs):
- """
- Args:
- imgs (List[Tensor]): the outer list indicates test-time
- augmentations and inner Tensor should have a shape NxCxHxW,
- which contains all images in the batch.
- img_metas (List[List[dict]]): the outer list indicates test-time
- augs (multiscale, flip, etc.) and the inner list indicates
- images in a batch.
- """
- for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]:
- if not isinstance(var, list):
- raise TypeError(f'{name} must be a list, but got '
- f'{type(var)}')
-
- num_augs = len(imgs)
- if num_augs != len(img_metas):
- raise ValueError(f'num of augmentations ({len(imgs)}) != '
- f'num of image meta ({len(img_metas)})')
-        # all images in the same aug batch should share the same ori_shape
-        # and pad_shape
- for img_meta in img_metas:
- ori_shapes = [_['ori_shape'] for _ in img_meta]
- assert all(shape == ori_shapes[0] for shape in ori_shapes)
- img_shapes = [_['img_shape'] for _ in img_meta]
- assert all(shape == img_shapes[0] for shape in img_shapes)
- pad_shapes = [_['pad_shape'] for _ in img_meta]
- assert all(shape == pad_shapes[0] for shape in pad_shapes)
-
- if num_augs == 1:
- return self.simple_test(imgs[0], img_metas[0], **kwargs)
- else:
- return self.aug_test(imgs, img_metas, **kwargs)
-
- @auto_fp16(apply_to=('img', ))
- def forward(self, img, img_metas, return_loss=True, **kwargs):
- """Calls either :func:`forward_train` or :func:`forward_test` depending
- on whether ``return_loss`` is ``True``.
-
- Note this setting will change the expected inputs. When
- ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor
-        and List[dict]), and when ``return_loss=False``, img and img_meta
- should be double nested (i.e. List[Tensor], List[List[dict]]), with
- the outer list indicating test time augmentations.
- """
- if return_loss:
- return self.forward_train(img, img_metas, **kwargs)
- else:
- return self.forward_test(img, img_metas, **kwargs)
-
- def train_step(self, data_batch, optimizer, **kwargs):
- """The iteration step during training.
-
- This method defines an iteration step during training, except for the
- back propagation and optimizer updating, which are done in an optimizer
- hook. Note that in some complicated cases or models, the whole process
- including back propagation and optimizer updating is also defined in
- this method, such as GAN.
-
- Args:
- data (dict): The output of dataloader.
- optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of
- runner is passed to ``train_step()``. This argument is unused
- and reserved.
-
- Returns:
- dict: It should contain at least 3 keys: ``loss``, ``log_vars``,
- ``num_samples``.
- ``loss`` is a tensor for back propagation, which can be a
- weighted sum of multiple losses.
- ``log_vars`` contains all the variables to be sent to the
- logger.
- ``num_samples`` indicates the batch size (when the model is
- DDP, it means the batch size on each GPU), which is used for
- averaging the logs.
- """
- losses = self(**data_batch)
- loss, log_vars = self._parse_losses(losses)
-
- outputs = dict(
- loss=loss,
- log_vars=log_vars,
- num_samples=len(data_batch['img_metas']))
-
- return outputs
-
- def val_step(self, data_batch, **kwargs):
- """The iteration step during validation.
-
- This method shares the same signature as :func:`train_step`, but used
- during val epochs. Note that the evaluation after training epochs is
- not implemented with this method, but an evaluation hook.
- """
- output = self(**data_batch, **kwargs)
- return output
-
- @staticmethod
- def _parse_losses(losses):
- """Parse the raw outputs (losses) of the network.
-
- Args:
- losses (dict): Raw output of the network, which usually contain
- losses and other necessary information.
-
- Returns:
- tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor
- which may be a weighted sum of all losses, log_vars contains
- all the variables to be sent to the logger.
- """
- log_vars = OrderedDict()
- for loss_name, loss_value in losses.items():
- if isinstance(loss_value, torch.Tensor):
- log_vars[loss_name] = loss_value.mean()
- elif isinstance(loss_value, list):
- log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value)
- else:
- raise TypeError(
- f'{loss_name} is not a tensor or list of tensors')
-
- loss = sum(_value for _key, _value in log_vars.items()
- if 'loss' in _key)
-
- log_vars['loss'] = loss
- for loss_name, loss_value in log_vars.items():
- # reduce loss when distributed training
- if dist.is_available() and dist.is_initialized():
- loss_value = loss_value.data.clone()
- dist.all_reduce(loss_value.div_(dist.get_world_size()))
-            log_vars[loss_name] = loss_value.item()
-
- return loss, log_vars
-
- def show_result(self,
- img,
- result,
- palette=None,
- win_name='',
- show=False,
- wait_time=0,
- out_file=None,
- opacity=0.5):
- """Draw `result` over `img`.
-
- Args:
- img (str or Tensor): The image to be displayed.
- result (Tensor): The semantic segmentation results to draw over
- `img`.
- palette (list[list[int]]] | np.ndarray | None): The palette of
- segmentation map. If None is given, random palette will be
- generated. Default: None
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- Default: 0.
- show (bool): Whether to show the image.
- Default: False.
- out_file (str or None): The filename to write the image.
- Default: None.
- opacity(float): Opacity of painted segmentation map.
- Default 0.5.
- Must be in (0, 1] range.
- Returns:
- img (Tensor): Only if not `show` or `out_file`
- """
- img = mmcv.imread(img)
- img = img.copy()
- seg = result[0]
- if palette is None:
- if self.PALETTE is None:
- palette = np.random.randint(
- 0, 255, size=(len(self.CLASSES), 3))
- else:
- palette = self.PALETTE
- palette = np.array(palette)
- assert palette.shape[0] == len(self.CLASSES)
- assert palette.shape[1] == 3
- assert len(palette.shape) == 2
- assert 0 < opacity <= 1.0
- color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8)
- for label, color in enumerate(palette):
- color_seg[seg == label, :] = color
- # convert to BGR
- color_seg = color_seg[..., ::-1]
-
- img = img * (1 - opacity) + color_seg * opacity
- img = img.astype(np.uint8)
- # if out_file specified, do not show image in window
- if out_file is not None:
- show = False
-
- if show:
- mmcv.imshow(img, win_name, wait_time)
- if out_file is not None:
- mmcv.imwrite(img, out_file)
-
- if not (show or out_file):
- warnings.warn('show==False and out_file is not specified, only '
- 'result image will be returned')
- return img
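Annotation: a small illustration of the `_parse_losses` static method above, using made-up loss names and values. Every entry whose key contains `loss` is averaged and summed into the total, while other entries (e.g. accuracies) are only logged.

```python
import torch

losses = {
    'decode.loss_seg': [torch.tensor(0.5), torch.tensor(0.75)],  # list entries: sum of per-element means
    'aux.loss_seg': torch.tensor(0.25),
    'decode.acc_seg': torch.tensor(87.5),
}
loss, log_vars = BaseSegmentor._parse_losses(losses)
print(float(loss))       # 1.5 (= 0.5 + 0.75 + 0.25; the accuracy entry is not summed)
print(sorted(log_vars))  # ['aux.loss_seg', 'decode.acc_seg', 'decode.loss_seg', 'loss']
```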
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_base_ocnli.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_base_ocnli.sh
deleted file mode 100644
index f635330a4b260391a3f9d4b01998ce8305d55b8e..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_base_ocnli.sh
+++ /dev/null
@@ -1,93 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_base_ocnli # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id)
-
-
-export CUDA_VISIBLE_DEVICES='1'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_base
-
-TASK=ocnli
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/classification_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/yangping/data/ChineseCLUE_DATA/${TASK}_public/
-PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.json \
- --valid_data dev.json \
- --test_data test.json \
- --train_batchsize 32 \
- --valid_batchsize 16 \
- --max_seq_length 128 \
- --texta_name sentence \
- --label_name label \
- --id_name id \
- --task_name ocnli \
- "
-
-MODEL_ARGS="\
- --learning_rate 2e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --num_labels 3 \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_acc \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 100 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_acc:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 10 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 100 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_sequence_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/unsupervised_mt/eval.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/unsupervised_mt/eval.sh
deleted file mode 100644
index 03b773ed5a522eb82186fea8ffbb6c557e14b6d3..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/unsupervised_mt/eval.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-SRC=si_LK
-TGT=en_XX
-MODEL=criss_checkpoints/criss.3rd.pt
-
-MULTIBLEU=mosesdecoder/scripts/generic/multi-bleu.perl
-MOSES=mosesdecoder
-REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl
-NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl
-REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl
-TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl
-GEN_TMP_DIR=gen_tmp
-LANG_DICT=criss_checkpoints/lang_dict.txt
-
-if [ ! -d "mosesdecoder" ]; then
- git clone https://github.com/moses-smt/mosesdecoder
-fi
-mkdir -p $GEN_TMP_DIR
-fairseq-generate data_tmp/${SRC}-${TGT}-flores \
- --task translation_multi_simple_epoch \
- --max-tokens 2000 \
- --path ${MODEL} \
- --skip-invalid-size-inputs-valid-test \
- --beam 5 --lenpen 1.0 --gen-subset test \
- --remove-bpe=sentencepiece \
- --source-lang ${SRC} --target-lang ${TGT} \
- --decoder-langtok --lang-pairs 'en_XX-ar_AR,en_XX-de_DE,en_XX-es_XX,en_XX-fr_XX,en_XX-hi_IN,en_XX-it_IT,en_XX-ja_XX,en_XX-ko_KR,en_XX-nl_XX,en_XX-ru_RU,en_XX-zh_CN,en_XX-tr_TR,en_XX-vi_VN,en_XX-ro_RO,en_XX-my_MM,en_XX-ne_NP,en_XX-si_LK,en_XX-cs_CZ,en_XX-lt_LT,en_XX-kk_KZ,en_XX-gu_IN,en_XX-fi_FI,en_XX-et_EE,en_XX-lv_LV,ar_AR-en_XX,cs_CZ-en_XX,de_DE-en_XX,es_XX-en_XX,et_EE-en_XX,fi_FI-en_XX,fr_XX-en_XX,gu_IN-en_XX,hi_IN-en_XX,it_IT-en_XX,ja_XX-en_XX,kk_KZ-en_XX,ko_KR-en_XX,lt_LT-en_XX,lv_LV-en_XX,my_MM-en_XX,ne_NP-en_XX,nl_XX-en_XX,ro_RO-en_XX,ru_RU-en_XX,si_LK-en_XX,tr_TR-en_XX,vi_VN-en_XX,zh_CN-en_XX,ar_AR-es_XX,es_XX-ar_AR,ar_AR-hi_IN,hi_IN-ar_AR,ar_AR-zh_CN,zh_CN-ar_AR,cs_CZ-es_XX,es_XX-cs_CZ,cs_CZ-hi_IN,hi_IN-cs_CZ,cs_CZ-zh_CN,zh_CN-cs_CZ,de_DE-es_XX,es_XX-de_DE,de_DE-hi_IN,hi_IN-de_DE,de_DE-zh_CN,zh_CN-de_DE,es_XX-hi_IN,hi_IN-es_XX,es_XX-zh_CN,zh_CN-es_XX,et_EE-es_XX,es_XX-et_EE,et_EE-hi_IN,hi_IN-et_EE,et_EE-zh_CN,zh_CN-et_EE,fi_FI-es_XX,es_XX-fi_FI,fi_FI-hi_IN,hi_IN-fi_FI,fi_FI-zh_CN,zh_CN-fi_FI,fr_XX-es_XX,es_XX-fr_XX,fr_XX-hi_IN,hi_IN-fr_XX,fr_XX-zh_CN,zh_CN-fr_XX,gu_IN-es_XX,es_XX-gu_IN,gu_IN-hi_IN,hi_IN-gu_IN,gu_IN-zh_CN,zh_CN-gu_IN,hi_IN-zh_CN,zh_CN-hi_IN,it_IT-es_XX,es_XX-it_IT,it_IT-hi_IN,hi_IN-it_IT,it_IT-zh_CN,zh_CN-it_IT,ja_XX-es_XX,es_XX-ja_XX,ja_XX-hi_IN,hi_IN-ja_XX,ja_XX-zh_CN,zh_CN-ja_XX,kk_KZ-es_XX,es_XX-kk_KZ,kk_KZ-hi_IN,hi_IN-kk_KZ,kk_KZ-zh_CN,zh_CN-kk_KZ,ko_KR-es_XX,es_XX-ko_KR,ko_KR-hi_IN,hi_IN-ko_KR,ko_KR-zh_CN,zh_CN-ko_KR,lt_LT-es_XX,es_XX-lt_LT,lt_LT-hi_IN,hi_IN-lt_LT,lt_LT-zh_CN,zh_CN-lt_LT,lv_LV-es_XX,es_XX-lv_LV,lv_LV-hi_IN,hi_IN-lv_LV,lv_LV-zh_CN,zh_CN-lv_LV,my_MM-es_XX,es_XX-my_MM,my_MM-hi_IN,hi_IN-my_MM,my_MM-zh_CN,zh_CN-my_MM,ne_NP-es_XX,es_XX-ne_NP,ne_NP-hi_IN,hi_IN-ne_NP,ne_NP-zh_CN,zh_CN-ne_NP,nl_XX-es_XX,es_XX-nl_XX,nl_XX-hi_IN,hi_IN-nl_XX,nl_XX-zh_CN,zh_CN-nl_XX,ro_RO-es_XX,es_XX-ro_RO,ro_RO-hi_IN,hi_IN-ro_RO,ro_RO-zh_CN,zh_CN-ro_RO,ru_RU-es_XX,es_XX-ru_RU,ru_RU-hi_IN,hi_IN-ru_RU,ru_RU-zh_CN,zh_CN-ru_RU,si_LK-es_XX,es_XX-si_LK,si_LK-hi_IN,hi_IN-si_LK,si_LK-zh_CN,zh_CN-si_LK,tr_TR-es_XX,es_XX-tr_TR,tr_TR-hi_IN,hi_IN-tr_TR,tr_TR-zh_CN,zh_CN-tr_TR,vi_VN-es_XX,es_XX-vi_VN,vi_VN-hi_IN,hi_IN-vi_VN,vi_VN-zh_CN,zh_CN-vi_VN' \
- --lang-dict ${LANG_DICT} --lang-tok-style 'mbart' --sampling-method 'temperature' --sampling-temperature '1.0' > $GEN_TMP_DIR/${SRC}_${TGT}.gen
-cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^T-" | cut -f2 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.hyp
-cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^H-" | cut -f3 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.ref
-${MULTIBLEU} $GEN_TMP_DIR/${SRC}_${TGT}.ref < $GEN_TMP_DIR/${SRC}_${TGT}.hyp
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/segment_char_ngrams.py b/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/segment_char_ngrams.py
deleted file mode 100644
index 8d94bc7a36eb3163271e95e167190d7423564308..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/subword-nmt/subword_nmt/segment_char_ngrams.py
+++ /dev/null
@@ -1,95 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-# Author: Rico Sennrich
-
-from __future__ import unicode_literals, division
-
-import sys
-import codecs
-import argparse
-
-# hack for python2/3 compatibility
-from io import open
-argparse.open = open
-
-def create_parser(subparsers=None):
-
- if subparsers:
- parser = subparsers.add_parser('segment-char-ngrams',
- formatter_class=argparse.RawDescriptionHelpFormatter,
- description="segment rare words into character n-grams")
- else:
- parser = argparse.ArgumentParser(
- formatter_class=argparse.RawDescriptionHelpFormatter,
- description="segment rare words into character n-grams")
-
- parser.add_argument(
- '--input', '-i', type=argparse.FileType('r'), default=sys.stdin,
- metavar='PATH',
- help="Input file (default: standard input).")
- parser.add_argument(
- '--vocab', type=argparse.FileType('r'), metavar='PATH',
- required=True,
- help="Vocabulary file.")
- parser.add_argument(
- '--shortlist', type=int, metavar='INT', default=0,
- help="do not segment INT most frequent words in vocabulary (default: '%(default)s')).")
- parser.add_argument(
- '-n', type=int, metavar='INT', default=2,
- help="segment rare words into character n-grams of size INT (default: '%(default)s')).")
- parser.add_argument(
- '--output', '-o', type=argparse.FileType('w'), default=sys.stdout,
- metavar='PATH',
- help="Output file (default: standard output)")
- parser.add_argument(
- '--separator', '-s', type=str, default='@@', metavar='STR',
- help="Separator between non-final subword units (default: '%(default)s'))")
-
- return parser
-
-def segment_char_ngrams(args):
-
- vocab = [line.split()[0] for line in args.vocab if len(line.split()) == 2]
- vocab = dict((y,x) for (x,y) in enumerate(vocab))
-
- for line in args.input:
- for word in line.split():
- if word not in vocab or vocab[word] > args.shortlist:
- i = 0
- while i*args.n < len(word):
- args.output.write(word[i*args.n:i*args.n+args.n])
- i += 1
- if i*args.n < len(word):
- args.output.write(args.separator)
- args.output.write(' ')
- else:
- args.output.write(word + ' ')
- args.output.write('\n')
-
-
-if __name__ == '__main__':
-
- # python 2/3 compatibility
- if sys.version_info < (3, 0):
- sys.stderr = codecs.getwriter('UTF-8')(sys.stderr)
- sys.stdout = codecs.getwriter('UTF-8')(sys.stdout)
- sys.stdin = codecs.getreader('UTF-8')(sys.stdin)
- else:
- sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer)
- sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer)
- sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer)
-
- parser = create_parser()
- args = parser.parse_args()
-
- if sys.version_info < (3, 0):
- args.separator = args.separator.decode('UTF-8')
-
- # read/write files as UTF-8
- args.vocab = codecs.open(args.vocab.name, encoding='utf-8')
-    if args.input.name != '<stdin>':
- args.input = codecs.open(args.input.name, encoding='utf-8')
-    if args.output.name != '<stdout>':
- args.output = codecs.open(args.output.name, 'w', encoding='utf-8')
-
- segment_char_ngrams(args)
\ No newline at end of file
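Annotation: a self-contained sketch of the segmentation rule that `segment_char_ngrams` above applies to words outside the vocabulary shortlist, using the script's defaults (`n=2`, separator `@@`); the word is a made-up example.

```python
def segment(word, n=2, separator='@@'):
    out, i = '', 0
    while i * n < len(word):
        out += word[i * n:i * n + n]   # next character n-gram
        i += 1
        if i * n < len(word):
            out += separator           # separator between non-final units
    return out

print(segment('subword'))  # su@@bw@@or@@d
```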
diff --git a/spaces/Hexamind/swarms/sort_wrap.py b/spaces/Hexamind/swarms/sort_wrap.py
deleted file mode 100644
index 9b4838e5e56449cdd675d541f3c0921fc6320949..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/swarms/sort_wrap.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import numpy as np
-import gym
-
-
-class SortWrapper(gym.Wrapper):
- """
- :param env: (gym.Env) Gym environment that will be wrapped
- """
-
- def __init__(self, env):
- # Call the parent constructor, so we can access self.env later
- super(SortWrapper, self).__init__(env)
- self.blue_signature = None
- self.red_signature = None
-
- def reset(self):
- """
- Reset the environment
- """
- obs = self.env.reset()
- obs = self.sort_obs(obs)
-
- return obs
-
- def step(self, action):
- """
- :param action: ([float] or int) Action taken by the agent
-        :return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
- """
-
- action = self.unsort_action(action)
-
- obs, reward, done, info = self.env.step(action)
-
- obs = self.post_obs(obs)
-
- return obs, reward, done, info
-
- def post_obs(self, obs):
- return self.sort_obs(obs)
-
- def sort_obs(self, obs):
-
- blue_obs, red_obs, blue_fire, red_fire = obs
-
- blue_obs, self.blue_signature = self.sort_and_sign(blue_obs)
- red_obs, self.red_signature = self.sort_and_sign(red_obs)
-
- blue_fire = self.unsort_matrix_with_signatures(blue_fire, self.blue_signature, self.red_signature)
- red_fire = self.unsort_matrix_with_signatures(red_fire, self.red_signature, self.blue_signature)
-
- obs = blue_obs, red_obs, blue_fire, red_fire
-
- return obs
-
- def unsort_action(self, action):
-
- blue_action, red_action = action
-
- unsorted_blue_action = self.unsort_with_signature(blue_action, self.blue_signature)
- unsorted_red_action = self.unsort_with_signature(red_action, self.red_signature)
-
- action = unsorted_blue_action, unsorted_red_action
-
- return action
-
- def sort_and_sign(self, an_array: np.ndarray) -> [np.ndarray, []]:
- """
- allows to sort an ndarray of 6 columns of floats and to keep the "signature": the positions of the items
- before being sorted in order to retrieve the initial order after modifications of the arrays.
- the order is retrieved with the unsort_with_signature function
- :param an_array:
- :return:
- """
- zip_list = zip(an_array, range(len(an_array)))
- zip_sorted = sorted(zip_list, key=lambda x: (x[0][0], x[0][1], x[0][2], x[0][3], x[0][4], x[0][5]))
- sorted_array, signature = zip(*zip_sorted)
- return np.array(sorted_array), signature
-
- def unsort_with_signature(self, an_array: np.ndarray, signature: []) -> np.ndarray:
- """
- see above
- :param an_array:
- :param signature:
- :return:
- """
- zip_list = zip(signature, an_array)
- zip_unsorted = sorted(zip_list)
- _, unsorted = zip(*zip_unsorted)
- return np.array(unsorted)
-
- def unsort_matrix_with_signatures(self, matrix: np.ndarray, sign_line: np.ndarray, sign_col: np.ndarray) \
- -> np.ndarray:
-
- matrix = self.unsort_with_signature(matrix, sign_line)
- matrix = self.unsort_with_signature(matrix.T, sign_col).T
-
- return matrix
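The `sort_and_sign` / `unsort_with_signature` pair above is easiest to see on a toy array. A minimal standalone sketch (NumPy only; the two helpers below mirror the methods above rather than importing the wrapper):

```python
import numpy as np

def sort_and_sign(an_array):
    # Sort rows lexicographically and remember each row's original position.
    zip_sorted = sorted(zip(an_array, range(len(an_array))), key=lambda x: tuple(x[0]))
    sorted_array, signature = zip(*zip_sorted)
    return np.array(sorted_array), signature

def unsort_with_signature(an_array, signature):
    # Restore the original row order recorded in the signature.
    _, unsorted = zip(*sorted(zip(signature, an_array)))
    return np.array(unsorted)

obs = np.array([[3.0, 1.0], [1.0, 2.0], [2.0, 0.0]])
sorted_obs, sig = sort_and_sign(obs)
restored = unsort_with_signature(sorted_obs, sig)
assert np.allclose(restored, obs)  # the round trip recovers the original ordering
```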
diff --git a/spaces/HighCWu/GPEN/face_model/op/fused_act.py b/spaces/HighCWu/GPEN/face_model/op/fused_act.py
deleted file mode 100644
index a2099ca2ab5f7dd574c15543930364b4ab817ef8..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GPEN/face_model/op/fused_act.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import os
-import platform
-
-import torch
-from torch import nn
-import torch.nn.functional as F
-from torch.autograd import Function
-from torch.utils.cpp_extension import load, _import_module_from_library
-
-# if running GPEN without CUDA, please comment out lines 11-19
-if platform.system() == 'Linux' and torch.cuda.is_available():
- module_path = os.path.dirname(__file__)
- fused = load(
- 'fused',
- sources=[
- os.path.join(module_path, 'fused_bias_act.cpp'),
- os.path.join(module_path, 'fused_bias_act_kernel.cu'),
- ],
- )
-
-
-#fused = _import_module_from_library('fused', '/tmp/torch_extensions/fused', True)
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(
- grad_output, empty, out, 3, 1, negative_slope, scale
- )
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- out, = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
- )
-
- return gradgrad_out, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- out, = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5, device='cpu'):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
- self.device = device
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale, self.device)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5, device='cpu'):
- if platform.system() == 'Linux' and torch.cuda.is_available() and device != 'cpu':
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
- else:
- return scale * F.leaky_relu(input + bias.view((1, -1)+(1,)*(len(input.shape)-2)), negative_slope=negative_slope)
\ No newline at end of file
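On CPU (or when the CUDA extension is unavailable) the fallback branch of `fused_leaky_relu` above reduces to a bias add, a leaky ReLU, and a constant scale. A small self-contained sketch of just that branch (PyTorch only; shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def fused_leaky_relu_cpu(x, bias, negative_slope=0.2, scale=2 ** 0.5):
    # Broadcast the per-channel bias over the trailing spatial dims, then apply leaky ReLU and scale.
    bias = bias.view((1, -1) + (1,) * (x.ndim - 2))
    return scale * F.leaky_relu(x + bias, negative_slope=negative_slope)

x = torch.randn(2, 8, 4, 4)   # (batch, channels, H, W)
bias = torch.zeros(8)
out = fused_leaky_relu_cpu(x, bias)
print(out.shape)              # torch.Size([2, 8, 4, 4])
```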
diff --git a/spaces/HighCWu/GPEN/retinaface/facemodels/net.py b/spaces/HighCWu/GPEN/retinaface/facemodels/net.py
deleted file mode 100644
index beb6040b24258f8b96020c1c9fc2610819718017..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GPEN/retinaface/facemodels/net.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import time
-import torch
-import torch.nn as nn
-import torchvision.models._utils as _utils
-import torchvision.models as models
-import torch.nn.functional as F
-from torch.autograd import Variable
-
-def conv_bn(inp, oup, stride = 1, leaky = 0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True)
- )
-
-def conv_bn_no_relu(inp, oup, stride):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
- nn.BatchNorm2d(oup),
- )
-
-def conv_bn1X1(inp, oup, stride, leaky=0):
- return nn.Sequential(
- nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope=leaky, inplace=True)
- )
-
-def conv_dw(inp, oup, stride, leaky=0.1):
- return nn.Sequential(
- nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
- nn.BatchNorm2d(inp),
- nn.LeakyReLU(negative_slope= leaky,inplace=True),
-
- nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
- nn.BatchNorm2d(oup),
- nn.LeakyReLU(negative_slope= leaky,inplace=True),
- )
-
-class SSH(nn.Module):
- def __init__(self, in_channel, out_channel):
- super(SSH, self).__init__()
- assert out_channel % 4 == 0
- leaky = 0
- if (out_channel <= 64):
- leaky = 0.1
- self.conv3X3 = conv_bn_no_relu(in_channel, out_channel//2, stride=1)
-
- self.conv5X5_1 = conv_bn(in_channel, out_channel//4, stride=1, leaky = leaky)
- self.conv5X5_2 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1)
-
- self.conv7X7_2 = conv_bn(out_channel//4, out_channel//4, stride=1, leaky = leaky)
- self.conv7x7_3 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1)
-
- def forward(self, input):
- conv3X3 = self.conv3X3(input)
-
- conv5X5_1 = self.conv5X5_1(input)
- conv5X5 = self.conv5X5_2(conv5X5_1)
-
- conv7X7_2 = self.conv7X7_2(conv5X5_1)
- conv7X7 = self.conv7x7_3(conv7X7_2)
-
- out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1)
- out = F.relu(out)
- return out
-
-class FPN(nn.Module):
- def __init__(self,in_channels_list,out_channels):
- super(FPN,self).__init__()
- leaky = 0
- if (out_channels <= 64):
- leaky = 0.1
- self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride = 1, leaky = leaky)
- self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride = 1, leaky = leaky)
- self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride = 1, leaky = leaky)
-
- self.merge1 = conv_bn(out_channels, out_channels, leaky = leaky)
- self.merge2 = conv_bn(out_channels, out_channels, leaky = leaky)
-
- def forward(self, input):
- # names = list(input.keys())
- input = list(input.values())
-
- output1 = self.output1(input[0])
- output2 = self.output2(input[1])
- output3 = self.output3(input[2])
-
- up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode="nearest")
- output2 = output2 + up3
- output2 = self.merge2(output2)
-
- up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode="nearest")
- output1 = output1 + up2
- output1 = self.merge1(output1)
-
- out = [output1, output2, output3]
- return out
-
-
-
-class MobileNetV1(nn.Module):
- def __init__(self):
- super(MobileNetV1, self).__init__()
- self.stage1 = nn.Sequential(
- conv_bn(3, 8, 2, leaky = 0.1), # 3
- conv_dw(8, 16, 1), # 7
- conv_dw(16, 32, 2), # 11
- conv_dw(32, 32, 1), # 19
- conv_dw(32, 64, 2), # 27
- conv_dw(64, 64, 1), # 43
- )
- self.stage2 = nn.Sequential(
- conv_dw(64, 128, 2), # 43 + 16 = 59
- conv_dw(128, 128, 1), # 59 + 32 = 91
- conv_dw(128, 128, 1), # 91 + 32 = 123
- conv_dw(128, 128, 1), # 123 + 32 = 155
- conv_dw(128, 128, 1), # 155 + 32 = 187
- conv_dw(128, 128, 1), # 187 + 32 = 219
- )
- self.stage3 = nn.Sequential(
- conv_dw(128, 256, 2), # 219 +3 2 = 241
- conv_dw(256, 256, 1), # 241 + 64 = 301
- )
- self.avg = nn.AdaptiveAvgPool2d((1,1))
- self.fc = nn.Linear(256, 1000)
-
- def forward(self, x):
- x = self.stage1(x)
- x = self.stage2(x)
- x = self.stage3(x)
- x = self.avg(x)
- # x = self.model(x)
- x = x.view(-1, 256)
- x = self.fc(x)
- return x
-
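For a quick sanity check of the backbone above, a hedged sketch (it assumes this file is importable as `net`; the 224x224 input size is arbitrary since the adaptive pooling handles any resolution):

```python
import torch
from net import MobileNetV1  # assumption: this file is saved as net.py on the import path

model = MobileNetV1().eval()
x = torch.randn(1, 3, 224, 224)   # dummy RGB image
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # torch.Size([1, 1000])
```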
diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/losses/vqperceptual.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/losses/vqperceptual.py
deleted file mode 100644
index fd3874011472c423f059e573029564e979dd225d..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/losses/vqperceptual.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from taming.modules.losses.lpips import LPIPS
-from taming.modules.discriminator.model import NLayerDiscriminator, weights_init
-
-
-class DummyLoss(nn.Module):
- def __init__(self):
- super().__init__()
-
-
-def adopt_weight(weight, global_step, threshold=0, value=0.0):
- if global_step < threshold:
- weight = value
- return weight
-
-
-def hinge_d_loss(logits_real, logits_fake):
- loss_real = torch.mean(F.relu(1.0 - logits_real))
- loss_fake = torch.mean(F.relu(1.0 + logits_fake))
- d_loss = 0.5 * (loss_real + loss_fake)
- return d_loss
-
-
-def vanilla_d_loss(logits_real, logits_fake):
- d_loss = 0.5 * (
- torch.mean(torch.nn.functional.softplus(-logits_real))
- + torch.mean(torch.nn.functional.softplus(logits_fake))
- )
- return d_loss
-
-
-class VQLPIPSWithDiscriminator(nn.Module):
- def __init__(
- self,
- disc_start,
- codebook_weight=1.0,
- pixelloss_weight=1.0,
- disc_num_layers=3,
- disc_in_channels=3,
- disc_factor=1.0,
- disc_weight=1.0,
- perceptual_weight=1.0,
- use_actnorm=False,
- disc_conditional=False,
- disc_ndf=64,
- disc_loss="hinge",
- ):
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.codebook_weight = codebook_weight
- self.pixel_weight = pixelloss_weight
- self.perceptual_loss = LPIPS().eval()
- self.perceptual_weight = perceptual_weight
-
- self.discriminator = NLayerDiscriminator(
- input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm,
- ndf=disc_ndf,
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- if disc_loss == "hinge":
- self.disc_loss = hinge_d_loss
- elif disc_loss == "vanilla":
- self.disc_loss = vanilla_d_loss
- else:
- raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
- print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.")
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(
- nll_loss, self.last_layer[0], retain_graph=True
- )[0]
- g_grads = torch.autograd.grad(
- g_loss, self.last_layer[0], retain_graph=True
- )[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(
- self,
- codebook_loss,
- inputs,
- reconstructions,
- optimizer_idx,
- global_step,
- last_layer=None,
- cond=None,
- split="train",
- ):
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(
- inputs.contiguous(), reconstructions.contiguous()
- )
- rec_loss = rec_loss + self.perceptual_weight * p_loss
- else:
- p_loss = torch.tensor([0.0])
-
- nll_loss = rec_loss
- # nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- nll_loss = torch.mean(nll_loss)
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(
- torch.cat((reconstructions.contiguous(), cond), dim=1)
- )
- g_loss = -torch.mean(logits_fake)
-
- try:
- d_weight = self.calculate_adaptive_weight(
- nll_loss, g_loss, last_layer=last_layer
- )
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(
- self.disc_factor, global_step, threshold=self.discriminator_iter_start
- )
- loss = (
- nll_loss
- + d_weight * disc_factor * g_loss
- + self.codebook_weight * codebook_loss.mean()
- )
-
- log = {
- "{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/quant_loss".format(split): codebook_loss.detach().mean(),
- "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/p_loss".format(split): p_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(
- torch.cat((inputs.contiguous().detach(), cond), dim=1)
- )
- logits_fake = self.discriminator(
- torch.cat((reconstructions.contiguous().detach(), cond), dim=1)
- )
-
- disc_factor = adopt_weight(
- self.disc_factor, global_step, threshold=self.discriminator_iter_start
- )
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {
- "{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean(),
- }
- return d_loss, log
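The two discriminator losses above differ only in how confident logits are penalized. A small standalone check of that behavior (the formulas are re-stated here rather than imported, so the snippet runs on its own):

```python
import torch
import torch.nn.functional as F

def hinge_d_loss(logits_real, logits_fake):
    return 0.5 * (torch.mean(F.relu(1.0 - logits_real)) + torch.mean(F.relu(1.0 + logits_fake)))

def vanilla_d_loss(logits_real, logits_fake):
    return 0.5 * (torch.mean(F.softplus(-logits_real)) + torch.mean(F.softplus(logits_fake)))

real = torch.tensor([2.0, 1.5])    # discriminator is confident these are real
fake = torch.tensor([-2.0, -1.5])  # and confident these are fake
print(hinge_d_loss(real, fake).item())    # 0.0: the margin is satisfied, no penalty
print(vanilla_d_loss(real, fake).item())  # ~0.16: softplus still pays a small tail
```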
diff --git a/spaces/HugoDzz/spaceship_drift/static/game/index.html b/spaces/HugoDzz/spaceship_drift/static/game/index.html
deleted file mode 100644
index c75912c8ab4c1b5cd339f71821f0056a2a1836a1..0000000000000000000000000000000000000000
--- a/spaces/HugoDzz/spaceship_drift/static/game/index.html
+++ /dev/null
@@ -1,248 +0,0 @@
-[HTML markup lost in extraction; the only recoverable content from this deleted file is the page title "dear_spaceship_3.5"]
diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/README.md b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/README.md
deleted file mode 100644
index 7a76ffd57c066c20af94aa3fca24c18e2ba4c3dd..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Generative Spoken Language Modeling
-
-* [Paper](https://arxiv.org/abs/2102.01192)
-* [Demo](https://speechbot.github.io/gslm/index.html)
-
-We build and evaluate generative speech2speech systems using [Log Mel Filterbank](https://pytorch.org/audio/stable/compliance.kaldi.html#fbank), [Modified CPC](https://github.com/facebookresearch/CPC_audio), [HuBERT Base](https://github.com/pytorch/fairseq/tree/main/examples/hubert) and [Wav2Vec 2.0 Large](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec). Our system is composed of three components, namely *speech2unit*, *ulm* and *unit2speech*. We describe the models and usage of each component in its respective sub-directory. See the links below.
-
-## Speech to Unit Model (speech2unit)
-Speech to unit model is used for quantizing raw speech into learned discrete speech units. [More details](speech2unit)
-
-## Unit Language Model (ulm)
-Unit Language Model is a generative language model trained on discrete speech units. [More details](ulm)
-
-## Unit to Speech Model (unit2speech)
-Unit to speech model is used for synthesizing speech from discrete speech units. [More details](unit2speech)
-
-## Metrics
-We show how to compute ASR based metrics as well as zero-shot metrics proposed in our paper [here](metrics).
-
-## Tools
-We share two tools: one to resynthesize a given spoken utterance, and one to generate novel spoken language given a spoken prompt. [More details](tools)
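At a glance, the three components chain into a prompted resynthesis loop: quantize the prompt, continue it in unit space, then vocode the result. A hedged pseudocode sketch of that flow (the function names below are placeholders standing in for the speech2unit, ulm and unit2speech entry points documented in the sub-directories, not actual APIs):

```python
# Placeholder stubs for the real speech2unit / ulm / unit2speech models (hypothetical names).
def encode_to_units(wav):        # speech2unit: quantize raw audio into discrete unit ids
    return [3, 3, 7, 1, 4]

def sample_continuation(units):  # ulm: autoregressively extend the unit sequence
    return units + [4, 2, 9]

def units_to_wav(units):         # unit2speech: synthesize a waveform from units
    return [0.0] * (len(units) * 160)

prompt_wav = [0.0] * 16000                # one second of (dummy) 16 kHz audio
units = encode_to_units(prompt_wav)       # speech -> discrete units
continued = sample_continuation(units)    # prompted generation in unit space
generated_wav = units_to_wav(continued)   # units -> speech
```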
diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py
deleted file mode 100644
index d6a40e4d359bdcae6d64f53ba06d8a533aec01ac..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py
+++ /dev/null
@@ -1,122 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import torch
-import numpy as np
-import warnings
-
-
-def get_target_sequences(manifest, ground_truth, to_take=1000):
- import json
- import pathlib
-
- with open(ground_truth, 'r') as fin:
- original_continuations = json.loads(fin.read())
-
- sequence2length = [(k, v[0]) for k, v in original_continuations.items()]
- assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds
-
- sequence2length.sort(key=lambda x: x[1])
- to_take_sequences = set(v[0] for v in sequence2length[:to_take])
- to_take_ids = []
-
- with open(manifest, 'r') as f:
- f.readline()
-
- for i, line in enumerate(f.readlines()):
- seq_id = line.split()[0]
- seq_id = pathlib.Path(seq_id).name.split('__')[0]
-
- if seq_id in to_take_sequences:
- to_take_ids.append(i)
-
- print(f'Took {len(to_take_ids)} ids')
- return set(to_take_ids)
-
-
-def get_args():
- import argparse
-
- parser = argparse.ArgumentParser("Evaluate PPX metric of a transcript.")
- parser.add_argument('--asr-transcript', type=str,
- help='Path to the transcript file.')
-    parser.add_argument('--cut-id', action='store_true',
-                        help='Whether to cut the first token (typically a seq id)')
-    parser.add_argument('--cut-tail', action='store_true',
-                        help='Whether to cut the last token (typically a speaker id)')
-
- parser.add_argument('--manifest', type=str, default=None)
- parser.add_argument('--prompts-description', type=str, default=None)
-
- args = parser.parse_args()
-
- return args
-
-
-def main():
- args = get_args()
-
- lm = torch.hub.load(
- 'pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')
-
- lm.eval().cuda() # disable dropout
-
- if args.manifest is None and args.prompts_description is None:
- target_ids = None
- else:
- target_ids = get_target_sequences(
- args.manifest, args.prompts_description)
-
- with open(args.asr_transcript, 'r') as fin:
- lines = fin.readlines()
-
- if target_ids is not None:
- filtered = []
- for line in lines:
- line_id = line.split()[-1]
- line_id = int(line_id.split('-')[1][:-1])
- if line_id in target_ids:
- filtered.append(line)
- lines = filtered
- else:
- pass
-
- if args.cut_id:
- lines = [' '.join(x.split()[1:]) for x in lines]
- if args.cut_tail:
- lines = [' '.join(x.split()[:-1]) for x in lines]
- lines = [x.strip().lower() for x in lines]
-
- def get_logprob(sent): return \
- lm.score(sent)['positional_scores'].mean().neg().item()
-
- logprobs = [get_logprob(l) for l in lines]
-
- filtered = [x for x in logprobs if not np.isnan(x)]
- if len(filtered) != len(logprobs):
- warnings.warn("NaNs detected!")
- logprobs = filtered
-
- perplexities = [np.exp(l) for l in logprobs]
-
- for name, stats in [('logprob', logprobs), ('perplexity', perplexities)]:
- mean = np.mean(stats)
- sem = np.std(stats) / np.sqrt(len(stats))
-
- median = np.median(stats)
- interval = list(np.percentile(stats, [10, 90]))
-
- mean, sem, median, percentile10, percentile90 = [
- round(x, 2) for x in [mean, sem, median] + interval]
-
- print(name)
- print(f"\tMean {mean} +- {sem}")
- print(
- f"\tMedian {median}, 90% confidence interval {percentile10}...{percentile90}")
-
-
-if __name__ == '__main__':
- main()
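The reporting at the end of the script is mean ± standard error plus the median and a 10–90% band over per-sentence values. A tiny standalone example of that aggregation (NumPy only, with made-up negative log-probabilities):

```python
import numpy as np

logprobs = [2.1, 2.4, 1.9, 2.8, 2.2]   # illustrative per-sentence mean negative log-probs
perplexities = [np.exp(l) for l in logprobs]

for name, stats in [('logprob', logprobs), ('perplexity', perplexities)]:
    mean = np.mean(stats)
    sem = np.std(stats) / np.sqrt(len(stats))   # standard error of the mean
    median = np.median(stats)
    p10, p90 = np.percentile(stats, [10, 90])
    print(f"{name}: mean {mean:.2f} +- {sem:.2f}, median {median:.2f}, 10-90% {p10:.2f}...{p90:.2f}")
```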
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/hooks/createContext.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/hooks/createContext.tsx
deleted file mode 100644
index c363be6afed0ea17e0f9fabf6ec67b3cf168be7a..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/hooks/createContext.tsx
+++ /dev/null
@@ -1,27 +0,0 @@
-// Copyright (c) Meta Platforms, Inc. and affiliates.
-// All rights reserved.
-
-// This source code is licensed under the license found in the
-// LICENSE file in the root directory of this source tree.
-
-import { createContext } from "react";
-import { modelInputProps } from "../helpers/Interfaces";
-
-interface contextProps {
- clicks: [
- clicks: modelInputProps[] | null,
- setClicks: (e: modelInputProps[] | null) => void
- ];
- image: [
- image: HTMLImageElement | null,
- setImage: (e: HTMLImageElement | null) => void
- ];
- maskImg: [
- maskImg: HTMLImageElement | null,
- setMaskImg: (e: HTMLImageElement | null) => void
- ];
-}
-
-const AppContext = createContext<contextProps | null>(null);
-
-export default AppContext;
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_pndm.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_pndm.py
deleted file mode 100644
index a29f7d6d44cc628ac64bcb7225c5c494d4c70131..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_pndm.py
+++ /dev/null
@@ -1,425 +0,0 @@
-# Copyright 2022 Zhejiang University Team and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim
-
-import math
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS
-from .scheduling_utils import SchedulerMixin, SchedulerOutput
-
-
-def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-
- Returns:
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
- """
-
- def alpha_bar(time_step):
- return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
-class PNDMScheduler(SchedulerMixin, ConfigMixin):
- """
- Pseudo numerical methods for diffusion models (PNDM) proposes using more advanced ODE integration techniques,
- namely Runge-Kutta method and a linear multi-step method.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2202.09778
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- skip_prk_steps (`bool`):
- allows the scheduler to skip the Runge-Kutta steps that are defined in the original paper as being required
- before plms steps; defaults to `False`.
- set_alpha_to_one (`bool`, default `False`):
- each diffusion step uses the value of alphas product at that step and at the previous one. For the final
- step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
- otherwise it uses the value of alpha at step 0.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
-            process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4
- https://imagen.research.google/video/paper.pdf)
- steps_offset (`int`, default `0`):
- an offset added to the inference steps. You can use a combination of `offset=1` and
- `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
- stable diffusion.
-
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- skip_prk_steps: bool = False,
- set_alpha_to_one: bool = False,
- prediction_type: str = "epsilon",
- steps_offset: int = 0,
- ):
- if trained_betas is not None:
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
-
- self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0]
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # For now we only support F-PNDM, i.e. the runge-kutta method
- # For more information on the algorithm please take a look at the paper: https://arxiv.org/pdf/2202.09778.pdf
- # mainly at formula (9), (12), (13) and the Algorithm 2.
- self.pndm_order = 4
-
- # running values
- self.cur_model_output = 0
- self.counter = 0
- self.cur_sample = None
- self.ets = []
-
- # setable values
- self.num_inference_steps = None
- self._timesteps = np.arange(0, num_train_timesteps)[::-1].copy()
- self.prk_timesteps = None
- self.plms_timesteps = None
- self.timesteps = None
-
- def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None):
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
-
- self.num_inference_steps = num_inference_steps
- step_ratio = self.config.num_train_timesteps // self.num_inference_steps
- # creates integer timesteps by multiplying by ratio
- # casting to int to avoid issues when num_inference_step is power of 3
- self._timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()
- self._timesteps += self.config.steps_offset
-
- if self.config.skip_prk_steps:
- # for some models like stable diffusion the prk steps can/should be skipped to
- # produce better results. When using PNDM with `self.config.skip_prk_steps` the implementation
- # is based on crowsonkb's PLMS sampler implementation: https://github.com/CompVis/latent-diffusion/pull/51
- self.prk_timesteps = np.array([])
- self.plms_timesteps = np.concatenate([self._timesteps[:-1], self._timesteps[-2:-1], self._timesteps[-1:]])[
- ::-1
- ].copy()
- else:
- prk_timesteps = np.array(self._timesteps[-self.pndm_order :]).repeat(2) + np.tile(
- np.array([0, self.config.num_train_timesteps // num_inference_steps // 2]), self.pndm_order
- )
- self.prk_timesteps = (prk_timesteps[:-1].repeat(2)[1:-1])[::-1].copy()
- self.plms_timesteps = self._timesteps[:-3][
- ::-1
- ].copy() # we copy to avoid having negative strides which are not supported by torch.from_numpy
-
- timesteps = np.concatenate([self.prk_timesteps, self.plms_timesteps]).astype(np.int64)
- self.timesteps = torch.from_numpy(timesteps).to(device)
-
- self.ets = []
- self.counter = 0
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- return_dict: bool = True,
- ) -> Union[SchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- This function calls `step_prk()` or `step_plms()` depending on the internal variable `counter`.
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
- if self.counter < len(self.prk_timesteps) and not self.config.skip_prk_steps:
- return self.step_prk(model_output=model_output, timestep=timestep, sample=sample, return_dict=return_dict)
- else:
- return self.step_plms(model_output=model_output, timestep=timestep, sample=sample, return_dict=return_dict)
-
- def step_prk(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- return_dict: bool = True,
- ) -> Union[SchedulerOutput, Tuple]:
- """
- Step function propagating the sample with the Runge-Kutta method. RK takes 4 forward passes to approximate the
- solution to the differential equation.
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
-
- Returns:
- [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is
- True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- if self.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- diff_to_prev = 0 if self.counter % 2 else self.config.num_train_timesteps // self.num_inference_steps // 2
- prev_timestep = timestep - diff_to_prev
- timestep = self.prk_timesteps[self.counter // 4 * 4]
-
- if self.counter % 4 == 0:
- self.cur_model_output += 1 / 6 * model_output
- self.ets.append(model_output)
- self.cur_sample = sample
- elif (self.counter - 1) % 4 == 0:
- self.cur_model_output += 1 / 3 * model_output
- elif (self.counter - 2) % 4 == 0:
- self.cur_model_output += 1 / 3 * model_output
- elif (self.counter - 3) % 4 == 0:
- model_output = self.cur_model_output + 1 / 6 * model_output
- self.cur_model_output = 0
-
- # cur_sample should not be `None`
- cur_sample = self.cur_sample if self.cur_sample is not None else sample
-
- prev_sample = self._get_prev_sample(cur_sample, timestep, prev_timestep, model_output)
- self.counter += 1
-
- if not return_dict:
- return (prev_sample,)
-
- return SchedulerOutput(prev_sample=prev_sample)
-
- def step_plms(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- return_dict: bool = True,
- ) -> Union[SchedulerOutput, Tuple]:
- """
- Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple
- times to approximate the solution.
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
-
- Returns:
- [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is
- True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- if self.num_inference_steps is None:
- raise ValueError(
- "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler"
- )
-
- if not self.config.skip_prk_steps and len(self.ets) < 3:
- raise ValueError(
- f"{self.__class__} can only be run AFTER scheduler has been run "
- "in 'prk' mode for at least 12 iterations "
- "See: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pipeline_pndm.py "
- "for more information."
- )
-
- prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
-
- if self.counter != 1:
- self.ets = self.ets[-3:]
- self.ets.append(model_output)
- else:
- prev_timestep = timestep
- timestep = timestep + self.config.num_train_timesteps // self.num_inference_steps
-
- if len(self.ets) == 1 and self.counter == 0:
- model_output = model_output
- self.cur_sample = sample
- elif len(self.ets) == 1 and self.counter == 1:
- model_output = (model_output + self.ets[-1]) / 2
- sample = self.cur_sample
- self.cur_sample = None
- elif len(self.ets) == 2:
- model_output = (3 * self.ets[-1] - self.ets[-2]) / 2
- elif len(self.ets) == 3:
- model_output = (23 * self.ets[-1] - 16 * self.ets[-2] + 5 * self.ets[-3]) / 12
- else:
- model_output = (1 / 24) * (55 * self.ets[-1] - 59 * self.ets[-2] + 37 * self.ets[-3] - 9 * self.ets[-4])
-
- prev_sample = self._get_prev_sample(sample, timestep, prev_timestep, model_output)
- self.counter += 1
-
- if not return_dict:
- return (prev_sample,)
-
- return SchedulerOutput(prev_sample=prev_sample)
-
- def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`): input sample
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- return sample
-
- def _get_prev_sample(self, sample, timestep, prev_timestep, model_output):
- # See formula (9) of PNDM paper https://arxiv.org/pdf/2202.09778.pdf
- # this function computes x_(t−δ) using the formula of (9)
- # Note that x_t needs to be added to both sides of the equation
-
- # Notation ( ->
- # alpha_prod_t -> α_t
- # alpha_prod_t_prev -> α_(t−δ)
- # beta_prod_t -> (1 - α_t)
- # beta_prod_t_prev -> (1 - α_(t−δ))
- # sample -> x_t
- # model_output -> e_θ(x_t, t)
- # prev_sample -> x_(t−δ)
- alpha_prod_t = self.alphas_cumprod[timestep]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- if self.config.prediction_type == "v_prediction":
- model_output = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample
- elif self.config.prediction_type != "epsilon":
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon` or `v_prediction`"
- )
-
- # corresponds to (α_(t−δ) - α_t) divided by
- # denominator of x_t in formula (9) and plus 1
-        # Note: (α_(t−δ) - α_t) / (sqrt(α_t) * (sqrt(α_(t−δ)) + sqrt(α_t))) + 1 =
-        # sqrt(α_(t−δ)) / sqrt(α_t)
- sample_coeff = (alpha_prod_t_prev / alpha_prod_t) ** (0.5)
-
- # corresponds to denominator of e_θ(x_t, t) in formula (9)
- model_output_denom_coeff = alpha_prod_t * beta_prod_t_prev ** (0.5) + (
- alpha_prod_t * beta_prod_t * alpha_prod_t_prev
- ) ** (0.5)
-
- # full formula (9)
- prev_sample = (
- sample_coeff * sample - (alpha_prod_t_prev - alpha_prod_t) * model_output / model_output_denom_coeff
- )
-
- return prev_sample
-
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.IntTensor,
- ) -> torch.Tensor:
- # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
- self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
- timesteps = timesteps.to(original_samples.device)
-
- sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5
- sqrt_alpha_prod = sqrt_alpha_prod.flatten()
- while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
- sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
- sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
- while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
- sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
- noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
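A hedged usage sketch of this scheduler on its own, assuming the upstream `diffusers` package (from which this file is vendored) is installed; the random tensor below stands in for a UNet's noise prediction, not a real model:

```python
import torch
from diffusers import PNDMScheduler

scheduler = PNDMScheduler(num_train_timesteps=1000, skip_prk_steps=True)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 32, 32) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)  # stand-in for the model's predicted noise
    sample = scheduler.step(model_output, t, sample).prev_sample

print(sample.shape)  # torch.Size([1, 3, 32, 32])
```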
diff --git a/spaces/Jamkonams/AutoGPT/autogpt/commands/__init__.py b/spaces/Jamkonams/AutoGPT/autogpt/commands/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/__init__.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/KenjieDec/GPEN/retinaface/layers/__init__.py b/spaces/KenjieDec/GPEN/retinaface/layers/__init__.py
deleted file mode 100644
index 53a3f4b5160995d93bc7911e808b3045d74362c9..0000000000000000000000000000000000000000
--- a/spaces/KenjieDec/GPEN/retinaface/layers/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .functions import *
-from .modules import *
diff --git a/spaces/Kevin676/Clone-Your-Voice/app.py b/spaces/Kevin676/Clone-Your-Voice/app.py
deleted file mode 100644
index c331876b40730dd3ac7c23993c0c71c97b064338..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Clone-Your-Voice/app.py
+++ /dev/null
@@ -1,377 +0,0 @@
-import gradio as gr
-import os
-from utils.default_models import ensure_default_models
-import sys
-import traceback
-from pathlib import Path
-from time import perf_counter as timer
-import numpy as np
-import torch
-from encoder import inference as encoder
-from synthesizer.inference import Synthesizer
-#from toolbox.utterance import Utterance
-from vocoder import inference as vocoder
-import time
-import librosa
-import numpy as np
-#import sounddevice as sd
-import soundfile as sf
-import argparse
-from utils.argutils import print_args
-
-parser = argparse.ArgumentParser(
- formatter_class=argparse.ArgumentDefaultsHelpFormatter
-)
-parser.add_argument("-e", "--enc_model_fpath", type=Path,
- default="saved_models/default/encoder.pt",
- help="Path to a saved encoder")
-parser.add_argument("-s", "--syn_model_fpath", type=Path,
- default="saved_models/default/synthesizer.pt",
- help="Path to a saved synthesizer")
-parser.add_argument("-v", "--voc_model_fpath", type=Path,
- default="saved_models/default/vocoder.pt",
- help="Path to a saved vocoder")
-parser.add_argument("--cpu", action="store_true", help=\
- "If True, processing is done on CPU, even when a GPU is available.")
-parser.add_argument("--no_sound", action="store_true", help=\
- "If True, audio won't be played.")
-parser.add_argument("--seed", type=int, default=None, help=\
- "Optional random number seed value to make toolbox deterministic.")
-args = parser.parse_args()
-arg_dict = vars(args)
-print_args(args, parser)
-
-# Maximum of generated wavs to keep on memory
-MAX_WAVS = 15
-utterances = set()
-current_generated = (None, None, None, None) # speaker_name, spec, breaks, wav
-synthesizer = None # type: Synthesizer
-current_wav = None
-waves_list = []
-waves_count = 0
-waves_namelist = []
-
-# Hide GPUs from Pytorch to force CPU processing
-if arg_dict.pop("cpu"):
- os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
-
-print("Running a test of your configuration...\n")
-
-if torch.cuda.is_available():
- device_id = torch.cuda.current_device()
- gpu_properties = torch.cuda.get_device_properties(device_id)
- ## Print some environment information (for debugging purposes)
- print("Found %d GPUs available. Using GPU %d (%s) of compute capability %d.%d with "
- "%.1fGb total memory.\n" %
- (torch.cuda.device_count(),
- device_id,
- gpu_properties.name,
- gpu_properties.major,
- gpu_properties.minor,
- gpu_properties.total_memory / 1e9))
-else:
- print("Using CPU for inference.\n")
-
-## Load the models one by one.
-print("Preparing the encoder, the synthesizer and the vocoder...")
-ensure_default_models(Path("saved_models"))
-#encoder.load_model(args.enc_model_fpath)
-#synthesizer = Synthesizer(args.syn_model_fpath)
-#vocoder.load_model(args.voc_model_fpath)
-
-def compute_embedding(in_fpath):
-
- if not encoder.is_loaded():
- model_fpath = args.enc_model_fpath
- print("Loading the encoder %s... " % model_fpath)
- start = time.time()
- encoder.load_model(model_fpath)
- print("Done (%dms)." % int(1000 * (time.time() - start)), "append")
-
-
- ## Computing the embedding
- # First, we load the wav using the function that the speaker encoder provides. This is
-
- # Get the wav from the disk. We take the wav with the vocoder/synthesizer format for
- # playback, so as to have a fair comparison with the generated audio
- wav = Synthesizer.load_preprocess_wav(in_fpath)
-
- # important: there is preprocessing that must be applied.
-
- # The following two methods are equivalent:
- # - Directly load from the filepath:
- preprocessed_wav = encoder.preprocess_wav(wav)
-
- # - If the wav is already loaded:
- #original_wav, sampling_rate = librosa.load(str(in_fpath))
- #preprocessed_wav = encoder.preprocess_wav(original_wav, sampling_rate)
-
- # Compute the embedding
- embed, partial_embeds, _ = encoder.embed_utterance(preprocessed_wav, return_partials=True)
-
-
-    print("Loaded file successfully")
-
- # Then we derive the embedding. There are many functions and parameters that the
- # speaker encoder interfaces. These are mostly for in-depth research. You will typically
- # only use this function (with its default parameters):
- #embed = encoder.embed_utterance(preprocessed_wav)
-
- return embed
-def create_spectrogram(text,embed):
- # If seed is specified, reset torch seed and force synthesizer reload
- if args.seed is not None:
- torch.manual_seed(args.seed)
- synthesizer = Synthesizer(args.syn_model_fpath)
-
-
- # Synthesize the spectrogram
- model_fpath = args.syn_model_fpath
- print("Loading the synthesizer %s... " % model_fpath)
- start = time.time()
- synthesizer = Synthesizer(model_fpath)
- print("Done (%dms)." % int(1000 * (time.time()- start)), "append")
-
-
- # The synthesizer works in batch, so you need to put your data in a list or numpy array
- texts = [text]
- embeds = [embed]
- # If you know what the attention layer alignments are, you can retrieve them here by
- # passing return_alignments=True
- specs = synthesizer.synthesize_spectrograms(texts, embeds)
- breaks = [spec.shape[1] for spec in specs]
- spec = np.concatenate(specs, axis=1)
- sample_rate=synthesizer.sample_rate
- return spec, breaks , sample_rate
-
-
-def generate_waveform(current_generated):
-
- speaker_name, spec, breaks = current_generated
- assert spec is not None
-
- ## Generating the waveform
- print("Synthesizing the waveform:")
- # If seed is specified, reset torch seed and reload vocoder
- if args.seed is not None:
- torch.manual_seed(args.seed)
- vocoder.load_model(args.voc_model_fpath)
-
- model_fpath = args.voc_model_fpath
- # Synthesize the waveform
- if not vocoder.is_loaded():
- print("Loading the vocoder %s... " % model_fpath)
- start = time.time()
- vocoder.load_model(model_fpath)
- print("Done (%dms)." % int(1000 * (time.time()- start)), "append")
-
- current_vocoder_fpath= model_fpath
- def vocoder_progress(i, seq_len, b_size, gen_rate):
- real_time_factor = (gen_rate / Synthesizer.sample_rate) * 1000
- line = "Waveform generation: %d/%d (batch size: %d, rate: %.1fkHz - %.2fx real time)" \
- % (i * b_size, seq_len * b_size, b_size, gen_rate, real_time_factor)
- print(line, "overwrite")
-
-
- # Synthesizing the waveform is fairly straightforward. Remember that the longer the
- # spectrogram, the more time-efficient the vocoder.
- if current_vocoder_fpath is not None:
- print("")
- generated_wav = vocoder.infer_waveform(spec, progress_callback=vocoder_progress)
- else:
- print("Waveform generation with Griffin-Lim... ")
- generated_wav = Synthesizer.griffin_lim(spec)
-
- print(" Done!", "append")
-
-
- ## Post-generation
- # There's a bug with sounddevice that makes the audio cut one second earlier, so we
- # pad it.
- generated_wav = np.pad(generated_wav, (0, Synthesizer.sample_rate), mode="constant")
-
- # Add breaks
- b_ends = np.cumsum(np.array(breaks) * Synthesizer.hparams.hop_size)
- b_starts = np.concatenate(([0], b_ends[:-1]))
- wavs = [generated_wav[start:end] for start, end, in zip(b_starts, b_ends)]
- breaks = [np.zeros(int(0.15 * Synthesizer.sample_rate))] * len(breaks)
- generated_wav = np.concatenate([i for w, b in zip(wavs, breaks) for i in (w, b)])
-
-
- # Trim excess silences to compensate for gaps in spectrograms (issue #53)
- generated_wav = encoder.preprocess_wav(generated_wav)
-
-
- return generated_wav
-
-
-def save_on_disk(generated_wav,sample_rate):
- # Save it on the disk
- filename = "cloned_voice.wav"
- print(generated_wav.dtype)
- #OUT=os.environ['OUT_PATH']
- # Returns `None` if key doesn't exist
- #OUT=os.environ.get('OUT_PATH')
- #result = os.path.join(OUT, filename)
- result = filename
- print(" > Saving output to {}".format(result))
- sf.write(result, generated_wav.astype(np.float32), sample_rate)
- print("\nSaved output as %s\n\n" % result)
-
- return result
-def play_audio(generated_wav,sample_rate):
- # Play the audio (non-blocking)
- if not args.no_sound:
-
- try:
- sd.stop()
- sd.play(generated_wav, sample_rate)
- except sd.PortAudioError as e:
- print("\nCaught exception: %s" % repr(e))
- print("Continuing without audio playback. Suppress this message with the \"--no_sound\" flag.\n")
- except:
- raise
-
-
-def clean_memory():
- import gc
- #import GPUtil
- # To see memory usage
- print('Before clean ')
- #GPUtil.showUtilization()
- #cleaning memory 1
- gc.collect()
- torch.cuda.empty_cache()
- time.sleep(2)
- print('After Clean GPU')
- #GPUtil.showUtilization()
-
-def clone_voice(in_fpath, text):
- try:
- speaker_name = "output"
- # Compute embedding
- embed=compute_embedding(in_fpath)
- print("Created the embedding")
- # Generating the spectrogram
- spec, breaks, sample_rate = create_spectrogram(text,embed)
- current_generated = (speaker_name, spec, breaks)
- print("Created the mel spectrogram")
-
- # Create waveform
- generated_wav=generate_waveform(current_generated)
-        print("Created the waveform")
-
- # Save it on the disk
- save_on_disk(generated_wav,sample_rate)
-
- #Play the audio
- #play_audio(generated_wav,sample_rate)
-
- return
- except Exception as e:
- print("Caught exception: %s" % repr(e))
- print("Restarting\n")
-
-# Set environment variables
-home_dir = os.getcwd()
-OUT_PATH=os.path.join(home_dir, "out/")
-os.environ['OUT_PATH'] = OUT_PATH
-
-# create output path
-os.makedirs(OUT_PATH, exist_ok=True)
-
-USE_CUDA = torch.cuda.is_available()
-
-os.system('pip install -q pydub ffmpeg-normalize')
-CONFIG_SE_PATH = "config_se.json"
-CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar"
-def greet(Text,Voicetoclone ,input_mic=None):
- text= "%s" % (Text)
- #reference_files= "%s" % (Voicetoclone)
-
- clean_memory()
- print(text,len(text),type(text))
- print(Voicetoclone,type(Voicetoclone))
-
- if len(text) == 0 :
- print("Please add text to the program")
- Text="Please add text to the program, thank you."
- is_no_text=True
- else:
- is_no_text=False
-
-
- if Voicetoclone==None and input_mic==None:
- print("There is no input audio")
- Text="Please add audio input, to the program, thank you."
- Voicetoclone='trump.mp3'
- if is_no_text:
- Text="Please add text and audio, to the program, thank you."
-
- if input_mic != "" and input_mic != None :
- # Get the wav file from the microphone
- print('The value of MIC IS :',input_mic,type(input_mic))
- Voicetoclone= input_mic
-
- text= "%s" % (Text)
- reference_files= Voicetoclone
- print("path url")
- print(Voicetoclone)
- sample= str(Voicetoclone)
- os.environ['sample'] = sample
- size= len(reference_files)*sys.getsizeof(reference_files)
- size2= size / 1000000
- if (size2 > 0.012) or len(text)>2000:
- message="File is greater than 30mb or Text inserted is longer than 2000 characters. Please re-try with smaller sizes."
- print(message)
- raise SystemExit("File is greater than 30mb. Please re-try or Text inserted is longer than 2000 characters. Please re-try with smaller sizes.")
- else:
-
- env_var = 'sample'
- if env_var in os.environ:
- print(f'{env_var} value is {os.environ[env_var]}')
- else:
- print(f'{env_var} does not exist')
- #os.system(f'ffmpeg-normalize {os.environ[env_var]} -nt rms -t=-27 -o {os.environ[env_var]} -ar 16000 -f')
- in_fpath = Path(Voicetoclone)
- #in_fpath= in_fpath.replace("\"", "").replace("\'", "")
-
- out_path=clone_voice(in_fpath, text)
-
- print(" > text: {}".format(text))
-
- print("Generated Audio")
- return "cloned_voice.wav"
-
-demo = gr.Interface(
- fn=greet,
- inputs=[gr.inputs.Textbox(label='What would you like the voice to say? (max. 2000 characters per request)'),
- gr.Audio(
- type="filepath",
- source="upload",
- label='Please upload a voice to clone (max. 30mb)'),
- gr.inputs.Audio(
- source="microphone",
- label='or record',
- type="filepath",
- optional=True)
- ],
- outputs="audio",
-
- title = 'Clone Your Voice',
-    description = 'A simple application that clones your voice. Allow about a minute for processing.',
- article =
- '''
-
All you need to do is record your voice, type what you want be say
- ,then wait for compiling. After that click on Play/Pause for listen the audio. The audio is saved in an wav format.
- For more information visit ruslanmv.com
-
-
''',
-
- examples = [["I am the cloned version of Donald Trump. Well. I think what's happening to this country is unbelievably bad. We're no longer a respected country","trump.mp3","trump.mp3"],
- ["I am the cloned version of Elon Musk. Persistence is very important. You should not give up unless you are forced to give up.","musk.mp3","musk.mp3"] ,
- ["I am the cloned version of Elizabeth. It has always been easy to hate and destroy. To build and to cherish is much more difficult." ,"queen.mp3","queen.mp3"]
- ]
-
- )
-demo.launch()
\ No newline at end of file
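Outside of Gradio, the pipeline above boils down to embed → spectrogram → waveform. A minimal sketch using the same modules and default checkpoints as this script (it assumes the repository layout is intact and the models under saved_models/ are already downloaded; the reference file name is illustrative):

```python
from pathlib import Path
import numpy as np
import soundfile as sf
from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

# Load the three pretrained models (same default paths as the argparse defaults above).
encoder.load_model(Path("saved_models/default/encoder.pt"))
synthesizer = Synthesizer(Path("saved_models/default/synthesizer.pt"))
vocoder.load_model(Path("saved_models/default/vocoder.pt"))

# 1) speaker embedding from a reference clip, 2) mel spectrogram, 3) waveform.
wav = Synthesizer.load_preprocess_wav("trump.mp3")
embed = encoder.embed_utterance(encoder.preprocess_wav(wav))
spec = synthesizer.synthesize_spectrograms(["Hello, this is a cloned voice test."], [embed])[0]
generated = vocoder.infer_waveform(spec)
sf.write("cloned_voice.wav", generated.astype(np.float32), synthesizer.sample_rate)
```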
diff --git a/spaces/Kevin676/Clone-Your-Voice/utils/__init__.py b/spaces/Kevin676/Clone-Your-Voice/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Keyurmistry/Joeythemonster-anything-midjourney-v-4-1/README.md b/spaces/Keyurmistry/Joeythemonster-anything-midjourney-v-4-1/README.md
deleted file mode 100644
index 2dfbe716ed135ae7aa607d462354c055127ca9d6..0000000000000000000000000000000000000000
--- a/spaces/Keyurmistry/Joeythemonster-anything-midjourney-v-4-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Joeythemonster Anything Midjourney V 4 1
-emoji: 🚀
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kimata/Sanskrit-TTS/monotonic_align/__init__.py b/spaces/Kimata/Sanskrit-TTS/monotonic_align/__init__.py
deleted file mode 100644
index 49e32c9a128aeadc2044c362ff27f6a43f6d7815..0000000000000000000000000000000000000000
--- a/spaces/Kimata/Sanskrit-TTS/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from numpy import zeros, int32, float32
-from torch import from_numpy
-
-from .core import maximum_path_jit
-
-def maximum_path(neg_cent, mask):
- """ numba optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(float32)
- path = zeros(neg_cent.shape, dtype=int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
- maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
- return from_numpy(path).to(device=device, dtype=dtype)
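A small sketch of calling `maximum_path` with dummy tensors of the documented shapes (batch b, text length t_t, spectrogram length t_s); it assumes the package and its numba-compiled `core` module are importable, and the values are random since only the shapes matter here:

```python
import torch
from monotonic_align import maximum_path  # assumes this package (and numba) is available

b, t_t, t_s = 2, 5, 8
neg_cent = torch.randn(b, t_t, t_s)  # negative alignment cost
mask = torch.ones(b, t_t, t_s)       # all positions valid
path = maximum_path(neg_cent, mask)
print(path.shape, path.dtype)        # torch.Size([2, 5, 8]), same dtype as neg_cent
```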
diff --git a/spaces/Kororinpa/Amadeus_Project/preprocess.py b/spaces/Kororinpa/Amadeus_Project/preprocess.py
deleted file mode 100644
index aaedbf076c30114b3ac6c27dfb42fd54ac81a71c..0000000000000000000000000000000000000000
--- a/spaces/Kororinpa/Amadeus_Project/preprocess.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import argparse
-import text
-from utils import load_filepaths_and_text
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--out_extension", default="cleaned")
- parser.add_argument("--text_index", default=1, type=int)
- parser.add_argument("--filelists", nargs="+", default=["filelists/ljs_audio_text_val_filelist.txt", "filelists/ljs_audio_text_test_filelist.txt"])
- parser.add_argument("--text_cleaners", nargs="+", default=["english_cleaners2"])
-
- args = parser.parse_args()
-
-
- for filelist in args.filelists:
- print("START:", filelist)
- filepaths_and_text = load_filepaths_and_text(filelist)
- for i in range(len(filepaths_and_text)):
- original_text = filepaths_and_text[i][args.text_index]
- cleaned_text = text._clean_text(original_text, args.text_cleaners)
- filepaths_and_text[i][args.text_index] = cleaned_text
-
- new_filelist = filelist + "." + args.out_extension
- with open(new_filelist, "w", encoding="utf-8") as f:
- f.writelines(["|".join(x) + "\n" for x in filepaths_and_text])
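The filelists this script consumes are pipe-separated `path|text` lines, and it writes a `.cleaned` sibling with the text column passed through the cleaners. A hedged illustration of the format (the path and sentence are made up, and `str.lower` stands in for `text._clean_text`):

```python
# One line of an input filelist (text_index=1 selects the second pipe-separated column):
line_in = "DUMMY/LJ001-0001.wav|Printing, in the only sense with which we are at present concerned"
path, text_col = line_in.split("|")
line_out = "|".join([path, text_col.lower()])  # stand-in for text._clean_text(text_col, cleaners)
print(line_out)  # this is what ends up in <filelist>.cleaned
```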
diff --git a/spaces/KyanChen/FunSR/models/baselines/expansion.py b/spaces/KyanChen/FunSR/models/baselines/expansion.py
deleted file mode 100644
index e848a21a31739e7ec4abc6173127383330a2caa4..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/FunSR/models/baselines/expansion.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import math
-from argparse import Namespace
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import models
-from models import register
-import numpy as np
-
-class ExpansionNet(nn.Module):
- def __init__(self, args):
- super(ExpansionNet, self).__init__()
- self.args = args
- self.in_dim = args.in_dim
- self.out_dim = args.out_dim
- self.hidden_list = args.hidden_list
- layers = []
- lastv = self.in_dim
- hidden_list = self.hidden_list
- out_dim = self.out_dim
- for hidden in hidden_list:
- layers.append(nn.Linear(lastv, hidden))
- layers.append(nn.ReLU())
- lastv = hidden
- layers.append(nn.Linear(lastv, out_dim))
- self.layers = nn.Sequential(*layers)
-
- def forward(self, x):
-        b, n, c = x.shape
-        x = x.view(-1, c)
-        logits = self.layers(x)
-        out = nn.functional.normalize(logits, dim=1)
-        return out.view(b, n, self.out_dim)
-
-
-@register('ExpansionNet')
-def make_ExpansionNet(in_dim=580,out_dim=10,hidden_list=None):
- args = Namespace()
- args.in_dim = in_dim
- args.out_dim = out_dim
- args.hidden_list = hidden_list
- return ExpansionNet(args)
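A quick shape check of the model above (a sketch; it assumes the repository's `models` package is importable so the `@register` decorator resolves, and that this file is saved as expansion.py):

```python
import torch
from expansion import make_ExpansionNet  # assumption: repo layout as described above

net = make_ExpansionNet(in_dim=580, out_dim=10, hidden_list=[256, 256])
x = torch.randn(4, 7, 580)  # (batch, tokens, in_dim)
out = net(x)
print(out.shape)            # torch.Size([4, 7, 10])
print(out.norm(dim=-1))     # ~1.0 everywhere: outputs are L2-normalized per token
```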
diff --git a/spaces/KyanChen/RSPrompter/mmdet/datasets/transforms/transforms.py b/spaces/KyanChen/RSPrompter/mmdet/datasets/transforms/transforms.py
deleted file mode 100644
index b844d0a3fe7d14c6a4192b26dfcdb8008d6c0288..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/datasets/transforms/transforms.py
+++ /dev/null
@@ -1,3636 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-import inspect
-import math
-from typing import List, Optional, Sequence, Tuple, Union
-
-import cv2
-import mmcv
-import numpy as np
-from mmcv.image.geometric import _scale_size
-from mmcv.transforms import BaseTransform
-from mmcv.transforms import Pad as MMCV_Pad
-from mmcv.transforms import RandomFlip as MMCV_RandomFlip
-from mmcv.transforms import Resize as MMCV_Resize
-from mmcv.transforms.utils import avoid_cache_randomness, cache_randomness
-from mmengine.dataset import BaseDataset
-from mmengine.utils import is_str
-from numpy import random
-
-from mmdet.registry import TRANSFORMS
-from mmdet.structures.bbox import HorizontalBoxes, autocast_box_type
-from mmdet.structures.mask import BitmapMasks, PolygonMasks
-from mmdet.utils import log_img_scale
-
-try:
- from imagecorruptions import corrupt
-except ImportError:
- corrupt = None
-
-try:
- import albumentations
- from albumentations import Compose
-except ImportError:
- albumentations = None
- Compose = None
-
-Number = Union[int, float]
-
-
-@TRANSFORMS.register_module()
-class Resize(MMCV_Resize):
- """Resize images & bbox & seg.
-
- This transform resizes the input image according to ``scale`` or
- ``scale_factor``. Bboxes, masks, and seg map are then resized
- with the same scale factor.
- If ``scale`` and ``scale_factor`` are both set, ``scale`` is used for
- resizing.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes
- - gt_masks
- - gt_seg_map
-
-
- Added Keys:
-
- - scale
- - scale_factor
- - keep_ratio
- - homography_matrix
-
- Args:
- scale (int or tuple): Image scales for resizing. Defaults to None.
- scale_factor (float or tuple[float]): Scale factors for resizing.
- Defaults to None.
- keep_ratio (bool): Whether to keep the aspect ratio when resizing the
- image. Defaults to False.
- clip_object_border (bool): Whether to clip the objects
- outside the border of the image. In some datasets like MOT17, the gt
- bboxes are allowed to cross the border of images. Therefore, we
- don't need to clip the gt bboxes in these cases. Defaults to True.
- backend (str): Image resize backend, choices are 'cv2' and 'pillow'.
- These two backends generate slightly different results. Defaults
- to 'cv2'.
- interpolation (str): Interpolation method, accepted values are
- "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2'
- backend, "nearest", "bilinear" for 'pillow' backend. Defaults
- to 'bilinear'.
- """
-
- def _resize_masks(self, results: dict) -> None:
- """Resize masks with ``results['scale']``"""
- if results.get('gt_masks', None) is not None:
- if self.keep_ratio:
- results['gt_masks'] = results['gt_masks'].rescale(
- results['scale'])
- else:
- results['gt_masks'] = results['gt_masks'].resize(
- results['img_shape'])
-
- def _resize_bboxes(self, results: dict) -> None:
- """Resize bounding boxes with ``results['scale_factor']``."""
- if results.get('gt_bboxes', None) is not None:
- results['gt_bboxes'].rescale_(results['scale_factor'])
- if self.clip_object_border:
- results['gt_bboxes'].clip_(results['img_shape'])
-
- def _resize_seg(self, results: dict) -> None:
- """Resize semantic segmentation map with ``results['scale']``."""
- if results.get('gt_seg_map', None) is not None:
- if self.keep_ratio:
- gt_seg = mmcv.imrescale(
- results['gt_seg_map'],
- results['scale'],
- interpolation='nearest',
- backend=self.backend)
- else:
- gt_seg = mmcv.imresize(
- results['gt_seg_map'],
- results['scale'],
- interpolation='nearest',
- backend=self.backend)
- results['gt_seg_map'] = gt_seg
-
- def _record_homography_matrix(self, results: dict) -> None:
- """Record the homography matrix for the Resize."""
- w_scale, h_scale = results['scale_factor']
- homography_matrix = np.array(
- [[w_scale, 0, 0], [0, h_scale, 0], [0, 0, 1]], dtype=np.float32)
- if results.get('homography_matrix', None) is None:
- results['homography_matrix'] = homography_matrix
- else:
- results['homography_matrix'] = homography_matrix @ results[
- 'homography_matrix']
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to resize images, bounding boxes and semantic
- segmentation map.
-
- Args:
- results (dict): Result dict from loading pipeline.
- Returns:
- dict: Resized results, 'img', 'gt_bboxes', 'gt_seg_map',
- 'scale', 'scale_factor', 'height', 'width', and 'keep_ratio' keys
- are updated in result dict.
- """
- if self.scale:
- results['scale'] = self.scale
- else:
- img_shape = results['img'].shape[:2]
- results['scale'] = _scale_size(img_shape[::-1], self.scale_factor)
- self._resize_img(results)
- self._resize_bboxes(results)
- self._resize_masks(results)
- self._resize_seg(results)
- self._record_homography_matrix(results)
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(scale={self.scale}, '
- repr_str += f'scale_factor={self.scale_factor}, '
- repr_str += f'keep_ratio={self.keep_ratio}, '
- repr_str += f'clip_object_border={self.clip_object_border}, '
- repr_str += f'backend={self.backend}, '
- repr_str += f'interpolation={self.interpolation})'
- return repr_str
-
-
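# --- Illustrative usage sketch (not part of the deleted file): running the Resize
# transform defined above on a dummy sample, assuming mmdet and mmcv are installed.
import numpy as np

resize = Resize(scale=(640, 480), keep_ratio=True)
sample = dict(img=np.zeros((300, 400, 3), dtype=np.uint8))
sample = resize(sample)
# keep_ratio=True rescales the 400x300 image by a factor of 1.6 to 640x480;
# 'img_shape', 'scale_factor', 'keep_ratio' and 'homography_matrix' are added.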
-@TRANSFORMS.register_module()
-class FixShapeResize(Resize):
- """Resize images & bbox & seg to the specified size.
-
- This transform resizes the input image according to ``width`` and
- ``height``. Bboxes, masks, and seg map are then resized
- with the same parameters.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes
- - gt_masks
- - gt_seg_map
-
-
- Added Keys:
-
- - scale
- - scale_factor
- - keep_ratio
- - homography_matrix
-
- Args:
- width (int): width for resizing.
- height (int): height for resizing.
- pad_val (Number | dict[str, Number], optional): Padding value if
- the pad_mode is "constant". If it is a single number, the value
- to pad the image is the number and to pad the semantic
- segmentation map is 255. If it is a dict, it should have the
- following keys:
-
- - img: The value to pad the image.
- - seg: The value to pad the semantic segmentation map.
- Defaults to dict(img=0, seg=255).
- keep_ratio (bool): Whether to keep the aspect ratio when resizing the
- image. Defaults to False.
- clip_object_border (bool): Whether to clip the objects
- outside the border of the image. In some datasets like MOT17, the gt
- bboxes are allowed to cross the border of images. Therefore, we
- don't need to clip the gt bboxes in these cases. Defaults to True.
- backend (str): Image resize backend, choices are 'cv2' and 'pillow'.
- These two backends generate slightly different results. Defaults
- to 'cv2'.
- interpolation (str): Interpolation method, accepted values are
- "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2'
- backend, "nearest", "bilinear" for 'pillow' backend. Defaults
- to 'bilinear'.
- """
-
- def __init__(self,
- width: int,
- height: int,
- pad_val: Union[Number, dict] = dict(img=0, seg=255),
- keep_ratio: bool = False,
- clip_object_border: bool = True,
- backend: str = 'cv2',
- interpolation: str = 'bilinear') -> None:
- assert width is not None and height is not None, (
- '`width` and'
- '`height` can not be `None`')
-
- self.width = width
- self.height = height
- self.scale = (width, height)
-
- self.backend = backend
- self.interpolation = interpolation
- self.keep_ratio = keep_ratio
- self.clip_object_border = clip_object_border
-
- if keep_ratio is True:
- # padding to the fixed size when keep_ratio=True
- self.pad_transform = Pad(size=self.scale, pad_val=pad_val)
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to resize images, bounding boxes and semantic
- segmentation map.
-
- Args:
- results (dict): Result dict from loading pipeline.
- Returns:
- dict: Resized results, 'img', 'gt_bboxes', 'gt_seg_map',
- 'scale', 'scale_factor', 'height', 'width', and 'keep_ratio' keys
- are updated in result dict.
- """
- img = results['img']
- h, w = img.shape[:2]
- if self.keep_ratio:
- scale_factor = min(self.width / w, self.height / h)
- results['scale_factor'] = (scale_factor, scale_factor)
- real_w, real_h = int(w * float(scale_factor) +
- 0.5), int(h * float(scale_factor) + 0.5)
- img, scale_factor = mmcv.imrescale(
- results['img'], (real_w, real_h),
- interpolation=self.interpolation,
- return_scale=True,
- backend=self.backend)
- # the w_scale and h_scale have a minor difference
- # a real fix should be done in the mmcv.imrescale in the future
- results['img'] = img
- results['img_shape'] = img.shape[:2]
- results['keep_ratio'] = self.keep_ratio
- results['scale'] = (real_w, real_h)
- else:
- results['scale'] = (self.width, self.height)
- results['scale_factor'] = (self.width / w, self.height / h)
- super()._resize_img(results)
-
- self._resize_bboxes(results)
- self._resize_masks(results)
- self._resize_seg(results)
- self._record_homography_matrix(results)
- if self.keep_ratio:
- self.pad_transform(results)
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(width={self.width}, height={self.height}, '
- repr_str += f'keep_ratio={self.keep_ratio}, '
- repr_str += f'clip_object_border={self.clip_object_border}, '
- repr_str += f'backend={self.backend}, '
- repr_str += f'interpolation={self.interpolation})'
- return repr_str
-
-
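# --- Illustrative usage sketch (not part of the deleted file): FixShapeResize
# always yields a width x height output; with keep_ratio=True the rescaled image
# is padded up to that size by the internal Pad transform.
import numpy as np

fix_resize = FixShapeResize(width=320, height=240, keep_ratio=True)
sample = dict(img=np.zeros((120, 200, 3), dtype=np.uint8))
sample = fix_resize(sample)
# scale_factor = min(320/200, 240/120) = 1.6, so the image is rescaled to
# 320x192 and then padded to the fixed 320x240 (width x height) output.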
-@TRANSFORMS.register_module()
-class RandomFlip(MMCV_RandomFlip):
- """Flip the image & bbox & mask & segmentation map. Added or Updated keys:
- flip, flip_direction, img, gt_bboxes, and gt_seg_map. There are 3 flip
- modes:
-
- - ``prob`` is float, ``direction`` is string: the image will be
- ``direction``ly flipped with probability of ``prob`` .
- E.g., ``prob=0.5``, ``direction='horizontal'``,
- then image will be horizontally flipped with probability of 0.5.
- - ``prob`` is float, ``direction`` is list of string: the image will
- be ``direction[i]``ly flipped with probability of
- ``prob/len(direction)``.
- E.g., ``prob=0.5``, ``direction=['horizontal', 'vertical']``,
- then image will be horizontally flipped with probability of 0.25,
- vertically with probability of 0.25.
- - ``prob`` is list of float, ``direction`` is list of string:
- given ``len(prob) == len(direction)``, the image will
- be ``direction[i]``ly flipped with probability of ``prob[i]``.
- E.g., ``prob=[0.3, 0.5]``, ``direction=['horizontal',
- 'vertical']``, then image will be horizontally flipped with
- probability of 0.3, vertically with probability of 0.5.
-
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - gt_bboxes
- - gt_masks
- - gt_seg_map
-
- Added Keys:
-
- - flip
- - flip_direction
- - homography_matrix
-
-
- Args:
- prob (float | list[float], optional): The flipping probability.
- Defaults to None.
- direction (str | list[str]): The flipping direction. Options are
- 'horizontal', 'vertical' and 'diagonal'.
- If input is a list, the length must equal ``prob``. Each
- element in ``prob`` indicates the flip probability of
- corresponding direction. Defaults to 'horizontal'.
- """
-
- def _record_homography_matrix(self, results: dict) -> None:
- """Record the homography matrix for the RandomFlip."""
- cur_dir = results['flip_direction']
- h, w = results['img'].shape[:2]
-
- if cur_dir == 'horizontal':
- homography_matrix = np.array([[-1, 0, w], [0, 1, 0], [0, 0, 1]],
- dtype=np.float32)
- elif cur_dir == 'vertical':
- homography_matrix = np.array([[1, 0, 0], [0, -1, h], [0, 0, 1]],
- dtype=np.float32)
- elif cur_dir == 'diagonal':
- homography_matrix = np.array([[-1, 0, w], [0, -1, h], [0, 0, 1]],
- dtype=np.float32)
- else:
- homography_matrix = np.eye(3, dtype=np.float32)
-
- if results.get('homography_matrix', None) is None:
- results['homography_matrix'] = homography_matrix
- else:
- results['homography_matrix'] = homography_matrix @ results[
- 'homography_matrix']
-
- @autocast_box_type()
- def _flip(self, results: dict) -> None:
- """Flip images, bounding boxes, and semantic segmentation map."""
- # flip image
- results['img'] = mmcv.imflip(
- results['img'], direction=results['flip_direction'])
-
- img_shape = results['img'].shape[:2]
-
- # flip bboxes
- if results.get('gt_bboxes', None) is not None:
- results['gt_bboxes'].flip_(img_shape, results['flip_direction'])
-
- # flip masks
- if results.get('gt_masks', None) is not None:
- results['gt_masks'] = results['gt_masks'].flip(
- results['flip_direction'])
-
- # flip segs
- if results.get('gt_seg_map', None) is not None:
- results['gt_seg_map'] = mmcv.imflip(
- results['gt_seg_map'], direction=results['flip_direction'])
-
- # record homography matrix for flip
- self._record_homography_matrix(results)
-
-
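# --- Illustrative usage sketch (not part of the deleted file): flipping a dummy
# sample with the RandomFlip defined above; prob=1.0 makes the flip deterministic.
import numpy as np
from mmdet.structures.bbox import HorizontalBoxes

flip = RandomFlip(prob=1.0, direction='horizontal')
sample = dict(
    img=np.arange(2 * 4 * 3, dtype=np.uint8).reshape(2, 4, 3),
    gt_bboxes=HorizontalBoxes(np.array([[0., 0., 1., 1.]], dtype=np.float32)))
sample = flip(sample)
# The box [0, 0, 1, 1] in a 4-pixel-wide image becomes [3, 0, 4, 1], and
# sample['homography_matrix'] records [[-1, 0, 4], [0, 1, 0], [0, 0, 1]].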
-@TRANSFORMS.register_module()
-class RandomShift(BaseTransform):
- """Shift the image and box given shift pixels and probability.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32])
- - gt_bboxes_labels (np.int64)
- - gt_ignore_flags (bool) (optional)
-
- Modified Keys:
-
- - img
- - gt_bboxes
- - gt_bboxes_labels
- - gt_ignore_flags (bool) (optional)
-
- Args:
- prob (float): Probability of shifts. Defaults to 0.5.
- max_shift_px (int): The max pixels for shifting. Defaults to 32.
- filter_thr_px (int): The width and height threshold for filtering.
- Bboxes whose width or height falls below this threshold, together
- with their corresponding labels and flags, are filtered out.
- Defaults to 1.
- """
-
- def __init__(self,
- prob: float = 0.5,
- max_shift_px: int = 32,
- filter_thr_px: int = 1) -> None:
- assert 0 <= prob <= 1
- assert max_shift_px >= 0
- self.prob = prob
- self.max_shift_px = max_shift_px
- self.filter_thr_px = int(filter_thr_px)
-
- @cache_randomness
- def _random_prob(self) -> float:
- return random.uniform(0, 1)
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to random shift images, bounding boxes.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Shift results.
- """
- if self._random_prob() < self.prob:
- img_shape = results['img'].shape[:2]
-
- random_shift_x = random.randint(-self.max_shift_px,
- self.max_shift_px)
- random_shift_y = random.randint(-self.max_shift_px,
- self.max_shift_px)
- new_x = max(0, random_shift_x)
- ori_x = max(0, -random_shift_x)
- new_y = max(0, random_shift_y)
- ori_y = max(0, -random_shift_y)
-
- # TODO: support mask and semantic segmentation maps.
- bboxes = results['gt_bboxes'].clone()
- bboxes.translate_([random_shift_x, random_shift_y])
-
- # clip border
- bboxes.clip_(img_shape)
-
- # remove invalid bboxes
- valid_inds = (bboxes.widths > self.filter_thr_px).numpy() & (
- bboxes.heights > self.filter_thr_px).numpy()
- # If the shift does not contain any gt-bbox area, skip this
- # image.
- if not valid_inds.any():
- return results
- bboxes = bboxes[valid_inds]
- results['gt_bboxes'] = bboxes
- results['gt_bboxes_labels'] = results['gt_bboxes_labels'][
- valid_inds]
-
- if results.get('gt_ignore_flags', None) is not None:
- results['gt_ignore_flags'] = \
- results['gt_ignore_flags'][valid_inds]
-
- # shift img
- img = results['img']
- new_img = np.zeros_like(img)
- img_h, img_w = img.shape[:2]
- new_h = img_h - np.abs(random_shift_y)
- new_w = img_w - np.abs(random_shift_x)
- new_img[new_y:new_y + new_h, new_x:new_x + new_w] \
- = img[ori_y:ori_y + new_h, ori_x:ori_x + new_w]
- results['img'] = new_img
-
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(prob={self.prob}, '
- repr_str += f'max_shift_px={self.max_shift_px}, '
- repr_str += f'filter_thr_px={self.filter_thr_px})'
- return repr_str
-
-
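# --- Illustrative usage sketch (not part of the deleted file): RandomShift needs
# boxes and labels; prob=1.0 forces the shift so the effect is visible.
import numpy as np
from mmdet.structures.bbox import HorizontalBoxes

shift = RandomShift(prob=1.0, max_shift_px=8)
sample = dict(
    img=np.zeros((64, 64, 3), dtype=np.uint8),
    gt_bboxes=HorizontalBoxes(np.array([[10., 10., 40., 40.]], dtype=np.float32)),
    gt_bboxes_labels=np.array([0], dtype=np.int64))
sample = shift(sample)
# Image content and the surviving boxes/labels are translated by the same random
# offset in [-8, 8] on each axis; boxes thinner than filter_thr_px are dropped.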
-@TRANSFORMS.register_module()
-class Pad(MMCV_Pad):
- """Pad the image & segmentation map.
-
- There are three padding modes: (1) pad to a fixed size, (2) pad to the
- minimum size that is divisible by some number, and (3) pad to square.
- Padding to square and padding to the minimum divisible size can also be
- used at the same time.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_masks
- - gt_seg_map
-
- Added Keys:
-
- - pad_shape
- - pad_fixed_size
- - pad_size_divisor
-
- Args:
- size (tuple, optional): Fixed padding size.
- Expected padding shape (width, height). Defaults to None.
- size_divisor (int, optional): The divisor of padded size. Defaults to
- None.
- pad_to_square (bool): Whether to pad the image into a square.
- Currently only used for YOLOX. Defaults to False.
- pad_val (Number | dict[str, Number], optional): Padding value if
- the pad_mode is "constant". If it is a single number, the value
- to pad the image is the number and to pad the semantic
- segmentation map is 255. If it is a dict, it should have the
- following keys:
-
- - img: The value to pad the image.
- - seg: The value to pad the semantic segmentation map.
- Defaults to dict(img=0, seg=255).
- padding_mode (str): Type of padding. Should be: constant, edge,
- reflect or symmetric. Defaults to 'constant'.
-
- - constant: pads with a constant value, this value is specified
- with pad_val.
- - edge: pads with the last value at the edge of the image.
- - reflect: pads with reflection of image without repeating the last
- value on the edge. For example, padding [1, 2, 3, 4] with 2
- elements on both sides in reflect mode will result in
- [3, 2, 1, 2, 3, 4, 3, 2].
- - symmetric: pads with reflection of image repeating the last value
- on the edge. For example, padding [1, 2, 3, 4] with 2 elements on
- both sides in symmetric mode will result in
- [2, 1, 1, 2, 3, 4, 4, 3]
- """
-
- def _pad_masks(self, results: dict) -> None:
- """Pad masks according to ``results['pad_shape']``."""
- if results.get('gt_masks', None) is not None:
- pad_val = self.pad_val.get('masks', 0)
- pad_shape = results['pad_shape'][:2]
- results['gt_masks'] = results['gt_masks'].pad(
- pad_shape, pad_val=pad_val)
-
- def transform(self, results: dict) -> dict:
- """Call function to pad images, masks, semantic segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Updated result dict.
- """
- self._pad_img(results)
- self._pad_seg(results)
- self._pad_masks(results)
- return results
-
-
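# --- Illustrative usage sketch (not part of the deleted file): padding a dummy
# image to a size divisible by 32, using the Pad transform defined above.
import numpy as np

pad = Pad(size_divisor=32, pad_val=dict(img=0, seg=255))
sample = dict(img=np.ones((50, 70, 3), dtype=np.uint8))
sample = pad(sample)
# sample['img'].shape == (64, 96, 3); 'pad_shape' and 'pad_size_divisor' are recorded.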
-@TRANSFORMS.register_module()
-class RandomCrop(BaseTransform):
- """Random crop the image & bboxes & masks.
-
- The absolute ``crop_size`` is sampled based on ``crop_type`` and
- ``image_size``, then the cropped results are generated.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_ignore_flags (bool) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_masks (optional)
- - gt_ignore_flags (optional)
- - gt_seg_map (optional)
-
- Added Keys:
-
- - homography_matrix
-
- Args:
- crop_size (tuple): The relative ratio or absolute pixels of
- (width, height).
- crop_type (str, optional): One of "relative_range", "relative",
- "absolute", "absolute_range". "relative" randomly crops
- (h * crop_size[0], w * crop_size[1]) part from an input of size
- (h, w). "relative_range" uniformly samples relative crop size from
- range [crop_size[0], 1] and [crop_size[1], 1] for height and width
- respectively. "absolute" crops from an input with absolute size
- (crop_size[0], crop_size[1]). "absolute_range" uniformly samples
- crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w
- in range [crop_size[0], min(w, crop_size[1])].
- Defaults to "absolute".
- allow_negative_crop (bool, optional): Whether to allow a crop that does
- not contain any bbox area. Defaults to False.
- recompute_bbox (bool, optional): Whether to re-compute the boxes based
- on cropped instance masks. Defaults to False.
- bbox_clip_border (bool, optional): Whether clip the objects outside
- the border of the image. Defaults to True.
-
- Note:
- - If the image is smaller than the absolute crop size, return the
- original image.
- - The keys for bboxes, labels and masks must be aligned. That is,
- ``gt_bboxes`` corresponds to ``gt_labels`` and ``gt_masks``, and
- ``gt_bboxes_ignore`` corresponds to ``gt_labels_ignore`` and
- ``gt_masks_ignore``.
- - If the crop does not contain any gt-bbox region and
- ``allow_negative_crop`` is set to False, skip this image.
- """
-
- def __init__(self,
- crop_size: tuple,
- crop_type: str = 'absolute',
- allow_negative_crop: bool = False,
- recompute_bbox: bool = False,
- bbox_clip_border: bool = True) -> None:
- if crop_type not in [
- 'relative_range', 'relative', 'absolute', 'absolute_range'
- ]:
- raise ValueError(f'Invalid crop_type {crop_type}.')
- if crop_type in ['absolute', 'absolute_range']:
- assert crop_size[0] > 0 and crop_size[1] > 0
- assert isinstance(crop_size[0], int) and isinstance(
- crop_size[1], int)
- if crop_type == 'absolute_range':
- assert crop_size[0] <= crop_size[1]
- else:
- assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1
- self.crop_size = crop_size
- self.crop_type = crop_type
- self.allow_negative_crop = allow_negative_crop
- self.bbox_clip_border = bbox_clip_border
- self.recompute_bbox = recompute_bbox
-
- def _crop_data(self, results: dict, crop_size: Tuple[int, int],
- allow_negative_crop: bool) -> Union[dict, None]:
- """Function to randomly crop images, bounding boxes, masks, semantic
- segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
- crop_size (Tuple[int, int]): Expected absolute size after
- cropping, (h, w).
- allow_negative_crop (bool): Whether to allow a crop that does not
- contain any bbox area.
-
- Returns:
- results (Union[dict, None]): Randomly cropped results, 'img_shape'
- key in result dict is updated according to crop size. None will
- be returned when there is no valid bbox after cropping.
- """
- assert crop_size[0] > 0 and crop_size[1] > 0
- img = results['img']
- margin_h = max(img.shape[0] - crop_size[0], 0)
- margin_w = max(img.shape[1] - crop_size[1], 0)
- offset_h, offset_w = self._rand_offset((margin_h, margin_w))
- crop_y1, crop_y2 = offset_h, offset_h + crop_size[0]
- crop_x1, crop_x2 = offset_w, offset_w + crop_size[1]
-
- # Record the homography matrix for the RandomCrop
- homography_matrix = np.array(
- [[1, 0, -offset_w], [0, 1, -offset_h], [0, 0, 1]],
- dtype=np.float32)
- if results.get('homography_matrix', None) is None:
- results['homography_matrix'] = homography_matrix
- else:
- results['homography_matrix'] = homography_matrix @ results[
- 'homography_matrix']
-
- # crop the image
- img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...]
- img_shape = img.shape
- results['img'] = img
- results['img_shape'] = img_shape[:2]
-
- # crop bboxes accordingly and clip to the image boundary
- if results.get('gt_bboxes', None) is not None:
- bboxes = results['gt_bboxes']
- bboxes.translate_([-offset_w, -offset_h])
- if self.bbox_clip_border:
- bboxes.clip_(img_shape[:2])
- valid_inds = bboxes.is_inside(img_shape[:2]).numpy()
- # If the crop does not contain any gt-bbox area and
- # allow_negative_crop is False, skip this image.
- if (not valid_inds.any() and not allow_negative_crop):
- return None
-
- results['gt_bboxes'] = bboxes[valid_inds]
-
- if results.get('gt_ignore_flags', None) is not None:
- results['gt_ignore_flags'] = \
- results['gt_ignore_flags'][valid_inds]
-
- if results.get('gt_bboxes_labels', None) is not None:
- results['gt_bboxes_labels'] = \
- results['gt_bboxes_labels'][valid_inds]
-
- if results.get('gt_masks', None) is not None:
- results['gt_masks'] = results['gt_masks'][
- valid_inds.nonzero()[0]].crop(
- np.asarray([crop_x1, crop_y1, crop_x2, crop_y2]))
- if self.recompute_bbox:
- results['gt_bboxes'] = results['gt_masks'].get_bboxes(
- type(results['gt_bboxes']))
-
- # crop semantic seg
- if results.get('gt_seg_map', None) is not None:
- results['gt_seg_map'] = results['gt_seg_map'][crop_y1:crop_y2,
- crop_x1:crop_x2]
-
- return results
-
- @cache_randomness
- def _rand_offset(self, margin: Tuple[int, int]) -> Tuple[int, int]:
- """Randomly generate crop offset.
-
- Args:
- margin (Tuple[int, int]): The upper bound for the offset generated
- randomly.
-
- Returns:
- Tuple[int, int]: The random offset for the crop.
- """
- margin_h, margin_w = margin
- offset_h = np.random.randint(0, margin_h + 1)
- offset_w = np.random.randint(0, margin_w + 1)
-
- return offset_h, offset_w
-
- @cache_randomness
- def _get_crop_size(self, image_size: Tuple[int, int]) -> Tuple[int, int]:
- """Randomly generates the absolute crop size based on `crop_type` and
- `image_size`.
-
- Args:
- image_size (Tuple[int, int]): (h, w).
-
- Returns:
- crop_size (Tuple[int, int]): (crop_h, crop_w) in absolute pixels.
- """
- h, w = image_size
- if self.crop_type == 'absolute':
- return min(self.crop_size[1], h), min(self.crop_size[0], w)
- elif self.crop_type == 'absolute_range':
- crop_h = np.random.randint(
- min(h, self.crop_size[0]),
- min(h, self.crop_size[1]) + 1)
- crop_w = np.random.randint(
- min(w, self.crop_size[0]),
- min(w, self.crop_size[1]) + 1)
- return crop_h, crop_w
- elif self.crop_type == 'relative':
- crop_w, crop_h = self.crop_size
- return int(h * crop_h + 0.5), int(w * crop_w + 0.5)
- else:
- # 'relative_range'
- crop_size = np.asarray(self.crop_size, dtype=np.float32)
- crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size)
- return int(h * crop_h + 0.5), int(w * crop_w + 0.5)
-
- @autocast_box_type()
- def transform(self, results: dict) -> Union[dict, None]:
- """Transform function to randomly crop images, bounding boxes, masks,
- semantic segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- results (Union[dict, None]): Randomly cropped results, 'img_shape'
- key in result dict is updated according to crop size. None will
- be returned when there is no valid bbox after cropping.
- """
- image_size = results['img'].shape[:2]
- crop_size = self._get_crop_size(image_size)
- results = self._crop_data(results, crop_size, self.allow_negative_crop)
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(crop_size={self.crop_size}, '
- repr_str += f'crop_type={self.crop_type}, '
- repr_str += f'allow_negative_crop={self.allow_negative_crop}, '
- repr_str += f'recompute_bbox={self.recompute_bbox}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
-
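# --- Illustrative usage sketch (not part of the deleted file): an absolute
# 256x256 crop; the homography matrix records the (negative) crop offset.
import numpy as np
from mmdet.structures.bbox import HorizontalBoxes

crop = RandomCrop(crop_size=(256, 256), crop_type='absolute')
sample = dict(
    img=np.zeros((512, 512, 3), dtype=np.uint8),
    gt_bboxes=HorizontalBoxes(np.array([[100., 100., 400., 400.]], dtype=np.float32)),
    gt_bboxes_labels=np.array([1], dtype=np.int64))
sample = crop(sample)
# None is returned only when the crop misses every box and allow_negative_crop is
# False; here the 300x300 box overlaps any 256x256 window, so a dict comes back.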
-@TRANSFORMS.register_module()
-class SegRescale(BaseTransform):
- """Rescale semantic segmentation maps.
-
- This transform rescales the ``gt_seg_map`` according to ``scale_factor``.
-
- Required Keys:
-
- - gt_seg_map
-
- Modified Keys:
-
- - gt_seg_map
-
- Args:
- scale_factor (float): The scale factor of the final output. Defaults
- to 1.
- backend (str): Image rescale backend, choices are 'cv2' and 'pillow'.
- These two backends generate slightly different results. Defaults
- to 'cv2'.
- """
-
- def __init__(self, scale_factor: float = 1, backend: str = 'cv2') -> None:
- self.scale_factor = scale_factor
- self.backend = backend
-
- def transform(self, results: dict) -> dict:
- """Transform function to scale the semantic segmentation map.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with semantic segmentation map scaled.
- """
- if self.scale_factor != 1:
- results['gt_seg_map'] = mmcv.imrescale(
- results['gt_seg_map'],
- self.scale_factor,
- interpolation='nearest',
- backend=self.backend)
-
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(scale_factor={self.scale_factor}, '
- repr_str += f'backend={self.backend})'
- return repr_str
-
-
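# --- Illustrative usage sketch (not part of the deleted file): SegRescale only
# touches 'gt_seg_map', using nearest-neighbour interpolation to keep label ids.
import numpy as np

seg_rescale = SegRescale(scale_factor=0.5)
sample = dict(gt_seg_map=np.zeros((100, 100), dtype=np.uint8))
sample = seg_rescale(sample)   # sample['gt_seg_map'].shape == (50, 50)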
-@TRANSFORMS.register_module()
-class PhotoMetricDistortion(BaseTransform):
- """Apply photometric distortion to image sequentially, every transformation
- is applied with a probability of 0.5. The position of random contrast is in
- second or second to last.
-
- 1. random brightness
- 2. random contrast (mode 0)
- 3. convert color from BGR to HSV
- 4. random saturation
- 5. random hue
- 6. convert color from HSV to BGR
- 7. random contrast (mode 1)
- 8. randomly swap channels
-
- Required Keys:
-
- - img (np.uint8)
-
- Modified Keys:
-
- - img (np.float32)
-
- Args:
- brightness_delta (int): delta of brightness.
- contrast_range (sequence): range of contrast.
- saturation_range (sequence): range of saturation.
- hue_delta (int): delta of hue.
- """
-
- def __init__(self,
- brightness_delta: int = 32,
- contrast_range: Sequence[Number] = (0.5, 1.5),
- saturation_range: Sequence[Number] = (0.5, 1.5),
- hue_delta: int = 18) -> None:
- self.brightness_delta = brightness_delta
- self.contrast_lower, self.contrast_upper = contrast_range
- self.saturation_lower, self.saturation_upper = saturation_range
- self.hue_delta = hue_delta
-
- @cache_randomness
- def _random_flags(self) -> Sequence[Number]:
- mode = random.randint(2)
- brightness_flag = random.randint(2)
- contrast_flag = random.randint(2)
- saturation_flag = random.randint(2)
- hue_flag = random.randint(2)
- swap_flag = random.randint(2)
- delta_value = random.uniform(-self.brightness_delta,
- self.brightness_delta)
- alpha_value = random.uniform(self.contrast_lower, self.contrast_upper)
- saturation_value = random.uniform(self.saturation_lower,
- self.saturation_upper)
- hue_value = random.uniform(-self.hue_delta, self.hue_delta)
- swap_value = random.permutation(3)
-
- return (mode, brightness_flag, contrast_flag, saturation_flag,
- hue_flag, swap_flag, delta_value, alpha_value,
- saturation_value, hue_value, swap_value)
-
- def transform(self, results: dict) -> dict:
- """Transform function to perform photometric distortion on images.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images distorted.
- """
- assert 'img' in results, '`img` is not found in results'
- img = results['img']
- img = img.astype(np.float32)
-
- (mode, brightness_flag, contrast_flag, saturation_flag, hue_flag,
- swap_flag, delta_value, alpha_value, saturation_value, hue_value,
- swap_value) = self._random_flags()
-
- # random brightness
- if brightness_flag:
- img += delta_value
-
- # mode == 0 --> do random contrast first
- # mode == 1 --> do random contrast last
- if mode == 1:
- if contrast_flag:
- img *= alpha_value
-
- # convert color from BGR to HSV
- img = mmcv.bgr2hsv(img)
-
- # random saturation
- if saturation_flag:
- img[..., 1] *= saturation_value
- # For image(type=float32), after convert bgr to hsv by opencv,
- # valid saturation value range is [0, 1]
- if saturation_value > 1:
- img[..., 1] = img[..., 1].clip(0, 1)
-
- # random hue
- if hue_flag:
- img[..., 0] += hue_value
- img[..., 0][img[..., 0] > 360] -= 360
- img[..., 0][img[..., 0] < 0] += 360
-
- # convert color from HSV to BGR
- img = mmcv.hsv2bgr(img)
-
- # random contrast
- if mode == 0:
- if contrast_flag:
- img *= alpha_value
-
- # randomly swap channels
- if swap_flag:
- img = img[..., swap_value]
-
- results['img'] = img
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(brightness_delta={self.brightness_delta}, '
- repr_str += 'contrast_range='
- repr_str += f'{(self.contrast_lower, self.contrast_upper)}, '
- repr_str += 'saturation_range='
- repr_str += f'{(self.saturation_lower, self.saturation_upper)}, '
- repr_str += f'hue_delta={self.hue_delta})'
- return repr_str
-
-
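# --- Illustrative usage sketch (not part of the deleted file): the distortion
# takes a uint8 BGR image and returns a float32 image of the same shape.
import numpy as np

distort = PhotoMetricDistortion(brightness_delta=32,
                                contrast_range=(0.5, 1.5),
                                saturation_range=(0.5, 1.5),
                                hue_delta=18)
sample = dict(img=np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8))
sample = distort(sample)
assert sample['img'].dtype == np.float32 and sample['img'].shape == (32, 32, 3)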
-@TRANSFORMS.register_module()
-class Expand(BaseTransform):
- """Random expand the image & bboxes & masks & segmentation map.
-
- Randomly place the original image on a canvas of ``ratio`` x original image
- size filled with mean values. The ratio is in the range of ratio_range.
-
- Required Keys:
-
- - img
- - img_shape
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes
- - gt_masks
- - gt_seg_map
-
-
- Args:
- mean (sequence): mean value of dataset.
- to_rgb (bool): whether to convert the order of mean to align with RGB.
- ratio_range (sequence): range of the expand ratio.
- seg_ignore_label (int): ignore label for the padded segmentation map.
- prob (float): probability of applying this transformation.
- """
-
- def __init__(self,
- mean: Sequence[Number] = (0, 0, 0),
- to_rgb: bool = True,
- ratio_range: Sequence[Number] = (1, 4),
- seg_ignore_label: int = None,
- prob: float = 0.5) -> None:
- self.to_rgb = to_rgb
- self.ratio_range = ratio_range
- if to_rgb:
- self.mean = mean[::-1]
- else:
- self.mean = mean
- self.min_ratio, self.max_ratio = ratio_range
- self.seg_ignore_label = seg_ignore_label
- self.prob = prob
-
- @cache_randomness
- def _random_prob(self) -> float:
- return random.uniform(0, 1)
-
- @cache_randomness
- def _random_ratio(self) -> float:
- return random.uniform(self.min_ratio, self.max_ratio)
-
- @cache_randomness
- def _random_left_top(self, ratio: float, h: int,
- w: int) -> Tuple[int, int]:
- left = int(random.uniform(0, w * ratio - w))
- top = int(random.uniform(0, h * ratio - h))
- return left, top
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to expand images, bounding boxes, masks,
- segmentation map.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images, bounding boxes, masks, segmentation
- map expanded.
- """
- if self._random_prob() > self.prob:
- return results
- assert 'img' in results, '`img` is not found in results'
- img = results['img']
- h, w, c = img.shape
- ratio = self._random_ratio()
- # speedup expand when meets large image
- if np.all(self.mean == self.mean[0]):
- expand_img = np.empty((int(h * ratio), int(w * ratio), c),
- img.dtype)
- expand_img.fill(self.mean[0])
- else:
- expand_img = np.full((int(h * ratio), int(w * ratio), c),
- self.mean,
- dtype=img.dtype)
- left, top = self._random_left_top(ratio, h, w)
- expand_img[top:top + h, left:left + w] = img
- results['img'] = expand_img
- results['img_shape'] = expand_img.shape[:2]
-
- # expand bboxes
- if results.get('gt_bboxes', None) is not None:
- results['gt_bboxes'].translate_([left, top])
-
- # expand masks
- if results.get('gt_masks', None) is not None:
- results['gt_masks'] = results['gt_masks'].expand(
- int(h * ratio), int(w * ratio), top, left)
-
- # expand segmentation map
- if results.get('gt_seg_map', None) is not None:
- gt_seg = results['gt_seg_map']
- expand_gt_seg = np.full((int(h * ratio), int(w * ratio)),
- self.seg_ignore_label,
- dtype=gt_seg.dtype)
- expand_gt_seg[top:top + h, left:left + w] = gt_seg
- results['gt_seg_map'] = expand_gt_seg
-
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, '
- repr_str += f'ratio_range={self.ratio_range}, '
- repr_str += f'seg_ignore_label={self.seg_ignore_label}, '
- repr_str += f'prob={self.prob})'
- return repr_str
-
-
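# --- Illustrative usage sketch (not part of the deleted file): with prob=1.0 the
# image is always placed on a larger mean-filled canvas; boxes are shifted along.
import numpy as np
from mmdet.structures.bbox import HorizontalBoxes

expand = Expand(mean=(123.675, 116.28, 103.53), to_rgb=True,
                ratio_range=(2, 2), prob=1.0)
sample = dict(
    img=np.zeros((100, 100, 3), dtype=np.uint8),
    img_shape=(100, 100),
    gt_bboxes=HorizontalBoxes(np.array([[10., 10., 20., 20.]], dtype=np.float32)))
sample = expand(sample)
# The canvas is 200x200 (ratio 2); the original image sits at a random offset and
# gt_bboxes is translated by that same offset.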
-@TRANSFORMS.register_module()
-class MinIoURandomCrop(BaseTransform):
- """Random crop the image & bboxes & masks & segmentation map, the cropped
- patches have minimum IoU requirement with original image & bboxes & masks.
-
- & segmentation map, the IoU threshold is randomly selected from min_ious.
-
-
- Required Keys:
-
- - img
- - img_shape
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - gt_ignore_flags (bool) (optional)
- - gt_seg_map (np.uint8) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes
- - gt_bboxes_labels
- - gt_masks
- - gt_ignore_flags
- - gt_seg_map
-
-
- Args:
- min_ious (Sequence[float]): minimum IoU threshold for all intersections
- with bounding boxes.
- min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w,
- where a >= min_crop_size).
- bbox_clip_border (bool, optional): Whether clip the objects outside
- the border of the image. Defaults to True.
- """
-
- def __init__(self,
- min_ious: Sequence[float] = (0.1, 0.3, 0.5, 0.7, 0.9),
- min_crop_size: float = 0.3,
- bbox_clip_border: bool = True) -> None:
-
- self.min_ious = min_ious
- self.sample_mode = (1, *min_ious, 0)
- self.min_crop_size = min_crop_size
- self.bbox_clip_border = bbox_clip_border
-
- @cache_randomness
- def _random_mode(self) -> Number:
- return random.choice(self.sample_mode)
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to crop images and bounding boxes with minimum
- IoU constraint.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images and bounding boxes cropped, \
- 'img_shape' key is updated.
- """
- assert 'img' in results, '`img` is not found in results'
- assert 'gt_bboxes' in results, '`gt_bboxes` is not found in results'
- img = results['img']
- boxes = results['gt_bboxes']
- h, w, c = img.shape
- while True:
- mode = self._random_mode()
- self.mode = mode
- if mode == 1:
- return results
-
- min_iou = self.mode
- for i in range(50):
- new_w = random.uniform(self.min_crop_size * w, w)
- new_h = random.uniform(self.min_crop_size * h, h)
-
- # h / w in [0.5, 2]
- if new_h / new_w < 0.5 or new_h / new_w > 2:
- continue
-
- left = random.uniform(w - new_w)
- top = random.uniform(h - new_h)
-
- patch = np.array(
- (int(left), int(top), int(left + new_w), int(top + new_h)))
- # Line or point crop is not allowed
- if patch[2] == patch[0] or patch[3] == patch[1]:
- continue
- overlaps = boxes.overlaps(
- HorizontalBoxes(patch.reshape(-1, 4).astype(np.float32)),
- boxes).numpy().reshape(-1)
- if len(overlaps) > 0 and overlaps.min() < min_iou:
- continue
-
- # center of boxes should inside the crop img
- # only adjust boxes and instance masks when the gt is not empty
- if len(overlaps) > 0:
- # adjust boxes
- def is_center_of_bboxes_in_patch(boxes, patch):
- centers = boxes.centers.numpy()
- mask = ((centers[:, 0] > patch[0]) *
- (centers[:, 1] > patch[1]) *
- (centers[:, 0] < patch[2]) *
- (centers[:, 1] < patch[3]))
- return mask
-
- mask = is_center_of_bboxes_in_patch(boxes, patch)
- if not mask.any():
- continue
- if results.get('gt_bboxes', None) is not None:
- boxes = results['gt_bboxes']
- mask = is_center_of_bboxes_in_patch(boxes, patch)
- boxes = boxes[mask]
- boxes.translate_([-patch[0], -patch[1]])
- if self.bbox_clip_border:
- boxes.clip_(
- [patch[3] - patch[1], patch[2] - patch[0]])
- results['gt_bboxes'] = boxes
-
- # ignore_flags
- if results.get('gt_ignore_flags', None) is not None:
- results['gt_ignore_flags'] = \
- results['gt_ignore_flags'][mask]
-
- # labels
- if results.get('gt_bboxes_labels', None) is not None:
- results['gt_bboxes_labels'] = results[
- 'gt_bboxes_labels'][mask]
-
- # mask fields
- if results.get('gt_masks', None) is not None:
- results['gt_masks'] = results['gt_masks'][
- mask.nonzero()[0]].crop(patch)
- # adjust the img no matter whether the gt is empty before crop
- img = img[patch[1]:patch[3], patch[0]:patch[2]]
- results['img'] = img
- results['img_shape'] = img.shape[:2]
-
- # seg fields
- if results.get('gt_seg_map', None) is not None:
- results['gt_seg_map'] = results['gt_seg_map'][
- patch[1]:patch[3], patch[0]:patch[2]]
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(min_ious={self.min_ious}, '
- repr_str += f'min_crop_size={self.min_crop_size}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
-
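# --- Illustrative usage sketch (not part of the deleted file): the transform
# retries until it finds a patch meeting a randomly chosen IoU threshold, or
# draws mode 1, which returns the sample unchanged.
import numpy as np
from mmdet.structures.bbox import HorizontalBoxes

min_iou_crop = MinIoURandomCrop(min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
                                min_crop_size=0.3)
sample = dict(
    img=np.zeros((240, 320, 3), dtype=np.uint8),
    gt_bboxes=HorizontalBoxes(np.array([[50., 50., 150., 150.]], dtype=np.float32)),
    gt_bboxes_labels=np.array([2], dtype=np.int64))
sample = min_iou_crop(sample)
# Either the original sample comes back untouched (mode 1) or the image, boxes and
# labels are cropped to a patch whose IoU with each kept box meets the threshold.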
-@TRANSFORMS.register_module()
-class Corrupt(BaseTransform):
- """Corruption augmentation.
-
- Corruption transforms implemented based on the ``imagecorruptions``
- package.
-
- Required Keys:
-
- - img (np.uint8)
-
-
- Modified Keys:
-
- - img (np.uint8)
-
-
- Args:
- corruption (str): Corruption name.
- severity (int): The severity of corruption. Defaults to 1.
- """
-
- def __init__(self, corruption: str, severity: int = 1) -> None:
- self.corruption = corruption
- self.severity = severity
-
- def transform(self, results: dict) -> dict:
- """Call function to corrupt image.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Result dict with images corrupted.
- """
-
- if corrupt is None:
- raise RuntimeError('imagecorruptions is not installed')
- results['img'] = corrupt(
- results['img'].astype(np.uint8),
- corruption_name=self.corruption,
- severity=self.severity)
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__
- repr_str += f'(corruption={self.corruption}, '
- repr_str += f'severity={self.severity})'
- return repr_str
-
-
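# --- Illustrative usage sketch (not part of the deleted file), assuming the
# optional imagecorruptions package is installed (otherwise transform() raises):
import numpy as np

corrupt_aug = Corrupt(corruption='gaussian_noise', severity=2)
sample = dict(img=np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
sample = corrupt_aug(sample)   # sample['img'] is the corrupted uint8 image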
-@TRANSFORMS.register_module()
-@avoid_cache_randomness
-class Albu(BaseTransform):
- """Albumentation augmentation.
-
- Adds custom transformations from Albumentations library.
- Please visit `https://albumentations.readthedocs.io`
- to get more information.
-
- Required Keys:
-
- - img (np.uint8)
- - gt_bboxes (HorizontalBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
-
- Modified Keys:
-
- - img (np.uint8)
- - gt_bboxes (HorizontalBoxes[torch.float32]) (optional)
- - gt_masks (BitmapMasks | PolygonMasks) (optional)
- - img_shape (tuple)
-
- An example of ``transforms`` is as follows:
-
- .. code-block::
-
- [
- dict(
- type='ShiftScaleRotate',
- shift_limit=0.0625,
- scale_limit=0.0,
- rotate_limit=0,
- interpolation=1,
- p=0.5),
- dict(
- type='RandomBrightnessContrast',
- brightness_limit=[0.1, 0.3],
- contrast_limit=[0.1, 0.3],
- p=0.2),
- dict(type='ChannelShuffle', p=0.1),
- dict(
- type='OneOf',
- transforms=[
- dict(type='Blur', blur_limit=3, p=1.0),
- dict(type='MedianBlur', blur_limit=3, p=1.0)
- ],
- p=0.1),
- ]
-
- Args:
- transforms (list[dict]): A list of albu transformations
- bbox_params (dict, optional): Bbox_params for albumentation `Compose`
- keymap (dict, optional): Contains
- {'input key':'albumentation-style key'}
- skip_img_without_anno (bool): Whether to skip the image if no ann left
- after aug. Defaults to False.
- """
-
- def __init__(self,
- transforms: List[dict],
- bbox_params: Optional[dict] = None,
- keymap: Optional[dict] = None,
- skip_img_without_anno: bool = False) -> None:
- if Compose is None:
- raise RuntimeError('albumentations is not installed')
-
- # Args will be modified later, copying it will be safer
- transforms = copy.deepcopy(transforms)
- if bbox_params is not None:
- bbox_params = copy.deepcopy(bbox_params)
- if keymap is not None:
- keymap = copy.deepcopy(keymap)
- self.transforms = transforms
- self.filter_lost_elements = False
- self.skip_img_without_anno = skip_img_without_anno
-
- # A simple workaround to remove masks without boxes
- if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params
- and 'filter_lost_elements' in bbox_params):
- self.filter_lost_elements = True
- self.origin_label_fields = bbox_params['label_fields']
- bbox_params['label_fields'] = ['idx_mapper']
- del bbox_params['filter_lost_elements']
-
- self.bbox_params = (
- self.albu_builder(bbox_params) if bbox_params else None)
- self.aug = Compose([self.albu_builder(t) for t in self.transforms],
- bbox_params=self.bbox_params)
-
- if not keymap:
- self.keymap_to_albu = {
- 'img': 'image',
- 'gt_masks': 'masks',
- 'gt_bboxes': 'bboxes'
- }
- else:
- self.keymap_to_albu = keymap
- self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()}
-
- def albu_builder(self, cfg: dict) -> albumentations:
- """Import a module from albumentations.
-
- It inherits some of :func:`build_from_cfg` logic.
-
- Args:
- cfg (dict): Config dict. It should at least contain the key "type".
-
- Returns:
- obj: The constructed object.
- """
-
- assert isinstance(cfg, dict) and 'type' in cfg
- args = cfg.copy()
- obj_type = args.pop('type')
- if is_str(obj_type):
- if albumentations is None:
- raise RuntimeError('albumentations is not installed')
- obj_cls = getattr(albumentations, obj_type)
- elif inspect.isclass(obj_type):
- obj_cls = obj_type
- else:
- raise TypeError(
- f'type must be a str or valid type, but got {type(obj_type)}')
-
- if 'transforms' in args:
- args['transforms'] = [
- self.albu_builder(transform)
- for transform in args['transforms']
- ]
-
- return obj_cls(**args)
-
- @staticmethod
- def mapper(d: dict, keymap: dict) -> dict:
- """Dictionary mapper. Renames keys according to keymap provided.
-
- Args:
- d (dict): old dict
- keymap (dict): {'old_key':'new_key'}
- Returns:
- dict: new dict.
- """
- updated_dict = {}
- for k, v in zip(d.keys(), d.values()):
- new_k = keymap.get(k, k)
- updated_dict[new_k] = d[k]
- return updated_dict
-
- @autocast_box_type()
- def transform(self, results: dict) -> Union[dict, None]:
- """Transform function of Albu."""
- # TODO: gt_seg_map is not currently supported
- # dict to albumentations format
- results = self.mapper(results, self.keymap_to_albu)
- results, ori_masks = self._preprocess_results(results)
- results = self.aug(**results)
- results = self._postprocess_results(results, ori_masks)
- if results is None:
- return None
- # back to the original format
- results = self.mapper(results, self.keymap_back)
- results['img_shape'] = results['img'].shape[:2]
- return results
-
- def _preprocess_results(self, results: dict) -> tuple:
- """Pre-processing results to facilitate the use of Albu."""
- if 'bboxes' in results:
- # to list of boxes
- if not isinstance(results['bboxes'], HorizontalBoxes):
- raise NotImplementedError(
- 'Albu only supports horizontal boxes now')
- bboxes = results['bboxes'].numpy()
- results['bboxes'] = [x for x in bboxes]
- # add pseudo-field for filtration
- if self.filter_lost_elements:
- results['idx_mapper'] = np.arange(len(results['bboxes']))
-
- # TODO: Support mask structure in albu
- ori_masks = None
- if 'masks' in results:
- if isinstance(results['masks'], PolygonMasks):
- raise NotImplementedError(
- 'Albu only supports BitMap masks now')
- ori_masks = results['masks']
- if albumentations.__version__ < '0.5':
- results['masks'] = results['masks'].masks
- else:
- results['masks'] = [mask for mask in results['masks'].masks]
-
- return results, ori_masks
-
- def _postprocess_results(
- self,
- results: dict,
- ori_masks: Optional[Union[BitmapMasks,
- PolygonMasks]] = None) -> dict:
- """Post-processing Albu output."""
- # albumentations may return np.array or list on different versions
- if 'gt_bboxes_labels' in results and isinstance(
- results['gt_bboxes_labels'], list):
- results['gt_bboxes_labels'] = np.array(
- results['gt_bboxes_labels'], dtype=np.int64)
- if 'gt_ignore_flags' in results and isinstance(
- results['gt_ignore_flags'], list):
- results['gt_ignore_flags'] = np.array(
- results['gt_ignore_flags'], dtype=bool)
-
- if 'bboxes' in results:
- if isinstance(results['bboxes'], list):
- results['bboxes'] = np.array(
- results['bboxes'], dtype=np.float32)
- results['bboxes'] = results['bboxes'].reshape(-1, 4)
- results['bboxes'] = HorizontalBoxes(results['bboxes'])
-
- # filter label_fields
- if self.filter_lost_elements:
-
- for label in self.origin_label_fields:
- results[label] = np.array(
- [results[label][i] for i in results['idx_mapper']])
- if 'masks' in results:
- assert ori_masks is not None
- results['masks'] = np.array(
- [results['masks'][i] for i in results['idx_mapper']])
- results['masks'] = ori_masks.__class__(
- results['masks'], ori_masks.height, ori_masks.width)
-
- if (not len(results['idx_mapper'])
- and self.skip_img_without_anno):
- return None
- elif 'masks' in results:
- results['masks'] = ori_masks.__class__(results['masks'],
- ori_masks.height,
- ori_masks.width)
-
- return results
-
- def __repr__(self) -> str:
- repr_str = self.__class__.__name__ + f'(transforms={self.transforms})'
- return repr_str
-
-
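# --- Illustrative usage sketch (not part of the deleted file), assuming the
# albumentations package is installed. Keys are remapped internally to
# albumentations names ('img' -> 'image', 'gt_bboxes' -> 'bboxes'):
import numpy as np
from mmdet.structures.bbox import HorizontalBoxes

albu = Albu(
    transforms=[dict(type='RandomBrightnessContrast', p=1.0)],
    bbox_params=dict(type='BboxParams', format='pascal_voc',
                     label_fields=['gt_bboxes_labels']),
    skip_img_without_anno=True)
sample = dict(
    img=np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8),
    gt_bboxes=HorizontalBoxes(np.array([[4., 4., 32., 32.]], dtype=np.float32)),
    gt_bboxes_labels=np.array([0], dtype=np.int64))
sample = albu(sample)   # pixel-level aug; boxes come back as HorizontalBoxes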
-@TRANSFORMS.register_module()
-@avoid_cache_randomness
-class RandomCenterCropPad(BaseTransform):
- """Random center crop and random around padding for CornerNet.
-
- This operation generates a randomly cropped image from the original image
- and pads it simultaneously. Different from :class:`RandomCrop`, the output
- shape may not exactly equal ``crop_size``. We choose a random value
- from ``ratios`` and the output shape could be larger or smaller than
- ``crop_size``. The padding operation is also different from :class:`Pad`,
- here we use around padding instead of right-bottom padding.
-
- The relation between output image (padding image) and original image:
-
- .. code:: text
-
- output image
-
- +----------------------------+
- | padded area |
- +------|----------------------------|----------+
- | | cropped area | |
- | | +---------------+ | |
- | | | . center | | | original image
- | | | range | | |
- | | +---------------+ | |
- +------|----------------------------|----------+
- | padded area |
- +----------------------------+
-
- There are 5 main areas in the figure:
-
- - output image: output image of this operation, also called padding
- image in following instruction.
- - original image: input image of this operation.
- - padded area: non-intersect area of output image and original image.
- - cropped area: the overlap of output image and original image.
- - center range: a smaller area from which the random center is chosen.
- The center range is computed from ``border`` and the original image's
- shape, so that the random center is not too close to the original
- image's border.
-
- This operation also acts differently in train and test mode; the summary
- pipeline is listed below.
-
- Train pipeline:
-
- 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image
- will be ``random_ratio * crop_size``.
- 2. Choose a ``random_center`` in center range.
- 3. Generate padding image with center matches the ``random_center``.
- 4. Initialize the padding image with pixel value equals to ``mean``.
- 5. Copy the cropped area to padding image.
- 6. Refine annotations.
-
- Test pipeline:
-
- 1. Compute output shape according to ``test_pad_mode``.
- 2. Generate padding image with center matches the original image
- center.
- 3. Initialize the padding image with pixel value equals to ``mean``.
- 4. Copy the ``cropped area`` to padding image.
-
- Required Keys:
-
- - img (np.float32)
- - img_shape (tuple)
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
-
- Modified Keys:
-
- - img (np.float32)
- - img_shape (tuple)
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
-
- Args:
- crop_size (tuple, optional): expected size after crop; the final size
- will be computed according to the ratio. Requires (width, height)
- in train mode, and None in test mode.
- ratios (tuple, optional): random select a ratio from tuple and crop
- image to (crop_size[0] * ratio) * (crop_size[1] * ratio).
- Only available in train mode. Defaults to (0.9, 1.0, 1.1).
- border (int, optional): max distance from center select area to image
- border. Only available in train mode. Defaults to 128.
- mean (sequence, optional): Mean values of 3 channels.
- std (sequence, optional): Std values of 3 channels.
- to_rgb (bool, optional): Whether to convert the image from BGR to RGB.
- test_mode (bool): whether to involve random variables in the transform.
- In train mode, crop_size is fixed, and center coords and ratio are
- randomly selected from predefined lists. In test mode, crop_size
- is the image's original shape, and center coords and ratio are fixed.
- Defaults to False.
- test_pad_mode (tuple, optional): padding method and padding shape
- value, only available in test mode. Default is using
- 'logical_or' with 127 as padding shape value.
-
- - 'logical_or': final_shape = input_shape | padding_shape_value
- - 'size_divisor': final_shape = int(
- ceil(input_shape / padding_shape_value) * padding_shape_value)
-
- Defaults to ('logical_or', 127).
- test_pad_add_pix (int): Extra padding pixel in test mode.
- Defaults to 0.
- bbox_clip_border (bool): Whether clip the objects outside
- the border of the image. Defaults to True.
- """
-
- def __init__(self,
- crop_size: Optional[tuple] = None,
- ratios: Optional[tuple] = (0.9, 1.0, 1.1),
- border: Optional[int] = 128,
- mean: Optional[Sequence] = None,
- std: Optional[Sequence] = None,
- to_rgb: Optional[bool] = None,
- test_mode: bool = False,
- test_pad_mode: Optional[tuple] = ('logical_or', 127),
- test_pad_add_pix: int = 0,
- bbox_clip_border: bool = True) -> None:
- if test_mode:
- assert crop_size is None, 'crop_size must be None in test mode'
- assert ratios is None, 'ratios must be None in test mode'
- assert border is None, 'border must be None in test mode'
- assert isinstance(test_pad_mode, (list, tuple))
- assert test_pad_mode[0] in ['logical_or', 'size_divisor']
- else:
- assert isinstance(crop_size, (list, tuple))
- assert crop_size[0] > 0 and crop_size[1] > 0, (
- 'crop_size must > 0 in train mode')
- assert isinstance(ratios, (list, tuple))
- assert test_pad_mode is None, (
- 'test_pad_mode must be None in train mode')
-
- self.crop_size = crop_size
- self.ratios = ratios
- self.border = border
- # We do not set default value to mean, std and to_rgb because these
- # hyper-parameters are easy to forget but could affect the performance.
- # Please use the same setting as Normalize for performance assurance.
- assert mean is not None and std is not None and to_rgb is not None
- self.to_rgb = to_rgb
- self.input_mean = mean
- self.input_std = std
- if to_rgb:
- self.mean = mean[::-1]
- self.std = std[::-1]
- else:
- self.mean = mean
- self.std = std
- self.test_mode = test_mode
- self.test_pad_mode = test_pad_mode
- self.test_pad_add_pix = test_pad_add_pix
- self.bbox_clip_border = bbox_clip_border
-
- def _get_border(self, border, size):
- """Get final border for the target size.
-
- This function generates a ``final_border`` according to the image's shape.
- The area between ``final_border`` and ``size - final_border`` is the
- ``center range``. We randomly choose a center from the ``center range``
- so that the random center is not too close to the original image's
- border. Also, ``center range`` should be larger than 0.
-
- Args:
- border (int): The initial border, default is 128.
- size (int): The width or height of original image.
- Returns:
- int: The final border.
- """
- k = 2 * border / size
- i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k)))
- return border // i
-
- def _filter_boxes(self, patch, boxes):
- """Check whether the center of each box is in the patch.
-
- Args:
- patch (list[int]): The cropped area, [left, top, right, bottom].
- boxes (numpy array, (N x 4)): Ground truth boxes.
-
- Returns:
- mask (numpy array, (N,)): Each box is inside or outside the patch.
- """
- center = boxes.centers.numpy()
- mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * (
- center[:, 0] < patch[2]) * (
- center[:, 1] < patch[3])
- return mask
-
- def _crop_image_and_paste(self, image, center, size):
- """Crop image with a given center and size, then paste the cropped
- image to a blank image with two centers align.
-
- This function is equivalent to generating a blank image with ``size``
- as its shape, then covering it on the original image with the two
- centers (the center of the blank image and the random center of the
- original image) aligned. The overlap area is pasted from the original
- image and the outside area is filled with the ``mean pixel``.
-
- Args:
- image (np array, H x W x C): Original image.
- center (list[int]): Target crop center coord.
- size (list[int]): Target crop size. [target_h, target_w]
-
- Returns:
- cropped_img (np array, target_h x target_w x C): Cropped image.
- border (np array, 4): The distance of four border of
- ``cropped_img`` to the original image area, [top, bottom,
- left, right]
- patch (list[int]): The cropped area, [left, top, right, bottom].
- """
- center_y, center_x = center
- target_h, target_w = size
- img_h, img_w, img_c = image.shape
-
- x0 = max(0, center_x - target_w // 2)
- x1 = min(center_x + target_w // 2, img_w)
- y0 = max(0, center_y - target_h // 2)
- y1 = min(center_y + target_h // 2, img_h)
- patch = np.array((int(x0), int(y0), int(x1), int(y1)))
-
- left, right = center_x - x0, x1 - center_x
- top, bottom = center_y - y0, y1 - center_y
-
- cropped_center_y, cropped_center_x = target_h // 2, target_w // 2
- cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype)
- for i in range(img_c):
- cropped_img[:, :, i] += self.mean[i]
- y_slice = slice(cropped_center_y - top, cropped_center_y + bottom)
- x_slice = slice(cropped_center_x - left, cropped_center_x + right)
- cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :]
-
- border = np.array([
- cropped_center_y - top, cropped_center_y + bottom,
- cropped_center_x - left, cropped_center_x + right
- ],
- dtype=np.float32)
-
- return cropped_img, border, patch
-
- def _train_aug(self, results):
- """Random crop and around padding the original image.
-
- Args:
- results (dict): Image information in the augmentation pipeline.
-
- Returns:
- results (dict): The updated dict.
- """
- img = results['img']
- h, w, c = img.shape
- gt_bboxes = results['gt_bboxes']
- while True:
- scale = random.choice(self.ratios)
- new_h = int(self.crop_size[1] * scale)
- new_w = int(self.crop_size[0] * scale)
- h_border = self._get_border(self.border, h)
- w_border = self._get_border(self.border, w)
-
- for i in range(50):
- center_x = random.randint(low=w_border, high=w - w_border)
- center_y = random.randint(low=h_border, high=h - h_border)
-
- cropped_img, border, patch = self._crop_image_and_paste(
- img, [center_y, center_x], [new_h, new_w])
-
- if len(gt_bboxes) == 0:
- results['img'] = cropped_img
- results['img_shape'] = cropped_img.shape[:2]
- return results
-
- # if the image does not have a valid bbox, any crop patch is valid.
- mask = self._filter_boxes(patch, gt_bboxes)
- if not mask.any():
- continue
-
- results['img'] = cropped_img
- results['img_shape'] = cropped_img.shape[:2]
-
- x0, y0, x1, y1 = patch
-
- left_w, top_h = center_x - x0, center_y - y0
- cropped_center_x, cropped_center_y = new_w // 2, new_h // 2
-
- # crop bboxes accordingly and clip to the image boundary
- gt_bboxes = gt_bboxes[mask]
- gt_bboxes.translate_([
- cropped_center_x - left_w - x0,
- cropped_center_y - top_h - y0
- ])
- if self.bbox_clip_border:
- gt_bboxes.clip_([new_h, new_w])
- keep = gt_bboxes.is_inside([new_h, new_w]).numpy()
- gt_bboxes = gt_bboxes[keep]
-
- results['gt_bboxes'] = gt_bboxes
-
- # ignore_flags
- if results.get('gt_ignore_flags', None) is not None:
- gt_ignore_flags = results['gt_ignore_flags'][mask]
- results['gt_ignore_flags'] = \
- gt_ignore_flags[keep]
-
- # labels
- if results.get('gt_bboxes_labels', None) is not None:
- gt_labels = results['gt_bboxes_labels'][mask]
- results['gt_bboxes_labels'] = gt_labels[keep]
-
- if 'gt_masks' in results or 'gt_seg_map' in results:
- raise NotImplementedError(
- 'RandomCenterCropPad only supports bbox.')
-
- return results
-
- def _test_aug(self, results):
- """Around padding the original image without cropping.
-
- The padding mode and value are from ``test_pad_mode``.
-
- Args:
- results (dict): Image information in the augmentation pipeline.
-
- Returns:
- results (dict): The updated dict.
- """
- img = results['img']
- h, w, c = img.shape
- if self.test_pad_mode[0] in ['logical_or']:
- # self.test_pad_add_pix is only used for centernet
- target_h = (h | self.test_pad_mode[1]) + self.test_pad_add_pix
- target_w = (w | self.test_pad_mode[1]) + self.test_pad_add_pix
- elif self.test_pad_mode[0] in ['size_divisor']:
- divisor = self.test_pad_mode[1]
- target_h = int(np.ceil(h / divisor)) * divisor
- target_w = int(np.ceil(w / divisor)) * divisor
- else:
- raise NotImplementedError(
- 'RandomCenterCropPad only supports two testing pad modes: '
- 'logical_or and size_divisor.')
-
- cropped_img, border, _ = self._crop_image_and_paste(
- img, [h // 2, w // 2], [target_h, target_w])
- results['img'] = cropped_img
- results['img_shape'] = cropped_img.shape[:2]
- results['border'] = border
- return results
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- img = results['img']
- assert img.dtype == np.float32, (
- 'RandomCenterCropPad needs the input image of dtype np.float32,'
- ' please set "to_float32=True" in "LoadImageFromFile" pipeline')
- h, w, c = img.shape
- assert c == len(self.mean)
- if self.test_mode:
- return self._test_aug(results)
- else:
- return self._train_aug(results)
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(crop_size={self.crop_size}, '
- repr_str += f'ratios={self.ratios}, '
- repr_str += f'border={self.border}, '
- repr_str += f'mean={self.input_mean}, '
- repr_str += f'std={self.input_std}, '
- repr_str += f'to_rgb={self.to_rgb}, '
- repr_str += f'test_mode={self.test_mode}, '
- repr_str += f'test_pad_mode={self.test_pad_mode}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
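- # A minimal usage sketch for the transform above (assumptions: MMDetection 3.x
- # config style; the CenterNet-like mean/std/crop values below are illustrative
- # and not taken from this file). Note that the transform asserts a float32
- # input, hence to_float32=True in the loader.
- def _example_center_crop_pad_pipeline():
-     """Hypothetical helper returning a train pipeline using RandomCenterCropPad."""
-     return [
-         dict(type='LoadImageFromFile', to_float32=True),
-         dict(type='LoadAnnotations', with_bbox=True),
-         dict(
-             type='RandomCenterCropPad',
-             crop_size=(512, 512),
-             ratios=(0.6, 0.8, 1.0, 1.2, 1.4),
-             mean=[0, 0, 0],
-             std=[1, 1, 1],
-             to_rgb=True,
-             test_mode=False),
-         dict(type='PackDetInputs')
-     ]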
-
-@TRANSFORMS.register_module()
-class CutOut(BaseTransform):
- """CutOut operation.
-
- Randomly drop some regions of the image, as proposed in
- `Cutout <https://arxiv.org/abs/1708.04552>`_.
-
- Required Keys:
-
- - img
-
- Modified Keys:
-
- - img
-
- Args:
- n_holes (int or tuple[int, int]): Number of regions to be dropped.
- If it is given as a tuple, the number of holes will be randomly
- selected from the closed interval [``n_holes[0]``, ``n_holes[1]``].
- cutout_shape (tuple[int, int] or list[tuple[int, int]], optional):
- The candidate shape of dropped regions. It can be
- ``tuple[int, int]`` to use a fixed cutout shape, or
- ``list[tuple[int, int]]`` to randomly choose shape
- from the list. Defaults to None.
- cutout_ratio (tuple[float, float] or list[tuple[float, float]],
- optional): The candidate ratio of dropped regions. It can be
- ``tuple[float, float]`` to use a fixed ratio or
- ``list[tuple[float, float]]`` to randomly choose ratio
- from the list. Please note that ``cutout_shape`` and
- ``cutout_ratio`` cannot be both given at the same time.
- Defaults to None.
- fill_in (tuple[float, float, float] or tuple[int, int, int]): The value
- of pixel to fill in the dropped regions. Defaults to (0, 0, 0).
- """
-
- def __init__(
- self,
- n_holes: Union[int, Tuple[int, int]],
- cutout_shape: Optional[Union[Tuple[int, int],
- List[Tuple[int, int]]]] = None,
- cutout_ratio: Optional[Union[Tuple[float, float],
- List[Tuple[float, float]]]] = None,
- fill_in: Union[Tuple[float, float, float], Tuple[int, int,
- int]] = (0, 0, 0)
- ) -> None:
-
- assert (cutout_shape is None) ^ (cutout_ratio is None), \
- 'Either cutout_shape or cutout_ratio should be specified.'
- assert (isinstance(cutout_shape, (list, tuple))
- or isinstance(cutout_ratio, (list, tuple)))
- if isinstance(n_holes, tuple):
- assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1]
- else:
- n_holes = (n_holes, n_holes)
- self.n_holes = n_holes
- self.fill_in = fill_in
- self.with_ratio = cutout_ratio is not None
- self.candidates = cutout_ratio if self.with_ratio else cutout_shape
- if not isinstance(self.candidates, list):
- self.candidates = [self.candidates]
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Call function to drop some regions of image."""
- h, w, c = results['img'].shape
- n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1)
- for _ in range(n_holes):
- x1 = np.random.randint(0, w)
- y1 = np.random.randint(0, h)
- index = np.random.randint(0, len(self.candidates))
- if not self.with_ratio:
- cutout_w, cutout_h = self.candidates[index]
- else:
- cutout_w = int(self.candidates[index][0] * w)
- cutout_h = int(self.candidates[index][1] * h)
-
- x2 = np.clip(x1 + cutout_w, 0, w)
- y2 = np.clip(y1 + cutout_h, 0, h)
- results['img'][y1:y2, x1:x2, :] = self.fill_in
-
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(n_holes={self.n_holes}, '
- repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio
- else f'cutout_shape={self.candidates}, ')
- repr_str += f'fill_in={self.fill_in})'
- return repr_str
-
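- # A minimal standalone sketch of the transform above (assumption: calling
- # ``transform`` directly on a results dict that only carries an image; the
- # image shape and hole sizes are illustrative).
- def _example_cutout():
-     """Hypothetical helper applying CutOut to a dummy image."""
-     results = {'img': np.full((100, 100, 3), 255, dtype=np.uint8)}
-     cutout = CutOut(n_holes=(2, 4), cutout_shape=[(10, 10), (20, 20)])
-     return cutout.transform(results)['img']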
-
-@TRANSFORMS.register_module()
-class Mosaic(BaseTransform):
- """Mosaic augmentation.
-
- Given 4 images, mosaic transform combines them into
- one output image. The output image is composed of the parts from each sub-
- image.
-
- .. code:: text
-
- mosaic transform
- center_x
- +------------------------------+
- | pad | pad |
- | +-----------+ |
- | | | |
- | | image1 |--------+ |
- | | | | |
- | | | image2 | |
- center_y |----+-------------+-----------|
- | | cropped | |
- |pad | image3 | image4 |
- | | | |
- +----|-------------+-----------+
- | |
- +-------------+
-
- The mosaic transform steps are as follows:
-
- 1. Choose the mosaic center as the intersection of the 4 images.
- 2. Get the top-left image according to the index, and randomly
- sample another 3 images from the custom dataset.
- 3. Each sub-image will be cropped if it is larger than the mosaic patch.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
- - mix_results (List[dict])
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
-
- Args:
- img_scale (Sequence[int]): Image size after mosaic pipeline of single
- image. The shape order should be (width, height).
- Defaults to (640, 640).
- center_ratio_range (Sequence[float]): Center ratio range of mosaic
- output. Defaults to (0.5, 1.5).
- bbox_clip_border (bool, optional): Whether to clip the objects outside
- the border of the image. In some datasets like MOT17, the gt bboxes
- are allowed to cross the border of images. Therefore, we don't
- need to clip the gt bboxes in these cases. Defaults to True.
- pad_val (int): Pad value. Defaults to 114.
- prob (float): Probability of applying this transformation.
- Defaults to 1.0.
- """
-
- def __init__(self,
- img_scale: Tuple[int, int] = (640, 640),
- center_ratio_range: Tuple[float, float] = (0.5, 1.5),
- bbox_clip_border: bool = True,
- pad_val: float = 114.0,
- prob: float = 1.0) -> None:
- assert isinstance(img_scale, tuple)
- assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. ' \
- f'got {prob}.'
-
- log_img_scale(img_scale, skip_square=True, shape_order='wh')
- self.img_scale = img_scale
- self.center_ratio_range = center_ratio_range
- self.bbox_clip_border = bbox_clip_border
- self.pad_val = pad_val
- self.prob = prob
-
- @cache_randomness
- def get_indexes(self, dataset: BaseDataset) -> list:
- """Call function to collect indexes.
-
- Args:
- dataset (:obj:`MultiImageMixDataset`): The dataset.
-
- Returns:
- list: indexes.
- """
-
- indexes = [random.randint(0, len(dataset)) for _ in range(3)]
- return indexes
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Mosaic transform function.
-
- Args:
- results (dict): Result dict.
-
- Returns:
- dict: Updated result dict.
- """
- if random.uniform(0, 1) > self.prob:
- return results
-
- assert 'mix_results' in results
- mosaic_bboxes = []
- mosaic_bboxes_labels = []
- mosaic_ignore_flags = []
- if len(results['img'].shape) == 3:
- mosaic_img = np.full(
- (int(self.img_scale[1] * 2), int(self.img_scale[0] * 2), 3),
- self.pad_val,
- dtype=results['img'].dtype)
- else:
- mosaic_img = np.full(
- (int(self.img_scale[1] * 2), int(self.img_scale[0] * 2)),
- self.pad_val,
- dtype=results['img'].dtype)
-
- # mosaic center x, y
- center_x = int(
- random.uniform(*self.center_ratio_range) * self.img_scale[0])
- center_y = int(
- random.uniform(*self.center_ratio_range) * self.img_scale[1])
- center_position = (center_x, center_y)
-
- loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right')
- for i, loc in enumerate(loc_strs):
- if loc == 'top_left':
- results_patch = copy.deepcopy(results)
- else:
- results_patch = copy.deepcopy(results['mix_results'][i - 1])
-
- img_i = results_patch['img']
- h_i, w_i = img_i.shape[:2]
- # keep_ratio resize
- scale_ratio_i = min(self.img_scale[1] / h_i,
- self.img_scale[0] / w_i)
- img_i = mmcv.imresize(
- img_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i)))
-
- # compute the combine parameters
- paste_coord, crop_coord = self._mosaic_combine(
- loc, center_position, img_i.shape[:2][::-1])
- x1_p, y1_p, x2_p, y2_p = paste_coord
- x1_c, y1_c, x2_c, y2_c = crop_coord
-
- # crop and paste image
- mosaic_img[y1_p:y2_p, x1_p:x2_p] = img_i[y1_c:y2_c, x1_c:x2_c]
-
- # adjust coordinate
- gt_bboxes_i = results_patch['gt_bboxes']
- gt_bboxes_labels_i = results_patch['gt_bboxes_labels']
- gt_ignore_flags_i = results_patch['gt_ignore_flags']
-
- padw = x1_p - x1_c
- padh = y1_p - y1_c
- gt_bboxes_i.rescale_([scale_ratio_i, scale_ratio_i])
- gt_bboxes_i.translate_([padw, padh])
- mosaic_bboxes.append(gt_bboxes_i)
- mosaic_bboxes_labels.append(gt_bboxes_labels_i)
- mosaic_ignore_flags.append(gt_ignore_flags_i)
-
- mosaic_bboxes = mosaic_bboxes[0].cat(mosaic_bboxes, 0)
- mosaic_bboxes_labels = np.concatenate(mosaic_bboxes_labels, 0)
- mosaic_ignore_flags = np.concatenate(mosaic_ignore_flags, 0)
-
- if self.bbox_clip_border:
- mosaic_bboxes.clip_([2 * self.img_scale[1], 2 * self.img_scale[0]])
- # remove outside bboxes
- inside_inds = mosaic_bboxes.is_inside(
- [2 * self.img_scale[1], 2 * self.img_scale[0]]).numpy()
- mosaic_bboxes = mosaic_bboxes[inside_inds]
- mosaic_bboxes_labels = mosaic_bboxes_labels[inside_inds]
- mosaic_ignore_flags = mosaic_ignore_flags[inside_inds]
-
- results['img'] = mosaic_img
- results['img_shape'] = mosaic_img.shape[:2]
- results['gt_bboxes'] = mosaic_bboxes
- results['gt_bboxes_labels'] = mosaic_bboxes_labels
- results['gt_ignore_flags'] = mosaic_ignore_flags
- return results
-
- def _mosaic_combine(
- self, loc: str, center_position_xy: Sequence[float],
- img_shape_wh: Sequence[int]) -> Tuple[Tuple[int], Tuple[int]]:
- """Calculate global coordinate of mosaic image and local coordinate of
- cropped sub-image.
-
- Args:
- loc (str): Index for the sub-image, loc in ('top_left',
- 'top_right', 'bottom_left', 'bottom_right').
- center_position_xy (Sequence[float]): Mixing center for 4 images,
- (x, y).
- img_shape_wh (Sequence[int]): Width and height of sub-image
-
- Returns:
- tuple[tuple[float]]: Corresponding coordinate of pasting and
- cropping
- - paste_coord (tuple): paste corner coordinate in mosaic image.
- - crop_coord (tuple): crop corner coordinate in mosaic image.
- """
- assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right')
- if loc == 'top_left':
- # index0 to top left part of image
- x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \
- max(center_position_xy[1] - img_shape_wh[1], 0), \
- center_position_xy[0], \
- center_position_xy[1]
- crop_coord = img_shape_wh[0] - (x2 - x1), img_shape_wh[1] - (
- y2 - y1), img_shape_wh[0], img_shape_wh[1]
-
- elif loc == 'top_right':
- # index1 to top right part of image
- x1, y1, x2, y2 = center_position_xy[0], \
- max(center_position_xy[1] - img_shape_wh[1], 0), \
- min(center_position_xy[0] + img_shape_wh[0],
- self.img_scale[0] * 2), \
- center_position_xy[1]
- crop_coord = 0, img_shape_wh[1] - (y2 - y1), min(
- img_shape_wh[0], x2 - x1), img_shape_wh[1]
-
- elif loc == 'bottom_left':
- # index2 to bottom left part of image
- x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \
- center_position_xy[1], \
- center_position_xy[0], \
- min(self.img_scale[1] * 2, center_position_xy[1] +
- img_shape_wh[1])
- crop_coord = img_shape_wh[0] - (x2 - x1), 0, img_shape_wh[0], min(
- y2 - y1, img_shape_wh[1])
-
- else:
- # index3 to bottom right part of image
- x1, y1, x2, y2 = center_position_xy[0], \
- center_position_xy[1], \
- min(center_position_xy[0] + img_shape_wh[0],
- self.img_scale[0] * 2), \
- min(self.img_scale[1] * 2, center_position_xy[1] +
- img_shape_wh[1])
- crop_coord = 0, 0, min(img_shape_wh[0],
- x2 - x1), min(y2 - y1, img_shape_wh[1])
-
- paste_coord = x1, y1, x2, y2
- return paste_coord, crop_coord
-
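- # Worked example of the 'top_left' branch above (illustrative numbers): with
- # center_position_xy = (600, 500) and a resized sub-image of width 640 and
- # height 480, paste_coord = (max(600 - 640, 0), max(500 - 480, 0), 600, 500)
- # = (0, 20, 600, 500), so crop_coord = (640 - 600, 480 - 480, 640, 480)
- # = (40, 0, 640, 480): the right-most 600 x 480 part of the sub-image lands
- # against the mosaic center.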
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(img_scale={self.img_scale}, '
- repr_str += f'center_ratio_range={self.center_ratio_range}, '
- repr_str += f'pad_val={self.pad_val}, '
- repr_str += f'prob={self.prob})'
- return repr_str
-
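- # A minimal usage sketch for Mosaic (assumption: an MMDetection 3.x style
- # config where MultiImageMixDataset calls ``get_indexes`` and fills
- # ``mix_results`` before this transform runs; the RandomAffine border of
- # -img_scale/2 is the usual trick to crop the 2x-sized mosaic back down).
- def _example_mosaic_dataset_cfg():
-     """Hypothetical dataset config pairing Mosaic with MultiImageMixDataset."""
-     train_pipeline = [
-         dict(type='Mosaic', img_scale=(640, 640), pad_val=114.0),
-         dict(type='RandomAffine', scaling_ratio_range=(0.5, 1.5),
-              border=(-320, -320)),
-         dict(type='PackDetInputs')
-     ]
-     return dict(
-         type='MultiImageMixDataset',
-         # ann_file, data_prefix and other dataset fields omitted for brevity
-         dataset=dict(
-             type='CocoDataset',
-             pipeline=[
-                 dict(type='LoadImageFromFile'),
-                 dict(type='LoadAnnotations', with_bbox=True)
-             ]),
-         pipeline=train_pipeline)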
-
-@TRANSFORMS.register_module()
-class MixUp(BaseTransform):
- """MixUp data augmentation.
-
- .. code:: text
-
- mixup transform
- +------------------------------+
- | mixup image | |
- | +--------|--------+ |
- | | | | |
- |---------------+ | |
- | | | |
- | | image | |
- | | | |
- | | | |
- | |-----------------+ |
- | pad |
- +------------------------------+
-
- The mixup transform steps are as follows:
-
- 1. Another random image is picked from the dataset and embedded in
- the top-left patch (after padding and resizing).
- 2. The target of the mixup transform is the weighted average of the
- mixup image and the original image.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
- - mix_results (List[dict])
-
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
-
-
- Args:
- img_scale (Sequence[int]): Image output size after mixup pipeline.
- The shape order should be (width, height). Defaults to (640, 640).
- ratio_range (Sequence[float]): Scale ratio of mixup image.
- Defaults to (0.5, 1.5).
- flip_ratio (float): Horizontal flip ratio of mixup image.
- Defaults to 0.5.
- pad_val (int): Pad value. Defaults to 114.
- max_iters (int): The maximum number of iterations. If the number of
- iterations is greater than `max_iters`, but gt_bbox is still
- empty, then the iteration is terminated. Defaults to 15.
- bbox_clip_border (bool, optional): Whether to clip the objects outside
- the border of the image. In some datasets like MOT17, the gt bboxes
- are allowed to cross the border of images. Therefore, we don't
- need to clip the gt bboxes in these cases. Defaults to True.
- """
-
- def __init__(self,
- img_scale: Tuple[int, int] = (640, 640),
- ratio_range: Tuple[float, float] = (0.5, 1.5),
- flip_ratio: float = 0.5,
- pad_val: float = 114.0,
- max_iters: int = 15,
- bbox_clip_border: bool = True) -> None:
- assert isinstance(img_scale, tuple)
- log_img_scale(img_scale, skip_square=True, shape_order='wh')
- self.dynamic_scale = img_scale
- self.ratio_range = ratio_range
- self.flip_ratio = flip_ratio
- self.pad_val = pad_val
- self.max_iters = max_iters
- self.bbox_clip_border = bbox_clip_border
-
- @cache_randomness
- def get_indexes(self, dataset: BaseDataset) -> int:
- """Call function to collect indexes.
-
- Args:
- dataset (:obj:`MultiImageMixDataset`): The dataset.
-
- Returns:
- int: index.
- """
-
- for i in range(self.max_iters):
- index = random.randint(0, len(dataset))
- gt_bboxes_i = dataset[index]['gt_bboxes']
- if len(gt_bboxes_i) != 0:
- break
-
- return index
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """MixUp transform function.
-
- Args:
- results (dict): Result dict.
-
- Returns:
- dict: Updated result dict.
- """
-
- assert 'mix_results' in results
- assert len(
- results['mix_results']) == 1, 'MixUp only supports 2 images now!'
-
- if results['mix_results'][0]['gt_bboxes'].shape[0] == 0:
- # empty bbox
- return results
-
- retrieve_results = results['mix_results'][0]
- retrieve_img = retrieve_results['img']
-
- jit_factor = random.uniform(*self.ratio_range)
- is_flip = random.uniform(0, 1) > self.flip_ratio
-
- if len(retrieve_img.shape) == 3:
- out_img = np.ones(
- (self.dynamic_scale[1], self.dynamic_scale[0], 3),
- dtype=retrieve_img.dtype) * self.pad_val
- else:
- out_img = np.ones(
- self.dynamic_scale[::-1],
- dtype=retrieve_img.dtype) * self.pad_val
-
- # 1. keep_ratio resize
- scale_ratio = min(self.dynamic_scale[1] / retrieve_img.shape[0],
- self.dynamic_scale[0] / retrieve_img.shape[1])
- retrieve_img = mmcv.imresize(
- retrieve_img, (int(retrieve_img.shape[1] * scale_ratio),
- int(retrieve_img.shape[0] * scale_ratio)))
-
- # 2. paste
- out_img[:retrieve_img.shape[0], :retrieve_img.shape[1]] = retrieve_img
-
- # 3. scale jit
- scale_ratio *= jit_factor
- out_img = mmcv.imresize(out_img, (int(out_img.shape[1] * jit_factor),
- int(out_img.shape[0] * jit_factor)))
-
- # 4. flip
- if is_flip:
- out_img = out_img[:, ::-1, :]
-
- # 5. random crop
- ori_img = results['img']
- origin_h, origin_w = out_img.shape[:2]
- target_h, target_w = ori_img.shape[:2]
- padded_img = np.ones((max(origin_h, target_h), max(
- origin_w, target_w), 3)) * self.pad_val
- padded_img = padded_img.astype(np.uint8)
- padded_img[:origin_h, :origin_w] = out_img
-
- x_offset, y_offset = 0, 0
- if padded_img.shape[0] > target_h:
- y_offset = random.randint(0, padded_img.shape[0] - target_h)
- if padded_img.shape[1] > target_w:
- x_offset = random.randint(0, padded_img.shape[1] - target_w)
- padded_cropped_img = padded_img[y_offset:y_offset + target_h,
- x_offset:x_offset + target_w]
-
- # 6. adjust bbox
- retrieve_gt_bboxes = retrieve_results['gt_bboxes']
- retrieve_gt_bboxes.rescale_([scale_ratio, scale_ratio])
- if self.bbox_clip_border:
- retrieve_gt_bboxes.clip_([origin_h, origin_w])
-
- if is_flip:
- retrieve_gt_bboxes.flip_([origin_h, origin_w],
- direction='horizontal')
-
- # 7. filter
- cp_retrieve_gt_bboxes = retrieve_gt_bboxes.clone()
- cp_retrieve_gt_bboxes.translate_([-x_offset, -y_offset])
- if self.bbox_clip_border:
- cp_retrieve_gt_bboxes.clip_([target_h, target_w])
-
- # 8. mix up
- ori_img = ori_img.astype(np.float32)
- mixup_img = 0.5 * ori_img + 0.5 * padded_cropped_img.astype(np.float32)
-
- retrieve_gt_bboxes_labels = retrieve_results['gt_bboxes_labels']
- retrieve_gt_ignore_flags = retrieve_results['gt_ignore_flags']
-
- mixup_gt_bboxes = cp_retrieve_gt_bboxes.cat(
- (results['gt_bboxes'], cp_retrieve_gt_bboxes), dim=0)
- mixup_gt_bboxes_labels = np.concatenate(
- (results['gt_bboxes_labels'], retrieve_gt_bboxes_labels), axis=0)
- mixup_gt_ignore_flags = np.concatenate(
- (results['gt_ignore_flags'], retrieve_gt_ignore_flags), axis=0)
-
- # remove outside bbox
- inside_inds = mixup_gt_bboxes.is_inside([target_h, target_w]).numpy()
- mixup_gt_bboxes = mixup_gt_bboxes[inside_inds]
- mixup_gt_bboxes_labels = mixup_gt_bboxes_labels[inside_inds]
- mixup_gt_ignore_flags = mixup_gt_ignore_flags[inside_inds]
-
- results['img'] = mixup_img.astype(np.uint8)
- results['img_shape'] = mixup_img.shape[:2]
- results['gt_bboxes'] = mixup_gt_bboxes
- results['gt_bboxes_labels'] = mixup_gt_bboxes_labels
- results['gt_ignore_flags'] = mixup_gt_ignore_flags
-
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(dynamic_scale={self.dynamic_scale}, '
- repr_str += f'ratio_range={self.ratio_range}, '
- repr_str += f'flip_ratio={self.flip_ratio}, '
- repr_str += f'pad_val={self.pad_val}, '
- repr_str += f'max_iters={self.max_iters}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
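- # Worked example of the resize/jitter math above (illustrative numbers): with
- # dynamic_scale = (640, 640) and a retrieved image of shape (480, 640, 3),
- # scale_ratio = min(640 / 480, 640 / 640) = 1.0, so the paste keeps the
- # original size; a jit_factor of 1.2 then resizes out_img from (640, 640) to
- # (768, 768) before the random crop back to the ori_img shape, and the final
- # pixels are blended as 0.5 * ori_img + 0.5 * padded_cropped_img.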
-
-@TRANSFORMS.register_module()
-class RandomAffine(BaseTransform):
- """Random affine transform data augmentation.
-
- This operation randomly generates affine transform matrix which including
- rotation, translation, shear and scaling transforms.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
-
- Args:
- max_rotate_degree (float): Maximum degrees of rotation transform.
- Defaults to 10.
- max_translate_ratio (float): Maximum ratio of translation.
- Defaults to 0.1.
- scaling_ratio_range (tuple[float]): Min and max ratio of
- scaling transform. Defaults to (0.5, 1.5).
- max_shear_degree (float): Maximum degrees of shear
- transform. Defaults to 2.
- border (tuple[int]): Distance from width and height sides of input
- image to adjust output shape. Only used in mosaic dataset.
- Defaults to (0, 0).
- border_val (tuple[int]): Border padding values of 3 channels.
- Defaults to (114, 114, 114).
- bbox_clip_border (bool, optional): Whether to clip the objects outside
- the border of the image. In some datasets like MOT17, the gt bboxes
- are allowed to cross the border of images. Therefore, we don't
- need to clip the gt bboxes in these cases. Defaults to True.
- """
-
- def __init__(self,
- max_rotate_degree: float = 10.0,
- max_translate_ratio: float = 0.1,
- scaling_ratio_range: Tuple[float, float] = (0.5, 1.5),
- max_shear_degree: float = 2.0,
- border: Tuple[int, int] = (0, 0),
- border_val: Tuple[int, int, int] = (114, 114, 114),
- bbox_clip_border: bool = True) -> None:
- assert 0 <= max_translate_ratio <= 1
- assert scaling_ratio_range[0] <= scaling_ratio_range[1]
- assert scaling_ratio_range[0] > 0
- self.max_rotate_degree = max_rotate_degree
- self.max_translate_ratio = max_translate_ratio
- self.scaling_ratio_range = scaling_ratio_range
- self.max_shear_degree = max_shear_degree
- self.border = border
- self.border_val = border_val
- self.bbox_clip_border = bbox_clip_border
-
- @cache_randomness
- def _get_random_homography_matrix(self, height, width):
- # Rotation
- rotation_degree = random.uniform(-self.max_rotate_degree,
- self.max_rotate_degree)
- rotation_matrix = self._get_rotation_matrix(rotation_degree)
-
- # Scaling
- scaling_ratio = random.uniform(self.scaling_ratio_range[0],
- self.scaling_ratio_range[1])
- scaling_matrix = self._get_scaling_matrix(scaling_ratio)
-
- # Shear
- x_degree = random.uniform(-self.max_shear_degree,
- self.max_shear_degree)
- y_degree = random.uniform(-self.max_shear_degree,
- self.max_shear_degree)
- shear_matrix = self._get_shear_matrix(x_degree, y_degree)
-
- # Translation
- trans_x = random.uniform(-self.max_translate_ratio,
- self.max_translate_ratio) * width
- trans_y = random.uniform(-self.max_translate_ratio,
- self.max_translate_ratio) * height
- translate_matrix = self._get_translation_matrix(trans_x, trans_y)
-
- warp_matrix = (
- translate_matrix @ shear_matrix @ rotation_matrix @ scaling_matrix)
- return warp_matrix
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- img = results['img']
- height = img.shape[0] + self.border[1] * 2
- width = img.shape[1] + self.border[0] * 2
-
- warp_matrix = self._get_random_homography_matrix(height, width)
-
- img = cv2.warpPerspective(
- img,
- warp_matrix,
- dsize=(width, height),
- borderValue=self.border_val)
- results['img'] = img
- results['img_shape'] = img.shape[:2]
-
- bboxes = results['gt_bboxes']
- num_bboxes = len(bboxes)
- if num_bboxes:
- bboxes.project_(warp_matrix)
- if self.bbox_clip_border:
- bboxes.clip_([height, width])
- # remove outside bbox
- valid_index = bboxes.is_inside([height, width]).numpy()
- results['gt_bboxes'] = bboxes[valid_index]
- results['gt_bboxes_labels'] = results['gt_bboxes_labels'][
- valid_index]
- results['gt_ignore_flags'] = results['gt_ignore_flags'][
- valid_index]
-
- if 'gt_masks' in results:
- raise NotImplementedError('RandomAffine only supports bbox.')
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(max_rotate_degree={self.max_rotate_degree}, '
- repr_str += f'max_translate_ratio={self.max_translate_ratio}, '
- repr_str += f'scaling_ratio_range={self.scaling_ratio_range}, '
- repr_str += f'max_shear_degree={self.max_shear_degree}, '
- repr_str += f'border={self.border}, '
- repr_str += f'border_val={self.border_val}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border})'
- return repr_str
-
- @staticmethod
- def _get_rotation_matrix(rotate_degrees: float) -> np.ndarray:
- radian = math.radians(rotate_degrees)
- rotation_matrix = np.array(
- [[np.cos(radian), -np.sin(radian), 0.],
- [np.sin(radian), np.cos(radian), 0.], [0., 0., 1.]],
- dtype=np.float32)
- return rotation_matrix
-
- @staticmethod
- def _get_scaling_matrix(scale_ratio: float) -> np.ndarray:
- scaling_matrix = np.array(
- [[scale_ratio, 0., 0.], [0., scale_ratio, 0.], [0., 0., 1.]],
- dtype=np.float32)
- return scaling_matrix
-
- @staticmethod
- def _get_shear_matrix(x_shear_degrees: float,
- y_shear_degrees: float) -> np.ndarray:
- x_radian = math.radians(x_shear_degrees)
- y_radian = math.radians(y_shear_degrees)
- shear_matrix = np.array([[1, np.tan(x_radian), 0.],
- [np.tan(y_radian), 1, 0.], [0., 0., 1.]],
- dtype=np.float32)
- return shear_matrix
-
- @staticmethod
- def _get_translation_matrix(x: float, y: float) -> np.ndarray:
- translation_matrix = np.array([[1, 0., x], [0., 1, y], [0., 0., 1.]],
- dtype=np.float32)
- return translation_matrix
-
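- # A minimal numeric sketch of the matrix composition above (illustrative
- # values, not from any config). The product order means a point is scaled
- # first, then rotated, sheared and finally translated.
- def _example_affine_warp():
-     """Hypothetical helper mapping one pixel through a fixed warp matrix."""
-     rot = RandomAffine._get_rotation_matrix(10.0)
-     scale = RandomAffine._get_scaling_matrix(1.2)
-     shear = RandomAffine._get_shear_matrix(2.0, 2.0)
-     trans = RandomAffine._get_translation_matrix(5.0, -3.0)
-     warp = trans @ shear @ rot @ scale
-     # homogeneous coordinates; the last row of warp is (0, 0, 1)
-     return (warp @ np.array([100.0, 50.0, 1.0], dtype=np.float32))[:2]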
-
-@TRANSFORMS.register_module()
-class YOLOXHSVRandomAug(BaseTransform):
- """Apply HSV augmentation to image sequentially. It is referenced from
- https://github.com/Megvii-
- BaseDetection/YOLOX/blob/main/yolox/data/data_augment.py#L21.
-
- Required Keys:
-
- - img
-
- Modified Keys:
-
- - img
-
- Args:
- hue_delta (int): delta of hue. Defaults to 5.
- saturation_delta (int): delta of saturation. Defaults to 30.
- value_delta (int): delta of value. Defaults to 30.
- """
-
- def __init__(self,
- hue_delta: int = 5,
- saturation_delta: int = 30,
- value_delta: int = 30) -> None:
- self.hue_delta = hue_delta
- self.saturation_delta = saturation_delta
- self.value_delta = value_delta
-
- @cache_randomness
- def _get_hsv_gains(self):
- hsv_gains = np.random.uniform(-1, 1, 3) * [
- self.hue_delta, self.saturation_delta, self.value_delta
- ]
- # random selection of h, s, v
- hsv_gains *= np.random.randint(0, 2, 3)
- # prevent overflow
- hsv_gains = hsv_gains.astype(np.int16)
- return hsv_gains
-
- def transform(self, results: dict) -> dict:
- img = results['img']
- hsv_gains = self._get_hsv_gains()
- img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16)
-
- img_hsv[..., 0] = (img_hsv[..., 0] + hsv_gains[0]) % 180
- img_hsv[..., 1] = np.clip(img_hsv[..., 1] + hsv_gains[1], 0, 255)
- img_hsv[..., 2] = np.clip(img_hsv[..., 2] + hsv_gains[2], 0, 255)
- cv2.cvtColor(img_hsv.astype(img.dtype), cv2.COLOR_HSV2BGR, dst=img)
-
- results['img'] = img
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(hue_delta={self.hue_delta}, '
- repr_str += f'saturation_delta={self.saturation_delta}, '
- repr_str += f'value_delta={self.value_delta})'
- return repr_str
-
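- # Worked example of the gain arithmetic above (illustrative): with
- # hue_delta=5, a sampled hue gain of +4 moves an OpenCV hue of 178 to
- # (178 + 4) % 180 = 2 (hue wraps around), while saturation and value gains
- # are clipped to [0, 255] instead of wrapping.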
-
-@TRANSFORMS.register_module()
-class CopyPaste(BaseTransform):
- """Simple Copy-Paste is a Strong Data Augmentation Method for Instance
- Segmentation. The simple copy-paste transform steps are as follows:
-
- 1. The destination image is already resized with aspect ratio kept,
- cropped and padded.
- 2. Randomly select a source image, which is also already resized
- with aspect ratio kept, cropped and padded in a similar way
- as the destination image.
- 3. Randomly select some objects from the source image.
- 4. Paste these source objects to the destination image directly,
- since the source and destination images have the same size.
- 5. Update the object masks of the destination image, since some original
- objects may be occluded.
- 6. Generate bboxes from the updated destination masks, filter out
- objects that are totally occluded, and adjust bboxes that are
- partly occluded.
- 7. Append selected source bboxes, masks, and labels.
-
- Required Keys:
-
- - img
- - gt_bboxes (BaseBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
- - gt_masks (BitmapMasks) (optional)
-
- Modified Keys:
-
- - img
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
- - gt_masks (optional)
-
- Args:
- max_num_pasted (int): The maximum number of pasted objects.
- Defaults to 100.
- bbox_occluded_thr (int): The threshold of occluded bbox.
- Defaults to 10.
- mask_occluded_thr (int): The threshold of occluded mask.
- Defaults to 300.
- selected (bool): Whether select objects or not. If select is False,
- all objects of the source image will be pasted to the
- destination image.
- Defaults to True.
- """
-
- def __init__(
- self,
- max_num_pasted: int = 100,
- bbox_occluded_thr: int = 10,
- mask_occluded_thr: int = 300,
- selected: bool = True,
- ) -> None:
- self.max_num_pasted = max_num_pasted
- self.bbox_occluded_thr = bbox_occluded_thr
- self.mask_occluded_thr = mask_occluded_thr
- self.selected = selected
-
- @cache_randomness
- def get_indexes(self, dataset: BaseDataset) -> int:
- """Call function to collect indexes.s.
-
- Args:
- dataset (:obj:`MultiImageMixDataset`): The dataset.
- Returns:
- list: Indexes.
- """
- return random.randint(0, len(dataset))
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to make a copy-paste of image.
-
- Args:
- results (dict): Result dict.
- Returns:
- dict: Result dict with copy-paste transformed.
- """
-
- assert 'mix_results' in results
- num_images = len(results['mix_results'])
- assert num_images == 1, \
- f'CopyPaste only supports processing 2 images, got {num_images}'
- if self.selected:
- selected_results = self._select_object(results['mix_results'][0])
- else:
- selected_results = results['mix_results'][0]
- return self._copy_paste(results, selected_results)
-
- @cache_randomness
- def _get_selected_inds(self, num_bboxes: int) -> np.ndarray:
- max_num_pasted = min(num_bboxes + 1, self.max_num_pasted)
- num_pasted = np.random.randint(0, max_num_pasted)
- return np.random.choice(num_bboxes, size=num_pasted, replace=False)
-
- def _select_object(self, results: dict) -> dict:
- """Select some objects from the source results."""
- bboxes = results['gt_bboxes']
- labels = results['gt_bboxes_labels']
- masks = results['gt_masks']
- ignore_flags = results['gt_ignore_flags']
-
- selected_inds = self._get_selected_inds(bboxes.shape[0])
-
- selected_bboxes = bboxes[selected_inds]
- selected_labels = labels[selected_inds]
- selected_masks = masks[selected_inds]
- selected_ignore_flags = ignore_flags[selected_inds]
-
- results['gt_bboxes'] = selected_bboxes
- results['gt_bboxes_labels'] = selected_labels
- results['gt_masks'] = selected_masks
- results['gt_ignore_flags'] = selected_ignore_flags
- return results
-
- def _copy_paste(self, dst_results: dict, src_results: dict) -> dict:
- """CopyPaste transform function.
-
- Args:
- dst_results (dict): Result dict of the destination image.
- src_results (dict): Result dict of the source image.
- Returns:
- dict: Updated result dict.
- """
- dst_img = dst_results['img']
- dst_bboxes = dst_results['gt_bboxes']
- dst_labels = dst_results['gt_bboxes_labels']
- dst_masks = dst_results['gt_masks']
- dst_ignore_flags = dst_results['gt_ignore_flags']
-
- src_img = src_results['img']
- src_bboxes = src_results['gt_bboxes']
- src_labels = src_results['gt_bboxes_labels']
- src_masks = src_results['gt_masks']
- src_ignore_flags = src_results['gt_ignore_flags']
-
- if len(src_bboxes) == 0:
- return dst_results
-
- # update masks and generate bboxes from updated masks
- composed_mask = np.where(np.any(src_masks.masks, axis=0), 1, 0)
- updated_dst_masks = self._get_updated_masks(dst_masks, composed_mask)
- updated_dst_bboxes = updated_dst_masks.get_bboxes(type(dst_bboxes))
- assert len(updated_dst_bboxes) == len(updated_dst_masks)
-
- # filter totally occluded objects
- l1_distance = (updated_dst_bboxes.tensor - dst_bboxes.tensor).abs()
- bboxes_inds = (l1_distance <= self.bbox_occluded_thr).all(
- dim=-1).numpy()
- masks_inds = updated_dst_masks.masks.sum(
- axis=(1, 2)) > self.mask_occluded_thr
- valid_inds = bboxes_inds | masks_inds
-
- # Paste source objects to destination image directly
- img = dst_img * (1 - composed_mask[..., np.newaxis]
- ) + src_img * composed_mask[..., np.newaxis]
- bboxes = src_bboxes.cat([updated_dst_bboxes[valid_inds], src_bboxes])
- labels = np.concatenate([dst_labels[valid_inds], src_labels])
- masks = np.concatenate(
- [updated_dst_masks.masks[valid_inds], src_masks.masks])
- ignore_flags = np.concatenate(
- [dst_ignore_flags[valid_inds], src_ignore_flags])
-
- dst_results['img'] = img
- dst_results['gt_bboxes'] = bboxes
- dst_results['gt_bboxes_labels'] = labels
- dst_results['gt_masks'] = BitmapMasks(masks, masks.shape[1],
- masks.shape[2])
- dst_results['gt_ignore_flags'] = ignore_flags
-
- return dst_results
-
- def _get_updated_masks(self, masks: BitmapMasks,
- composed_mask: np.ndarray) -> BitmapMasks:
- """Update masks with composed mask."""
- assert masks.masks.shape[-2:] == composed_mask.shape[-2:], \
- 'Cannot compare two arrays of different size'
- masks.masks = np.where(composed_mask, 0, masks.masks)
- return masks
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(max_num_pasted={self.max_num_pasted}, '
- repr_str += f'bbox_occluded_thr={self.bbox_occluded_thr}, '
- repr_str += f'mask_occluded_thr={self.mask_occluded_thr}, '
- repr_str += f'selected={self.selected})'
- return repr_str
-
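- # Worked example of the occlusion filter above (illustrative numbers, using
- # the default thresholds): a destination box whose corners each move by at
- # most 10 px after the source objects are pasted (bbox_occluded_thr) is kept,
- # as is any box whose updated mask still covers more than 300 px
- # (mask_occluded_thr); only objects failing both checks are dropped as fully
- # occluded.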
-
-@TRANSFORMS.register_module()
-class RandomErasing(BaseTransform):
- """RandomErasing operation.
-
- Random Erasing randomly selects a rectangle region
- in an image and erases its pixels with random values.
- `RandomErasing <https://arxiv.org/abs/1708.04896>`_.
-
- Required Keys:
-
- - img
- - gt_bboxes (HorizontalBoxes[torch.float32]) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
- - gt_masks (BitmapMasks) (optional)
-
- Modified Keys:
- - img
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
- - gt_masks (optional)
-
- Args:
- n_patches (int or tuple[int, int]): Number of regions to be dropped.
- If it is given as a tuple, number of patches will be randomly
- selected from the closed interval [``n_patches[0]``,
- ``n_patches[1]``].
- ratio (float or tuple[float, float]): The ratio of erased regions.
- It can be ``float`` to use a fixed ratio or ``tuple[float, float]``
- to randomly choose ratio from the interval.
- squared (bool): Whether to erase square region. Defaults to True.
- bbox_erased_thr (float): The threshold for the maximum area proportion
- of the bbox to be erased. When the proportion of the area where the
- bbox is erased is greater than the threshold, the bbox will be
- removed. Defaults to 0.9.
- img_border_value (int or float or tuple): The filled values for
- image border. If float, the same fill value will be used for
- all the three channels of image. If tuple, it should be 3 elements.
- Defaults to 128.
- mask_border_value (int): The fill value used for masks. Defaults to 0.
- seg_ignore_label (int): The fill value used for segmentation map.
- Note this value must equals ``ignore_label`` in ``semantic_head``
- of the corresponding config. Defaults to 255.
- """
-
- def __init__(
- self,
- n_patches: Union[int, Tuple[int, int]],
- ratio: Union[float, Tuple[float, float]],
- squared: bool = True,
- bbox_erased_thr: float = 0.9,
- img_border_value: Union[int, float, tuple] = 128,
- mask_border_value: int = 0,
- seg_ignore_label: int = 255,
- ) -> None:
- if isinstance(n_patches, tuple):
- assert len(n_patches) == 2 and 0 <= n_patches[0] < n_patches[1]
- else:
- n_patches = (n_patches, n_patches)
- if isinstance(ratio, tuple):
- assert len(ratio) == 2 and 0 <= ratio[0] < ratio[1] <= 1
- else:
- ratio = (ratio, ratio)
-
- self.n_patches = n_patches
- self.ratio = ratio
- self.squared = squared
- self.bbox_erased_thr = bbox_erased_thr
- self.img_border_value = img_border_value
- self.mask_border_value = mask_border_value
- self.seg_ignore_label = seg_ignore_label
-
- @cache_randomness
- def _get_patches(self, img_shape: Tuple[int, int]) -> List[list]:
- """Get patches for random erasing."""
- patches = []
- n_patches = np.random.randint(self.n_patches[0], self.n_patches[1] + 1)
- for _ in range(n_patches):
- if self.squared:
- ratio = np.random.random() * (self.ratio[1] -
- self.ratio[0]) + self.ratio[0]
- ratio = (ratio, ratio)
- else:
- ratio = (np.random.random() * (self.ratio[1] - self.ratio[0]) +
- self.ratio[0], np.random.random() *
- (self.ratio[1] - self.ratio[0]) + self.ratio[0])
- ph, pw = int(img_shape[0] * ratio[0]), int(img_shape[1] * ratio[1])
- px1, py1 = np.random.randint(0,
- img_shape[1] - pw), np.random.randint(
- 0, img_shape[0] - ph)
- px2, py2 = px1 + pw, py1 + ph
- patches.append([px1, py1, px2, py2])
- return np.array(patches)
-
- def _transform_img(self, results: dict, patches: List[list]) -> None:
- """Random erasing the image."""
- for patch in patches:
- px1, py1, px2, py2 = patch
- results['img'][py1:py2, px1:px2, :] = self.img_border_value
-
- def _transform_bboxes(self, results: dict, patches: List[list]) -> None:
- """Random erasing the bboxes."""
- bboxes = results['gt_bboxes']
- # TODO: unify the logic by using operators in BaseBoxes.
- assert isinstance(bboxes, HorizontalBoxes)
- bboxes = bboxes.numpy()
- left_top = np.maximum(bboxes[:, None, :2], patches[:, :2])
- right_bottom = np.minimum(bboxes[:, None, 2:], patches[:, 2:])
- wh = np.maximum(right_bottom - left_top, 0)
- inter_areas = wh[:, :, 0] * wh[:, :, 1]
- bbox_areas = (bboxes[:, 2] - bboxes[:, 0]) * (
- bboxes[:, 3] - bboxes[:, 1])
- bboxes_erased_ratio = inter_areas.sum(-1) / (bbox_areas + 1e-7)
- valid_inds = bboxes_erased_ratio < self.bbox_erased_thr
- results['gt_bboxes'] = HorizontalBoxes(bboxes[valid_inds])
- results['gt_bboxes_labels'] = results['gt_bboxes_labels'][valid_inds]
- results['gt_ignore_flags'] = results['gt_ignore_flags'][valid_inds]
- if results.get('gt_masks', None) is not None:
- results['gt_masks'] = results['gt_masks'][valid_inds]
-
- def _transform_masks(self, results: dict, patches: List[list]) -> None:
- """Random erasing the masks."""
- for patch in patches:
- px1, py1, px2, py2 = patch
- results['gt_masks'].masks[:, py1:py2,
- px1:px2] = self.mask_border_value
-
- def _transform_seg(self, results: dict, patches: List[list]) -> None:
- """Random erasing the segmentation map."""
- for patch in patches:
- px1, py1, px2, py2 = patch
- results['gt_seg_map'][py1:py2, px1:px2] = self.seg_ignore_label
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Transform function to erase some regions of image."""
- patches = self._get_patches(results['img_shape'])
- self._transform_img(results, patches)
- if results.get('gt_bboxes', None) is not None:
- self._transform_bboxes(results, patches)
- if results.get('gt_masks', None) is not None:
- self._transform_masks(results, patches)
- if results.get('gt_seg_map', None) is not None:
- self._transform_seg(results, patches)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(n_patches={self.n_patches}, '
- repr_str += f'ratio={self.ratio}, '
- repr_str += f'squared={self.squared}, '
- repr_str += f'bbox_erased_thr={self.bbox_erased_thr}, '
- repr_str += f'img_border_value={self.img_border_value}, '
- repr_str += f'mask_border_value={self.mask_border_value}, '
- repr_str += f'seg_ignore_label={self.seg_ignore_label})'
- return repr_str
-
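- # Worked example of the patch sizing above (illustrative): with squared=True
- # a single ratio of 0.3 is used for both sides, so on a (480, 640) image the
- # patch is ph = int(480 * 0.3) = 144 by pw = int(640 * 0.3) = 192 pixels;
- # any gt box whose erased-area fraction reaches bbox_erased_thr (0.9 by
- # default) is removed together with its label, ignore flag and mask.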
-
-@TRANSFORMS.register_module()
-class CachedMosaic(Mosaic):
- """Cached mosaic augmentation.
-
- Cached mosaic transform will randomly select images from the cache
- and combine them into one output image.
-
- .. code:: text
-
- mosaic transform
- center_x
- +------------------------------+
- | pad | pad |
- | +-----------+ |
- | | | |
- | | image1 |--------+ |
- | | | | |
- | | | image2 | |
- center_y |----+-------------+-----------|
- | | cropped | |
- |pad | image3 | image4 |
- | | | |
- +----|-------------+-----------+
- | |
- +-------------+
-
- The cached mosaic transform steps are as follows:
-
- 1. Append the results from the last transform into the cache.
- 2. Choose the mosaic center as the intersection of the 4 images.
- 3. Get the top-left image according to the index, and randomly
- sample another 3 images from the result cache.
- 4. Each sub-image will be cropped if it is larger than the mosaic patch.
-
- Required Keys:
-
- - img
- - gt_bboxes (np.float32) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
-
- Args:
- img_scale (Sequence[int]): Image size after mosaic pipeline of single
- image. The shape order should be (width, height).
- Defaults to (640, 640).
- center_ratio_range (Sequence[float]): Center ratio range of mosaic
- output. Defaults to (0.5, 1.5).
- bbox_clip_border (bool, optional): Whether to clip the objects outside
- the border of the image. In some datasets like MOT17, the gt bboxes
- are allowed to cross the border of images. Therefore, we don't
- need to clip the gt bboxes in these cases. Defaults to True.
- pad_val (int): Pad value. Defaults to 114.
- prob (float): Probability of applying this transformation.
- Defaults to 1.0.
- max_cached_images (int): The maximum length of the cache. The larger
- the cache, the stronger the randomness of this transform. As a
- rule of thumb, providing 10 caches for each image suffices for
- randomness. Defaults to 40.
- random_pop (bool): Whether to randomly pop a result from the cache
- when the cache is full. If set to False, use FIFO popping method.
- Defaults to True.
- """
-
- def __init__(self,
- *args,
- max_cached_images: int = 40,
- random_pop: bool = True,
- **kwargs) -> None:
- super().__init__(*args, **kwargs)
- self.results_cache = []
- self.random_pop = random_pop
- assert max_cached_images >= 4, 'The length of cache must >= 4, ' \
- f'but got {max_cached_images}.'
- self.max_cached_images = max_cached_images
-
- @cache_randomness
- def get_indexes(self, cache: list) -> list:
- """Call function to collect indexes.
-
- Args:
- cache (list): The results cache.
-
- Returns:
- list: indexes.
- """
-
- indexes = [random.randint(0, len(cache) - 1) for _ in range(3)]
- return indexes
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """Mosaic transform function.
-
- Args:
- results (dict): Result dict.
-
- Returns:
- dict: Updated result dict.
- """
- # cache and pop images
- self.results_cache.append(copy.deepcopy(results))
- if len(self.results_cache) > self.max_cached_images:
- if self.random_pop:
- index = random.randint(0, len(self.results_cache) - 1)
- else:
- index = 0
- self.results_cache.pop(index)
-
- if len(self.results_cache) <= 4:
- return results
-
- if random.uniform(0, 1) > self.prob:
- return results
- indices = self.get_indexes(self.results_cache)
- mix_results = [copy.deepcopy(self.results_cache[i]) for i in indices]
-
- # TODO: refactor mosaic to reuse these code.
- mosaic_bboxes = []
- mosaic_bboxes_labels = []
- mosaic_ignore_flags = []
- mosaic_masks = []
- with_mask = True if 'gt_masks' in results else False
-
- if len(results['img'].shape) == 3:
- mosaic_img = np.full(
- (int(self.img_scale[1] * 2), int(self.img_scale[0] * 2), 3),
- self.pad_val,
- dtype=results['img'].dtype)
- else:
- mosaic_img = np.full(
- (int(self.img_scale[1] * 2), int(self.img_scale[0] * 2)),
- self.pad_val,
- dtype=results['img'].dtype)
-
- # mosaic center x, y
- center_x = int(
- random.uniform(*self.center_ratio_range) * self.img_scale[0])
- center_y = int(
- random.uniform(*self.center_ratio_range) * self.img_scale[1])
- center_position = (center_x, center_y)
-
- loc_strs = ('top_left', 'top_right', 'bottom_left', 'bottom_right')
- for i, loc in enumerate(loc_strs):
- if loc == 'top_left':
- results_patch = copy.deepcopy(results)
- else:
- results_patch = copy.deepcopy(mix_results[i - 1])
-
- img_i = results_patch['img']
- h_i, w_i = img_i.shape[:2]
- # keep_ratio resize
- scale_ratio_i = min(self.img_scale[1] / h_i,
- self.img_scale[0] / w_i)
- img_i = mmcv.imresize(
- img_i, (int(w_i * scale_ratio_i), int(h_i * scale_ratio_i)))
-
- # compute the combine parameters
- paste_coord, crop_coord = self._mosaic_combine(
- loc, center_position, img_i.shape[:2][::-1])
- x1_p, y1_p, x2_p, y2_p = paste_coord
- x1_c, y1_c, x2_c, y2_c = crop_coord
-
- # crop and paste image
- mosaic_img[y1_p:y2_p, x1_p:x2_p] = img_i[y1_c:y2_c, x1_c:x2_c]
-
- # adjust coordinate
- gt_bboxes_i = results_patch['gt_bboxes']
- gt_bboxes_labels_i = results_patch['gt_bboxes_labels']
- gt_ignore_flags_i = results_patch['gt_ignore_flags']
-
- padw = x1_p - x1_c
- padh = y1_p - y1_c
- gt_bboxes_i.rescale_([scale_ratio_i, scale_ratio_i])
- gt_bboxes_i.translate_([padw, padh])
- mosaic_bboxes.append(gt_bboxes_i)
- mosaic_bboxes_labels.append(gt_bboxes_labels_i)
- mosaic_ignore_flags.append(gt_ignore_flags_i)
- if with_mask and results_patch.get('gt_masks', None) is not None:
- gt_masks_i = results_patch['gt_masks']
- gt_masks_i = gt_masks_i.rescale(float(scale_ratio_i))
- gt_masks_i = gt_masks_i.translate(
- out_shape=(int(self.img_scale[0] * 2),
- int(self.img_scale[1] * 2)),
- offset=padw,
- direction='horizontal')
- gt_masks_i = gt_masks_i.translate(
- out_shape=(int(self.img_scale[0] * 2),
- int(self.img_scale[1] * 2)),
- offset=padh,
- direction='vertical')
- mosaic_masks.append(gt_masks_i)
-
- mosaic_bboxes = mosaic_bboxes[0].cat(mosaic_bboxes, 0)
- mosaic_bboxes_labels = np.concatenate(mosaic_bboxes_labels, 0)
- mosaic_ignore_flags = np.concatenate(mosaic_ignore_flags, 0)
-
- if self.bbox_clip_border:
- mosaic_bboxes.clip_([2 * self.img_scale[1], 2 * self.img_scale[0]])
- # remove outside bboxes
- inside_inds = mosaic_bboxes.is_inside(
- [2 * self.img_scale[1], 2 * self.img_scale[0]]).numpy()
- mosaic_bboxes = mosaic_bboxes[inside_inds]
- mosaic_bboxes_labels = mosaic_bboxes_labels[inside_inds]
- mosaic_ignore_flags = mosaic_ignore_flags[inside_inds]
-
- results['img'] = mosaic_img
- results['img_shape'] = mosaic_img.shape[:2]
- results['gt_bboxes'] = mosaic_bboxes
- results['gt_bboxes_labels'] = mosaic_bboxes_labels
- results['gt_ignore_flags'] = mosaic_ignore_flags
-
- if with_mask:
- mosaic_masks = mosaic_masks[0].cat(mosaic_masks)
- results['gt_masks'] = mosaic_masks[inside_inds]
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(img_scale={self.img_scale}, '
- repr_str += f'center_ratio_range={self.center_ratio_range}, '
- repr_str += f'pad_val={self.pad_val}, '
- repr_str += f'prob={self.prob}, '
- repr_str += f'max_cached_images={self.max_cached_images}, '
- repr_str += f'random_pop={self.random_pop})'
- return repr_str
-
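- # A minimal usage sketch for CachedMosaic (assumption: an RTMDet-style
- # pipeline; unlike Mosaic, no MultiImageMixDataset wrapper is needed because
- # the transform samples the other 3 images from its own internal cache).
- def _example_cached_mosaic_pipeline():
-     """Hypothetical train pipeline built around CachedMosaic."""
-     return [
-         dict(type='LoadImageFromFile'),
-         dict(type='LoadAnnotations', with_bbox=True),
-         dict(type='CachedMosaic', img_scale=(640, 640), pad_val=114.0,
-              max_cached_images=40, random_pop=True),
-         dict(type='PackDetInputs')
-     ]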
-
-@TRANSFORMS.register_module()
-class CachedMixUp(BaseTransform):
- """Cached mixup data augmentation.
-
- .. code:: text
-
- mixup transform
- +------------------------------+
- | mixup image | |
- | +--------|--------+ |
- | | | | |
- |---------------+ | |
- | | | |
- | | image | |
- | | | |
- | | | |
- | |-----------------+ |
- | pad |
- +------------------------------+
-
- The cached mixup transform steps are as follows:
-
- 1. Append the results from the last transform into the cache.
- 2. Another random image is picked from the cache and embedded in
- the top-left patch (after padding and resizing).
- 3. The target of the mixup transform is the weighted average of the
- mixup image and the original image.
-
- Required Keys:
-
- - img
- - gt_bboxes (np.float32) (optional)
- - gt_bboxes_labels (np.int64) (optional)
- - gt_ignore_flags (bool) (optional)
- - mix_results (List[dict])
-
-
- Modified Keys:
-
- - img
- - img_shape
- - gt_bboxes (optional)
- - gt_bboxes_labels (optional)
- - gt_ignore_flags (optional)
-
-
- Args:
- img_scale (Sequence[int]): Image output size after mixup pipeline.
- The shape order should be (width, height). Defaults to (640, 640).
- ratio_range (Sequence[float]): Scale ratio of mixup image.
- Defaults to (0.5, 1.5).
- flip_ratio (float): Horizontal flip ratio of mixup image.
- Defaults to 0.5.
- pad_val (int): Pad value. Defaults to 114.
- max_iters (int): The maximum number of iterations. If the number of
- iterations is greater than `max_iters`, but gt_bbox is still
- empty, then the iteration is terminated. Defaults to 15.
- bbox_clip_border (bool, optional): Whether to clip the objects outside
- the border of the image. In some datasets like MOT17, the gt bboxes
- are allowed to cross the border of images. Therefore, we don't
- need to clip the gt bboxes in these cases. Defaults to True.
- max_cached_images (int): The maximum length of the cache. The larger
- the cache, the stronger the randomness of this transform. As a
- rule of thumb, providing 10 caches for each image suffices for
- randomness. Defaults to 20.
- random_pop (bool): Whether to randomly pop a result from the cache
- when the cache is full. If set to False, use FIFO popping method.
- Defaults to True.
- prob (float): Probability of applying this transformation.
- Defaults to 1.0.
- """
-
- def __init__(self,
- img_scale: Tuple[int, int] = (640, 640),
- ratio_range: Tuple[float, float] = (0.5, 1.5),
- flip_ratio: float = 0.5,
- pad_val: float = 114.0,
- max_iters: int = 15,
- bbox_clip_border: bool = True,
- max_cached_images: int = 20,
- random_pop: bool = True,
- prob: float = 1.0) -> None:
- assert isinstance(img_scale, tuple)
- assert max_cached_images >= 2, 'The length of cache must >= 2, ' \
- f'but got {max_cached_images}.'
- assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. ' \
- f'got {prob}.'
- self.dynamic_scale = img_scale
- self.ratio_range = ratio_range
- self.flip_ratio = flip_ratio
- self.pad_val = pad_val
- self.max_iters = max_iters
- self.bbox_clip_border = bbox_clip_border
- self.results_cache = []
-
- self.max_cached_images = max_cached_images
- self.random_pop = random_pop
- self.prob = prob
-
- @cache_randomness
- def get_indexes(self, cache: list) -> int:
- """Call function to collect indexes.
-
- Args:
- cache (list): The result cache.
-
- Returns:
- int: index.
- """
-
- for i in range(self.max_iters):
- index = random.randint(0, len(cache) - 1)
- gt_bboxes_i = cache[index]['gt_bboxes']
- if len(gt_bboxes_i) != 0:
- break
- return index
-
- @autocast_box_type()
- def transform(self, results: dict) -> dict:
- """MixUp transform function.
-
- Args:
- results (dict): Result dict.
-
- Returns:
- dict: Updated result dict.
- """
- # cache and pop images
- self.results_cache.append(copy.deepcopy(results))
- if len(self.results_cache) > self.max_cached_images:
- if self.random_pop:
- index = random.randint(0, len(self.results_cache) - 1)
- else:
- index = 0
- self.results_cache.pop(index)
-
- if len(self.results_cache) <= 1:
- return results
-
- if random.uniform(0, 1) > self.prob:
- return results
-
- index = self.get_indexes(self.results_cache)
- retrieve_results = copy.deepcopy(self.results_cache[index])
-
- # TODO: refactor mixup to reuse these code.
- if retrieve_results['gt_bboxes'].shape[0] == 0:
- # empty bbox
- return results
-
- retrieve_img = retrieve_results['img']
- with_mask = True if 'gt_masks' in results else False
-
- jit_factor = random.uniform(*self.ratio_range)
- is_flip = random.uniform(0, 1) > self.flip_ratio
-
- if len(retrieve_img.shape) == 3:
- out_img = np.ones(
- (self.dynamic_scale[1], self.dynamic_scale[0], 3),
- dtype=retrieve_img.dtype) * self.pad_val
- else:
- out_img = np.ones(
- self.dynamic_scale[::-1],
- dtype=retrieve_img.dtype) * self.pad_val
-
- # 1. keep_ratio resize
- scale_ratio = min(self.dynamic_scale[1] / retrieve_img.shape[0],
- self.dynamic_scale[0] / retrieve_img.shape[1])
- retrieve_img = mmcv.imresize(
- retrieve_img, (int(retrieve_img.shape[1] * scale_ratio),
- int(retrieve_img.shape[0] * scale_ratio)))
-
- # 2. paste
- out_img[:retrieve_img.shape[0], :retrieve_img.shape[1]] = retrieve_img
-
- # 3. scale jit
- scale_ratio *= jit_factor
- out_img = mmcv.imresize(out_img, (int(out_img.shape[1] * jit_factor),
- int(out_img.shape[0] * jit_factor)))
-
- # 4. flip
- if is_flip:
- out_img = out_img[:, ::-1, :]
-
- # 5. random crop
- ori_img = results['img']
- origin_h, origin_w = out_img.shape[:2]
- target_h, target_w = ori_img.shape[:2]
- padded_img = np.ones((max(origin_h, target_h), max(
- origin_w, target_w), 3)) * self.pad_val
- padded_img = padded_img.astype(np.uint8)
- padded_img[:origin_h, :origin_w] = out_img
-
- x_offset, y_offset = 0, 0
- if padded_img.shape[0] > target_h:
- y_offset = random.randint(0, padded_img.shape[0] - target_h)
- if padded_img.shape[1] > target_w:
- x_offset = random.randint(0, padded_img.shape[1] - target_w)
- padded_cropped_img = padded_img[y_offset:y_offset + target_h,
- x_offset:x_offset + target_w]
-
- # 6. adjust bbox
- retrieve_gt_bboxes = retrieve_results['gt_bboxes']
- retrieve_gt_bboxes.rescale_([scale_ratio, scale_ratio])
- if with_mask:
- retrieve_gt_masks = retrieve_results['gt_masks'].rescale(
- scale_ratio)
-
- if self.bbox_clip_border:
- retrieve_gt_bboxes.clip_([origin_h, origin_w])
-
- if is_flip:
- retrieve_gt_bboxes.flip_([origin_h, origin_w],
- direction='horizontal')
- if with_mask:
- retrieve_gt_masks = retrieve_gt_masks.flip()
-
- # 7. filter
- cp_retrieve_gt_bboxes = retrieve_gt_bboxes.clone()
- cp_retrieve_gt_bboxes.translate_([-x_offset, -y_offset])
- if with_mask:
- retrieve_gt_masks = retrieve_gt_masks.translate(
- out_shape=(target_h, target_w),
- offset=-x_offset,
- direction='horizontal')
- retrieve_gt_masks = retrieve_gt_masks.translate(
- out_shape=(target_h, target_w),
- offset=-y_offset,
- direction='vertical')
-
- if self.bbox_clip_border:
- cp_retrieve_gt_bboxes.clip_([target_h, target_w])
-
- # 8. mix up
- ori_img = ori_img.astype(np.float32)
- mixup_img = 0.5 * ori_img + 0.5 * padded_cropped_img.astype(np.float32)
-
- retrieve_gt_bboxes_labels = retrieve_results['gt_bboxes_labels']
- retrieve_gt_ignore_flags = retrieve_results['gt_ignore_flags']
-
- mixup_gt_bboxes = cp_retrieve_gt_bboxes.cat(
- (results['gt_bboxes'], cp_retrieve_gt_bboxes), dim=0)
- mixup_gt_bboxes_labels = np.concatenate(
- (results['gt_bboxes_labels'], retrieve_gt_bboxes_labels), axis=0)
- mixup_gt_ignore_flags = np.concatenate(
- (results['gt_ignore_flags'], retrieve_gt_ignore_flags), axis=0)
- if with_mask:
- mixup_gt_masks = retrieve_gt_masks.cat(
- [results['gt_masks'], retrieve_gt_masks])
-
- # remove outside bbox
- inside_inds = mixup_gt_bboxes.is_inside([target_h, target_w]).numpy()
- mixup_gt_bboxes = mixup_gt_bboxes[inside_inds]
- mixup_gt_bboxes_labels = mixup_gt_bboxes_labels[inside_inds]
- mixup_gt_ignore_flags = mixup_gt_ignore_flags[inside_inds]
- if with_mask:
- mixup_gt_masks = mixup_gt_masks[inside_inds]
-
- results['img'] = mixup_img.astype(np.uint8)
- results['img_shape'] = mixup_img.shape[:2]
- results['gt_bboxes'] = mixup_gt_bboxes
- results['gt_bboxes_labels'] = mixup_gt_bboxes_labels
- results['gt_ignore_flags'] = mixup_gt_ignore_flags
- if with_mask:
- results['gt_masks'] = mixup_gt_masks
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(dynamic_scale={self.dynamic_scale}, '
- repr_str += f'ratio_range={self.ratio_range}, '
- repr_str += f'flip_ratio={self.flip_ratio}, '
- repr_str += f'pad_val={self.pad_val}, '
- repr_str += f'max_iters={self.max_iters}, '
- repr_str += f'bbox_clip_border={self.bbox_clip_border}, '
- repr_str += f'max_cached_images={self.max_cached_images}, '
- repr_str += f'random_pop={self.random_pop}, '
- repr_str += f'prob={self.prob})'
- return repr_str
diff --git a/spaces/KyanChen/RSPrompter/mmdet/structures/bbox/bbox_overlaps.py b/spaces/KyanChen/RSPrompter/mmdet/structures/bbox/bbox_overlaps.py
deleted file mode 100644
index 8e3435d28b38a5479a6c791f52a76d8ba293a6eb..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/structures/bbox/bbox_overlaps.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-
-
-def fp16_clamp(x, min=None, max=None):
- if not x.is_cuda and x.dtype == torch.float16:
- # clamp for cpu float16, tensor fp16 has no clamp implementation
- return x.float().clamp(min, max).half()
-
- return x.clamp(min, max)
-
-
-def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6):
- """Calculate overlap between two set of bboxes.
-
- FP16 Contributed by https://github.com/open-mmlab/mmdetection/pull/4889
- Note:
- Assume bboxes1 is M x 4 and bboxes2 is N x 4. When mode is 'iou',
- several intermediate variables are created while computing IoU
- with the bbox_overlaps function:
-
- 1) is_aligned is False
- area1: M x 1
- area2: N x 1
- lt: M x N x 2
- rb: M x N x 2
- wh: M x N x 2
- overlap: M x N x 1
- union: M x N x 1
- ious: M x N x 1
-
- Total memory:
- S = (9 x N x M + N + M) * 4 Byte,
-
- When using FP16, we can reduce:
- R = (9 x N x M + N + M) * 4 / 2 Byte
- R larger than (N + M) * 4 * 2 always holds when N, M >= 1, since
- N + M <= N * M < 3 * N * M when N >= 2 and M >= 2, and
- N + 1 < 3 * N when N or M is 1.
-
- Given M = 40 (ground truths) and N = 400000 (three anchor boxes
- per grid, FPN, R-CNNs),
- R = 275 MB (in a single pass).
-
- In a special case (dense detection) with M = 512 ground truths,
- R = 3516 MB = 3.43 GB.
-
- With a batch size of B, the reduction becomes:
- B x R
-
- Therefore, CUDA memory runs out frequently.
-
- Experiments on GeForce RTX 2080Ti (11019 MiB):
-
- | dtype | M | N | Use | Real | Ideal |
- |:----:|:----:|:----:|:----:|:----:|:----:|
- | FP32 | 512 | 400000 | 8020 MiB | -- | -- |
- | FP16 | 512 | 400000 | 4504 MiB | 3516 MiB | 3516 MiB |
- | FP32 | 40 | 400000 | 1540 MiB | -- | -- |
- | FP16 | 40 | 400000 | 1264 MiB | 276 MiB | 275 MiB |
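-
- For a concrete check of the figures above: with M = 40 and N = 400000,
- S = (9 x 40 x 400000 + 400040) * 4 Byte ~= 577.6 MB, so the FP16
- saving is R = S / 2 ~= 288.8 MB ~= 275 MiB, matching the table.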
-
- 2) is_aligned is True
- area1: N x 1
- area2: N x 1
- lt: N x 2
- rb: N x 2
- wh: N x 2
- overlap: N x 1
- union: N x 1
- ious: N x 1
-
- Total memory:
- S = 11 x N * 4 Byte
-
- When using FP16, we can reduce:
- R = 11 x N * 4 / 2 Byte
-
- The same applies to 'giou' (which needs more memory than 'iou').
-
- Time-wise, FP16 is generally faster than FP32.
-
- When gpu_assign_thr is not -1, computing the assignment on the CPU
- takes more time but does not reduce memory.
- There, FP16 lets us halve the memory while keeping the speed.
-
- If ``is_aligned`` is ``False``, then calculate the overlaps between each
- bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned
- pair of bboxes1 and bboxes2.
-
- Args:
- bboxes1 (Tensor): shape (B, m, 4) in <x1, y1, x2, y2> format or empty.
- bboxes2 (Tensor): shape (B, n, 4) in <x1, y1, x2, y2> format or empty.
- B indicates the batch dim, in shape (B1, B2, ..., Bn).
- If ``is_aligned`` is ``True``, then m and n must be equal.
- mode (str): "iou" (intersection over union), "iof" (intersection over
- foreground) or "giou" (generalized intersection over union).
- Default "iou".
- is_aligned (bool, optional): If True, then m and n must be equal.
- Default False.
- eps (float, optional): A value added to the denominator for numerical
- stability. Default 1e-6.
-
- Returns:
- Tensor: shape (m, n) if ``is_aligned`` is False else shape (m,)
-
- Example:
- >>> bboxes1 = torch.FloatTensor([
- >>> [0, 0, 10, 10],
- >>> [10, 10, 20, 20],
- >>> [32, 32, 38, 42],
- >>> ])
- >>> bboxes2 = torch.FloatTensor([
- >>> [0, 0, 10, 20],
- >>> [0, 10, 10, 19],
- >>> [10, 10, 20, 20],
- >>> ])
- >>> overlaps = bbox_overlaps(bboxes1, bboxes2)
- >>> assert overlaps.shape == (3, 3)
- >>> overlaps = bbox_overlaps(bboxes1, bboxes2, is_aligned=True)
- >>> assert overlaps.shape == (3, )
-
- Example:
- >>> empty = torch.empty(0, 4)
- >>> nonempty = torch.FloatTensor([[0, 0, 10, 9]])
- >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
- >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
- >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
- """
-
- assert mode in ['iou', 'iof', 'giou'], f'Unsupported mode {mode}'
- # Either the boxes are empty or the length of boxes' last dimension is 4
- assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0)
- assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0)
-
- # Batch dim must be the same
- # Batch dim: (B1, B2, ... Bn)
- assert bboxes1.shape[:-2] == bboxes2.shape[:-2]
- batch_shape = bboxes1.shape[:-2]
-
- rows = bboxes1.size(-2)
- cols = bboxes2.size(-2)
- if is_aligned:
- assert rows == cols
-
- if rows * cols == 0:
- if is_aligned:
- return bboxes1.new(batch_shape + (rows, ))
- else:
- return bboxes1.new(batch_shape + (rows, cols))
-
- area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (
- bboxes1[..., 3] - bboxes1[..., 1])
- area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (
- bboxes2[..., 3] - bboxes2[..., 1])
-
- if is_aligned:
- lt = torch.max(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2]
- rb = torch.min(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2]
-
- wh = fp16_clamp(rb - lt, min=0)
- overlap = wh[..., 0] * wh[..., 1]
-
- if mode in ['iou', 'giou']:
- union = area1 + area2 - overlap
- else:
- union = area1
- if mode == 'giou':
- enclosed_lt = torch.min(bboxes1[..., :2], bboxes2[..., :2])
- enclosed_rb = torch.max(bboxes1[..., 2:], bboxes2[..., 2:])
- else:
- lt = torch.max(bboxes1[..., :, None, :2],
- bboxes2[..., None, :, :2]) # [B, rows, cols, 2]
- rb = torch.min(bboxes1[..., :, None, 2:],
- bboxes2[..., None, :, 2:]) # [B, rows, cols, 2]
-
- wh = fp16_clamp(rb - lt, min=0)
- overlap = wh[..., 0] * wh[..., 1]
-
- if mode in ['iou', 'giou']:
- union = area1[..., None] + area2[..., None, :] - overlap
- else:
- union = area1[..., None]
- if mode == 'giou':
- enclosed_lt = torch.min(bboxes1[..., :, None, :2],
- bboxes2[..., None, :, :2])
- enclosed_rb = torch.max(bboxes1[..., :, None, 2:],
- bboxes2[..., None, :, 2:])
-
- eps = union.new_tensor([eps])
- union = torch.max(union, eps)
- ious = overlap / union
- if mode in ['iou', 'iof']:
- return ious
- # calculate gious
- enclose_wh = fp16_clamp(enclosed_rb - enclosed_lt, min=0)
- enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1]
- enclose_area = torch.max(enclose_area, eps)
- gious = ious - (enclose_area - union) / enclose_area
- return gious
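-
-
-# Minimal usage sketch (illustrative values; not part of the original module):
-# besides plain IoU, the same call supports the 'iof' and 'giou' modes.
-if __name__ == '__main__':
-    boxes_a = torch.FloatTensor([[0, 0, 10, 10], [10, 10, 20, 20]])
-    boxes_b = torch.FloatTensor([[5, 5, 15, 15]])
-    print(bbox_overlaps(boxes_a, boxes_b, mode='iou'))   # pairwise IoU, shape (2, 1)
-    print(bbox_overlaps(boxes_a, boxes_b, mode='iof'))   # intersection over the areas of boxes_a
-    print(bbox_overlaps(boxes_a, boxes_b, mode='giou'))  # generalized IoU, shape (2, 1)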
diff --git a/spaces/Kyo-Kai/Fsg-pp/commands/exec_path.py b/spaces/Kyo-Kai/Fsg-pp/commands/exec_path.py
deleted file mode 100644
index 6b0d3e186398c13d0e8ec6d2b40d77df1c2b3b8c..0000000000000000000000000000000000000000
--- a/spaces/Kyo-Kai/Fsg-pp/commands/exec_path.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import os
-
-driver_path = "driver"
-folder_path = "Images"
-driverPath = os.path.join(os.getcwd(), driver_path)
-imagePath = os.path.join(os.getcwd(), folder_path)
-
-try:
- if len(os.listdir(driverPath)) > 1:
- raise Exception("Put 1 driver only")
- elif os.listdir(driverPath)[0] != "driver.exe":
- os.rename(driverPath+"/"+"driver.exe")
-except:
- pass
-finally:
- executable_path = os.path.join(driverPath, 'driver.exe')
-
-
-def imgList(mode=0):
- if mode==0: # Danbooru
- return [image.split(" ")[-1].split(".")[0] for image in os.listdir(imagePath) if image.split(".")[-1] in ["jpg","png","jpeg"]]
- if mode==1: # Pixiv
- return [image.split("_")[0] for image in os.listdir(imagePath) if image.split(".")[-1] in ["jpg","png","jpeg"]]
- if mode==2: # Zerochan
- return os.listdir(imagePath)
\ No newline at end of file
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/commons.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/commons.py
deleted file mode 100644
index 2618e3ad501d1d4745a34024c2bf1676546fae80..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/commons.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
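-
-
-# Minimal usage sketch (illustrative shapes; not part of the original file):
-# expand integer durations into a monotonic alignment path with generate_path().
-if __name__ == "__main__":
-    durations = torch.tensor([[[2.0, 1.0, 3.0]]])            # [b=1, 1, t_x=3]
-    t_y = int(durations.sum().item())                        # total output length = 6
-    attn_mask = torch.ones(1, 1, t_y, durations.shape[-1])   # [b, 1, t_y, t_x]
-    path = generate_path(durations, attn_mask)               # [b, 1, t_y, t_x], 0/1 entries
-    print(path.squeeze())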
diff --git a/spaces/ML610/Mistral-7b-instruct-GGUF/README.md b/spaces/ML610/Mistral-7b-instruct-GGUF/README.md
deleted file mode 100644
index 002c7b42617e27682e55e9f40e46f1f2090f3bc5..0000000000000000000000000000000000000000
--- a/spaces/ML610/Mistral-7b-instruct-GGUF/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mistral 7b Instruct GGUF
-emoji: 🏆
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MWilinski/bot/tests/__init__.py b/spaces/MWilinski/bot/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/carafe.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/carafe.py
deleted file mode 100644
index 5154cb3abfccfbbe0a1b2daa67018dbf80aaf6d2..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/carafe.py
+++ /dev/null
@@ -1,287 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.autograd import Function
-from torch.nn.modules.module import Module
-
-from ..cnn import UPSAMPLE_LAYERS, normal_init, xavier_init
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'carafe_naive_forward', 'carafe_naive_backward', 'carafe_forward',
- 'carafe_backward'
-])
-
-
-class CARAFENaiveFunction(Function):
-
- @staticmethod
- def symbolic(g, features, masks, kernel_size, group_size, scale_factor):
- return g.op(
- 'mmcv::MMCVCARAFENaive',
- features,
- masks,
- kernel_size_i=kernel_size,
- group_size_i=group_size,
- scale_factor_f=scale_factor)
-
- @staticmethod
- def forward(ctx, features, masks, kernel_size, group_size, scale_factor):
- assert scale_factor >= 1
- assert masks.size(1) == kernel_size * kernel_size * group_size
- assert masks.size(-1) == features.size(-1) * scale_factor
- assert masks.size(-2) == features.size(-2) * scale_factor
- assert features.size(1) % group_size == 0
- assert (kernel_size - 1) % 2 == 0 and kernel_size >= 1
- ctx.kernel_size = kernel_size
- ctx.group_size = group_size
- ctx.scale_factor = scale_factor
- ctx.feature_size = features.size()
- ctx.mask_size = masks.size()
-
- n, c, h, w = features.size()
- output = features.new_zeros((n, c, h * scale_factor, w * scale_factor))
- ext_module.carafe_naive_forward(
- features,
- masks,
- output,
- kernel_size=kernel_size,
- group_size=group_size,
- scale_factor=scale_factor)
-
- if features.requires_grad or masks.requires_grad:
- ctx.save_for_backward(features, masks)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- assert grad_output.is_cuda
-
- features, masks = ctx.saved_tensors
- kernel_size = ctx.kernel_size
- group_size = ctx.group_size
- scale_factor = ctx.scale_factor
-
- grad_input = torch.zeros_like(features)
- grad_masks = torch.zeros_like(masks)
- ext_module.carafe_naive_backward(
- grad_output.contiguous(),
- features,
- masks,
- grad_input,
- grad_masks,
- kernel_size=kernel_size,
- group_size=group_size,
- scale_factor=scale_factor)
-
- return grad_input, grad_masks, None, None, None
-
-
-carafe_naive = CARAFENaiveFunction.apply
-
-
-class CARAFENaive(Module):
-
- def __init__(self, kernel_size, group_size, scale_factor):
- super(CARAFENaive, self).__init__()
-
- assert isinstance(kernel_size, int) and isinstance(
- group_size, int) and isinstance(scale_factor, int)
- self.kernel_size = kernel_size
- self.group_size = group_size
- self.scale_factor = scale_factor
-
- def forward(self, features, masks):
- return carafe_naive(features, masks, self.kernel_size, self.group_size,
- self.scale_factor)
-
-
-class CARAFEFunction(Function):
-
- @staticmethod
- def symbolic(g, features, masks, kernel_size, group_size, scale_factor):
- return g.op(
- 'mmcv::MMCVCARAFE',
- features,
- masks,
- kernel_size_i=kernel_size,
- group_size_i=group_size,
- scale_factor_f=scale_factor)
-
- @staticmethod
- def forward(ctx, features, masks, kernel_size, group_size, scale_factor):
- assert scale_factor >= 1
- assert masks.size(1) == kernel_size * kernel_size * group_size
- assert masks.size(-1) == features.size(-1) * scale_factor
- assert masks.size(-2) == features.size(-2) * scale_factor
- assert features.size(1) % group_size == 0
- assert (kernel_size - 1) % 2 == 0 and kernel_size >= 1
- ctx.kernel_size = kernel_size
- ctx.group_size = group_size
- ctx.scale_factor = scale_factor
- ctx.feature_size = features.size()
- ctx.mask_size = masks.size()
-
- n, c, h, w = features.size()
- output = features.new_zeros((n, c, h * scale_factor, w * scale_factor))
- routput = features.new_zeros(output.size(), requires_grad=False)
- rfeatures = features.new_zeros(features.size(), requires_grad=False)
- rmasks = masks.new_zeros(masks.size(), requires_grad=False)
- ext_module.carafe_forward(
- features,
- masks,
- rfeatures,
- routput,
- rmasks,
- output,
- kernel_size=kernel_size,
- group_size=group_size,
- scale_factor=scale_factor)
-
- if features.requires_grad or masks.requires_grad:
- ctx.save_for_backward(features, masks, rfeatures)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- assert grad_output.is_cuda
-
- features, masks, rfeatures = ctx.saved_tensors
- kernel_size = ctx.kernel_size
- group_size = ctx.group_size
- scale_factor = ctx.scale_factor
-
- rgrad_output = torch.zeros_like(grad_output, requires_grad=False)
- rgrad_input_hs = torch.zeros_like(grad_output, requires_grad=False)
- rgrad_input = torch.zeros_like(features, requires_grad=False)
- rgrad_masks = torch.zeros_like(masks, requires_grad=False)
- grad_input = torch.zeros_like(features, requires_grad=False)
- grad_masks = torch.zeros_like(masks, requires_grad=False)
- ext_module.carafe_backward(
- grad_output.contiguous(),
- rfeatures,
- masks,
- rgrad_output,
- rgrad_input_hs,
- rgrad_input,
- rgrad_masks,
- grad_input,
- grad_masks,
- kernel_size=kernel_size,
- group_size=group_size,
- scale_factor=scale_factor)
- return grad_input, grad_masks, None, None, None
-
-
-carafe = CARAFEFunction.apply
-
-
-class CARAFE(Module):
- """ CARAFE: Content-Aware ReAssembly of FEatures
-
- Please refer to https://arxiv.org/abs/1905.02188 for more details.
-
- Args:
- kernel_size (int): reassemble kernel size
- group_size (int): reassemble group size
- scale_factor (int): upsample ratio
-
- Returns:
- upsampled feature map
- """
-
- def __init__(self, kernel_size, group_size, scale_factor):
- super(CARAFE, self).__init__()
-
- assert isinstance(kernel_size, int) and isinstance(
- group_size, int) and isinstance(scale_factor, int)
- self.kernel_size = kernel_size
- self.group_size = group_size
- self.scale_factor = scale_factor
-
- def forward(self, features, masks):
- return carafe(features, masks, self.kernel_size, self.group_size,
- self.scale_factor)
-
-
-@UPSAMPLE_LAYERS.register_module(name='carafe')
-class CARAFEPack(nn.Module):
- """A unified package of CARAFE upsampler that contains: 1) channel
- compressor 2) content encoder 3) CARAFE op.
-
- Official implementation of ICCV 2019 paper
- CARAFE: Content-Aware ReAssembly of FEatures
- Please refer to https://arxiv.org/abs/1905.02188 for more details.
-
- Args:
- channels (int): input feature channels
- scale_factor (int): upsample ratio
- up_kernel (int): kernel size of CARAFE op
- up_group (int): group size of CARAFE op
- encoder_kernel (int): kernel size of content encoder
- encoder_dilation (int): dilation of content encoder
- compressed_channels (int): output channels of channels compressor
-
- Returns:
- upsampled feature map
- """
-
- def __init__(self,
- channels,
- scale_factor,
- up_kernel=5,
- up_group=1,
- encoder_kernel=3,
- encoder_dilation=1,
- compressed_channels=64):
- super(CARAFEPack, self).__init__()
- self.channels = channels
- self.scale_factor = scale_factor
- self.up_kernel = up_kernel
- self.up_group = up_group
- self.encoder_kernel = encoder_kernel
- self.encoder_dilation = encoder_dilation
- self.compressed_channels = compressed_channels
- self.channel_compressor = nn.Conv2d(channels, self.compressed_channels,
- 1)
- self.content_encoder = nn.Conv2d(
- self.compressed_channels,
- self.up_kernel * self.up_kernel * self.up_group *
- self.scale_factor * self.scale_factor,
- self.encoder_kernel,
- padding=int((self.encoder_kernel - 1) * self.encoder_dilation / 2),
- dilation=self.encoder_dilation,
- groups=1)
- self.init_weights()
-
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
- normal_init(self.content_encoder, std=0.001)
-
- def kernel_normalizer(self, mask):
- mask = F.pixel_shuffle(mask, self.scale_factor)
- n, mask_c, h, w = mask.size()
- # use float division explicitly,
- # to avoid inconsistency while exporting to onnx
- mask_channel = int(mask_c / float(self.up_kernel**2))
- mask = mask.view(n, mask_channel, -1, h, w)
-
- mask = F.softmax(mask, dim=2, dtype=mask.dtype)
- mask = mask.view(n, mask_c, h, w).contiguous()
-
- return mask
-
- def feature_reassemble(self, x, mask):
- x = carafe(x, mask, self.up_kernel, self.up_group, self.scale_factor)
- return x
-
- def forward(self, x):
- compressed_x = self.channel_compressor(x)
- mask = self.content_encoder(compressed_x)
- mask = self.kernel_normalizer(mask)
-
- x = self.feature_reassemble(x, mask)
- return x
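-
-
-# Minimal usage sketch (assumes a CUDA build of the mmcv '_ext' ops; shapes are
-# illustrative only): upsample a feature map by 2x with the packed CARAFE module.
-if __name__ == '__main__' and torch.cuda.is_available():
-    feats = torch.randn(2, 64, 16, 16, device='cuda')
-    upsampler = CARAFEPack(channels=64, scale_factor=2).cuda()
-    print(upsampler(feats).shape)  # expected: torch.Size([2, 64, 32, 32])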
diff --git a/spaces/Mikey211/computing2/app.py b/spaces/Mikey211/computing2/app.py
deleted file mode 100644
index 11a2837c90c948a5ee7aed91eadb9e60aec54cfa..0000000000000000000000000000000000000000
--- a/spaces/Mikey211/computing2/app.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import gradio as gr
-alist=[]
-def add2list(num):
- num=int(num)
- alist.append(num)
- return alist
-
-iface = gr.Interface(fn=add2list, inputs="text", outputs="text")
-iface.launch()
\ No newline at end of file
diff --git a/spaces/MirageML/sjc/voxnerf/vox.py b/spaces/MirageML/sjc/voxnerf/vox.py
deleted file mode 100644
index 97141b3ab49347ddb29c12f82e7c254e92990096..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/voxnerf/vox.py
+++ /dev/null
@@ -1,268 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from einops import rearrange
-from my.registry import Registry
-
-VOXRF_REGISTRY = Registry("VoxRF")
-
-
-def to_grid_samp_coords(xyz_sampled, aabb):
- # output range is [-1, 1]
- aabbSize = aabb[1] - aabb[0]
- return (xyz_sampled - aabb[0]) / aabbSize * 2 - 1
-
-
-def add_non_state_tsr(nn_module, key, val):
- # tsr added here does not appear in module's state_dict;
- nn_module.register_buffer(key, val, persistent=False)
-
-
-@VOXRF_REGISTRY.register()
-class VoxRF(nn.Module):
- def __init__(
- self, aabb, grid_size, step_ratio=0.5,
- density_shift=-10, ray_march_weight_thres=0.0001, c=3,
- blend_bg_texture=True, bg_texture_hw=64
- ):
- assert aabb.shape == (2, 3)
- xyz = grid_size
- del grid_size
-
- super().__init__()
- add_non_state_tsr(self, "aabb", torch.tensor(aabb, dtype=torch.float32))
- add_non_state_tsr(self, "grid_size", torch.LongTensor(xyz))
-
- self.density_shift = density_shift
- self.ray_march_weight_thres = ray_march_weight_thres
- self.step_ratio = step_ratio
-
- zyx = xyz[::-1]
- self.density = torch.nn.Parameter(
- torch.zeros((1, 1, *zyx))
- )
- self.color = torch.nn.Parameter(
- torch.randn((1, c, *zyx))
- )
-
- self.blend_bg_texture = blend_bg_texture
- self.bg = torch.nn.Parameter(
- torch.randn((1, c, bg_texture_hw, bg_texture_hw))
- )
-
- self.c = c
- self.alphaMask = None
- self.feats2color = lambda feats: torch.sigmoid(feats)
-
- self.d_scale = torch.nn.Parameter(torch.tensor(0.0))
-
- @property
- def device(self):
- return self.density.device
-
- def compute_density_feats(self, xyz_sampled):
- xyz_sampled = to_grid_samp_coords(xyz_sampled, self.aabb)
- n = xyz_sampled.shape[0]
- xyz_sampled = xyz_sampled.reshape(1, n, 1, 1, 3)
- σ = F.grid_sample(self.density, xyz_sampled).view(n)
- # We notice that DreamFusion also uses an exp scaling on densities.
- # The technique here was developed BEFORE DreamFusion came out,
- # and forms part of our upcoming technical report discussing invariant
- # scaling for volume rendering. The research was presented to our
- # funding agency (TRI) on Aug. 25th, and discussed with a few researcher friends
- # during the period.
- σ = σ * torch.exp(self.d_scale)
- σ = F.softplus(σ + self.density_shift)
- return σ
-
- def compute_app_feats(self, xyz_sampled):
- xyz_sampled = to_grid_samp_coords(xyz_sampled, self.aabb)
- n = xyz_sampled.shape[0]
- xyz_sampled = xyz_sampled.reshape(1, n, 1, 1, 3)
- feats = F.grid_sample(self.color, xyz_sampled).view(self.c, n)
- feats = feats.T
- return feats
-
- def compute_bg(self, uv):
- n = uv.shape[0]
- uv = uv.reshape(1, n, 1, 2)
- feats = F.grid_sample(self.bg, uv).view(self.c, n)
- feats = feats.T
- return feats
-
- def get_per_voxel_length(self):
- aabb_size = self.aabb[1] - self.aabb[0]
- # NOTE: grid_size is deliberately not reduced by 1 here;
- # a voxel is treated as a cell whose value sits at its center, like a pixel.
- # This is consistent with align_corners=False.
- vox_xyz_length = aabb_size / self.grid_size
- return vox_xyz_length
-
- def get_num_samples(self, max_size=None):
- # funny way to set step size; whatever
- unit = torch.mean(self.get_per_voxel_length())
- step_size = unit * self.step_ratio
- step_size = step_size.item() # get the float
-
- if max_size is None:
- aabb_size = self.aabb[1] - self.aabb[0]
- aabb_diag = torch.norm(aabb_size)
- max_size = aabb_diag
-
- num_samples = int((max_size / step_size).item()) + 1
- return num_samples, step_size
-
- @torch.no_grad()
- def resample(self, target_xyz: list):
- zyx = target_xyz[::-1]
- self.density = self._resamp_param(self.density, zyx)
- self.color = self._resamp_param(self.color, zyx)
- target_xyz = torch.LongTensor(target_xyz).to(self.aabb.device)
- add_non_state_tsr(self, "grid_size", target_xyz)
-
- @staticmethod
- def _resamp_param(param, target_size):
- return torch.nn.Parameter(F.interpolate(
- param.data, size=target_size, mode="trilinear"
- ))
-
- @torch.no_grad()
- def compute_volume_alpha(self):
- xyz = self.grid_size.tolist()
- unit_xyz = self.get_per_voxel_length()
- xs, ys, zs = torch.meshgrid(
- *[torch.arange(nd) for nd in xyz], indexing="ij"
- )
- pts = torch.stack([xs, ys, zs], dim=-1).to(unit_xyz.device) # [nx, ny, nz, 3]
- pts = self.aabb[0] + (pts + 0.5) * unit_xyz
- pts = pts.reshape(-1, 3)
- # could potentially filter with the alpha mask itself, if it exists
- σ = self.compute_density_feats(pts)
- d = torch.mean(unit_xyz)
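- # volume-rendering opacity for one step of length d: alpha = 1 - exp(-sigma * d)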
- α = 1 - torch.exp(-σ * d)
- α = rearrange(α.view(xyz), "x y z -> 1 1 z y x")
- α = α.contiguous()
- return α
-
- @torch.no_grad()
- def make_alpha_mask(self):
- α = self.compute_volume_alpha()
- ks = 3
- α = F.max_pool3d(α, kernel_size=ks, padding=ks // 2, stride=1)
- α = (α > 0.08).float()
- vol_mask = AlphaMask(self.aabb, α)
- self.alphaMask = vol_mask
-
- def state_dict(self, *args, **kwargs):
- state = super().state_dict(*args, **kwargs)
- if self.alphaMask is not None:
- state['alpha_mask'] = self.alphaMask.export_state()
- return state
-
- def load_state_dict(self, state_dict):
- if 'alpha_mask' in state_dict.keys():
- state = state_dict.pop("alpha_mask")
- self.alphaMask = AlphaMask.from_state(state)
- return super().load_state_dict(state_dict, strict=True)
-
-
-@VOXRF_REGISTRY.register()
-class V_SJC(VoxRF):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- # rendering color in [-1, 1] range, since score models all operate on centered img
- self.feats2color = lambda feats: torch.sigmoid(feats) * 2 - 1
-
- def opt_params(self):
- groups = []
- for name, param in self.named_parameters():
- # print(f"{name} {param.shape}")
- grp = {"params": param}
- if name in ["bg"]:
- grp["lr"] = 0.0001
- if name in ["density"]:
- # grp["lr"] = 0.
- pass
- groups.append(grp)
- return groups
-
- def annealed_opt_params(self, base_lr, σ):
- groups = []
- for name, param in self.named_parameters():
- # print(f"{name} {param.shape}")
- grp = {"params": param, "lr": base_lr * σ}
- if name in ["density"]:
- grp["lr"] = base_lr * σ
- if name in ["d_scale"]:
- grp["lr"] = 0.
- if name in ["color"]:
- grp["lr"] = base_lr * σ
- if name in ["bg"]:
- grp["lr"] = 0.01
- groups.append(grp)
- return groups
-
-
-@VOXRF_REGISTRY.register()
-class V_SD(V_SJC):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- # rendering in feature space; no sigmoid thresholding
- self.feats2color = lambda feats: feats
-
-
-class AlphaMask(nn.Module):
- def __init__(self, aabb, alphas):
- super().__init__()
- zyx = list(alphas.shape[-3:])
- add_non_state_tsr(self, "alphas", alphas.view(1, 1, *zyx))
- xyz = zyx[::-1]
- add_non_state_tsr(self, "grid_size", torch.LongTensor(xyz))
- add_non_state_tsr(self, "aabb", aabb)
-
- def sample_alpha(self, xyz_pts):
- xyz_pts = to_grid_samp_coords(xyz_pts, self.aabb)
- xyz_pts = xyz_pts.view(1, -1, 1, 1, 3)
- α = F.grid_sample(self.alphas, xyz_pts).view(-1)
- return α
-
- def export_state(self):
- state = {}
- alphas = self.alphas.bool().cpu().numpy()
- state['shape'] = alphas.shape
- state['mask'] = np.packbits(alphas.reshape(-1))
- state['aabb'] = self.aabb.cpu()
- return state
-
- @classmethod
- def from_state(cls, state):
- shape = state['shape']
- mask = state['mask']
- aabb = state['aabb']
-
- length = np.prod(shape)
- alphas = torch.from_numpy(
- np.unpackbits(mask)[:length].reshape(shape)
- )
- amask = cls(aabb, alphas.float())
- return amask
-
-
-def test():
- device = torch.device("cuda:1")
-
- aabb = 1.5 * np.array([
- [-1, -1, -1],
- [1, 1, 1]
- ])
- model = VoxRF(aabb, [10, 20, 30])
- model.to(device)
- print(model.density.shape)
- print(model.grid_size)
-
- return
-
-
-if __name__ == "__main__":
- test()
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/train_color.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/train_color.py
deleted file mode 100644
index 3c1aeb9f33ff7ebf95489cef9a3e96e8af7ee3d7..0000000000000000000000000000000000000000
--- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/train_color.py
+++ /dev/null
@@ -1,191 +0,0 @@
-import sys
-import os
-
-sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
-ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-import time
-import json
-import numpy as np
-import cv2
-import random
-import torch
-import torch.nn as nn
-from torch.utils.data import DataLoader
-from tqdm import tqdm
-
-from lib.options import BaseOptions
-from lib.mesh_util import *
-from lib.sample_util import *
-from lib.train_util import *
-from lib.data import *
-from lib.model import *
-from lib.geometry import index
-
-# get options
-opt = BaseOptions().parse()
-
-def train_color(opt):
- # set cuda
- cuda = torch.device('cuda:%d' % opt.gpu_id)
-
- train_dataset = TrainDataset(opt, phase='train')
- test_dataset = TrainDataset(opt, phase='test')
-
- projection_mode = train_dataset.projection_mode
-
- # create data loader
- train_data_loader = DataLoader(train_dataset,
- batch_size=opt.batch_size, shuffle=not opt.serial_batches,
- num_workers=opt.num_threads, pin_memory=opt.pin_memory)
-
- print('train data size: ', len(train_data_loader))
-
- # NOTE: batch size should be 1 and use all the points for evaluation
- test_data_loader = DataLoader(test_dataset,
- batch_size=1, shuffle=False,
- num_workers=opt.num_threads, pin_memory=opt.pin_memory)
- print('test data size: ', len(test_data_loader))
-
- # create net
- netG = HGPIFuNet(opt, projection_mode).to(device=cuda)
-
- lr = opt.learning_rate
-
- # Always use resnet for color regression
- netC = ResBlkPIFuNet(opt).to(device=cuda)
- optimizerC = torch.optim.Adam(netC.parameters(), lr=opt.learning_rate)
-
- def set_train():
- netG.eval()
- netC.train()
-
- def set_eval():
- netG.eval()
- netC.eval()
-
- print('Using NetworkG: ', netG.name, 'networkC: ', netC.name)
-
- # load checkpoints
- if opt.load_netG_checkpoint_path is not None:
- print('loading for net G ...', opt.load_netG_checkpoint_path)
- netG.load_state_dict(torch.load(opt.load_netG_checkpoint_path, map_location=cuda))
- else:
- model_path_G = '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name)
- print('loading for net G ...', model_path_G)
- netG.load_state_dict(torch.load(model_path_G, map_location=cuda))
-
- if opt.load_netC_checkpoint_path is not None:
- print('loading for net C ...', opt.load_netC_checkpoint_path)
- netC.load_state_dict(torch.load(opt.load_netC_checkpoint_path, map_location=cuda))
-
- if opt.continue_train:
- if opt.resume_epoch < 0:
- model_path_C = '%s/%s/netC_latest' % (opt.checkpoints_path, opt.name)
- else:
- model_path_C = '%s/%s/netC_epoch_%d' % (opt.checkpoints_path, opt.name, opt.resume_epoch)
-
- print('Resuming from ', model_path_C)
- netC.load_state_dict(torch.load(model_path_C, map_location=cuda))
-
- os.makedirs(opt.checkpoints_path, exist_ok=True)
- os.makedirs(opt.results_path, exist_ok=True)
- os.makedirs('%s/%s' % (opt.checkpoints_path, opt.name), exist_ok=True)
- os.makedirs('%s/%s' % (opt.results_path, opt.name), exist_ok=True)
-
- opt_log = os.path.join(opt.results_path, opt.name, 'opt.txt')
- with open(opt_log, 'w') as outfile:
- outfile.write(json.dumps(vars(opt), indent=2))
-
- # training
- start_epoch = 0 if not opt.continue_train else max(opt.resume_epoch,0)
- for epoch in range(start_epoch, opt.num_epoch):
- epoch_start_time = time.time()
-
- set_train()
- iter_data_time = time.time()
- for train_idx, train_data in enumerate(train_data_loader):
- iter_start_time = time.time()
- # retrieve the data
- image_tensor = train_data['img'].to(device=cuda)
- calib_tensor = train_data['calib'].to(device=cuda)
- color_sample_tensor = train_data['color_samples'].to(device=cuda)
-
- image_tensor, calib_tensor = reshape_multiview_tensors(image_tensor, calib_tensor)
-
- if opt.num_views > 1:
- color_sample_tensor = reshape_sample_tensor(color_sample_tensor, opt.num_views)
-
- rgb_tensor = train_data['rgbs'].to(device=cuda)
-
- with torch.no_grad():
- netG.filter(image_tensor)
- resC, error = netC.forward(image_tensor, netG.get_im_feat(), color_sample_tensor, calib_tensor, labels=rgb_tensor)
-
- optimizerC.zero_grad()
- error.backward()
- optimizerC.step()
-
- iter_net_time = time.time()
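- # ETA = mean time per finished iteration * total iterations - elapsed time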
- eta = ((iter_net_time - epoch_start_time) / (train_idx + 1)) * len(train_data_loader) - (
- iter_net_time - epoch_start_time)
-
- if train_idx % opt.freq_plot == 0:
- print(
- 'Name: {0} | Epoch: {1} | {2}/{3} | Err: {4:.06f} | LR: {5:.06f} | dataT: {6:.05f} | netT: {7:.05f} | ETA: {8:02d}:{9:02d}'.format(
- opt.name, epoch, train_idx, len(train_data_loader),
- error.item(),
- lr,
- iter_start_time - iter_data_time,
- iter_net_time - iter_start_time, int(eta // 60),
- int(eta - 60 * (eta // 60))))
-
- if train_idx % opt.freq_save == 0 and train_idx != 0:
- torch.save(netC.state_dict(), '%s/%s/netC_latest' % (opt.checkpoints_path, opt.name))
- torch.save(netC.state_dict(), '%s/%s/netC_epoch_%d' % (opt.checkpoints_path, opt.name, epoch))
-
- if train_idx % opt.freq_save_ply == 0:
- save_path = '%s/%s/pred_col.ply' % (opt.results_path, opt.name)
- rgb = resC[0].transpose(0, 1).cpu() * 0.5 + 0.5
- points = color_sample_tensor[0].transpose(0, 1).cpu()
- save_samples_rgb(save_path, points.detach().numpy(), rgb.detach().numpy())
-
- iter_data_time = time.time()
-
- #### test
- with torch.no_grad():
- set_eval()
-
- if not opt.no_num_eval:
- test_losses = {}
- print('calc error (test) ...')
- test_color_error = calc_error_color(opt, netG, netC, cuda, test_dataset, 100)
- print('eval test | color error:', test_color_error)
- test_losses['test_color'] = test_color_error
-
- print('calc error (train) ...')
- train_dataset.is_train = False
- train_color_error = calc_error_color(opt, netG, netC, cuda, train_dataset, 100)
- train_dataset.is_train = True
- print('eval train | color error:', train_color_error)
- test_losses['train_color'] = train_color_error
-
- if not opt.no_gen_mesh:
- print('generate mesh (test) ...')
- for gen_idx in tqdm(range(opt.num_gen_mesh_test)):
- test_data = random.choice(test_dataset)
- save_path = '%s/%s/test_eval_epoch%d_%s.obj' % (
- opt.results_path, opt.name, epoch, test_data['name'])
- gen_mesh_color(opt, netG, netC, cuda, test_data, save_path)
-
- print('generate mesh (train) ...')
- train_dataset.is_train = False
- for gen_idx in tqdm(range(opt.num_gen_mesh_test)):
- train_data = random.choice(train_dataset)
- save_path = '%s/%s/train_eval_epoch%d_%s.obj' % (
- opt.results_path, opt.name, epoch, train_data['name'])
- gen_mesh_color(opt, netG, netC, cuda, train_data, save_path)
- train_dataset.is_train = True
-
-if __name__ == '__main__':
- train_color(opt)
\ No newline at end of file
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/train_shape.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/train_shape.py
deleted file mode 100644
index 241ce543c956ce51f6f8445739ef41f4ddf7a7d5..0000000000000000000000000000000000000000
--- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/train_shape.py
+++ /dev/null
@@ -1,183 +0,0 @@
-import sys
-import os
-
-sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
-ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-import time
-import json
-import numpy as np
-import cv2
-import random
-import torch
-from torch.utils.data import DataLoader
-from tqdm import tqdm
-
-from lib.options import BaseOptions
-from lib.mesh_util import *
-from lib.sample_util import *
-from lib.train_util import *
-from lib.data import *
-from lib.model import *
-from lib.geometry import index
-
-# get options
-opt = BaseOptions().parse()
-
-def train(opt):
- # set cuda
- cuda = torch.device('cuda:%d' % opt.gpu_id)
-
- train_dataset = TrainDataset(opt, phase='train')
- test_dataset = TrainDataset(opt, phase='test')
-
- projection_mode = train_dataset.projection_mode
-
- # create data loader
- train_data_loader = DataLoader(train_dataset,
- batch_size=opt.batch_size, shuffle=not opt.serial_batches,
- num_workers=opt.num_threads, pin_memory=opt.pin_memory)
-
- print('train data size: ', len(train_data_loader))
-
- # NOTE: batch size should be 1 and use all the points for evaluation
- test_data_loader = DataLoader(test_dataset,
- batch_size=1, shuffle=False,
- num_workers=opt.num_threads, pin_memory=opt.pin_memory)
- print('test data size: ', len(test_data_loader))
-
- # create net
- netG = HGPIFuNet(opt, projection_mode).to(device=cuda)
- optimizerG = torch.optim.RMSprop(netG.parameters(), lr=opt.learning_rate, momentum=0, weight_decay=0)
- lr = opt.learning_rate
- print('Using Network: ', netG.name)
-
- def set_train():
- netG.train()
-
- def set_eval():
- netG.eval()
-
- # load checkpoints
- if opt.load_netG_checkpoint_path is not None:
- print('loading for net G ...', opt.load_netG_checkpoint_path)
- netG.load_state_dict(torch.load(opt.load_netG_checkpoint_path, map_location=cuda))
-
- if opt.continue_train:
- if opt.resume_epoch < 0:
- model_path = '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name)
- else:
- model_path = '%s/%s/netG_epoch_%d' % (opt.checkpoints_path, opt.name, opt.resume_epoch)
- print('Resuming from ', model_path)
- netG.load_state_dict(torch.load(model_path, map_location=cuda))
-
- os.makedirs(opt.checkpoints_path, exist_ok=True)
- os.makedirs(opt.results_path, exist_ok=True)
- os.makedirs('%s/%s' % (opt.checkpoints_path, opt.name), exist_ok=True)
- os.makedirs('%s/%s' % (opt.results_path, opt.name), exist_ok=True)
-
- opt_log = os.path.join(opt.results_path, opt.name, 'opt.txt')
- with open(opt_log, 'w') as outfile:
- outfile.write(json.dumps(vars(opt), indent=2))
-
- # training
- start_epoch = 0 if not opt.continue_train else max(opt.resume_epoch,0)
- for epoch in range(start_epoch, opt.num_epoch):
- epoch_start_time = time.time()
-
- set_train()
- iter_data_time = time.time()
- for train_idx, train_data in enumerate(train_data_loader):
- iter_start_time = time.time()
-
- # retrieve the data
- image_tensor = train_data['img'].to(device=cuda)
- calib_tensor = train_data['calib'].to(device=cuda)
- sample_tensor = train_data['samples'].to(device=cuda)
-
- image_tensor, calib_tensor = reshape_multiview_tensors(image_tensor, calib_tensor)
-
- if opt.num_views > 1:
- sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views)
-
- label_tensor = train_data['labels'].to(device=cuda)
-
- res, error = netG.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor)
-
- optimizerG.zero_grad()
- error.backward()
- optimizerG.step()
-
- iter_net_time = time.time()
- eta = ((iter_net_time - epoch_start_time) / (train_idx + 1)) * len(train_data_loader) - (
- iter_net_time - epoch_start_time)
-
- if train_idx % opt.freq_plot == 0:
- print(
- 'Name: {0} | Epoch: {1} | {2}/{3} | Err: {4:.06f} | LR: {5:.06f} | Sigma: {6:.02f} | dataT: {7:.05f} | netT: {8:.05f} | ETA: {9:02d}:{10:02d}'.format(
- opt.name, epoch, train_idx, len(train_data_loader), error.item(), lr, opt.sigma,
- iter_start_time - iter_data_time,
- iter_net_time - iter_start_time, int(eta // 60),
- int(eta - 60 * (eta // 60))))
-
- if train_idx % opt.freq_save == 0 and train_idx != 0:
- torch.save(netG.state_dict(), '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name))
- torch.save(netG.state_dict(), '%s/%s/netG_epoch_%d' % (opt.checkpoints_path, opt.name, epoch))
-
- if train_idx % opt.freq_save_ply == 0:
- save_path = '%s/%s/pred.ply' % (opt.results_path, opt.name)
- r = res[0].cpu()
- points = sample_tensor[0].transpose(0, 1).cpu()
- save_samples_truncted_prob(save_path, points.detach().numpy(), r.detach().numpy())
-
- iter_data_time = time.time()
-
- # update learning rate
- lr = adjust_learning_rate(optimizerG, epoch, lr, opt.schedule, opt.gamma)
-
- #### test
- with torch.no_grad():
- set_eval()
-
- if not opt.no_num_eval:
- test_losses = {}
- print('calc error (test) ...')
- test_errors = calc_error(opt, netG, cuda, test_dataset, 100)
- print('eval test MSE: {0:06f} IOU: {1:06f} prec: {2:06f} recall: {3:06f}'.format(*test_errors))
- MSE, IOU, prec, recall = test_errors
- test_losses['MSE(test)'] = MSE
- test_losses['IOU(test)'] = IOU
- test_losses['prec(test)'] = prec
- test_losses['recall(test)'] = recall
-
- print('calc error (train) ...')
- train_dataset.is_train = False
- train_errors = calc_error(opt, netG, cuda, train_dataset, 100)
- train_dataset.is_train = True
- print('eval train MSE: {0:06f} IOU: {1:06f} prec: {2:06f} recall: {3:06f}'.format(*train_errors))
- MSE, IOU, prec, recall = train_errors
- test_losses['MSE(train)'] = MSE
- test_losses['IOU(train)'] = IOU
- test_losses['prec(train)'] = prec
- test_losses['recall(train)'] = recall
-
- if not opt.no_gen_mesh:
- print('generate mesh (test) ...')
- for gen_idx in tqdm(range(opt.num_gen_mesh_test)):
- test_data = random.choice(test_dataset)
- save_path = '%s/%s/test_eval_epoch%d_%s.obj' % (
- opt.results_path, opt.name, epoch, test_data['name'])
- gen_mesh(opt, netG, cuda, test_data, save_path)
-
- print('generate mesh (train) ...')
- train_dataset.is_train = False
- for gen_idx in tqdm(range(opt.num_gen_mesh_test)):
- train_data = random.choice(train_dataset)
- save_path = '%s/%s/train_eval_epoch%d_%s.obj' % (
- opt.results_path, opt.name, epoch, train_data['name'])
- gen_mesh(opt, netG, cuda, train_data, save_path)
- train_dataset.is_train = True
-
-
-if __name__ == '__main__':
- train(opt)
\ No newline at end of file
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/DepthNormalizer.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/DepthNormalizer.py
deleted file mode 100644
index 84908ec131771b8d42f32535ab856017fe1143a1..0000000000000000000000000000000000000000
--- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/DepthNormalizer.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class DepthNormalizer(nn.Module):
- def __init__(self, opt):
- super(DepthNormalizer, self).__init__()
- self.opt = opt
-
- def forward(self, z, calibs=None, index_feat=None):
- '''
- Normalize the depth value z.
- :param z: [B, 1, N] depth values in the image coordinate system
- :return: [B, 1, N] normalized depth feature
- '''
- z_feat = z * (self.opt.loadSize // 2) / self.opt.z_size
- return z_feat
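-
-
-if __name__ == '__main__':
-    # Minimal usage sketch (assumed option values, not part of the original file):
-    # depth is scaled by (loadSize // 2) / z_size, so with loadSize=512 and
-    # z_size=200 a depth of 1.0 becomes 256 / 200 = 1.28.
-    from argparse import Namespace
-    opt = Namespace(loadSize=512, z_size=200.0)
-    z = torch.ones(2, 1, 4)                     # [B, 1, N]
-    print(DepthNormalizer(opt)(z))              # all entries 1.28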
diff --git a/spaces/Miuzarte/SUI-svc-3.0/resample.py b/spaces/Miuzarte/SUI-svc-3.0/resample.py
deleted file mode 100644
index 11bb0bf74ea7ea2ae1fa321b52419089d4d83aee..0000000000000000000000000000000000000000
--- a/spaces/Miuzarte/SUI-svc-3.0/resample.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
- # speakers 's5', 'p280' and 'p315' are excluded
- speaker = spkdir.split(os.sep)[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True)
- wav, sr = librosa.load(wav_path, sr=None)
- wav, _ = librosa.effects.trim(wav, top_db=20)
- peak = np.abs(wav).max()
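- # if the trimmed audio clips, rescale so its peak amplitude sits at 0.98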
- if peak > 1.0:
- wav = 0.98 * wav / peak
- wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2)
- save_name = wav_name
- save_path2 = os.path.join(args.out_dir2, speaker, save_name)
- wavfile.write(
- save_path2,
- args.sr2,
- (wav2 * np.iinfo(np.int16).max).astype(np.int16)
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr2", type=int, default=48000, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir")
- parser.add_argument("--out_dir2", type=str, default="./dataset/48k", help="path to target dir")
- args = parser.parse_args()
- processes = cpu_count() - 2 if cpu_count() > 4 else 1
- pool = Pool(processes=processes)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/keras_imagenet_benchmark.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/keras_imagenet_benchmark.py
deleted file mode 100644
index 63a48dfb1222b65311652e3bee4241854a55043e..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/keras_imagenet_benchmark.py
+++ /dev/null
@@ -1,1724 +0,0 @@
-# Lint as: python3
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Executes Keras benchmarks and accuracy tests."""
-# pylint: disable=line-too-long
-from __future__ import print_function
-
-import json
-import os
-import time
-
-from typing import Any, MutableMapping, Optional
-
-from absl import flags
-import tensorflow as tf # pylint: disable=g-bad-import-order
-
-from official.benchmark import benchmark_wrappers
-from official.benchmark import keras_benchmark
-from official.benchmark.models import resnet_imagenet_main
-from official.vision.image_classification import classifier_trainer
-
-MIN_TOP_1_ACCURACY = 0.76
-MAX_TOP_1_ACCURACY = 0.77
-
-MOBILENET_V1_MIN_TOP_1_ACCURACY = 0.65
-MOBILENET_V1_MAX_TOP_1_ACCURACY = 0.68
-
- # Range of top-1 accuracies for model optimization techniques.
-# Each item indicates (MIN_TOP_1_ACCURACY, MAX_TOP_1_ACCURACY).
-MODEL_OPTIMIZATION_TOP_1_ACCURACY = {
- 'RESNET50_FINETUNE_PRUNING': (0.76, 0.77),
- 'MOBILENET_V1_FINETUNE_PRUNING': (0.67, 0.68),
-}
-
-FLAGS = flags.FLAGS
-
-
-def _get_classifier_parameters(
- num_gpus: int = 0,
- builder: str = 'records',
- skip_eval: bool = False,
- distribution_strategy: str = 'mirrored',
- per_replica_batch_size: int = 128,
- epochs: int = 90,
- steps: int = 0,
- epochs_between_evals: int = 1,
- dtype: str = 'float32',
- enable_xla: bool = False,
- run_eagerly: bool = False,
- gpu_thread_mode: Optional[str] = None,
- dataset_num_private_threads: Optional[int] = None,
- loss_scale: Optional[str] = None,
- report_metrics: bool = True,
- batchnorm_spatial_persistent: bool = False) -> MutableMapping[str, Any]:
- """Gets classifier trainer's ResNet parameters."""
- return {
- 'runtime': {
- 'num_gpus': num_gpus,
- 'distribution_strategy': distribution_strategy,
- 'run_eagerly': run_eagerly,
- 'enable_xla': enable_xla,
- 'dataset_num_private_threads': dataset_num_private_threads,
- 'gpu_thread_mode': gpu_thread_mode,
- 'loss_scale': loss_scale,
- 'batchnorm_spatial_persistent': batchnorm_spatial_persistent,
- },
- 'train_dataset': {
- 'builder': builder,
- 'use_per_replica_batch_size': True,
- 'batch_size': per_replica_batch_size,
- 'image_size': 224,
- 'dtype': dtype,
- },
- 'validation_dataset': {
- 'builder': builder,
- 'batch_size': per_replica_batch_size,
- 'use_per_replica_batch_size': True,
- 'image_size': 224,
- 'dtype': dtype,
- },
- 'train': {
- 'epochs': epochs,
- 'steps': steps,
- 'callbacks': {
- 'enable_tensorboard': False,
- 'enable_checkpoint_and_export': False,
- 'enable_time_history': True,
- },
- 'metrics': ['accuracy'] if report_metrics else [],
- },
- 'model': {
- 'loss': {
- 'label_smoothing': 0.1,
- },
- },
- 'evaluation': {
- 'epochs_between_evals': epochs_between_evals,
- 'skip_eval': skip_eval,
- },
- }
-
-
-class Resnet50KerasAccuracy(keras_benchmark.KerasBenchmark):
- """Benchmark accuracy tests for ResNet50 in Keras."""
-
- def __init__(self,
- output_dir: Optional[str] = None,
- root_data_dir: Optional[str] = None,
- **kwargs):
- """A benchmark class.
-
- Args:
- output_dir: directory where to output e.g. log files
- root_data_dir: directory under which to look for dataset
- **kwargs: arbitrary named arguments. This is needed to make the
- constructor forward compatible in case PerfZero provides more
- named arguments before updating the constructor.
- """
-
- flag_methods = [classifier_trainer.define_classifier_flags]
-
- self.data_dir = os.path.join(root_data_dir, 'imagenet')
- super(Resnet50KerasAccuracy, self).__init__(
- output_dir=output_dir, flag_methods=flag_methods)
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(
- self,
- experiment_name: str,
- top_1_min: float = MIN_TOP_1_ACCURACY,
- top_1_max: float = MAX_TOP_1_ACCURACY,
- num_gpus: int = 0,
- distribution_strategy: str = 'mirrored',
- per_replica_batch_size: int = 128,
- epochs: int = 90,
- steps: int = 0,
- epochs_between_evals: int = 1,
- dtype: str = 'float32',
- enable_xla: bool = False,
- run_eagerly: bool = False,
- gpu_thread_mode: Optional[str] = None,
- dataset_num_private_threads: Optional[int] = None,
- loss_scale: Optional[str] = None):
- """Runs and reports the benchmark given the provided configuration."""
- FLAGS.model_type = 'resnet'
- FLAGS.dataset = 'imagenet'
- FLAGS.mode = 'train_and_eval'
- FLAGS.data_dir = self.data_dir
- FLAGS.model_dir = self._get_model_dir(experiment_name)
- parameters = _get_classifier_parameters(
- num_gpus=num_gpus,
- distribution_strategy=distribution_strategy,
- per_replica_batch_size=per_replica_batch_size,
- epochs=epochs,
- steps=steps,
- epochs_between_evals=epochs_between_evals,
- dtype=dtype,
- enable_xla=enable_xla,
- run_eagerly=run_eagerly,
- gpu_thread_mode=gpu_thread_mode,
- dataset_num_private_threads=dataset_num_private_threads,
- report_metrics=True,
- loss_scale=loss_scale,
- batchnorm_spatial_persistent=True)
- FLAGS.params_override = json.dumps(parameters)
- total_batch_size = num_gpus * per_replica_batch_size
-
- start_time_sec = time.time()
- stats = classifier_trainer.run(flags.FLAGS)
- wall_time_sec = time.time() - start_time_sec
-
- super(Resnet50KerasAccuracy, self)._report_benchmark(
- stats,
- wall_time_sec,
- top_1_min=top_1_min,
- top_1_max=top_1_max,
- total_batch_size=total_batch_size,
- log_steps=100)
-
- def benchmark_8_gpu(self):
- """Tests Keras model with eager, dist_strat and 8 GPUs."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu',
- num_gpus=8,
- per_replica_batch_size=128,
- epochs=90,
- epochs_between_evals=10,
- dtype='float32')
-
- def benchmark_8_gpu_fp16(self):
- """Tests Keras model with eager, dist_strat, 8 GPUs, and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu_fp16',
- num_gpus=8,
- per_replica_batch_size=256,
- epochs=90,
- epochs_between_evals=10,
- dtype='float16')
-
- def benchmark_xla_8_gpu_fp16(self):
- """Tests Keras model with XLA, eager, dist_strat, 8 GPUs and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_fp16',
- num_gpus=8,
- per_replica_batch_size=256,
- epochs=90,
- epochs_between_evals=10,
- dtype='float16',
- enable_xla=True)
-
- def benchmark_xla_8_gpu_fp16_dynamic(self):
- """Tests Keras model with XLA, eager, dist_strat, 8 GPUs, dynamic fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_fp16_dynamic',
- top_1_min=0.736,
- num_gpus=8,
- per_replica_batch_size=256,
- epochs=90,
- epochs_between_evals=10,
- dtype='float16',
- loss_scale='dynamic')
-
- def _get_model_dir(self, folder_name):
- return os.path.join(self.output_dir, folder_name)
-
-
-class MobilenetV1KerasAccuracy(keras_benchmark.KerasBenchmark):
- """Benchmark accuracy tests for MobilenetV1 in Keras."""
-
- def __init__(self, output_dir=None, root_data_dir=None, **kwargs):
- """Initializes the benchmark.
-
- Args:
- output_dir: Directory where output such as log files is written.
- root_data_dir: Directory under which to look for the dataset.
- **kwargs: Arbitrary named arguments. This is needed to keep the
- constructor forward compatible in case PerfZero provides more
- named arguments before the constructor is updated.
- """
-
- flag_methods = [resnet_imagenet_main.define_imagenet_keras_flags]
-
- self.data_dir = os.path.join(root_data_dir, 'imagenet')
- super(MobilenetV1KerasAccuracy, self).__init__(
- output_dir=output_dir,
- flag_methods=flag_methods,
- default_flags={
- 'model': 'mobilenet',
- 'optimizer': 'mobilenet_default',
- 'initial_learning_rate_per_sample': 0.00039,
- })
-
- def benchmark_8_gpu(self):
- """Test Keras model with eager, dist_strat and 8 GPUs."""
- self._setup()
- FLAGS.num_gpus = 8
- FLAGS.data_dir = self.data_dir
- FLAGS.batch_size = 128 * 8
- FLAGS.train_epochs = 90
- FLAGS.epochs_between_evals = 10
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu')
- FLAGS.dtype = 'fp32'
- FLAGS.enable_eager = True
- self._run_and_report_benchmark()
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(self,
- top_1_min=MOBILENET_V1_MIN_TOP_1_ACCURACY,
- top_1_max=MOBILENET_V1_MAX_TOP_1_ACCURACY):
- start_time_sec = time.time()
- stats = resnet_imagenet_main.run(flags.FLAGS)
- wall_time_sec = time.time() - start_time_sec
-
- super(MobilenetV1KerasAccuracy, self)._report_benchmark(
- stats,
- wall_time_sec,
- top_1_min=top_1_min,
- top_1_max=top_1_max,
- total_batch_size=FLAGS.batch_size,
- log_steps=100)
-
- def _get_model_dir(self, folder_name):
- return os.path.join(self.output_dir, folder_name)
-
-
-class Resnet50KerasClassifierBenchmarkBase(keras_benchmark.KerasBenchmark):
- """Resnet50 (classifier_trainer) benchmarks."""
-
- def __init__(self, output_dir=None, default_flags=None,
- tpu=None, dataset_builder='records', train_epochs=1,
- train_steps=110, data_dir=None):
- flag_methods = [classifier_trainer.define_classifier_flags]
-
- self.dataset_builder = dataset_builder
- self.train_epochs = train_epochs
- self.train_steps = train_steps
- self.data_dir = data_dir
-
- super(Resnet50KerasClassifierBenchmarkBase, self).__init__(
- output_dir=output_dir,
- flag_methods=flag_methods,
- default_flags=default_flags,
- tpu=tpu)
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(
- self,
- experiment_name: str,
- skip_steps: Optional[int] = None,
- top_1_min: float = MIN_TOP_1_ACCURACY,
- top_1_max: float = MAX_TOP_1_ACCURACY,
- num_gpus: int = 0,
- num_tpus: int = 0,
- distribution_strategy: str = 'mirrored',
- per_replica_batch_size: int = 128,
- epochs_between_evals: int = 1,
- dtype: str = 'float32',
- enable_xla: bool = False,
- run_eagerly: bool = False,
- gpu_thread_mode: Optional[str] = None,
- dataset_num_private_threads: Optional[int] = None,
- loss_scale: Optional[str] = None):
- """Runs and reports the benchmark given the provided configuration."""
- FLAGS.model_type = 'resnet'
- FLAGS.dataset = 'imagenet'
- FLAGS.mode = 'train_and_eval'
- FLAGS.data_dir = self.data_dir
- FLAGS.model_dir = self._get_model_dir(experiment_name)
- parameters = _get_classifier_parameters(
- builder=self.dataset_builder,
- skip_eval=True,
- num_gpus=num_gpus,
- distribution_strategy=distribution_strategy,
- per_replica_batch_size=per_replica_batch_size,
- epochs=self.train_epochs,
- steps=self.train_steps,
- epochs_between_evals=epochs_between_evals,
- dtype=dtype,
- enable_xla=enable_xla,
- gpu_thread_mode=gpu_thread_mode,
- dataset_num_private_threads=dataset_num_private_threads,
- loss_scale=loss_scale,
- report_metrics=False,
- batchnorm_spatial_persistent=True)
- FLAGS.params_override = json.dumps(parameters)
- if distribution_strategy == 'tpu':
- total_batch_size = num_tpus * per_replica_batch_size
- else:
- total_batch_size = num_gpus * per_replica_batch_size
-
- start_time_sec = time.time()
- stats = classifier_trainer.run(flags.FLAGS)
- wall_time_sec = time.time() - start_time_sec
- # Number of logged step-time entries to exclude from the performance
- # report. By default we keep only the results from the last 100 batches;
- # if skip_steps is given, that many leading steps are skipped instead.
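- # For example, with the train_steps=110 and log_steps=10 defaults used by
- # the synthetic and real-data subclasses, warmup = (110 - 100) // 10 = 1,
- # so the first logged entry is dropped.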
- warmup = (skip_steps or (self.train_steps - 100)) // FLAGS.log_steps
-
- super(Resnet50KerasClassifierBenchmarkBase, self)._report_benchmark(
- stats,
- wall_time_sec,
- total_batch_size=total_batch_size,
- log_steps=FLAGS.log_steps,
- warmup=warmup,
- start_time_sec=start_time_sec)
-
- def benchmark_1_gpu_no_dist_strat(self):
- """Tests Keras model with 1 GPU, no distribution strategy."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_1_gpu_no_dist_strat',
- num_gpus=1,
- distribution_strategy='off',
- per_replica_batch_size=128)
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly(self):
- """Tests Keras model with 1 GPU, no distribution strategy, run eagerly."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_1_gpu_no_dist_strat_run_eagerly',
- num_gpus=1,
- run_eagerly=True,
- distribution_strategy='off',
- per_replica_batch_size=64)
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_fp16(self):
- """Tests with 1 GPU, no distribution strategy, fp16, run eagerly."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_1_gpu_no_dist_strat_run_eagerly_fp16',
- num_gpus=1,
- run_eagerly=True,
- distribution_strategy='off',
- dtype='float16',
- per_replica_batch_size=128)
-
- def benchmark_1_gpu(self):
- """Tests Keras model with 1 GPU."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_1_gpu',
- num_gpus=1,
- distribution_strategy='one_device',
- per_replica_batch_size=128)
-
- def benchmark_xla_1_gpu(self):
- """Tests Keras model with XLA and 1 GPU."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_1_gpu',
- num_gpus=1,
- enable_xla=True,
- distribution_strategy='one_device',
- per_replica_batch_size=128)
-
- def benchmark_1_gpu_fp16(self):
- """Tests Keras model with 1 GPU and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_1_gpu_fp16',
- num_gpus=1,
- distribution_strategy='one_device',
- dtype='float16',
- per_replica_batch_size=256)
-
- def benchmark_1_gpu_fp16_dynamic(self):
- """Tests Keras model with 1 GPU, fp16, and dynamic loss scaling."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_1_gpu_fp16_dynamic',
- num_gpus=1,
- distribution_strategy='one_device',
- dtype='float16',
- per_replica_batch_size=256,
- loss_scale='dynamic')
-
- def benchmark_xla_1_gpu_fp16(self):
- """Tests Keras model with XLA, 1 GPU and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_1_gpu_fp16',
- num_gpus=1,
- enable_xla=True,
- distribution_strategy='one_device',
- dtype='float16',
- per_replica_batch_size=256)
-
- def benchmark_xla_1_gpu_fp16_tweaked(self):
- """Tests Keras model with XLA, 1 GPU, fp16, and manual config tuning."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_1_gpu_fp16_tweaked',
- num_gpus=1,
- enable_xla=True,
- distribution_strategy='one_device',
- dtype='float16',
- per_replica_batch_size=256,
- gpu_thread_mode='gpu_private')
-
- def benchmark_xla_1_gpu_fp16_dynamic(self):
- """Tests Keras model with XLA, 1 GPU, fp16, and dynamic loss scaling."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_1_gpu_fp16_dynamic',
- num_gpus=1,
- enable_xla=True,
- distribution_strategy='one_device',
- dtype='float16',
- per_replica_batch_size=256,
- loss_scale='dynamic')
-
- def benchmark_8_gpu(self):
- """Tests Keras model with 8 GPUs."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu',
- num_gpus=8,
- distribution_strategy='mirrored',
- per_replica_batch_size=128)
-
- def benchmark_8_gpu_tweaked(self):
- """Tests Keras model with manual config tuning and 8 GPUs."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu_tweaked',
- num_gpus=8,
- distribution_strategy='mirrored',
- per_replica_batch_size=128,
- dataset_num_private_threads=14)
-
- def benchmark_xla_8_gpu(self):
- """Tests Keras model with XLA and 8 GPUs."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu',
- num_gpus=8,
- enable_xla=True,
- distribution_strategy='mirrored',
- per_replica_batch_size=128)
-
- def benchmark_xla_8_gpu_tweaked(self):
- """Tests Keras model with manual config tuning, 8 GPUs, and XLA."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_tweaked',
- num_gpus=8,
- enable_xla=True,
- distribution_strategy='mirrored',
- per_replica_batch_size=128,
- gpu_thread_mode='gpu_private',
- dataset_num_private_threads=24)
-
- def benchmark_8_gpu_fp16(self):
- """Tests Keras model with 8 GPUs and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu_fp16',
- num_gpus=8,
- dtype='float16',
- distribution_strategy='mirrored',
- per_replica_batch_size=256)
-
- def benchmark_8_gpu_fp16_tweaked(self):
- """Tests Keras model with 8 GPUs, fp16, and manual config tuning."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu_fp16_tweaked',
- num_gpus=8,
- dtype='float16',
- distribution_strategy='mirrored',
- per_replica_batch_size=256,
- gpu_thread_mode='gpu_private',
- dataset_num_private_threads=40)
-
- def benchmark_8_gpu_fp16_dynamic_tweaked(self):
- """Tests Keras model with 8 GPUs, fp16, dynamic loss scaling, and tuned."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8_gpu_fp16_dynamic_tweaked',
- num_gpus=8,
- dtype='float16',
- distribution_strategy='mirrored',
- per_replica_batch_size=256,
- loss_scale='dynamic',
- gpu_thread_mode='gpu_private',
- dataset_num_private_threads=40)
-
- def benchmark_xla_8_gpu_fp16(self):
- """Tests Keras model with XLA, 8 GPUs and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_fp16',
- dtype='float16',
- num_gpus=8,
- enable_xla=True,
- distribution_strategy='mirrored',
- per_replica_batch_size=256)
-
- def benchmark_xla_8_gpu_fp16_tweaked(self):
- """Test Keras model with manual config tuning, XLA, 8 GPUs and fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_fp16_tweaked',
- dtype='float16',
- num_gpus=8,
- enable_xla=True,
- distribution_strategy='mirrored',
- per_replica_batch_size=256,
- gpu_thread_mode='gpu_private',
- dataset_num_private_threads=48)
-
- def benchmark_xla_8_gpu_fp16_tweaked_delay_measure(self):
- """Tests with manual config tuning, XLA, 8 GPUs and fp16.
-
- Delay performance measurement for stable performance on 96 vCPU platforms.
- """
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_fp16_tweaked_delay_measure',
- dtype='float16',
- num_gpus=8,
- enable_xla=True,
- distribution_strategy='mirrored',
- per_replica_batch_size=256,
- gpu_thread_mode='gpu_private',
- dataset_num_private_threads=48,
- steps=310)
-
- def benchmark_xla_8_gpu_fp16_dynamic_tweaked(self):
- """Tests Keras model with config tuning, XLA, 8 GPUs and dynamic fp16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_xla_8_gpu_fp16_dynamic_tweaked',
- dtype='float16',
- num_gpus=8,
- enable_xla=True,
- distribution_strategy='mirrored',
- per_replica_batch_size=256,
- gpu_thread_mode='gpu_private',
- loss_scale='dynamic',
- dataset_num_private_threads=48)
-
- def benchmark_2x2_tpu_bf16(self):
- """Test Keras model with 2x2 TPU, bf16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_2x2_tpu_bf16',
- dtype='bfloat16',
- num_tpus=8,
- distribution_strategy='tpu',
- per_replica_batch_size=128)
-
- def benchmark_4x4_tpu_bf16(self):
- """Test Keras model with 4x4 TPU, bf16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_4x4_tpu_bf16',
- dtype='bfloat16',
- num_tpus=32,
- distribution_strategy='tpu',
- per_replica_batch_size=128)
-
- def benchmark_8x8_tpu_bf16(self):
- """Test Keras model with 8x8 TPU, bf16."""
- self._setup()
- self._run_and_report_benchmark(
- experiment_name='benchmark_8x8_tpu_bf16',
- dtype='bfloat16',
- num_tpus=128,
- distribution_strategy='tpu',
- per_replica_batch_size=64)
-
- def fill_report_object(self, stats):
- super(Resnet50KerasClassifierBenchmarkBase, self).fill_report_object(
- stats,
- total_batch_size=FLAGS.batch_size,
- log_steps=FLAGS.log_steps)
-
-
-class Resnet50KerasBenchmarkBase(keras_benchmark.KerasBenchmark):
- """Resnet50 benchmarks."""
-
- def __init__(self, output_dir=None, default_flags=None, tpu=None):
- flag_methods = [resnet_imagenet_main.define_imagenet_keras_flags]
-
- super(Resnet50KerasBenchmarkBase, self).__init__(
- output_dir=output_dir,
- flag_methods=flag_methods,
- default_flags=default_flags,
- tpu=tpu)
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(self, skip_steps=None):
- start_time_sec = time.time()
- stats = resnet_imagenet_main.run(FLAGS)
- wall_time_sec = time.time() - start_time_sec
- # Number of logged step-time entries to exclude from the performance
- # report. By default we keep only the results from the last 100 batches;
- # if skip_steps is given, that many leading steps are skipped instead.
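- # For example, the remote-data benchmarks below pass skip_steps=600 with
- # log_steps=100, so the first 600 // 100 = 6 logged entries are dropped.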
- warmup = (skip_steps or (FLAGS.train_steps - 100)) // FLAGS.log_steps
-
- super(Resnet50KerasBenchmarkBase, self)._report_benchmark(
- stats,
- wall_time_sec,
- total_batch_size=FLAGS.batch_size,
- log_steps=FLAGS.log_steps,
- warmup=warmup,
- start_time_sec=start_time_sec)
-
- def benchmark_1_gpu_no_dist_strat(self):
- """Test Keras model with 1 GPU, no distribution strategy."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_no_dist_strat')
- FLAGS.batch_size = 128
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly(self):
- """Test Keras model with 1 GPU, no distribution strategy, run eagerly."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly')
- FLAGS.batch_size = 64
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_tweaked(self):
- """Test with 1 GPU, no distribution strategy, eager, explicit GPU placement."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.explicit_gpu_placement = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly_tweaked')
- FLAGS.batch_size = 64
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_fp16(self):
- """Test with 1 GPU, no distribution strategy, fp16, run eagerly."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly_fp16')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 128
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_fp16_tweaked(self):
- """Test with 1 GPU, no dist strategy, fp16, eager, explicit GPU placement."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.explicit_gpu_placement = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly_fp16_tweaked')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 128
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu(self):
- """Test Keras model with 1 GPU."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu')
- FLAGS.batch_size = 128
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_amp(self):
- """Test Keras model with 1 GPU with automatic mixed precision."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.dtype = 'fp16'
- FLAGS.fp16_implementation = 'graph_rewrite'
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_amp')
- FLAGS.batch_size = 256
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu(self):
- """Test Keras model with XLA and 1 GPU."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu')
- FLAGS.batch_size = 128
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_amp(self):
- """Test Keras model with XLA and 1 GPU with automatic mixed precision."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.dtype = 'fp16'
- FLAGS.fp16_implementation = 'graph_rewrite'
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_amp')
- FLAGS.batch_size = 256
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_fp16(self):
- """Test Keras model with 1 GPU and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_fp16')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_fp16_dynamic(self):
- """Test Keras model with 1 GPU, fp16, and dynamic loss scaling."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_fp16_dynamic')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- FLAGS.loss_scale = 'dynamic'
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_fp16(self):
- """Test Keras model with XLA, 1 GPU and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_fp16')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_fp16_tweaked(self):
- """Test Keras model with XLA, 1 GPU, fp16, and manual config tuning."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_fp16_tweaked')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_fp16_dynamic(self):
- """Test Keras model with XLA, 1 GPU, fp16, and dynamic loss scaling."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_fp16_dynamic')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- FLAGS.loss_scale = 'dynamic'
- self._run_and_report_benchmark()
-
- def benchmark_8_gpu(self):
- """Test Keras model with 8 GPUs."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu')
- FLAGS.batch_size = 128 * 8 # 8 GPUs
- self._run_and_report_benchmark()
-
- def benchmark_8_gpu_amp(self):
- """Test Keras model with 8 GPUs with automatic mixed precision."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.dtype = 'fp16'
- FLAGS.fp16_implementation = 'graph_rewrite'
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_amp')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- self._run_and_report_benchmark()
-
- def benchmark_8_gpu_tweaked(self):
- """Test Keras model with manual config tuning and 8 GPUs."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_tweaked')
- FLAGS.batch_size = 128 * 8 # 8 GPUs
- FLAGS.datasets_num_private_threads = 14
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu(self):
- """Test Keras model with XLA and 8 GPUs."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu')
- FLAGS.batch_size = 128 * 8 # 8 GPUs
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu_amp(self):
- """Test Keras model with XLA and 8 GPUs with automatic mixed precision."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.dtype = 'fp16'
- FLAGS.fp16_implementation = 'graph_rewrite'
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu_amp')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu_tweaked(self):
- """Test Keras model with manual config tuning, 8 GPUs, and XLA."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu_tweaked')
- FLAGS.batch_size = 128 * 8
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 24
- self._run_and_report_benchmark()
-
- def benchmark_8_gpu_fp16(self):
- """Test Keras model with 8 GPUs and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_fp16')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- self._run_and_report_benchmark()
-
- def benchmark_8_gpu_fp16_tweaked(self):
- """Test Keras model with 8 GPUs, fp16, and manual config tuning."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_fp16_tweaked')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 40
- self._run_and_report_benchmark()
-
- def benchmark_8_gpu_fp16_dynamic_tweaked(self):
- """Test Keras model with 8 GPUs, fp16, dynamic loss scaling, and tuned."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_8_gpu_fp16_dynamic_tweaked')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- FLAGS.loss_scale = 'dynamic'
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 40
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu_fp16(self):
- """Test Keras model with XLA, 8 GPUs and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu_fp16')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu_fp16_tweaked(self):
- """Test Keras model with manual config tuning, XLA, 8 GPUs and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_8_gpu_fp16_tweaked')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 48
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu_fp16_tweaked_delay_measure(self):
- """Test with manual config tuning, XLA, 8 GPUs and fp16.
-
- Delay performance measurement for stable performance on 96 vCPU platforms.
- """
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_xla_8_gpu_fp16_tweaked_delay_measure')
- FLAGS.batch_size = 256 * 8
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 48
- FLAGS.train_steps = 310
- self._run_and_report_benchmark()
-
- def benchmark_xla_8_gpu_fp16_dynamic_tweaked(self):
- """Test Keras model with config tuning, XLA, 8 GPUs and dynamic fp16."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'mirrored'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_xla_8_gpu_fp16_dynamic_tweaked')
- FLAGS.batch_size = 256 * 8 # 8 GPUs
- FLAGS.loss_scale = 'dynamic'
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 48
- self._run_and_report_benchmark()
-
- def benchmark_2x2_tpu_bf16(self):
- """Test Keras model with 2x2 TPU, bf16."""
- self._setup()
-
- FLAGS.dtype = 'bf16'
- FLAGS.distribution_strategy = 'tpu'
- FLAGS.model_dir = self._get_model_dir('benchmark_2x2_tpu_bf16')
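- # A 2x2 TPU slice has 8 cores; 8 cores x 128 images per core gives the
- # 1024 global batch below (matching the classifier-trainer variant above).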
- FLAGS.batch_size = 1024
- self._run_and_report_benchmark()
-
- def benchmark_4x4_tpu_bf16(self):
- """Test Keras model with 4x4 TPU, bf16."""
- self._setup()
-
- FLAGS.dtype = 'bf16'
- FLAGS.distribution_strategy = 'tpu'
- FLAGS.model_dir = self._get_model_dir('benchmark_4x4_tpu_bf16')
- FLAGS.batch_size = 4096
- self._run_and_report_benchmark()
-
- def benchmark_8x8_tpu_bf16(self):
- """Test Keras model with 8x8 TPU, bf16."""
- self._setup()
-
- FLAGS.dtype = 'bf16'
- FLAGS.distribution_strategy = 'tpu'
- FLAGS.model_dir = self._get_model_dir('benchmark_8x8_tpu_bf16')
- FLAGS.batch_size = 8192
- self._run_and_report_benchmark()
-
- def fill_report_object(self, stats):
- super(Resnet50KerasBenchmarkBase, self).fill_report_object(
- stats,
- total_batch_size=FLAGS.batch_size,
- log_steps=FLAGS.log_steps)
-
-
-class Resnet50KerasBenchmarkSynth(Resnet50KerasClassifierBenchmarkBase):
- """Resnet50 synthetic benchmark tests."""
-
- def __init__(self, output_dir=None, root_data_dir=None, tpu=None, **kwargs):
- def_flags = {}
- def_flags['log_steps'] = 10
-
- super(Resnet50KerasBenchmarkSynth, self).__init__(
- output_dir=output_dir, default_flags=def_flags, tpu=tpu,
- dataset_builder='synthetic', train_epochs=1, train_steps=110)
-
-
-class Resnet50KerasBenchmarkReal(Resnet50KerasClassifierBenchmarkBase):
- """Resnet50 real data benchmark tests."""
-
- def __init__(self, output_dir=None, root_data_dir=None, tpu=None, **kwargs):
- data_dir = os.path.join(root_data_dir, 'imagenet')
- def_flags = {}
- def_flags['log_steps'] = 10
-
- super(Resnet50KerasBenchmarkReal, self).__init__(
- output_dir=output_dir, default_flags=def_flags, tpu=tpu,
- dataset_builder='records', train_epochs=1, train_steps=110,
- data_dir=data_dir)
-
-
-class Resnet50KerasBenchmarkRemoteData(Resnet50KerasBenchmarkBase):
- """Resnet50 real data (stored in remote storage) benchmark tests."""
-
- def __init__(self, output_dir=None, root_data_dir=None, **kwargs):
- def_flags = {}
- def_flags['skip_eval'] = True
- def_flags['report_accuracy_metrics'] = False
- def_flags['data_dir'] = os.path.join(root_data_dir, 'imagenet')
- # Defining multiple epochs overrides the train_steps setting in benchmarks.
- def_flags['train_epochs'] = 2
- # Cache dataset so performance is stable after the first epoch.
- def_flags['training_dataset_cache'] = True
- def_flags['log_steps'] = 100
- # Note: single-GPU and pure eager tests, which are less likely to be input
- # bound and are more stable, run for a shorter time by overriding
- # FLAGS.train_epochs, train_steps, and log_steps in the benchmark methods,
- # and skip_steps in _run_and_report_benchmark().
-
- super(Resnet50KerasBenchmarkRemoteData, self).__init__(
- output_dir=output_dir, default_flags=def_flags)
-
- def _override_flags_to_run_test_shorter(self):
- FLAGS.train_epochs = 1
- FLAGS.train_steps = 300
- FLAGS.log_steps = 10
-
- def benchmark_1_gpu_no_dist_strat(self):
- """Test Keras model with 1 GPU, no distribution strategy."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_no_dist_strat')
- FLAGS.batch_size = 128
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly(self):
- """Test Keras model with 1 GPU, no distribution strategy, run eagerly."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly')
- FLAGS.batch_size = 64
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_tweaked(self):
- """Test with 1 GPU, no distribution strategy, eager, explicit GPU placement."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.explicit_gpu_placement = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly_tweaked')
- FLAGS.batch_size = 64
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_fp16(self):
- """Test with 1 GPU, no distribution strategy, fp16, run eagerly."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly_fp16')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 128
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_no_dist_strat_run_eagerly_fp16_tweaked(self):
- """Test with 1 GPU, no dist strategy, fp16, eager, explicit GPU placement."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.run_eagerly = True
- FLAGS.explicit_gpu_placement = True
- FLAGS.distribution_strategy = 'off'
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_1_gpu_no_dist_strat_run_eagerly_fp16_tweaked')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 128
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu(self):
- """Test Keras model with 1 GPU."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu')
- FLAGS.batch_size = 128
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_amp(self):
- """Test Keras model with 1 GPU with automatic mixed precision."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.dtype = 'fp16'
- FLAGS.fp16_implementation = 'graph_rewrite'
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_amp')
- FLAGS.batch_size = 256
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu(self):
- """Test Keras model with XLA and 1 GPU."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu')
- FLAGS.batch_size = 128
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_amp(self):
- """Test Keras model with XLA and 1 GPU with automatic mixed precision."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.dtype = 'fp16'
- FLAGS.fp16_implementation = 'graph_rewrite'
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_amp')
- FLAGS.batch_size = 256
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_fp16(self):
- """Test Keras model with 1 GPU and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_fp16')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_1_gpu_fp16_dynamic(self):
- """Test Keras model with 1 GPU, fp16, and dynamic loss scaling."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_1_gpu_fp16_dynamic')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- FLAGS.loss_scale = 'dynamic'
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_fp16(self):
- """Test Keras model with XLA, 1 GPU and fp16."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_fp16')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_fp16_tweaked(self):
- """Test Keras model with XLA, 1 GPU, fp16, and manual config tuning."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_fp16_tweaked')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- def benchmark_xla_1_gpu_fp16_dynamic(self):
- """Test Keras model with XLA, 1 GPU, fp16, and dynamic loss scaling."""
- self._setup()
-
- FLAGS.num_gpus = 1
- FLAGS.enable_eager = True
- FLAGS.enable_xla = True
- FLAGS.distribution_strategy = 'one_device'
- FLAGS.model_dir = self._get_model_dir('benchmark_xla_1_gpu_fp16_dynamic')
- FLAGS.dtype = 'fp16'
- FLAGS.batch_size = 256
- FLAGS.loss_scale = 'dynamic'
- self._override_flags_to_run_test_shorter()
- self._run_and_report_benchmark()
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(self):
- if FLAGS.num_gpus == 1 or FLAGS.run_eagerly:
- # Single-GPU and pure eager tests, which are less likely to be input
- # bound and are more stable, run for a shorter time and use the default
- # skip_steps.
- skip_steps = None
- else:
- # Skip the first epoch when measuring performance.
- skip_steps = 600
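- # (e.g. at 256 images per replica on 8 GPUs, the global batch is 2048 and
- # an ImageNet epoch is roughly 1.28M / 2048 ~= 625 steps, so 600 skipped
- # steps is about one epoch.)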
- super(Resnet50KerasBenchmarkRemoteData,
- self)._run_and_report_benchmark(skip_steps=skip_steps)
-
-
-class TrivialKerasBenchmarkReal(keras_benchmark.KerasBenchmark):
- """Trivial model with real data benchmark tests."""
-
- def __init__(self, output_dir=None, root_data_dir=None, **kwargs):
- flag_methods = [resnet_imagenet_main.define_imagenet_keras_flags]
-
- def_flags = {}
- def_flags['use_trivial_model'] = True
- def_flags['skip_eval'] = True
- def_flags['report_accuracy_metrics'] = False
- def_flags['dtype'] = 'fp16'
- def_flags['data_dir'] = os.path.join(root_data_dir, 'imagenet')
- def_flags['train_steps'] = 600
- def_flags['log_steps'] = 100
- def_flags['distribution_strategy'] = 'mirrored'
-
- super(TrivialKerasBenchmarkReal, self).__init__(
- output_dir=output_dir,
- flag_methods=flag_methods,
- default_flags=def_flags)
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(self):
- start_time_sec = time.time()
- stats = resnet_imagenet_main.run(FLAGS)
- wall_time_sec = time.time() - start_time_sec
-
- super(TrivialKerasBenchmarkReal, self)._report_benchmark(
- stats,
- wall_time_sec,
- total_batch_size=FLAGS.batch_size,
- log_steps=FLAGS.log_steps)
-
- def benchmark_8_gpu_warmup(self):
- """Dummy test that runs over an epoch to warm up the machine."""
- self._setup()
-
- FLAGS.num_gpus = 8
- FLAGS.enable_eager = True
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu_warmup')
- FLAGS.batch_size = 256 * 8
- FLAGS.train_steps = 700
- self._run_and_report_benchmark()
-
- def fill_report_object(self, stats):
- super(TrivialKerasBenchmarkReal, self).fill_report_object(
- stats,
- total_batch_size=FLAGS.batch_size,
- log_steps=FLAGS.log_steps)
-
-
-class Resnet50MultiWorkerKerasAccuracy(keras_benchmark.KerasBenchmark):
- """Resnet50 distributed accuracy tests with multiple workers."""
-
- def __init__(self, output_dir=None, root_data_dir=None, **kwargs):
- flag_methods = [classifier_trainer.define_imagenet_keras_flags]
- self.data_dir = os.path.join(root_data_dir, 'imagenet')
- super(Resnet50MultiWorkerKerasAccuracy, self).__init__(
- output_dir=output_dir, flag_methods=flag_methods)
-
- def _benchmark_common(self, eager, num_workers, all_reduce_alg):
- """Common to all benchmarks in this class."""
- self._setup()
-
- num_gpus = 8
- FLAGS.num_gpus = num_gpus
- FLAGS.data_dir = self.data_dir
- FLAGS.train_epochs = 90
- FLAGS.epochs_between_evals = 10
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = eager
- FLAGS.enable_xla = False
- FLAGS.distribution_strategy = 'multi_worker_mirrored'
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 32
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_{}_8_gpu_{}_worker_fp16_{}_tweaked'.format(
- 'eager' if eager else 'graph', num_workers, all_reduce_alg))
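- # Global batch size scales with the total number of replicas, e.g.
- # 2 workers x 8 GPUs x 256 images per replica = 4096.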
- FLAGS.batch_size = 256 * num_gpus * num_workers
- FLAGS.all_reduce_alg = all_reduce_alg
-
- self._run_and_report_benchmark()
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(self,
- top_1_min=MIN_TOP_1_ACCURACY,
- top_1_max=MAX_TOP_1_ACCURACY):
- start_time_sec = time.time()
- stats = classifier_trainer.run(flags.FLAGS)
- wall_time_sec = time.time() - start_time_sec
-
- super(Resnet50MultiWorkerKerasAccuracy, self)._report_benchmark(
- stats,
- wall_time_sec,
- top_1_min=top_1_min,
- top_1_max=top_1_max,
- total_batch_size=FLAGS.batch_size,
- log_steps=100)
-
- def _get_model_dir(self, folder_name):
- return os.path.join(self.output_dir, folder_name)
-
- def benchmark_eager_8_gpu_2_workers_fp16_ring_tweaked(self):
- """Eager, 8 GPUs per worker, 2 workers, fp16, ring all-reduce."""
- self._benchmark_common(eager=True, num_workers=2, all_reduce_alg='ring')
-
- def benchmark_eager_8_gpu_2_workers_fp16_nccl_tweaked(self):
- """Eager, 8 GPUs per worker, 2 workers, fp16, nccl all-reduce."""
- self._benchmark_common(eager=True, num_workers=2, all_reduce_alg='nccl')
-
- def benchmark_eager_8_gpu_8_workers_fp16_ring_tweaked(self):
- """Eager, 8 GPUs per worker, 8 workers, fp16, ring all-reduce."""
- self._benchmark_common(eager=True, num_workers=8, all_reduce_alg='ring')
-
- def benchmark_eager_8_gpu_8_workers_fp16_nccl_tweaked(self):
- """Eager, 8 GPUs per worker, 8 workers, fp16, nccl all-reduce."""
- self._benchmark_common(eager=True, num_workers=8, all_reduce_alg='nccl')
-
-
-class Resnet50MultiWorkerKerasBenchmark(Resnet50KerasBenchmarkBase):
- """Resnet50 distributed benchmark tests with multiple workers."""
-
- def __init__(self, output_dir=None, default_flags=None):
- super(Resnet50MultiWorkerKerasBenchmark, self).__init__(
- output_dir=output_dir, default_flags=default_flags)
-
- def _benchmark_common(self, eager, num_workers, all_reduce_alg):
- """Common to all benchmarks in this class."""
- self._setup()
-
- num_gpus = 8
- FLAGS.num_gpus = num_gpus
- FLAGS.dtype = 'fp16'
- FLAGS.enable_eager = eager
- FLAGS.enable_xla = False
- FLAGS.distribution_strategy = 'multi_worker_mirrored'
- FLAGS.tf_gpu_thread_mode = 'gpu_private'
- FLAGS.datasets_num_private_threads = 32
- FLAGS.model_dir = self._get_model_dir(
- 'benchmark_{}_8_gpu_{}_worker_fp16_{}_tweaked'.format(
- 'eager' if eager else 'graph', num_workers, all_reduce_alg))
- FLAGS.batch_size = 256 * num_gpus * num_workers
- FLAGS.all_reduce_alg = all_reduce_alg
-
- self._run_and_report_benchmark()
-
- def benchmark_eager_8_gpu_1_worker_fp16_ring_tweaked(self):
- """Eager, 8 GPUs per worker, 1 worker, fp16, ring all-reduce."""
- self._benchmark_common(eager=True, num_workers=1, all_reduce_alg='ring')
-
- def benchmark_eager_8_gpu_1_worker_fp16_nccl_tweaked(self):
- """Eager, 8 GPUs per worker, 1 worker, fp16, nccl all-reduce."""
- self._benchmark_common(eager=True, num_workers=1, all_reduce_alg='nccl')
-
- def benchmark_eager_8_gpu_2_workers_fp16_ring_tweaked(self):
- """Eager, 8 GPUs per worker, 2 workers, fp16, ring all-reduce."""
- self._benchmark_common(eager=True, num_workers=2, all_reduce_alg='ring')
-
- def benchmark_eager_8_gpu_2_workers_fp16_nccl_tweaked(self):
- """Eager, 8 GPUs per worker, 2 workers, fp16, nccl all-reduce."""
- self._benchmark_common(eager=True, num_workers=2, all_reduce_alg='nccl')
-
- def benchmark_eager_8_gpu_8_workers_fp16_ring_tweaked(self):
- """Eager, 8 GPUs per worker, 8 workers, fp16, ring all-reduce."""
- self._benchmark_common(eager=True, num_workers=8, all_reduce_alg='ring')
-
- def benchmark_eager_8_gpu_8_workers_fp16_nccl_tweaked(self):
- """Eager, 8 GPUs per worker, 8 workers, fp16, nccl all-reduce."""
- self._benchmark_common(eager=True, num_workers=8, all_reduce_alg='nccl')
-
-
-class Resnet50MultiWorkerKerasBenchmarkSynth(Resnet50MultiWorkerKerasBenchmark):
- """Resnet50 multi-worker synthetic data benchmark tests."""
-
- def __init__(self, output_dir=None, root_data_dir=None, **kwargs):
- def_flags = {}
- def_flags['skip_eval'] = True
- def_flags['report_accuracy_metrics'] = False
- def_flags['use_synthetic_data'] = True
- def_flags['train_steps'] = 110
- def_flags['log_steps'] = 10
-
- super(Resnet50MultiWorkerKerasBenchmarkSynth, self).__init__(
- output_dir=output_dir, default_flags=def_flags)
-
-
-class Resnet50MultiWorkerKerasBenchmarkReal(Resnet50MultiWorkerKerasBenchmark):
- """Resnet50 multi-worker real data benchmark tests."""
-
- def __init__(self, output_dir=None, root_data_dir=None, **kwargs):
- def_flags = {}
- def_flags['skip_eval'] = True
- def_flags['report_accuracy_metrics'] = False
- def_flags['data_dir'] = os.path.join(root_data_dir, 'imagenet')
- def_flags['train_steps'] = 110
- def_flags['log_steps'] = 10
-
- super(Resnet50MultiWorkerKerasBenchmarkReal, self).__init__(
- output_dir=output_dir, default_flags=def_flags)
-
-
-# TODO(kimjaehong): This should also cover other model optimization
-# techniques. At that point, this class will be renamed to something
-# like 'KerasModelOptimizationAccuracyBase'.
-class KerasPruningAccuracyBase(keras_benchmark.KerasBenchmark):
- """Benchmark accuracy tests for the pruning method."""
-
- def __init__(self,
- output_dir=None,
- root_data_dir=None,
- default_flags=None,
- **kwargs):
- """An accuracy benchmark class for the pruning method.
-
- Args:
- output_dir: Directory where output such as log files is written.
- root_data_dir: Directory under which to look for the dataset.
- default_flags: Default flag values for the benchmark.
- **kwargs: Arbitrary named arguments. This is needed to keep the
- constructor forward compatible in case PerfZero provides more
- named arguments before the constructor is updated.
- """
- if default_flags is None:
- default_flags = {}
- default_flags['pruning_method'] = 'polynomial_decay'
- default_flags['data_dir'] = os.path.join(root_data_dir, 'imagenet')
-
- flag_methods = [resnet_imagenet_main.define_imagenet_keras_flags]
-
- super(KerasPruningAccuracyBase, self).__init__(
- output_dir=output_dir,
- flag_methods=flag_methods,
- default_flags=default_flags,
- **kwargs)
-
- def benchmark_8_gpu(self):
- """Test Keras model with eager, dist_strat and 8 GPUs."""
- self._setup()
- FLAGS.num_gpus = 8
- FLAGS.batch_size = 32 * 8
- FLAGS.train_epochs = 90
- FLAGS.epochs_between_evals = 10
- FLAGS.model_dir = self._get_model_dir('benchmark_8_gpu')
- FLAGS.dtype = 'fp32'
- FLAGS.enable_eager = True
- self._run_and_report_benchmark()
-
- @benchmark_wrappers.enable_runtime_flags
- def _run_and_report_benchmark(self,
- top_1_min=MODEL_OPTIMIZATION_TOP_1_ACCURACY[
- 'RESNET50_FINETUNE_PRUNING'][0],
- top_1_max=MODEL_OPTIMIZATION_TOP_1_ACCURACY[
- 'RESNET50_FINETUNE_PRUNING'][1]):
- start_time_sec = time.time()
- stats = resnet_imagenet_main.run(flags.FLAGS)
- wall_time_sec = time.time() - start_time_sec
-
- super(KerasPruningAccuracyBase, self)._report_benchmark(
- stats,
- wall_time_sec,
- top_1_min=top_1_min,
- top_1_max=top_1_max,
- total_batch_size=FLAGS.batch_size,
- log_steps=100)
-
-
-class MobilenetV1KerasPruningAccuracy(KerasPruningAccuracyBase):
- """Benchmark accuracy tests for MobilenetV1 with pruning method."""
-
- def __init__(self, root_data_dir=None, **kwargs):
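- # The polynomial_decay pruning schedule set by the base class ramps
- # sparsity from pruning_initial_sparsity (0.0) to pruning_final_sparsity
- # (0.5) between pruning_begin_step and pruning_end_step, updating the
- # mask every pruning_frequency steps.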
- default_flags = {
- 'model': 'mobilenet',
- 'optimizer': 'mobilenet_default',
- 'initial_learning_rate_per_sample': 0.00007,
- 'pretrained_filepath': tf.train.latest_checkpoint(
- os.path.join(root_data_dir, 'mobilenet_v1')),
- 'pruning_begin_step': 0,
- 'pruning_end_step': 100000,
- 'pruning_initial_sparsity': 0.0,
- 'pruning_final_sparsity': 0.5,
- 'pruning_frequency': 100,
- }
- super(MobilenetV1KerasPruningAccuracy, self).__init__(
- root_data_dir=root_data_dir,
- default_flags=default_flags,
- **kwargs)
-
- def _run_and_report_benchmark(self):
- super(MobilenetV1KerasPruningAccuracy, self)._run_and_report_benchmark(
- top_1_min=\
- MODEL_OPTIMIZATION_TOP_1_ACCURACY['MOBILENET_V1_FINETUNE_PRUNING'][0],
- top_1_max=\
- MODEL_OPTIMIZATION_TOP_1_ACCURACY['MOBILENET_V1_FINETUNE_PRUNING'][1])
-
-
-class Resnet50KerasPruningAccuracy(KerasPruningAccuracyBase):
- """Benchmark accuracy tests for resnet50 with pruning method."""
-
- def __init__(self, root_data_dir=None, **kwargs):
- default_flags = {
- 'model': 'resnet50_v1.5',
- 'optimizer': 'mobilenet_default',
- 'initial_learning_rate_per_sample': 0.0000039,
- 'pretrained_filepath': tf.train.latest_checkpoint(
- os.path.join(root_data_dir, 'resnet50')),
- 'pruning_begin_step': 0,
- 'pruning_end_step': 50000,
- 'pruning_initial_sparsity': 0.0,
- 'pruning_final_sparsity': 0.5,
- 'pruning_frequency': 100,
- }
- super(Resnet50KerasPruningAccuracy, self).__init__(
- root_data_dir=root_data_dir,
- default_flags=default_flags,
- **kwargs)
-
- def _run_and_report_benchmark(self):
- super(Resnet50KerasPruningAccuracy, self)._run_and_report_benchmark(
- top_1_min=\
- MODEL_OPTIMIZATION_TOP_1_ACCURACY['RESNET50_FINETUNE_PRUNING'][0],
- top_1_max=\
- MODEL_OPTIMIZATION_TOP_1_ACCURACY['RESNET50_FINETUNE_PRUNING'][1])
-
-
-class KerasPruningBenchmarkRealBase(Resnet50KerasBenchmarkBase):
- """Pruning method benchmarks."""
-
- def __init__(self, root_data_dir=None, default_flags=None, **kwargs):
- if default_flags is None:
- default_flags = {}
- default_flags.update({
- 'skip_eval': True,
- 'report_accuracy_metrics': False,
- 'data_dir': os.path.join(root_data_dir, 'imagenet'),
- 'train_steps': 110,
- 'log_steps': 10,
- 'pruning_method': 'polynomial_decay',
- 'pruning_begin_step': 0,
- 'pruning_end_step': 50000,
- 'pruning_initial_sparsity': 0,
- 'pruning_final_sparsity': 0.5,
- 'pruning_frequency': 100,
- })
- super(KerasPruningBenchmarkRealBase, self).__init__(
- default_flags=default_flags, **kwargs)
-
-
-class MobilenetV1KerasPruningBenchmarkReal(KerasPruningBenchmarkRealBase):
- """Pruning method benchmarks for MobilenetV1."""
-
- def __init__(self, **kwargs):
- default_flags = {
- 'model': 'mobilenet',
- 'optimizer': 'mobilenet_default',
- }
- super(MobilenetV1KerasPruningBenchmarkReal, self).__init__(
- default_flags=default_flags, **kwargs)
-
-
-class Resnet50KerasPruningBenchmarkReal(KerasPruningBenchmarkRealBase):
- """Pruning method benchmarks for resnet50."""
-
- def __init__(self, **kwargs):
- default_flags = {
- 'model': 'resnet50_v1.5',
- 'optimizer': 'mobilenet_default',
- }
- super(Resnet50KerasPruningBenchmarkReal, self).__init__(
- default_flags=default_flags, **kwargs)
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/tfhub_memory_usage_benchmark.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/tfhub_memory_usage_benchmark.py
deleted file mode 100644
index 7f50ecf6b3e0c95c78c0ac574131321a1e41fceb..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/tfhub_memory_usage_benchmark.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Runs a memory usage benchmark for a Tensorflow Hub model.
-
-Loads a SavedModel and records memory usage.
-"""
-import functools
-import time
-
-from absl import flags
-import tensorflow as tf
-import tensorflow_hub as hub
-
-from official.benchmark.perfzero_benchmark import PerfZeroBenchmark
-
-FLAGS = flags.FLAGS
-
-
-class TfHubMemoryUsageBenchmark(PerfZeroBenchmark):
- """A benchmark measuring memory usage for a given TF Hub SavedModel."""
-
- def __init__(self,
- hub_model_handle_list=None,
- output_dir=None,
- default_flags=None,
- root_data_dir=None,
- **kwargs):
- super(TfHubMemoryUsageBenchmark, self).__init__(
- output_dir=output_dir, default_flags=default_flags, **kwargs)
- if hub_model_handle_list:
- for hub_model_handle in hub_model_handle_list.split(';'):
- # Converts a model handle of the form
- # https://tfhub.dev/google/nnlm-en-dim128/1 to a valid Python method name
- # like google_nnlm_en_dim128_1.
- hub_model_method_name = hub_model_handle.replace(
- 'https://tfhub.dev',
- '').replace('/', '_').replace('-', '_').strip('_')
- setattr(
- self, 'benchmark_' + hub_model_method_name,
- functools.partial(self.benchmark_memory_usage, hub_model_handle))
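- # For example (the second handle is purely illustrative), constructing
- #   TfHubMemoryUsageBenchmark(
- #       hub_model_handle_list='https://tfhub.dev/google/nnlm-en-dim128/1;'
- #       'https://tfhub.dev/google/nnlm-en-dim50/1')
- # exposes benchmark_google_nnlm_en_dim128_1 and
- # benchmark_google_nnlm_en_dim50_1 for PerfZero to discover and run.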
-
- def benchmark_memory_usage(
- self, hub_model_handle='https://tfhub.dev/google/nnlm-en-dim128/1'):
- start_time_sec = time.time()
- self.load_model(hub_model_handle)
- wall_time_sec = time.time() - start_time_sec
-
- metrics = []
- self.report_benchmark(iters=-1, wall_time=wall_time_sec, metrics=metrics)
-
- def load_model(self, hub_model_handle):
- """Loads a TF Hub module."""
- hub.load(hub_model_handle)
-
-
-if __name__ == '__main__':
- tf.test.main()
diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/README.md b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/README.md
deleted file mode 100644
index 69eaabcc6ccabada838a0a2a3f12fd7eed69744c..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/README.md
+++ /dev/null
@@ -1,192 +0,0 @@
-# Experiments for ICLR 2018 paper.
-
-[Neural Program Synthesis with Priority Queue Training](https://arxiv.org/abs/1801.03526).
-
-Runs policy gradient (REINFORCE), priority queue training, genetic algorithm,
-and uniform random search.
-
-Run all examples below out of your top-level repo directory, i.e. where your git
-clone resides.
-
-
-## Just tell me how to run something and see results
-```bash
-# These tasks are the fastest to learn. 'echo' and 'count-down' are very
-# easy. run_eval_tasks.py will do most of the work to run all the jobs.
-# Should take between 10 and 30 minutes.
-
-# How many repetitions each experiment will run. In the paper, we use 25.
-# Fewer reps mean faster experiments, but noisier results.
-REPS=25
-
-# Extra description in the job names for these experiments. Use this description
-# to distinguish between multiple runs of the same experiment.
-DESC="demo"
-
-# The tasks to run.
-TASKS="reverse echo-second-seq"
-
-# The model types and max NPE.
-EXPS=( pg-20M topk-20M ga-20M rand-20M )
-
-# Where training data is saved. This is chosen by launch_training.sh. Custom
-# implementations of launch_training.sh may use different locations.
-MODELS_DIR="/tmp/models"
-
-# Run run_eval_tasks.py for each experiment name in EXPS.
-for exp in "${EXPS[@]}"
-do
- ./single_task/run_eval_tasks.py \
- --exp "$exp" --tasks $TASKS --desc "$DESC" --reps $REPS
-done
-
-# During training or after completion, run this to aggregate results into a
-# table. This is also useful for seeing how much progress has been made.
-# Make sure the arguments here match the settings used above.
-# Note: This can take a few minutes because it reads from every experiment
-# directory.
-bazel run single_task:aggregate_experiment_results -- \
- --models_dir="$MODELS_DIR" \
- --max_npe="20M" \
- --task_list="$TASKS" \
- --model_types="[('pg', '$DESC'), ('topk', '$DESC'), ('ga', '$DESC'),
- ('rand', '$DESC')]" \
- --csv_file="/tmp/results_table.csv"
-```
-
-
-## Reproduce tuning results in paper
-```bash
-bazel build -c opt single_task:tune.par
-
-# PG and TopK Tuning.
-MAX_NPE=5000000
-CONFIG="
-env=c(task_cycle=['reverse-tune','remove-tune']),
-agent=c(
- algorithm='pg',
- grad_clip_threshold=50.0,param_init_factor=0.5,entropy_beta=0.05,lr=1e-5,
- optimizer='rmsprop',ema_baseline_decay=0.99,topk_loss_hparam=0.0,topk=0,
- replay_temperature=1.0,alpha=0.0,eos_token=False),
-timestep_limit=50,batch_size=64"
-
-./single_task/launch_tuning.sh \
- --job_name="iclr_pg_gridsearch.reverse-remove" \
- --config="$CONFIG" \
- --max_npe="$MAX_NPE" \
- --num_workers_per_tuner=1 \
- --num_ps_per_tuner=0 \
- --num_tuners=1 \
- --num_repetitions=50 \
- --hparam_space_type="pg" \
- --stop_on_success=true
-./single_task/launch_tuning.sh \
- --job_name="iclr_pg_topk_gridsearch.reverse-remove" \
- --config="$CONFIG" \
- --max_npe="$MAX_NPE" \
- --num_workers_per_tuner=1 \
- --num_ps_per_tuner=0 \
- --num_tuners=1 \
- --num_repetitions=50 \
- --hparam_space_type="pg-topk" \
- --fixed_hparams="topk=10" \
- --stop_on_success=true
-./single_task/launch_tuning.sh \
- --job_name="iclr_topk_gridsearch.reverse-remove" \
- --config="$CONFIG" \
- --max_npe="$MAX_NPE" \
- --num_workers_per_tuner=1 \
- --num_ps_per_tuner=0 \
- --num_tuners=1 \
- --num_repetitions=50 \
- --hparam_space_type="topk" \
- --fixed_hparams="topk=10" \
- --stop_on_success=true
-
-# GA Tuning.
-CONFIG="
-env=c(task_cycle=['reverse-tune','remove-char-tune']),
-agent=c(algorithm='ga'),
-timestep_limit=50"
-./single_task/launch_tuning.sh \
- --job_name="iclr_ga_gridsearch.reverse-remove" \
- --config="$CONFIG" \
- --max_npe="$MAX_NPE" \
- --num_workers_per_tuner=25 \
- --num_ps_per_tuner=0 \
- --num_tuners=1 \
- --num_repetitions=50 \
- --hparam_space_type="ga" \
- --stop_on_success=true
-
-# Aggregate tuning results. Run after tuning jobs complete.
-bazel run -c opt single_task:aggregate_tuning_results -- \
- --tuning_dir="$MODELS_DIR/iclr_pg_gridsearch.reverse-remove"
-bazel run -c opt single_task:aggregate_tuning_results -- \
- --tuning_dir="$MODELS_DIR/iclr_pg_topk_gridsearch.reverse-remove"
-bazel run -c opt single_task:aggregate_tuning_results -- \
- --tuning_dir="$MODELS_DIR/iclr_topk_gridsearch.reverse-remove"
-bazel run -c opt single_task:aggregate_tuning_results -- \
- --tuning_dir="$MODELS_DIR/iclr_ga_gridsearch.reverse-remove"
-```
-
-## Reproduce eval results in paper
-```bash
-DESC="v0" # Description for each experiment. "Version 0" is a good default.
-EXPS=( pg-5M topk-5M ga-5M rand-5M pg-20M topk-20M ga-20M rand-20M )
-for exp in "${EXPS[@]}"
-do
- ./single_task/run_eval_tasks.py \
- --exp "$exp" --iclr_tasks --desc "$DESC"
-done
-```
-
-## Run single experiment
-```bash
-EXP="topk-20M" # Learning algorithm + max-NPE
-TASK="reverse" # Coding task
-DESC="v0" # Description for each experiment. "Version 0" is a good default.
-./single_task/run_eval_tasks.py \
- --exp "$EXP" --task "$TASK" --desc "$DESC"
-```
-
-## Fetch eval results into a table
-```bash
-# These arguments should match the settings you used to run the experiments.
-MODELS_DIR="/tmp/models"
-MAX_NPE="20M"
-DESC="v0" # Same description used in the experiments.
-# MODEL_TYPES specifies each model type and the description used in their
-# experiments.
-MODEL_TYPES="[('pg', '$DESC'), ('topk', '$DESC'),
- ('ga', '$DESC'), ('rand', '$DESC')]"
-TASKS="" # Empty string will default to all ICLR tasks.
-# To specify custom task list, give task names separated by spaces. Example:
-# TASKS="reverse remove-char"
-bazel run single_task:aggregate_experiment_results -- \
- --models_dir="$MODELS_DIR" \
- --max_npe="$MAX_NPE" \
- --task_list="$TASKS" \
- --model_types="$MODEL_TYPES" \
- --csv_file="/tmp/results_table.csv"
-```
-
-## Reproduce shortest code examples in paper
-```bash
-# Maximum NPE is higher here. We only do 1 repetition, and the algorithm needs
-# time to simplify its solution.
-MODELS_DIR="/tmp/models"
-NPE="500M"
-DESC="short-code"
-./single_task/run_eval_tasks.py \
- --exp "simpl-$NPE" --desc "$DESC" --iclr_tasks --reps 1
-
-# Aggregate best code strings. Run after training completes.
-TASKS="" # Empty string. Will default to all ICLR tasks.
-bazel run single_task:aggregate_experiment_results -- \
- --models_dir="$MODELS_DIR" \
- --max_npe="$NPE" \
- --task_list="$TASKS" \
- --model_types="[('topk', '$DESC')]" \
- --data=code
-```
diff --git a/spaces/Nahidabyer/img-to-music/README.md b/spaces/Nahidabyer/img-to-music/README.md
deleted file mode 100644
index f7e2487cd42d65ff44a707eef14ab7ed4fd23f01..0000000000000000000000000000000000000000
--- a/spaces/Nahidabyer/img-to-music/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Img To Music
-emoji: 🌅🎶
-colorFrom: green
-colorTo: purple
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: true
-duplicated_from: fffiloni/img-to-music
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Nekomaru180/rvc-model/infer_pack/models_onnx.py b/spaces/Nekomaru180/rvc-model/infer_pack/models_onnx.py
deleted file mode 100644
index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000
--- a/spaces/Nekomaru180/rvc-model/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of the sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har (harmonic) products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would prevent the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
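-
-    # A minimal usage sketch (shapes and values below are illustrative assumptions
-    # based on reading forward() above, not taken from the original training code):
-    #
-    #   sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
-    #   f0 = torch.full((1, 100), 220.0)               # (batch, frames); 0 marks unvoiced frames
-    #   sine_waves, uv, noise = sine_gen(f0, upp=400)  # sine_waves: (1, 100 * 400, 1)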
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
-    add_noise_std: std of additive Gaussian noise (default: 0.003)
-                   note that the amplitude of noise in unvoiced segments is set by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
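-
-    # Inference path in brief: the speaker id `sid` is embedded into g; the text/pitch
-    # encoder yields (m_p, logs_p); the externally supplied noise `rnd` samples the
-    # latent z_p, which is inverted through the coupling flow and decoded by the NSF
-    # generator conditioned on nsff0 and g.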
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spectrogram) is no longer needed here
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast over t
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/NeoonN/Aurora/README.md b/spaces/NeoonN/Aurora/README.md
deleted file mode 100644
index a2000034c7cf2552597b4c707a3e613e05f7880d..0000000000000000000000000000000000000000
--- a/spaces/NeoonN/Aurora/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Aurora
-emoji: 🏃
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/NeuralInternet/Text-Generation_Playground/modules/models.py b/spaces/NeuralInternet/Text-Generation_Playground/modules/models.py
deleted file mode 100644
index f4bb11fd3f7292657b008ab644b5be121d9980e5..0000000000000000000000000000000000000000
--- a/spaces/NeuralInternet/Text-Generation_Playground/modules/models.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import json
-import os
-import time
-import zipfile
-from pathlib import Path
-
-import numpy as np
-import torch
-import transformers
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-import modules.shared as shared
-
-transformers.logging.set_verbosity_error()
-
-local_rank = None
-
-if shared.args.flexgen:
- from flexgen.flex_opt import (CompressionConfig, ExecutionEnv, OptLM,
- Policy, str2bool)
-
-if shared.args.deepspeed:
- import deepspeed
- from transformers.deepspeed import (HfDeepSpeedConfig,
- is_deepspeed_zero3_enabled)
-
- from modules.deepspeed_parameters import generate_ds_config
-
- # Distributed setup
- local_rank = shared.args.local_rank if shared.args.local_rank is not None else int(os.getenv("LOCAL_RANK", "0"))
- world_size = int(os.getenv("WORLD_SIZE", "1"))
- torch.cuda.set_device(local_rank)
- deepspeed.init_distributed()
- ds_config = generate_ds_config(shared.args.bf16, 1 * world_size, shared.args.nvme_offload_dir)
- dschf = HfDeepSpeedConfig(ds_config) # Keep this object alive for the Transformers integration
-
-
-def load_model(model_name):
- print(f"Loading {model_name}...")
- t0 = time.time()
-
- shared.is_RWKV = model_name.lower().startswith('rwkv-')
-
- # Default settings
- if not any([shared.args.cpu, shared.args.load_in_8bit, shared.args.gptq_bits, shared.args.auto_devices, shared.args.disk, shared.args.gpu_memory is not None, shared.args.cpu_memory is not None, shared.args.deepspeed, shared.args.flexgen, shared.is_RWKV]):
- if any(size in shared.model_name.lower() for size in ('13b', '20b', '30b')):
- model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), device_map='auto', load_in_8bit=True)
- else:
- model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16).cuda()
-
- # FlexGen
- elif shared.args.flexgen:
- # Initialize environment
- env = ExecutionEnv.create(shared.args.disk_cache_dir)
-
- # Offloading policy
- policy = Policy(1, 1,
- shared.args.percent[0], shared.args.percent[1],
- shared.args.percent[2], shared.args.percent[3],
- shared.args.percent[4], shared.args.percent[5],
- overlap=True, sep_layer=True, pin_weight=shared.args.pin_weight,
- cpu_cache_compute=False, attn_sparsity=1.0,
- compress_weight=shared.args.compress_weight,
- comp_weight_config=CompressionConfig(
- num_bits=4, group_size=64,
- group_dim=0, symmetric=False),
- compress_cache=False,
- comp_cache_config=CompressionConfig(
- num_bits=4, group_size=64,
- group_dim=2, symmetric=False))
-
- model = OptLM(f"facebook/{shared.model_name}", env, "models", policy)
-
- # DeepSpeed ZeRO-3
- elif shared.args.deepspeed:
- model = AutoModelForCausalLM.from_pretrained(Path(f"models/{shared.model_name}"), torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16)
- model = deepspeed.initialize(model=model, config_params=ds_config, model_parameters=None, optimizer=None, lr_scheduler=None)[0]
- model.module.eval() # Inference
- print(f"DeepSpeed ZeRO-3 is enabled: {is_deepspeed_zero3_enabled()}")
-
-    # RWKV model (not on HuggingFace)
- elif shared.is_RWKV:
- from modules.RWKV import RWKVModel, RWKVTokenizer
-
- model = RWKVModel.from_pretrained(Path(f'models/{model_name}'), dtype="fp32" if shared.args.cpu else "bf16" if shared.args.bf16 else "fp16", device="cpu" if shared.args.cpu else "cuda")
- tokenizer = RWKVTokenizer.from_pretrained(Path('models'))
-
- return model, tokenizer
-
- # Quantized model
- elif shared.args.gptq_bits > 0:
- from modules.GPTQ_loader import load_quantized
-
- model = load_quantized(model_name)
-
- # Custom
- else:
- command = "AutoModelForCausalLM.from_pretrained"
- params = ["low_cpu_mem_usage=True"]
- if not shared.args.cpu and not torch.cuda.is_available():
- print("Warning: no GPU has been detected.\nFalling back to CPU mode.\n")
- shared.args.cpu = True
-
- if shared.args.cpu:
- params.append("low_cpu_mem_usage=True")
- params.append("torch_dtype=torch.float32")
- else:
- params.append("device_map='auto'")
- params.append("load_in_8bit=True" if shared.args.load_in_8bit else "torch_dtype=torch.bfloat16" if shared.args.bf16 else "torch_dtype=torch.float16")
-
- if shared.args.gpu_memory:
- memory_map = shared.args.gpu_memory
- max_memory = f"max_memory={{0: '{memory_map[0]}GiB'"
- for i in range(1, len(memory_map)):
- max_memory += (f", {i}: '{memory_map[i]}GiB'")
- max_memory += (f", 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}")
- params.append(max_memory)
- elif not shared.args.load_in_8bit:
- total_mem = (torch.cuda.get_device_properties(0).total_memory/(1024*1024))
- suggestion = round((total_mem-1000)/1000)*1000
- if total_mem-suggestion < 800:
- suggestion -= 1000
- suggestion = int(round(suggestion/1000))
-            print(f"\033[1;32;1mAuto-assigning --gpu-memory {suggestion} for your GPU to try to prevent out-of-memory errors.\nYou can manually set other values.\033[0;37;0m")
- params.append(f"max_memory={{0: '{suggestion}GiB', 'cpu': '{shared.args.cpu_memory or '99'}GiB'}}")
- if shared.args.disk:
- params.append(f"offload_folder='{shared.args.disk_cache_dir}'")
-
- command = f"{command}(Path(f'models/{shared.model_name}'), {', '.join(set(params))})"
- model = eval(command)
-
- # Loading the tokenizer
- if shared.model_name.lower().startswith(('gpt4chan', 'gpt-4chan', '4chan')) and Path("models/gpt-j-6B/").exists():
- tokenizer = AutoTokenizer.from_pretrained(Path("models/gpt-j-6B/"))
- else:
- tokenizer = AutoTokenizer.from_pretrained(Path(f"models/{shared.model_name}/"))
- tokenizer.truncation_side = 'left'
-
- print(f"Loaded the model in {(time.time()-t0):.2f} seconds.")
- return model, tokenizer
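-
-# Hypothetical usage sketch for load_model (the model name is an illustrative
-# assumption; it must match a folder under models/, and the caller is expected to
-# set shared.model_name to the same value, since the loaders above read it):
-#
-#   shared.model_name = 'opt-1.3b'
-#   model, tokenizer = load_model(shared.model_name)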
-
-def load_soft_prompt(name):
- if name == 'None':
- shared.soft_prompt = False
- shared.soft_prompt_tensor = None
- else:
- with zipfile.ZipFile(Path(f'softprompts/{name}.zip')) as zf:
- zf.extract('tensor.npy')
- zf.extract('meta.json')
- j = json.loads(open('meta.json', 'r').read())
- print(f"\nLoading the softprompt \"{name}\".")
- for field in j:
- if field != 'name':
- if type(j[field]) is list:
- print(f"{field}: {', '.join(j[field])}")
- else:
- print(f"{field}: {j[field]}")
- print()
- tensor = np.load('tensor.npy')
- Path('tensor.npy').unlink()
- Path('meta.json').unlink()
- tensor = torch.Tensor(tensor).to(device=shared.model.device, dtype=shared.model.dtype)
- tensor = torch.reshape(tensor, (1, tensor.shape[0], tensor.shape[1]))
-
- shared.soft_prompt = True
- shared.soft_prompt_tensor = tensor
-
- return name
diff --git a/spaces/Nightwing25/AICoverGen/src/main.py b/spaces/Nightwing25/AICoverGen/src/main.py
deleted file mode 100644
index a0dc7d0d119562c55bb0789aee902aea7b854648..0000000000000000000000000000000000000000
--- a/spaces/Nightwing25/AICoverGen/src/main.py
+++ /dev/null
@@ -1,355 +0,0 @@
-import argparse
-import gc
-import hashlib
-import json
-import os
-import shlex
-import subprocess
-from contextlib import suppress
-from urllib.parse import urlparse, parse_qs
-
-import gradio as gr
-import librosa
-import numpy as np
-import soundfile as sf
-import sox
-import yt_dlp
-from pedalboard import Pedalboard, Reverb, Compressor, HighpassFilter
-from pedalboard.io import AudioFile
-from pydub import AudioSegment
-
-from mdx import run_mdx
-from rvc import Config, load_hubert, get_vc, rvc_infer
-
-BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-mdxnet_models_dir = os.path.join(BASE_DIR, 'mdxnet_models')
-rvc_models_dir = os.path.join(BASE_DIR, 'rvc_models')
-output_dir = os.path.join(BASE_DIR, 'song_output')
-
-
-def get_youtube_video_id(url, ignore_playlist=True):
- """
- Examples:
- http://youtu.be/SA2iWivDJiE
- http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu
- http://www.youtube.com/embed/SA2iWivDJiE
- http://www.youtube.com/v/SA2iWivDJiE?version=3&hl=en_US
- """
- query = urlparse(url)
- if query.hostname == 'youtu.be':
- if query.path[1:] == 'watch':
- return query.query[2:]
- return query.path[1:]
-
- if query.hostname in {'www.youtube.com', 'youtube.com', 'music.youtube.com'}:
- if not ignore_playlist:
- # use case: get playlist id not current video in playlist
- with suppress(KeyError):
- return parse_qs(query.query)['list'][0]
- if query.path == '/watch':
- return parse_qs(query.query)['v'][0]
- if query.path[:7] == '/watch/':
- return query.path.split('/')[1]
- if query.path[:7] == '/embed/':
- return query.path.split('/')[2]
- if query.path[:3] == '/v/':
- return query.path.split('/')[2]
-
- # returns None for invalid YouTube url
- return None
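-
-# Illustrative calls, using the URL patterns listed in the docstring above (the last
-# URL is a hypothetical non-YouTube address):
-#   get_youtube_video_id('http://youtu.be/SA2iWivDJiE')                 # -> 'SA2iWivDJiE'
-#   get_youtube_video_id('http://www.youtube.com/watch?v=_oPAwA_Udwc')  # -> '_oPAwA_Udwc'
-#   get_youtube_video_id('https://example.com/watch?v=abc')             # -> None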
-
-
-def yt_download(link):
- ydl_opts = {
- 'format': 'bestaudio',
- 'outtmpl': '%(title)s',
- 'nocheckcertificate': True,
- 'ignoreerrors': True,
- 'no_warnings': True,
- 'quiet': True,
- 'extractaudio': True,
- 'postprocessors': [{'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3'}],
- }
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- result = ydl.extract_info(link, download=True)
- download_path = ydl.prepare_filename(result, outtmpl='%(title)s.mp3')
-
- return download_path
-
-
-def raise_exception(error_msg, is_webui):
- if is_webui:
- raise gr.Error(error_msg)
- else:
- raise Exception(error_msg)
-
-
-def get_rvc_model(voice_model, is_webui):
- rvc_model_filename, rvc_index_filename = None, None
- model_dir = os.path.join(rvc_models_dir, voice_model)
- for file in os.listdir(model_dir):
- ext = os.path.splitext(file)[1]
- if ext == '.pth':
- rvc_model_filename = file
- if ext == '.index':
- rvc_index_filename = file
-
- if rvc_model_filename is None:
- error_msg = f'No model file exists in {model_dir}.'
- raise_exception(error_msg, is_webui)
-
- return os.path.join(model_dir, rvc_model_filename), os.path.join(model_dir, rvc_index_filename) if rvc_index_filename else ''
-
-
-def get_audio_paths(song_dir):
- orig_song_path = None
- instrumentals_path = None
- main_vocals_dereverb_path = None
- backup_vocals_path = None
-
- for file in os.listdir(song_dir):
- if file.endswith('_Instrumental.wav'):
- instrumentals_path = os.path.join(song_dir, file)
- orig_song_path = instrumentals_path.replace('_Instrumental', '')
-
- elif file.endswith('_Vocals_Main_DeReverb.wav'):
- main_vocals_dereverb_path = os.path.join(song_dir, file)
-
- elif file.endswith('_Vocals_Backup.wav'):
- backup_vocals_path = os.path.join(song_dir, file)
-
- return orig_song_path, instrumentals_path, main_vocals_dereverb_path, backup_vocals_path
-
-
-def convert_to_stereo(audio_path):
- wave, sr = librosa.load(audio_path, mono=False, sr=44100)
-
- # check if mono
-    if not isinstance(wave[0], np.ndarray):
- stereo_path = f'{os.path.splitext(audio_path)[0]}_stereo.wav'
- command = shlex.split(f'ffmpeg -y -loglevel error -i "{audio_path}" -ac 2 -f wav "{stereo_path}"')
- subprocess.run(command)
- return stereo_path
- else:
- return audio_path
-
-
-def pitch_shift(audio_path, pitch_change):
- output_path = f'{os.path.splitext(audio_path)[0]}_p{pitch_change}.wav'
- if not os.path.exists(output_path):
- y, sr = sf.read(audio_path)
- tfm = sox.Transformer()
- tfm.pitch(pitch_change)
- y_shifted = tfm.build_array(input_array=y, sample_rate_in=sr)
- sf.write(output_path, y_shifted, sr)
-
- return output_path
-
-
-def get_hash(filepath):
- with open(filepath, 'rb') as f:
- file_hash = hashlib.blake2b()
- while chunk := f.read(8192):
- file_hash.update(chunk)
-
- return file_hash.hexdigest()[:11]
-
-
-def display_progress(message, percent, is_webui, progress=None):
- if is_webui:
- progress(percent, desc=message)
- else:
- print(message)
-
-
-def preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress=None):
- keep_orig = False
- if input_type == 'yt':
- display_progress('[~] Downloading song...', 0, is_webui, progress)
- song_link = song_input.split('&')[0]
- orig_song_path = yt_download(song_link)
- elif input_type == 'local':
- orig_song_path = song_input
- keep_orig = True
- else:
- orig_song_path = None
-
- song_output_dir = os.path.join(output_dir, song_id)
- orig_song_path = convert_to_stereo(orig_song_path)
-
- display_progress('[~] Separating Vocals from Instrumental...', 0.1, is_webui, progress)
- vocals_path, instrumentals_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'UVR-MDX-NET-Voc_FT.onnx'), orig_song_path, denoise=True, keep_orig=keep_orig)
-
- display_progress('[~] Separating Main Vocals from Backup Vocals...', 0.2, is_webui, progress)
- backup_vocals_path, main_vocals_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'UVR_MDXNET_KARA_2.onnx'), vocals_path, suffix='Backup', invert_suffix='Main', denoise=True)
-
- display_progress('[~] Applying DeReverb to Vocals...', 0.3, is_webui, progress)
- _, main_vocals_dereverb_path = run_mdx(mdx_model_params, song_output_dir, os.path.join(mdxnet_models_dir, 'Reverb_HQ_By_FoxJoy.onnx'), main_vocals_path, invert_suffix='DeReverb', exclude_main=True, denoise=True)
-
- return orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path
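-
-# Pipeline summary for preprocess_song: download (YouTube) or reuse a local file,
-# convert it to stereo, separate vocals from the instrumental with UVR-MDX-NET,
-# split main vs. backup vocals with the KARA_2 model, then de-reverb the main
-# vocals; all intermediate WAVs are written under song_output/<song_id>.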
-
-
-def voice_change(voice_model, vocals_path, output_path, pitch_change, f0_method, index_rate, filter_radius, rms_mix_rate, protect, crepe_hop_length, is_webui):
- rvc_model_path, rvc_index_path = get_rvc_model(voice_model, is_webui)
- device = 'cpu'
- config = Config(device, False)
- hubert_model = load_hubert(device, config.is_half, os.path.join(rvc_models_dir, 'hubert_base.pt'))
- cpt, version, net_g, tgt_sr, vc = get_vc(device, False, config, rvc_model_path)
-
- # convert main vocals
- rvc_infer(rvc_index_path, index_rate, vocals_path, output_path, pitch_change, f0_method, cpt, version, net_g, filter_radius, tgt_sr, rms_mix_rate, protect, crepe_hop_length, vc, hubert_model)
- del hubert_model, cpt
- gc.collect()
-
-
-def add_audio_effects(audio_path, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping):
- output_path = f'{os.path.splitext(audio_path)[0]}_mixed.wav'
-
- # Initialize audio effects plugins
- board = Pedalboard(
- [
- HighpassFilter(),
- Compressor(ratio=4, threshold_db=-15),
- Reverb(room_size=reverb_rm_size, dry_level=reverb_dry, wet_level=reverb_wet, damping=reverb_damping)
- ]
- )
-
- with AudioFile(audio_path) as f:
- with AudioFile(output_path, 'w', f.samplerate, f.num_channels) as o:
- # Read one second of audio at a time, until the file is empty:
- while f.tell() < f.frames:
- chunk = f.read(int(f.samplerate))
- effected = board(chunk, f.samplerate, reset=False)
- o.write(effected)
-
- return output_path
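-
-# Illustrative call (parameter values are just examples): processing 'vocals.wav'
-# writes 'vocals_mixed.wav' with the highpass -> compressor -> reverb chain applied.
-#
-#   add_audio_effects('vocals.wav', reverb_rm_size=0.15, reverb_wet=0.2,
-#                     reverb_dry=0.8, reverb_damping=0.7)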
-
-
-def combine_audio(audio_paths, output_path, main_gain, backup_gain, inst_gain, output_format):
- main_vocal_audio = AudioSegment.from_wav(audio_paths[0]) - 4 + main_gain
- backup_vocal_audio = AudioSegment.from_wav(audio_paths[1]) - 6 + backup_gain
- instrumental_audio = AudioSegment.from_wav(audio_paths[2]) - 7 + inst_gain
- main_vocal_audio.overlay(backup_vocal_audio).overlay(instrumental_audio).export(output_path, format=output_format)
-
-
-def song_cover_pipeline(song_input, voice_model, pitch_change, keep_files,
- is_webui=0, main_gain=0, backup_gain=0, inst_gain=0, index_rate=0.5, filter_radius=3,
- rms_mix_rate=0.25, f0_method='rmvpe', crepe_hop_length=128, protect=0.33, pitch_change_all=0,
- reverb_rm_size=0.15, reverb_wet=0.2, reverb_dry=0.8, reverb_damping=0.7, output_format='mp3',
- progress=gr.Progress()):
- try:
- if not song_input or not voice_model:
-            raise_exception('Ensure that the song input and voice model fields are filled.', is_webui)
-
- display_progress('[~] Starting AI Cover Generation Pipeline...', 0, is_webui, progress)
-
- with open(os.path.join(mdxnet_models_dir, 'model_data.json')) as infile:
- mdx_model_params = json.load(infile)
-
- # if youtube url
- if urlparse(song_input).scheme == 'https':
- input_type = 'yt'
- song_id = get_youtube_video_id(song_input)
- if song_id is None:
- error_msg = 'Invalid YouTube url.'
- raise_exception(error_msg, is_webui)
-
- # local audio file
- else:
- input_type = 'local'
- song_input = song_input.strip('\"')
- if os.path.exists(song_input):
- song_id = get_hash(song_input)
- else:
- error_msg = f'{song_input} does not exist.'
- song_id = None
- raise_exception(error_msg, is_webui)
-
- song_dir = os.path.join(output_dir, song_id)
-
- if not os.path.exists(song_dir):
- os.makedirs(song_dir)
- orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path = preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress)
-
- else:
- vocals_path, main_vocals_path = None, None
- paths = get_audio_paths(song_dir)
-
- # if any of the audio files aren't available or keep intermediate files, rerun preprocess
- if any(path is None for path in paths) or keep_files:
- orig_song_path, vocals_path, instrumentals_path, main_vocals_path, backup_vocals_path, main_vocals_dereverb_path = preprocess_song(song_input, mdx_model_params, song_id, is_webui, input_type, progress)
- else:
- orig_song_path, instrumentals_path, main_vocals_dereverb_path, backup_vocals_path = paths
-
- pitch_change = pitch_change * 12 + pitch_change_all
- ai_vocals_path = os.path.join(song_dir, f'{os.path.splitext(os.path.basename(orig_song_path))[0]}_{voice_model}_p{pitch_change}_i{index_rate}_fr{filter_radius}_rms{rms_mix_rate}_pro{protect}_{f0_method}{"" if f0_method != "mangio-crepe" else f"_{crepe_hop_length}"}.wav')
- ai_cover_path = os.path.join(song_dir, f'{os.path.splitext(os.path.basename(orig_song_path))[0]} ({voice_model} Ver).{output_format}')
-
- if not os.path.exists(ai_vocals_path):
- display_progress('[~] Converting voice using RVC...', 0.5, is_webui, progress)
- voice_change(voice_model, main_vocals_dereverb_path, ai_vocals_path, pitch_change, f0_method, index_rate, filter_radius, rms_mix_rate, protect, crepe_hop_length, is_webui)
-
- display_progress('[~] Applying audio effects to Vocals...', 0.8, is_webui, progress)
- ai_vocals_mixed_path = add_audio_effects(ai_vocals_path, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping)
-
- if pitch_change_all != 0:
- display_progress('[~] Applying overall pitch change', 0.85, is_webui, progress)
- instrumentals_path = pitch_shift(instrumentals_path, pitch_change_all)
- backup_vocals_path = pitch_shift(backup_vocals_path, pitch_change_all)
-
- display_progress('[~] Combining AI Vocals and Instrumentals...', 0.9, is_webui, progress)
- combine_audio([ai_vocals_mixed_path, backup_vocals_path, instrumentals_path], ai_cover_path, main_gain, backup_gain, inst_gain, output_format)
-
- if not keep_files:
- display_progress('[~] Removing intermediate audio files...', 0.95, is_webui, progress)
- intermediate_files = [vocals_path, main_vocals_path, ai_vocals_mixed_path]
- if pitch_change_all != 0:
- intermediate_files += [instrumentals_path, backup_vocals_path]
- for file in intermediate_files:
- if file and os.path.exists(file):
- os.remove(file)
-
- return ai_cover_path
-
- except Exception as e:
- raise_exception(str(e), is_webui)
-
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser(description='Generate an AI cover song in the song_output/id directory.', add_help=True)
- parser.add_argument('-i', '--song-input', type=str, required=True, help='Link to a YouTube video or the filepath to a local mp3/wav file to create an AI cover of')
- parser.add_argument('-dir', '--rvc-dirname', type=str, required=True, help='Name of the folder in the rvc_models directory containing the RVC model file and optional index file to use')
- parser.add_argument('-p', '--pitch-change', type=int, required=True, help='Change the pitch of AI Vocals only. Generally, use 1 for male to female and -1 for vice-versa. (Octaves)')
- parser.add_argument('-k', '--keep-files', action=argparse.BooleanOptionalAction, help='Whether to keep all intermediate audio files generated in the song_output/id directory, e.g. Isolated Vocals/Instrumentals')
- parser.add_argument('-ir', '--index-rate', type=float, default=0.5, help='A decimal number e.g. 0.5, used to reduce/resolve the timbre leakage problem. If set to 1, more biased towards the timbre quality of the training dataset')
- parser.add_argument('-fr', '--filter-radius', type=int, default=3, help='A number between 0 and 7. If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.')
- parser.add_argument('-rms', '--rms-mix-rate', type=float, default=0.25, help="A decimal number e.g. 0.25. Control how much to use the original vocal's loudness (0) or a fixed loudness (1).")
- parser.add_argument('-palgo', '--pitch-detection-algo', type=str, default='rmvpe', help='Best option is rmvpe (clarity in vocals), then mangio-crepe (smoother vocals).')
- parser.add_argument('-hop', '--crepe-hop-length', type=int, default=128, help='If pitch detection algo is mangio-crepe, controls how often it checks for pitch changes in milliseconds. The higher the value, the faster the conversion and less risk of voice cracks, but there is less pitch accuracy. Recommended: 128.')
- parser.add_argument('-pro', '--protect', type=float, default=0.33, help='A decimal number e.g. 0.33. Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy.')
- parser.add_argument('-mv', '--main-vol', type=int, default=0, help='Volume change for AI main vocals in decibels. Use -3 to decrease by 3 decibels and 3 to increase by 3 decibels')
- parser.add_argument('-bv', '--backup-vol', type=int, default=0, help='Volume change for backup vocals in decibels')
- parser.add_argument('-iv', '--inst-vol', type=int, default=0, help='Volume change for instrumentals in decibels')
- parser.add_argument('-pall', '--pitch-change-all', type=int, default=0, help='Change the pitch/key of vocals and instrumentals. Changing this slightly reduces sound quality')
- parser.add_argument('-rsize', '--reverb-size', type=float, default=0.15, help='Reverb room size between 0 and 1')
- parser.add_argument('-rwet', '--reverb-wetness', type=float, default=0.2, help='Reverb wet level between 0 and 1')
- parser.add_argument('-rdry', '--reverb-dryness', type=float, default=0.8, help='Reverb dry level between 0 and 1')
- parser.add_argument('-rdamp', '--reverb-damping', type=float, default=0.7, help='Reverb damping between 0 and 1')
- parser.add_argument('-oformat', '--output-format', type=str, default='mp3', help='Output format of audio file. mp3 for smaller file size, wav for best quality')
- args = parser.parse_args()
-
- rvc_dirname = args.rvc_dirname
- if not os.path.exists(os.path.join(rvc_models_dir, rvc_dirname)):
- raise Exception(f'The folder {os.path.join(rvc_models_dir, rvc_dirname)} does not exist.')
-
- cover_path = song_cover_pipeline(args.song_input, rvc_dirname, args.pitch_change, args.keep_files,
- main_gain=args.main_vol, backup_gain=args.backup_vol, inst_gain=args.inst_vol,
- index_rate=args.index_rate, filter_radius=args.filter_radius,
- rms_mix_rate=args.rms_mix_rate, f0_method=args.pitch_detection_algo,
- crepe_hop_length=args.crepe_hop_length, protect=args.protect,
- pitch_change_all=args.pitch_change_all,
- reverb_rm_size=args.reverb_size, reverb_wet=args.reverb_wetness,
- reverb_dry=args.reverb_dryness, reverb_damping=args.reverb_damping,
- output_format=args.output_format)
- print(f'[+] Cover generated at {cover_path}')
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/speech_to_text/xm_transformer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/speech_to_text/xm_transformer.py
deleted file mode 100644
index 5eecbfa2158dcbee90eef6d395bb5611ff8ee8de..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/speech_to_text/xm_transformer.py
+++ /dev/null
@@ -1,505 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import copy
-from typing import Dict, List, Optional, Tuple
-
-from fairseq import utils, checkpoint_utils
-from fairseq.models import (FairseqEncoderDecoderModel, FairseqEncoder,
- register_model, register_model_architecture)
-from fairseq.models.transformer import Embedding, TransformerDecoder
-from fairseq.models.wav2vec import Wav2VecEncoder
-from fairseq.modules.layer_norm import LayerNorm
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.utils import safe_hasattr
-from torch import Tensor
-import torch.nn as nn
-
-
-logger = logging.getLogger(__name__)
-
-
-class Conv1dAdaptor(nn.Module):
- def __init__(self, in_dim, out_dim, n_layers=3, kernel_size=3, stride=2,
- add_layernorm=False):
- super().__init__()
- self.layers = nn.ModuleList(
- nn.Conv1d(in_dim if i == 0 else out_dim, out_dim * 2, kernel_size,
- stride=stride, padding=kernel_size // 2)
- for i in range(n_layers)
- )
- self.layernorms = None
- if add_layernorm:
- self.layernorms = nn.ModuleList(LayerNorm(out_dim)
- for _ in range(n_layers))
- self.stride = stride
-
- @classmethod
- def add_args(cls, parser):
- parser.add_argument("--adaptor-n-layers", type=int)
- parser.add_argument("--adaptor-kernel-size", type=int)
- parser.add_argument("--adaptor-stride", type=int)
- parser.add_argument("--adaptor-layernorm", action='store_true')
-
- def get_out_seq_lens_tensor(self, in_seq_lens_tensor):
- out = in_seq_lens_tensor.clone()
- for _ in self.layers:
- out = ((out.float() - 1) / self.stride + 1).floor().long()
- return out
-
- def forward(self, x, padding_mask):
- # T x B x C -> B x C x T
- x = x.transpose(0, 1).transpose(1, 2)
- for i, layer in enumerate(self.layers):
- x = nn.functional.glu(layer(x), dim=1)
- if self.layernorms is not None:
- x = self.layernorms[i](x.transpose(1, 2)).transpose(1, 2)
- # B x C x T -> T x B x C
- x = x.transpose(1, 2).transpose(0, 1)
-
- if padding_mask is None:
- out_padding_mask = None
- else:
- out_lengths = self.get_out_seq_lens_tensor((~padding_mask).sum(1))
- out_padding_mask = lengths_to_padding_mask(out_lengths)
- return x, out_padding_mask
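-
-    # Note: with the defaults (n_layers=3, stride=2) each GLU conv layer roughly halves
-    # the time dimension, so a T-frame wav2vec feature sequence comes out at about T / 8
-    # frames; get_out_seq_lens_tensor above gives the exact per-layer length formula.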
-
-
-def add_wav2vec_asr_args(parser):
- parser.add_argument("--w2v-path", help="path to wav2vec 2.0 model")
- parser.add_argument(
- "--no-pretrained-weights",
- action="store_true",
- help="if true, does not load pretrained weights",
- )
- parser.add_argument(
- "--dropout-input",
- type=float,
- metavar="D",
- help="dropout to apply to the input (after feat extr)",
- )
- parser.add_argument(
- "--final-dropout",
- type=float,
- metavar="D",
- help="dropout after transformer and before final projection",
- )
- parser.add_argument(
- "--apply-mask", action="store_true", help="apply masking during fine-tuning"
- )
- parser.add_argument(
- "--dropout",
- type=float,
- metavar="D",
- help="dropout probability inside wav2vec 2.0 model",
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights inside wav2vec 2.0 model",
- )
- parser.add_argument(
- "--activation-dropout",
- "--relu-dropout",
- type=float,
- metavar="D",
- help="dropout probability after activation in FFN inside wav2vec 2.0 model",
- )
-
- parser.add_argument(
- "--mask-length", type=int, help="repeat the mask indices multiple times"
- )
-
- parser.add_argument(
- "--mask-prob", type=float, help="probability of replacing a token with mask"
- )
-
- parser.add_argument(
- "--mask-selection",
- type=str,
- choices=["static", "uniform", "normal", "poisson"],
- help="how to choose masks",
- )
-
- parser.add_argument(
- "--mask-other",
- type=float,
- help="stdev of the mask length in case of 'normal' selection strategy",
- )
-
- parser.add_argument(
- "--no-mask-overlap",
- action="store_true",
- help="whether to allow masks to overlap",
- )
-
- parser.add_argument(
- "--mask-channel-length", type=int, help="repeat the mask indices multiple times"
- )
-
- parser.add_argument(
- "--mask-channel-prob",
- type=float,
- help="probability of replacing a token with mask",
- )
-
- parser.add_argument(
- "--mask-channel-selection",
- type=str,
- choices=["static", "uniform", "normal", "poisson"],
- help="how to choose masks",
- )
-
- parser.add_argument(
- "--mask-channel-other",
- type=float,
- help="stdev of the mask length in case of 'normal' selection strategy",
- )
-
- parser.add_argument(
- "--no-mask-channel-overlap",
- action="store_true",
- help="whether to allow masks to overlap",
- )
-
- parser.add_argument(
- "--freeze-finetune-updates",
- default=0,
- type=int,
-        help="don't finetune wav2vec for this many updates",
- )
-
- parser.add_argument(
- "--feature-grad-mult",
- default=None,
- type=float,
- help="reset feature grad mult in wav2vec 2.0 to this",
- )
-
- parser.add_argument(
- "--layerdrop",
- default=0.0,
- type=float,
- help="probability of dropping a layer in wav2vec 2.0",
- )
- parser.add_argument("--w2v-args", default=None)
-
-
-class Wav2VecEncoderWithAdaptor(FairseqEncoder):
- def __init__(self, args):
- super().__init__(None)
- self.w2v_encoder = Wav2VecEncoder(args)
- encoder_out_dim = self.w2v_encoder.w2v_model.encoder.embedding_dim
- # Projection + 8x shrinking
- self.adaptor = Conv1dAdaptor(
- encoder_out_dim, args.decoder_embed_dim,
- n_layers=args.adaptor_n_layers,
- kernel_size=args.adaptor_kernel_size, stride=args.adaptor_stride,
- add_layernorm=args.adaptor_layernorm
- )
- for k, p in self.w2v_encoder.w2v_model.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(args, 'finetune_w2v_params') and XMTransformerModel.finetune_params(
- args.finetune_w2v_params, k):
- p.requires_grad = True
- else:
- p.requires_grad = False
-
- @classmethod
- def add_args(cls, parser):
- add_wav2vec_asr_args(parser)
- parser.add_argument(
- "--normalize", action="store_true",
- help="if set, normalizes input to have 0 mean and unit variance",
- )
- parser.add_argument("--finetune-w2v-params", type=str, metavar="STR",
- help="comma-separated param strings to finetune.")
- Conv1dAdaptor.add_args(parser)
-
- def forward(self, src_tokens, src_lengths=None, **kwargs):
- padding_mask = lengths_to_padding_mask(src_lengths)
- out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True)
- x = out["encoder_out"]
- enc_padding_mask = None
- if out["encoder_padding_mask"] is not None:
- enc_padding_mask = out["encoder_padding_mask"].transpose(0, 1) # T X B --> B X T
-
- x, enc_padding_mask = self.adaptor(x, enc_padding_mask)
-
- return {
- "encoder_out": [x], # T x B x C
- "encoder_padding_mask": [enc_padding_mask] if enc_padding_mask.any() else [], # B x T
- "encoder_embedding": [], # B x T x C
- "encoder_states": [], # List[T x B x C]
- "src_tokens": [],
- "src_lengths": [],
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- new_encoder_out = (
- [] if len(encoder_out["encoder_out"]) == 0
- else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]]
- )
-
- new_encoder_padding_mask = (
- [] if len(encoder_out["encoder_padding_mask"]) == 0
- else [x.index_select(0, new_order) for x in
- encoder_out["encoder_padding_mask"]]
- )
-
- new_encoder_embedding = (
- [] if len(encoder_out["encoder_embedding"]) == 0
- else [x.index_select(0, new_order) for x in
- encoder_out["encoder_embedding"]]
- )
-
- encoder_states = encoder_out["encoder_states"]
- if len(encoder_states) > 0:
- for idx, state in enumerate(encoder_states):
- encoder_states[idx] = state.index_select(1, new_order)
-
- return {
- "encoder_out": new_encoder_out, # T x B x C
- "encoder_padding_mask": new_encoder_padding_mask, # B x T
- "encoder_embedding": new_encoder_embedding, # B x T x C
- "encoder_states": encoder_states, # List[T x B x C]
- "src_tokens": [], # B x T
- "src_lengths": [], # B x 1
- }
-
-
-def add_decoder_args(parser):
- parser.add_argument("--activation-fn", type=str, default='relu',
- choices=utils.get_available_activation_fns(),
- help="activation function to use")
- parser.add_argument("--decoder-dropout", type=float, metavar="D",
- help="dropout probability")
- parser.add_argument("--decoder-attention-dropout", type=float,
- metavar="D",
- help="dropout probability for attention weights")
- parser.add_argument("--decoder-activation-dropout", type=float,
- metavar="D",
- help="dropout probability after activation in FFN.")
- parser.add_argument("--decoder-embed-dim", type=int, metavar="N",
- help="decoder embedding dimension")
- parser.add_argument("--decoder-ffn-embed-dim", type=int, metavar="N",
- help="decoder embedding dimension for FFN")
- parser.add_argument("--decoder-layers", type=int, metavar="N",
- help="num decoder layers")
- parser.add_argument("--decoder-attention-heads", type=int, metavar="N",
- help="num decoder attention heads")
- parser.add_argument("--decoder-normalize-before", action="store_true",
- help="apply layernorm before each decoder block")
- parser.add_argument("--layernorm-embedding", action="store_true",
- help="add layernorm to embedding")
- parser.add_argument("--no-scale-embedding", action="store_true",
-                        help="if True, don't scale embeddings")
- parser.add_argument(
- "--load-pretrained-decoder-from", type=str, metavar="STR",
- help="model to take decoder weights from (for initialization)"
- )
- parser.add_argument("--finetune-decoder-params", type=str,
- metavar="STR",
- help="comma-separated param strings to finetune.")
- parser.add_argument("--checkpoint-activations", action="store_true")
-
-
-@register_model("xm_transformer")
-class XMTransformerModel(FairseqEncoderDecoderModel):
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @classmethod
- def add_args(cls, parser):
- """Add model-specific arguments to the parser."""
- Wav2VecEncoderWithAdaptor.add_args(parser)
- add_decoder_args(parser)
-
- @classmethod
- def build_encoder(cls, args):
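-        # Read encoder_embed_dim from the wav2vec checkpoint and use it as the
-        # adaptor output width (the encoder-side decoder_embed_dim).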
- _args = copy.deepcopy(args)
- state = checkpoint_utils.load_checkpoint_to_cpu(args.w2v_path)
- if state.get("cfg") is not None:
- encoder_embed_dim = state["cfg"]._content["model"]["encoder_embed_dim"]
- elif state.get("args") is not None:
- encoder_embed_dim = state["args"].encoder_embed_dim
- else:
- raise ValueError(f"Invalid config in {args.w2v_path}")
- _args.decoder_embed_dim = encoder_embed_dim
- encoder = Wav2VecEncoderWithAdaptor(_args)
- return encoder
-
- @classmethod
- def build_decoder(cls, args, task, embed_tokens):
- _args = copy.deepcopy(args)
- _args.dropout = args.decoder_dropout
- _args.attention_dropout = args.decoder_attention_dropout
- _args.activation_dropout = args.decoder_activation_dropout
- _args.max_target_positions = 1024
-
- decoder = TransformerDecoder(_args, task.target_dictionary,
- embed_tokens)
- if getattr(args, "load_pretrained_decoder_from", None):
- decoder = checkpoint_utils.load_pretrained_component_from_model(
- component=decoder, checkpoint=args.load_pretrained_decoder_from
- )
- for k, p in decoder.named_parameters():
- # Freeze pretrained models by default
- if safe_hasattr(args, 'finetune_decoder_params') and XMTransformerModel.finetune_params(
- args.finetune_decoder_params, k):
- p.requires_grad = True
- else:
- p.requires_grad = False
- return decoder
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_architecture(args)
-
- def build_embedding(dictionary, embed_dim):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- return Embedding(num_embeddings, embed_dim, padding_idx)
-
- decoder_embed_tokens = build_embedding(task.target_dictionary,
- args.decoder_embed_dim)
- encoder = cls.build_encoder(args)
- decoder = cls.build_decoder(args, task, decoder_embed_tokens)
- return cls(encoder, decoder)
-
- def get_normalized_probs(
- self,
- net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
- log_probs: bool,
- sample: Optional[Dict[str, Tensor]] = None,
- ):
- # net_output['encoder_out'] is a (B, T, D) tensor
- lprobs = self.get_normalized_probs_scriptable(net_output, log_probs,
- sample)
- lprobs.batch_first = True
- return lprobs
-
- def forward(self, src_tokens, src_lengths, prev_output_tokens, **kwargs):
- """
-        The forward method inherited from the base class accepts a **kwargs
-        argument, which is not supported in TorchScript. This method
-        overrides the forward method definition without **kwargs.
- """
- encoder_out = self.encoder(src_tokens=src_tokens,
- src_lengths=src_lengths, **kwargs)
- decoder_out = self.decoder(prev_output_tokens=prev_output_tokens,
- encoder_out=encoder_out)
- return decoder_out
-
-    def upgrade_state_dict(self, state_dict):
-        # Rename legacy 'adaptor.layers.*' keys to the current 'adaptor_layers.*' naming.
-        # Iterate over a copy of the keys since entries are added and removed in the loop.
-        for k in list(state_dict.keys()):
-            if 'adaptor.layers' in k:
-                new = k.replace('adaptor.layers', 'adaptor_layers')
-                state_dict[new] = state_dict[k]
-                del state_dict[k]
-
- @staticmethod
- def finetune_params(finetune_params, param_name):
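-        # "all" makes every parameter trainable; otherwise a parameter is trainable
-        # when its name contains any of the comma-separated substrings
-        # (e.g. "--finetune-decoder-params layer_norm,embed_tokens").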
- if finetune_params == "all":
- return True
- finetune_params_list = finetune_params.split(",")
- for finetune_param in finetune_params_list:
- if finetune_param in param_name:
- return True
- return False
-
-
-def set_default_w2v_encoder_args(args):
- args.no_pretrained_weights = getattr(args, "no_pretrained_weights", False)
- args.dropout_input = getattr(args, "dropout_input", 0)
- args.final_dropout = getattr(args, "final_dropout", 0)
- args.apply_mask = getattr(args, "apply_mask", False)
- args.dropout = getattr(args, "dropout", 0)
- args.attention_dropout = getattr(args, "attention_dropout", 0)
- args.activation_dropout = getattr(args, "activation_dropout", 0)
-
- args.mask_length = getattr(args, "mask_length", 10)
- args.mask_prob = getattr(args, "mask_prob", 0.5)
- args.mask_selection = getattr(args, "mask_selection", "static")
- args.mask_other = getattr(args, "mask_other", 0)
- args.no_mask_overlap = getattr(args, "no_mask_overlap", False)
- args.mask_channel_length = getattr(args, "mask_channel_length", 10)
- args.mask_channel_prob = getattr(args, "mask_channel_prob", 0.5)
- args.mask_channel_before = getattr(args, "mask_channel_before", False)
- args.mask_channel_selection = getattr(args, "mask_channel_selection",
- "static")
- args.mask_channel_other = getattr(args, "mask_channel_other", 0)
- args.no_mask_channel_overlap = getattr(args, "no_mask_channel_overlap",
- False)
-
- args.freeze_finetune_updates = getattr(args, "freeze_finetune_updates", 0)
- args.feature_grad_mult = 0.1
- args.layerdrop = getattr(args, "layerdrop", 0.0)
-
- args.normalize = getattr(args, "normalize", False)
-
-
-def set_default_adaptor_args(args):
- args.adaptor_n_layers = getattr(args, "adaptor_n_layers", 3)
- args.adaptor_kernel_size = getattr(args, "adaptor_kernel_size", 3)
- args.adaptor_stride = getattr(args, "adaptor_stride", 2)
- args.adaptor_layernorm = getattr(args, "adaptor_layernorm", False)
-
-
-def set_default_mbart_decoder_args(args):
- args.decoder_embed_path = getattr(args, 'decoder_embed_path', None)
- args.decoder_embed_dim = getattr(args, 'decoder_embed_dim', 1024)
- args.decoder_ffn_embed_dim = getattr(args, 'decoder_ffn_embed_dim',
- 4 * 1024)
- args.decoder_layers = getattr(args, 'decoder_layers', 12)
- args.decoder_attention_heads = getattr(args, 'decoder_attention_heads', 16)
- args.decoder_normalize_before = getattr(args, 'decoder_normalize_before',
- True)
- args.decoder_learned_pos = getattr(args, 'decoder_learned_pos', True)
- args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0)
- args.adaptive_input = getattr(args, "adaptive_input", False)
- args.decoder_attention_dropout = getattr(args, 'decoder_attention_dropout',
- 0.)
- args.decoder_activation_dropout = getattr(args,
- 'decoder_activation_dropout', 0.)
- args.decoder_dropout = getattr(args, 'decoder_dropout', 0.1)
- args.adaptive_softmax_cutoff = getattr(args, 'adaptive_softmax_cutoff',
- None)
- args.adaptive_softmax_dropout = getattr(args, 'adaptive_softmax_dropout', 0)
- args.share_decoder_input_output_embed = getattr(
- args, 'share_decoder_input_output_embed', True
- )
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
-
- args.decoder_output_dim = getattr(args, 'decoder_output_dim',
- args.decoder_embed_dim)
- args.decoder_input_dim = getattr(args, 'decoder_input_dim',
- args.decoder_embed_dim)
-
- args.no_scale_embedding = getattr(args, 'no_scale_embedding', False)
- args.quant_noise_pq = getattr(args, "quant_noise_pq", 0)
- args.layernorm_embedding = getattr(args, 'layernorm_embedding', True)
-
- args.activation_fn = getattr(args, 'activation_fn', 'gelu')
- args.pooler_activation_fn = getattr(args, 'pooler_activation_fn', 'tanh')
- args.pooler_dropout = getattr(args, 'pooler_dropout', 0.0)
- args.checkpoint_activations = getattr(args, "checkpoint_activations", False)
-
-
-@register_model_architecture(model_name="xm_transformer",
- arch_name="xm_transformer")
-def base_architecture(args):
- set_default_w2v_encoder_args(args)
- set_default_adaptor_args(args)
- set_default_mbart_decoder_args(args)
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/masked_lm.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/masked_lm.py
deleted file mode 100644
index 279458f317ee258e393c4bf1879bb3c14a04ab51..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/masked_lm.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-import math
-from omegaconf import II
-
-import torch
-from fairseq import metrics, modules, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-
-
-@dataclass
-class MaskedLmConfig(FairseqDataclass):
- tpu: bool = II("common.tpu")
-
-
-@register_criterion("masked_lm", dataclass=MaskedLmConfig)
-class MaskedLmLoss(FairseqCriterion):
- """
- Implementation for the loss used in masked language model (MLM) training.
- """
-
- def __init__(self, cfg: MaskedLmConfig, task):
- super().__init__(task)
- self.tpu = cfg.tpu
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- masked_tokens = sample["target"].ne(self.padding_idx)
- sample_size = masked_tokens.int().sum()
-
-        # Rare: when no tokens are masked, project all tokens.
- # We use torch.where to avoid device-to-host transfers,
- # except on CPU where torch.where is not well supported
- # (see github.com/pytorch/pytorch/issues/26247).
- if self.tpu:
- masked_tokens = None # always project all tokens on TPU
- elif masked_tokens.device == torch.device("cpu"):
- if not masked_tokens.any():
- masked_tokens = None
- else:
- masked_tokens = torch.where(
- masked_tokens.any(),
- masked_tokens,
- masked_tokens.new([True]),
- )
-
- logits = model(**sample["net_input"], masked_tokens=masked_tokens)[0]
- targets = model.get_targets(sample, [logits])
- if masked_tokens is not None:
- targets = targets[masked_tokens]
-
- loss = modules.cross_entropy(
- logits.view(-1, logits.size(-1)),
- targets.view(-1),
- reduction="sum",
- ignore_index=self.padding_idx,
- )
-
- logging_output = {
- "loss": loss if self.tpu else loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["nsentences"],
- "sample_size": sample_size,
- }
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
- """
- return True
diff --git a/spaces/ORI-Muchim/RaidenTTS/app.py b/spaces/ORI-Muchim/RaidenTTS/app.py
deleted file mode 100644
index 2ccdc3e45722f9a36486ed49590c2fcce017fd5a..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/RaidenTTS/app.py
+++ /dev/null
@@ -1,156 +0,0 @@
-import json
-import os
-import re
-
-import librosa
-import numpy as np
-import torch
-from torch import no_grad, LongTensor
-import commons
-import utils
-import gradio as gr
-from models import SynthesizerTrn
-from text import text_to_sequence, _clean_text
-from mel_processing import spectrogram_torch
-
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-
-def get_text(text, hps, is_phoneme):
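-    # Convert text (or a phoneme string) into a sequence of symbol ids; when
-    # add_blank is set, a blank token (0) is interspersed between symbols.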
- text_norm = text_to_sequence(text, hps.symbols, [] if is_phoneme else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm
-
-
-def create_tts_fn(model, hps, speaker_ids):
- def tts_fn(text, speaker, speed, is_phoneme):
- if limitation:
- text_len = len(text)
- max_len = 700
- if is_phoneme:
- max_len *= 3
- else:
- if len(hps.data.text_cleaners) > 0 and hps.data.text_cleaners[0] == "zh_ja_mixture_cleaners":
-                    text_len = len(re.sub(r"(\[ZH\]|\[JA\])", "", text))
- if text_len > max_len:
- return "Error: Text is too long", None
-
- speaker_id = speaker_ids[speaker]
- stn_tst = get_text(text, hps, is_phoneme)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = LongTensor([stn_tst.size(0)])
- sid = LongTensor([speaker_id])
- audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8,
- length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy()
- del stn_tst, x_tst, x_tst_lengths, sid
- return "Success", (hps.data.sampling_rate, audio)
-
- return tts_fn
-
-
-def create_to_phoneme_fn(hps):
- def to_phoneme_fn(text):
- return _clean_text(text, hps.data.text_cleaners) if text != "" else ""
-
- return to_phoneme_fn
-
-
-css = """
- #advanced-btn {
- color: white;
- border-color: black;
- background: black;
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 24px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
-"""
-
-if __name__ == '__main__':
- models_tts = []
- name = 'RaidenTTS'
- lang = '한국어 (Korean)'
- example = '내가 누군가를 좋아한다는 사실이 그 사람에게는 상처가 될 수 있잖아요.'
- config_path = f"saved_model/config.json"
- model_path = f"saved_model/model.pth"
- cover_path = f"saved_model/cover.png"
- hps = utils.get_hparams_from_file(config_path)
- model = SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
- utils.load_checkpoint(model_path, model, None)
- model.eval()
- speaker_ids = [0]
- speakers = [name]
-
- t = 'vits'
- models_tts.append((name, cover_path, speakers, lang, example,
- hps.symbols, create_tts_fn(model, hps, speaker_ids),
- create_to_phoneme_fn(hps)))
-
- app = gr.Blocks(css=css)
-
- with app:
- gr.Markdown("# Genshin Impact RaidenTTS Using Vits Model\n"
- "\n\n")
-
- for i, (name, cover_path, speakers, lang, example, symbols, tts_fn,
- to_phoneme_fn) in enumerate(models_tts):
-
- with gr.Column():
- gr.Markdown(f"## {name}\n\n"
- f"\n\n"
- f"lang: {lang}")
-                tts_input1 = gr.TextArea(label="Text (700 characters limit)", value=example,
- elem_id=f"tts-input{i}")
- tts_input2 = gr.Dropdown(label="Speaker", choices=speakers,
- type="index", value=speakers[0])
- tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.1, maximum=2, step=0.1)
- with gr.Accordion(label="Advanced Options", open=False):
- phoneme_input = gr.Checkbox(value=False, label="Phoneme input")
-                    to_phoneme_btn = gr.Button("Convert text to phoneme")
- phoneme_list = gr.Dataset(label="Phoneme list", components=[tts_input1],
- samples=[[x] for x in symbols],
- elem_id=f"phoneme-list{i}")
- phoneme_list_json = gr.Json(value=symbols, visible=False)
- tts_submit = gr.Button("Generate", variant="primary")
- tts_output1 = gr.Textbox(label="Output Message")
- tts_output2 = gr.Audio(label="Output Audio")
- tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, phoneme_input],
- [tts_output1, tts_output2])
- to_phoneme_btn.click(to_phoneme_fn, [tts_input1], [tts_input1])
- phoneme_list.click(None, [phoneme_list, phoneme_list_json], [],
- _js=f"""
- (i,phonemes) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#tts-input{i}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + phonemes[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + phonemes[i].length;
- text_input.selectionEnd = startPos + phonemes[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return [];
- }}""")
-
- app.queue(concurrency_count=3).launch(show_api=False)
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/object365.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/object365.py
deleted file mode 100644
index 8b8cc19da23d8397284b50588ee46e750b5b7552..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/object365.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import logging
-import os
-from fvcore.common.timer import Timer
-from detectron2.structures import BoxMode
-from fvcore.common.file_io import PathManager
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from lvis import LVIS
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["load_o365_json", "register_o365_instances"]
-
-
-def register_o365_instances(name, metadata, json_file, image_root):
- DatasetCatalog.register(name, lambda: load_o365_json(
- json_file, image_root, name))
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root,
- evaluator_type="lvis", **metadata
- )
-
-
-def get_o365_meta():
- categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}]
- o365_categories = sorted(categories, key=lambda x: x["id"])
- thing_classes = [k["name"] for k in o365_categories]
- meta = {"thing_classes": thing_classes}
- return meta
-
-
-def load_o365_json(json_file, image_root, dataset_name=None):
- '''
- Load Object365 class name text for object description for GRiT
- '''
-
- json_file = PathManager.get_local_path(json_file)
-
- timer = Timer()
- lvis_api = LVIS(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(
- json_file, timer.seconds()))
-
- class_names = {}
- sort_cat = sorted(lvis_api.dataset['categories'], key=lambda x: x['id'])
- for x in sort_cat:
-        # Use the category name as the object description; 'a/b' synonym lists become 'a b'.
-        text = ' '.join(x['name'].split('/'))
- class_names[x['id']] = text
-
- img_ids = sorted(lvis_api.imgs.keys())
- imgs = lvis_api.load_imgs(img_ids)
- anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids]
-
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
- assert len(set(ann_ids)) == len(ann_ids), \
- "Annotation ids in '{}' are not unique".format(json_file)
-
- imgs_anns = list(zip(imgs, anns))
- logger.info("Loaded {} images in the LVIS v1 format from {}".format(
- len(imgs_anns), json_file))
-
- dataset_dicts = []
-
- for (img_dict, anno_dict_list) in imgs_anns:
- record = {}
- if "file_name" in img_dict:
- file_name = img_dict["file_name"]
- record["file_name"] = os.path.join(image_root, file_name)
-
- record["height"] = int(img_dict["height"])
- record["width"] = int(img_dict["width"])
- image_id = record["image_id"] = img_dict["id"]
-
- objs = []
- for anno in anno_dict_list:
- assert anno["image_id"] == image_id
- if anno.get('iscrowd', 0) > 0:
- continue
- obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS}
- obj["category_id"] = 0
- obj["object_description"] = class_names[anno['category_id']]
-
- objs.append(obj)
- record["annotations"] = objs
- if len(record["annotations"]) == 0:
- continue
- record["task"] = "ObjectDet"
- dataset_dicts.append(record)
-
- return dataset_dicts
-
-
-_CUSTOM_SPLITS_LVIS = {
- "object365_train": ("object365/images/train/", "object365/annotations/train_v1.json"),
-}
-
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items():
- register_o365_instances(
- key,
- get_o365_meta(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
\ No newline at end of file
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/resnet.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/resnet.py
deleted file mode 100644
index 3e1d521f171c984cf6a7ff3dcebd96f8c5faf908..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/resnet.py
+++ /dev/null
@@ -1,181 +0,0 @@
-"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch"""
-
-import math
-
-import torch.nn as nn
-from torch.nn import BatchNorm2d
-
-from .utils import load_url
-
-__all__ = ['ResNet', 'resnet50', 'resnet18']
-
-
-model_urls = {
- 'resnet50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet50-imagenet.pth',
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(Bottleneck, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
- self.bn1 = BatchNorm2d(planes)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
- self.bn2 = BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
- self.bn3 = BatchNorm2d(planes * 4)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers, num_classes=1000):
- self.inplanes = 128
- super(ResNet, self).__init__()
- self.conv1 = conv3x3(3, 64, stride=2)
- self.bn1 = BatchNorm2d(64)
- self.relu1 = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(64, 64)
- self.bn2 = BatchNorm2d(64)
- self.relu2 = nn.ReLU(inplace=True)
- self.conv3 = conv3x3(64, 128)
- self.bn3 = BatchNorm2d(128)
- self.relu3 = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
- self.avgpool = nn.AvgPool2d(7, stride=1)
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
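-        # Stack `blocks` residual blocks; the first block may downsample and/or
-        # change the channel count, so it gets the 1x1-conv downsample branch.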
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- x = x.view(x.size(0), -1)
- x = self.fc(x)
-
- return x
-
-
-def resnet50(pretrained=False, **kwargs):
- """Constructs a ResNet-50 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet50']), strict=False)
- return model
-
-
-def resnet18(pretrained=False, **kwargs):
- """Constructs a ResNet-18 model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet18']))
- return model
\ No newline at end of file
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/evaluator.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/evaluator.py
deleted file mode 100644
index aa9e80402633c08a580929b38a5cb695cb7171d8..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/evaluator.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import logging
-import math
-from typing import Dict
-
-import numpy as np
-import torch
-import torch.nn as nn
-import tqdm
-from torch.utils.data import DataLoader
-
-from saicinpainting.evaluation.utils import move_to_device
-
-LOGGER = logging.getLogger(__name__)
-
-
-class InpaintingEvaluator():
- def __init__(self, dataset, scores, area_grouping=True, bins=10, batch_size=32, device='cuda',
- integral_func=None, integral_title=None, clamp_image_range=None):
- """
- :param dataset: torch.utils.data.Dataset which contains images and masks
- :param scores: dict {score_name: EvaluatorScore object}
- :param area_grouping: in addition to the overall scores, allows to compute score for the groups of samples
- which are defined by share of area occluded by mask
- :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1)
- :param batch_size: batch_size for the dataloader
- :param device: device to use
- """
- self.scores = scores
- self.dataset = dataset
-
- self.area_grouping = area_grouping
- self.bins = bins
-
- self.device = torch.device(device)
-
- self.dataloader = DataLoader(self.dataset, shuffle=False, batch_size=batch_size)
-
- self.integral_func = integral_func
- self.integral_title = integral_title
- self.clamp_image_range = clamp_image_range
-
- def _get_bin_edges(self):
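-        # Partition the mask-area ratio [0, 1] into `bins` intervals and assign
-        # every dataset sample to one of them, returning per-sample bin indices
-        # together with human-readable interval names.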
- bin_edges = np.linspace(0, 1, self.bins + 1)
-
- num_digits = max(0, math.ceil(math.log10(self.bins)) - 1)
- interval_names = []
- for idx_bin in range(self.bins):
- start_percent, end_percent = round(100 * bin_edges[idx_bin], num_digits), \
- round(100 * bin_edges[idx_bin + 1], num_digits)
- start_percent = '{:.{n}f}'.format(start_percent, n=num_digits)
- end_percent = '{:.{n}f}'.format(end_percent, n=num_digits)
- interval_names.append("{0}-{1}%".format(start_percent, end_percent))
-
- groups = []
- for batch in self.dataloader:
- mask = batch['mask']
- batch_size = mask.shape[0]
- area = mask.to(self.device).reshape(batch_size, -1).mean(dim=-1)
- bin_indices = np.searchsorted(bin_edges, area.detach().cpu().numpy(), side='right') - 1
- # corner case: when area is equal to 1, bin_indices should return bins - 1, not bins for that element
- bin_indices[bin_indices == self.bins] = self.bins - 1
- groups.append(bin_indices)
- groups = np.hstack(groups)
-
- return groups, interval_names
-
- def evaluate(self, model=None):
- """
- :param model: callable with signature (image_batch, mask_batch); should return inpainted_batch
- :return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or
- name of the particular group arranged by area of mask (e.g. '10-20%')
- and score statistics for the group as values.
- """
- results = dict()
- if self.area_grouping:
- groups, interval_names = self._get_bin_edges()
- else:
- groups = None
-
- for score_name, score in tqdm.auto.tqdm(self.scores.items(), desc='scores'):
- score.to(self.device)
- with torch.no_grad():
- score.reset()
- for batch in tqdm.auto.tqdm(self.dataloader, desc=score_name, leave=False):
- batch = move_to_device(batch, self.device)
- image_batch, mask_batch = batch['image'], batch['mask']
- if self.clamp_image_range is not None:
- image_batch = torch.clamp(image_batch,
- min=self.clamp_image_range[0],
- max=self.clamp_image_range[1])
- if model is None:
- assert 'inpainted' in batch, \
- 'Model is None, so we expected precomputed inpainting results at key "inpainted"'
- inpainted_batch = batch['inpainted']
- else:
- inpainted_batch = model(image_batch, mask_batch)
- score(inpainted_batch, image_batch, mask_batch)
- total_results, group_results = score.get_value(groups=groups)
-
- results[(score_name, 'total')] = total_results
- if groups is not None:
- for group_index, group_values in group_results.items():
- group_name = interval_names[group_index]
- results[(score_name, group_name)] = group_values
-
- if self.integral_func is not None:
- results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results))
-
- return results
-
-
-def ssim_fid100_f1(metrics, fid_scale=100):
- ssim = metrics[('ssim', 'total')]['mean']
- fid = metrics[('fid', 'total')]['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * ssim * fid_rel / (ssim + fid_rel + 1e-3)
- return f1
-
-
-def lpips_fid100_f1(metrics, fid_scale=100):
- neg_lpips = 1 - metrics[('lpips', 'total')]['mean'] # invert, so bigger is better
- fid = metrics[('fid', 'total')]['mean']
- fid_rel = max(0, fid_scale - fid) / fid_scale
- f1 = 2 * neg_lpips * fid_rel / (neg_lpips + fid_rel + 1e-3)
- return f1
-
-
-
-class InpaintingEvaluatorOnline(nn.Module):
- def __init__(self, scores, bins=10, image_key='image', inpainted_key='inpainted',
- integral_func=None, integral_title=None, clamp_image_range=None):
- """
- :param scores: dict {score_name: EvaluatorScore object}
- :param bins: number of groups, partition is generated by np.linspace(0., 1., bins + 1)
-        :param image_key: key of the ground-truth image in the batch dict
-        :param inpainted_key: key of the inpainted prediction in the batch dict
- """
- super().__init__()
- LOGGER.info(f'{type(self)} init called')
- self.scores = nn.ModuleDict(scores)
- self.image_key = image_key
- self.inpainted_key = inpainted_key
- self.bins_num = bins
- self.bin_edges = np.linspace(0, 1, self.bins_num + 1)
-
- num_digits = max(0, math.ceil(math.log10(self.bins_num)) - 1)
- self.interval_names = []
- for idx_bin in range(self.bins_num):
- start_percent, end_percent = round(100 * self.bin_edges[idx_bin], num_digits), \
- round(100 * self.bin_edges[idx_bin + 1], num_digits)
- start_percent = '{:.{n}f}'.format(start_percent, n=num_digits)
- end_percent = '{:.{n}f}'.format(end_percent, n=num_digits)
- self.interval_names.append("{0}-{1}%".format(start_percent, end_percent))
-
- self.groups = []
-
- self.integral_func = integral_func
- self.integral_title = integral_title
- self.clamp_image_range = clamp_image_range
-
- LOGGER.info(f'{type(self)} init done')
-
- def _get_bins(self, mask_batch):
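-        # Map each mask to the index of its area-ratio bin in [0, bins_num - 1].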
- batch_size = mask_batch.shape[0]
- area = mask_batch.view(batch_size, -1).mean(dim=-1).detach().cpu().numpy()
- bin_indices = np.clip(np.searchsorted(self.bin_edges, area) - 1, 0, self.bins_num - 1)
- return bin_indices
-
- def forward(self, batch: Dict[str, torch.Tensor]):
- """
- Calculate and accumulate metrics for batch. To finalize evaluation and obtain final metrics, call evaluation_end
-        :param batch: batch dict with mandatory fields mask, image, inpainted (image/inpainted keys can be overridden by self.image_key / self.inpainted_key)
- """
- result = {}
- with torch.no_grad():
- image_batch, mask_batch, inpainted_batch = batch[self.image_key], batch['mask'], batch[self.inpainted_key]
- if self.clamp_image_range is not None:
- image_batch = torch.clamp(image_batch,
- min=self.clamp_image_range[0],
- max=self.clamp_image_range[1])
- self.groups.extend(self._get_bins(mask_batch))
-
- for score_name, score in self.scores.items():
- result[score_name] = score(inpainted_batch, image_batch, mask_batch)
- return result
-
- def process_batch(self, batch: Dict[str, torch.Tensor]):
- return self(batch)
-
- def evaluation_end(self, states=None):
- """:return: dict with (score_name, group_type) as keys, where group_type can be either 'overall' or
- name of the particular group arranged by area of mask (e.g. '10-20%')
- and score statistics for the group as values.
- """
- LOGGER.info(f'{type(self)}: evaluation_end called')
-
- self.groups = np.array(self.groups)
-
- results = {}
- for score_name, score in self.scores.items():
- LOGGER.info(f'Getting value of {score_name}')
- cur_states = [s[score_name] for s in states] if states is not None else None
- total_results, group_results = score.get_value(groups=self.groups, states=cur_states)
- LOGGER.info(f'Getting value of {score_name} done')
- results[(score_name, 'total')] = total_results
-
- for group_index, group_values in group_results.items():
- group_name = self.interval_names[group_index]
- results[(score_name, group_name)] = group_values
-
- if self.integral_func is not None:
- results[(self.integral_title, 'total')] = dict(mean=self.integral_func(results))
-
- LOGGER.info(f'{type(self)}: reset scores')
- self.groups = []
- for sc in self.scores.values():
- sc.reset()
- LOGGER.info(f'{type(self)}: reset scores done')
-
- LOGGER.info(f'{type(self)}: evaluation_end done')
- return results
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fast_scnn.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fast_scnn.py
deleted file mode 100644
index 32fdeb659355a5ce5ef2cc7c2f30742703811cdf..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fast_scnn.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True, momentum=0.01)
-model = dict(
- type='EncoderDecoder',
- backbone=dict(
- type='FastSCNN',
- downsample_dw_channels=(32, 48),
- global_in_channels=64,
- global_block_channels=(64, 96, 128),
- global_block_strides=(2, 2, 1),
- global_out_channels=128,
- higher_in_channels=64,
- lower_in_channels=128,
- fusion_out_channels=128,
- out_indices=(0, 1, 2),
- norm_cfg=norm_cfg,
- align_corners=False),
- decode_head=dict(
- type='DepthwiseSeparableFCNHead',
- in_channels=128,
- channels=128,
- concat_input=False,
- num_classes=19,
- in_index=-1,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
- auxiliary_head=[
- dict(
- type='FCNHead',
- in_channels=128,
- channels=32,
- num_convs=1,
- num_classes=19,
- in_index=-2,
- norm_cfg=norm_cfg,
- concat_input=False,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
- dict(
- type='FCNHead',
- in_channels=64,
- channels=32,
- num_convs=1,
- num_classes=19,
- in_index=-3,
- norm_cfg=norm_cfg,
- concat_input=False,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.4)),
- ],
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/PKaushik/humandetect/yolov6/solver/build.py b/spaces/PKaushik/humandetect/yolov6/solver/build.py
deleted file mode 100644
index 0684ff7bfae7db248b29850d8ed2e8a33ff623b1..0000000000000000000000000000000000000000
--- a/spaces/PKaushik/humandetect/yolov6/solver/build.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-import logging
-import math
-
-import torch
-import torch.nn as nn
-
-# Module-level logger used by build_lr_scheduler below.
-LOGGER = logging.getLogger(__name__)
-
-
-def build_optimizer(cfg, model):
- """ Build optimizer from cfg file."""
- g_bnw, g_w, g_b = [], [], []
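-    # Split parameters into three groups: BatchNorm weights (no weight decay),
-    # other weights (with weight decay) and biases (no weight decay).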
- for v in model.modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- g_b.append(v.bias)
- if isinstance(v, nn.BatchNorm2d):
- g_bnw.append(v.weight)
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- g_w.append(v.weight)
-
-    assert cfg.solver.optim in ('SGD', 'Adam'), 'ERROR: unknown optimizer, use SGD or Adam'
- if cfg.solver.optim == 'SGD':
- optimizer = torch.optim.SGD(g_bnw, lr=cfg.solver.lr0, momentum=cfg.solver.momentum, nesterov=True)
- elif cfg.solver.optim == 'Adam':
- optimizer = torch.optim.Adam(g_bnw, lr=cfg.solver.lr0, betas=(cfg.solver.momentum, 0.999))
-
- optimizer.add_param_group({'params': g_w, 'weight_decay': cfg.solver.weight_decay})
- optimizer.add_param_group({'params': g_b})
-
- del g_bnw, g_w, g_b
- return optimizer
-
-
-def build_lr_scheduler(cfg, optimizer, epochs):
-    """Build learning rate scheduler from cfg file."""
-    if cfg.solver.lr_scheduler != 'Cosine':
-        LOGGER.error('unknown lr scheduler, falling back to Cosine')
-    # Cosine annealing from lr0 down to lr0 * lrf over the training epochs.
-    lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (cfg.solver.lrf - 1) + 1
-
-    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
-    return scheduler, lf
diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/spherical_optimizer.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/spherical_optimizer.py
deleted file mode 100644
index b24c18540e74bfd74f345c10df9494a902f37e64..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/spherical_optimizer.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import torch
-from torch.optim import Optimizer
-
-
-# Spherical Optimizer Class
-# Uses the first two dimensions as batch information
-# Optimizes over the surface of a sphere using the initial radius throughout
-#
-# Example Usage:
-# opt = SphericalOptimizer(torch.optim.SGD, [x], lr=0.01)
-
-
-class SphericalOptimizer(Optimizer):
- def __init__(self, optimizer, params, **kwargs):
- self.opt = optimizer(params, **kwargs)
- self.params = params
- with torch.no_grad():
- self.radii = {
- param: (param.pow(2).sum(tuple(range(2, param.ndim)), keepdim=True) + 1e-9).sqrt() for param in params
- }
-
- @torch.no_grad()
- def step(self, closure=None):
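-        # Step the wrapped optimizer, then re-project every parameter back onto
-        # the sphere of its initial radius (norm taken over all but the first two dims).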
- loss = self.opt.step(closure)
- for param in self.params:
- param.data.div_((param.pow(2).sum(tuple(range(2, param.ndim)), keepdim=True) + 1e-9).sqrt())
- param.mul_(self.radii[param])
-
- return loss
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/pretty-print.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/pretty-print.go
deleted file mode 100644
index 40161192be3f4ea1094f30a0a35a1ca3540f3ac9..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/pretty-print.go and /dev/null differ
diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py
deleted file mode 100644
index 258b618cd338322365dfa25bec468a0a3f70ccd1..0000000000000000000000000000000000000000
--- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import IPython.display as ipd
-import torch
-import commons
-import utils
-import ONNXVITS_infer
-from text import text_to_sequence
-
-def get_text(text, hps):
- text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
-hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json")
-
-net_g = ONNXVITS_infer.SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
-_ = net_g.eval()
-
-_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g)
-
-text1 = get_text("おはようございます。", hps)
-stn_tst = text1
-with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
- sid = torch.LongTensor([0])
- audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy()
-print(audio)
\ No newline at end of file
diff --git a/spaces/Plurigrid/LifeSim/src/app/agents/fish.ts b/spaces/Plurigrid/LifeSim/src/app/agents/fish.ts
deleted file mode 100644
index 73dce589766a1388c7b9e32ae7088fad6092fadb..0000000000000000000000000000000000000000
--- a/spaces/Plurigrid/LifeSim/src/app/agents/fish.ts
+++ /dev/null
@@ -1,45 +0,0 @@
-import { pick } from "./pick"
-import { Agent, Scene } from "./types"
-
-const actions = [
- "idling",
- "making bubbles",
- "making circles",
- "opening and closing its mouth",
- // "with an octopus",
- "playing with another fish",
- "eating fishfood",
- "eating a crab",
- "attacked by a jellyfish"
-]
-
-const positions = [
- "at the top of the coral",
- "at the bottom of the coral",
- "centered in the middle",
- "burrowing in the sand",
- "hiding in the coral"
-]
-
-export const agent: Agent = {
- title: "Fish",
- type: "fish",
- simulate: (): Scene => {
- const action = pick(actions)
- const position = pick(positions)
-
- const prompt = [
- `medium shot of a clownfish`,
- action,
- position,
- `in front of yellow coral`,
- `high res underwater footage`,
- ].join(", ")
-
- return {
- action,
- position,
- prompt
- }
- }
-}
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/__main__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/__main__.py
deleted file mode 100644
index 9c54bfb438d241ad17ce15d1e1346200ddf46b1c..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/__main__.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from __future__ import annotations
-
-from pip._vendor.platformdirs import PlatformDirs, __version__
-
-PROPS = (
- "user_data_dir",
- "user_config_dir",
- "user_cache_dir",
- "user_state_dir",
- "user_log_dir",
- "user_documents_dir",
- "user_runtime_dir",
- "site_data_dir",
- "site_config_dir",
-)
-
-
-def main() -> None:
- app_name = "MyApp"
- app_author = "MyCompany"
-
- print(f"-- platformdirs {__version__} --")
-
- print("-- app dirs (with optional 'version')")
- dirs = PlatformDirs(app_name, app_author, version="1.0")
- for prop in PROPS:
- print(f"{prop}: {getattr(dirs, prop)}")
-
- print("\n-- app dirs (without optional 'version')")
- dirs = PlatformDirs(app_name, app_author)
- for prop in PROPS:
- print(f"{prop}: {getattr(dirs, prop)}")
-
- print("\n-- app dirs (without optional 'appauthor')")
- dirs = PlatformDirs(app_name)
- for prop in PROPS:
- print(f"{prop}: {getattr(dirs, prop)}")
-
- print("\n-- app dirs (with disabled 'appauthor')")
- dirs = PlatformDirs(app_name, appauthor=False)
- for prop in PROPS:
- print(f"{prop}: {getattr(dirs, prop)}")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Rbrq/DeticChatGPT/tools/merge_lvis_coco.py b/spaces/Rbrq/DeticChatGPT/tools/merge_lvis_coco.py
deleted file mode 100644
index abc2b673a30541fd71679a549acd9a53f7693183..0000000000000000000000000000000000000000
--- a/spaces/Rbrq/DeticChatGPT/tools/merge_lvis_coco.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from collections import defaultdict
-import torch
-import sys
-import json
-import numpy as np
-
-from detectron2.structures import Boxes, pairwise_iou
-COCO_PATH = 'datasets/coco/annotations/instances_train2017.json'
-IMG_PATH = 'datasets/coco/train2017/'
-LVIS_PATH = 'datasets/lvis/lvis_v1_train.json'
-NO_SEG = False
-if NO_SEG:
- SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_box.json'
-else:
- SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_mask.json'
-THRESH = 0.7
-DEBUG = False
-
-# This mapping is extracted from the official LVIS mapping:
-# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json
-COCO_SYNSET_CATEGORIES = [
- {"synset": "person.n.01", "coco_cat_id": 1},
- {"synset": "bicycle.n.01", "coco_cat_id": 2},
- {"synset": "car.n.01", "coco_cat_id": 3},
- {"synset": "motorcycle.n.01", "coco_cat_id": 4},
- {"synset": "airplane.n.01", "coco_cat_id": 5},
- {"synset": "bus.n.01", "coco_cat_id": 6},
- {"synset": "train.n.01", "coco_cat_id": 7},
- {"synset": "truck.n.01", "coco_cat_id": 8},
- {"synset": "boat.n.01", "coco_cat_id": 9},
- {"synset": "traffic_light.n.01", "coco_cat_id": 10},
- {"synset": "fireplug.n.01", "coco_cat_id": 11},
- {"synset": "stop_sign.n.01", "coco_cat_id": 13},
- {"synset": "parking_meter.n.01", "coco_cat_id": 14},
- {"synset": "bench.n.01", "coco_cat_id": 15},
- {"synset": "bird.n.01", "coco_cat_id": 16},
- {"synset": "cat.n.01", "coco_cat_id": 17},
- {"synset": "dog.n.01", "coco_cat_id": 18},
- {"synset": "horse.n.01", "coco_cat_id": 19},
- {"synset": "sheep.n.01", "coco_cat_id": 20},
- {"synset": "beef.n.01", "coco_cat_id": 21},
- {"synset": "elephant.n.01", "coco_cat_id": 22},
- {"synset": "bear.n.01", "coco_cat_id": 23},
- {"synset": "zebra.n.01", "coco_cat_id": 24},
- {"synset": "giraffe.n.01", "coco_cat_id": 25},
- {"synset": "backpack.n.01", "coco_cat_id": 27},
- {"synset": "umbrella.n.01", "coco_cat_id": 28},
- {"synset": "bag.n.04", "coco_cat_id": 31},
- {"synset": "necktie.n.01", "coco_cat_id": 32},
- {"synset": "bag.n.06", "coco_cat_id": 33},
- {"synset": "frisbee.n.01", "coco_cat_id": 34},
- {"synset": "ski.n.01", "coco_cat_id": 35},
- {"synset": "snowboard.n.01", "coco_cat_id": 36},
- {"synset": "ball.n.06", "coco_cat_id": 37},
- {"synset": "kite.n.03", "coco_cat_id": 38},
- {"synset": "baseball_bat.n.01", "coco_cat_id": 39},
- {"synset": "baseball_glove.n.01", "coco_cat_id": 40},
- {"synset": "skateboard.n.01", "coco_cat_id": 41},
- {"synset": "surfboard.n.01", "coco_cat_id": 42},
- {"synset": "tennis_racket.n.01", "coco_cat_id": 43},
- {"synset": "bottle.n.01", "coco_cat_id": 44},
- {"synset": "wineglass.n.01", "coco_cat_id": 46},
- {"synset": "cup.n.01", "coco_cat_id": 47},
- {"synset": "fork.n.01", "coco_cat_id": 48},
- {"synset": "knife.n.01", "coco_cat_id": 49},
- {"synset": "spoon.n.01", "coco_cat_id": 50},
- {"synset": "bowl.n.03", "coco_cat_id": 51},
- {"synset": "banana.n.02", "coco_cat_id": 52},
- {"synset": "apple.n.01", "coco_cat_id": 53},
- {"synset": "sandwich.n.01", "coco_cat_id": 54},
- {"synset": "orange.n.01", "coco_cat_id": 55},
- {"synset": "broccoli.n.01", "coco_cat_id": 56},
- {"synset": "carrot.n.01", "coco_cat_id": 57},
- # {"synset": "frank.n.02", "coco_cat_id": 58},
- {"synset": "sausage.n.01", "coco_cat_id": 58},
- {"synset": "pizza.n.01", "coco_cat_id": 59},
- {"synset": "doughnut.n.02", "coco_cat_id": 60},
- {"synset": "cake.n.03", "coco_cat_id": 61},
- {"synset": "chair.n.01", "coco_cat_id": 62},
- {"synset": "sofa.n.01", "coco_cat_id": 63},
- {"synset": "pot.n.04", "coco_cat_id": 64},
- {"synset": "bed.n.01", "coco_cat_id": 65},
- {"synset": "dining_table.n.01", "coco_cat_id": 67},
- {"synset": "toilet.n.02", "coco_cat_id": 70},
- {"synset": "television_receiver.n.01", "coco_cat_id": 72},
- {"synset": "laptop.n.01", "coco_cat_id": 73},
- {"synset": "mouse.n.04", "coco_cat_id": 74},
- {"synset": "remote_control.n.01", "coco_cat_id": 75},
- {"synset": "computer_keyboard.n.01", "coco_cat_id": 76},
- {"synset": "cellular_telephone.n.01", "coco_cat_id": 77},
- {"synset": "microwave.n.02", "coco_cat_id": 78},
- {"synset": "oven.n.01", "coco_cat_id": 79},
- {"synset": "toaster.n.02", "coco_cat_id": 80},
- {"synset": "sink.n.01", "coco_cat_id": 81},
- {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82},
- {"synset": "book.n.01", "coco_cat_id": 84},
- {"synset": "clock.n.01", "coco_cat_id": 85},
- {"synset": "vase.n.01", "coco_cat_id": 86},
- {"synset": "scissors.n.01", "coco_cat_id": 87},
- {"synset": "teddy.n.01", "coco_cat_id": 88},
- {"synset": "hand_blower.n.01", "coco_cat_id": 89},
- {"synset": "toothbrush.n.01", "coco_cat_id": 90},
-]
-
-
-def get_bbox(ann):
- bbox = ann['bbox']
- return [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]
-
-
-if __name__ == '__main__':
- file_name_key = 'file_name' if 'v0.5' in LVIS_PATH else 'coco_url'
- coco_data = json.load(open(COCO_PATH, 'r'))
- lvis_data = json.load(open(LVIS_PATH, 'r'))
-
- coco_cats = coco_data['categories']
- lvis_cats = lvis_data['categories']
-
- num_find = 0
- num_not_find = 0
- num_twice = 0
- coco2lviscats = {}
- synset2lvisid = {x['synset']: x['id'] for x in lvis_cats}
- # cocoid2synset = {x['coco_cat_id']: x['synset'] for x in COCO_SYNSET_CATEGORIES}
- coco2lviscats = {x['coco_cat_id']: synset2lvisid[x['synset']] \
- for x in COCO_SYNSET_CATEGORIES if x['synset'] in synset2lvisid}
- print(len(coco2lviscats))
-
- lvis_file2id = {x[file_name_key][-16:]: x['id'] for x in lvis_data['images']}
- lvis_id2img = {x['id']: x for x in lvis_data['images']}
- lvis_catid2name = {x['id']: x['name'] for x in lvis_data['categories']}
-
- coco_file2anns = {}
- coco_id2img = {x['id']: x for x in coco_data['images']}
- coco_img2anns = defaultdict(list)
- for ann in coco_data['annotations']:
- coco_img = coco_id2img[ann['image_id']]
- file_name = coco_img['file_name'][-16:]
- if ann['category_id'] in coco2lviscats and \
- file_name in lvis_file2id:
- lvis_image_id = lvis_file2id[file_name]
- lvis_image = lvis_id2img[lvis_image_id]
- lvis_cat_id = coco2lviscats[ann['category_id']]
- if lvis_cat_id in lvis_image['neg_category_ids']:
- continue
- if DEBUG:
- import cv2
- img_path = IMG_PATH + file_name
- img = cv2.imread(img_path)
- print(lvis_catid2name[lvis_cat_id])
- print('neg', [lvis_catid2name[x] for x in lvis_image['neg_category_ids']])
- cv2.imshow('img', img)
- cv2.waitKey()
- ann['category_id'] = lvis_cat_id
- ann['image_id'] = lvis_image_id
- coco_img2anns[file_name].append(ann)
-
- lvis_img2anns = defaultdict(list)
- for ann in lvis_data['annotations']:
- lvis_img = lvis_id2img[ann['image_id']]
- file_name = lvis_img[file_name_key][-16:]
- lvis_img2anns[file_name].append(ann)
-
- ann_id_count = 0
- anns = []
- for file_name in lvis_img2anns:
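-        # Keep every LVIS annotation; add a COCO annotation only if it does not
-        # match an LVIS box of the same category with IoU >= THRESH.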
- coco_anns = coco_img2anns[file_name]
- lvis_anns = lvis_img2anns[file_name]
- ious = pairwise_iou(
- Boxes(torch.tensor([get_bbox(x) for x in coco_anns])),
- Boxes(torch.tensor([get_bbox(x) for x in lvis_anns]))
- )
-
- for ann in lvis_anns:
- ann_id_count = ann_id_count + 1
- ann['id'] = ann_id_count
- anns.append(ann)
-
- for i, ann in enumerate(coco_anns):
- if len(ious[i]) == 0 or ious[i].max() < THRESH:
- ann_id_count = ann_id_count + 1
- ann['id'] = ann_id_count
- anns.append(ann)
- else:
- duplicated = False
- for j in range(len(ious[i])):
- if ious[i, j] >= THRESH and \
- coco_anns[i]['category_id'] == lvis_anns[j]['category_id']:
- duplicated = True
- if not duplicated:
- ann_id_count = ann_id_count + 1
- ann['id'] = ann_id_count
- anns.append(ann)
- if NO_SEG:
- for ann in anns:
- del ann['segmentation']
- lvis_data['annotations'] = anns
-
- print('# Images', len(lvis_data['images']))
- print('# Anns', len(lvis_data['annotations']))
- json.dump(lvis_data, open(SAVE_PATH, 'w'))
diff --git a/spaces/RealTimeLiveAIForHealth/VoicetoTexttoSentiment/README.md b/spaces/RealTimeLiveAIForHealth/VoicetoTexttoSentiment/README.md
deleted file mode 100644
index 65aadecc1f79ef6471abc2d5622eceff8bd63d00..0000000000000000000000000000000000000000
--- a/spaces/RealTimeLiveAIForHealth/VoicetoTexttoSentiment/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: VoicetoTexttoSentiment
-emoji: 🏢
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/pipeline.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/pipeline.py
deleted file mode 100644
index fb8cc3f3b172fc097cbaee0d6e77b63187352efa..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/pipeline.py
+++ /dev/null
@@ -1,95 +0,0 @@
-from pathlib import Path
-from pprint import pformat
-import argparse
-
-from ... import extract_features, match_features, triangulation
-from ... import pairs_from_covisibility, pairs_from_retrieval, localize_sfm
-
-
-parser = argparse.ArgumentParser()
-parser.add_argument(
- "--dataset",
- type=Path,
- default="datasets/aachen_v1.1",
- help="Path to the dataset, default: %(default)s",
-)
-parser.add_argument(
- "--outputs",
- type=Path,
- default="outputs/aachen_v1.1",
- help="Path to the output directory, default: %(default)s",
-)
-parser.add_argument(
- "--num_covis",
- type=int,
- default=20,
- help="Number of image pairs for SfM, default: %(default)s",
-)
-parser.add_argument(
- "--num_loc",
- type=int,
- default=50,
- help="Number of image pairs for loc, default: %(default)s",
-)
-args = parser.parse_args()
-
-# Setup the paths
-dataset = args.dataset
-images = dataset / "images/images_upright/"
-sift_sfm = dataset / "3D-models/aachen_v_1_1"
-
-outputs = args.outputs # where everything will be saved
-reference_sfm = (
- outputs / "sfm_superpoint+superglue"
-) # the SfM model we will build
-sfm_pairs = (
- outputs / f"pairs-db-covis{args.num_covis}.txt"
-) # top-k most covisible in SIFT model
-loc_pairs = (
- outputs / f"pairs-query-netvlad{args.num_loc}.txt"
-) # top-k retrieved by NetVLAD
-results = (
- outputs / f"Aachen-v1.1_hloc_superpoint+superglue_netvlad{args.num_loc}.txt"
-)
-
-# list the standard configurations available
-print(f"Configs for feature extractors:\n{pformat(extract_features.confs)}")
-print(f"Configs for feature matchers:\n{pformat(match_features.confs)}")
-
-# pick one of the configurations for extraction and matching
-retrieval_conf = extract_features.confs["netvlad"]
-feature_conf = extract_features.confs["superpoint_max"]
-matcher_conf = match_features.confs["superglue"]
-
-features = extract_features.main(feature_conf, images, outputs)
-
-pairs_from_covisibility.main(sift_sfm, sfm_pairs, num_matched=args.num_covis)
-sfm_matches = match_features.main(
- matcher_conf, sfm_pairs, feature_conf["output"], outputs
-)
-
-triangulation.main(
- reference_sfm, sift_sfm, images, sfm_pairs, features, sfm_matches
-)
-
-global_descriptors = extract_features.main(retrieval_conf, images, outputs)
-pairs_from_retrieval.main(
- global_descriptors,
- loc_pairs,
- args.num_loc,
- query_prefix="query",
- db_model=reference_sfm,
-)
-loc_matches = match_features.main(
- matcher_conf, loc_pairs, feature_conf["output"], outputs
-)
-
-localize_sfm.main(
- reference_sfm,
- dataset / "queries/*_time_queries_with_intrinsics.txt",
- loc_pairs,
- features,
- loc_matches,
- results,
- covisibility_clustering=False,
-) # not required with SuperPoint+SuperGlue
diff --git a/spaces/RichardMB1217/blip/train_caption.py b/spaces/RichardMB1217/blip/train_caption.py
deleted file mode 100644
index 7c639ac646b9a1b8074b6e9c2343b961de76db05..0000000000000000000000000000000000000000
--- a/spaces/RichardMB1217/blip/train_caption.py
+++ /dev/null
@@ -1,206 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-from torch.utils.data import DataLoader
-
-from models.blip import blip_decoder
-import utils
-from utils import cosine_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-from data.utils import save_result, coco_caption_eval
-
-def train(model, data_loader, optimizer, epoch, device):
- # train
- model.train()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
- metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}'))
- header = 'Train Caption Epoch: [{}]'.format(epoch)
- print_freq = 50
-
- for i, (image, caption, _) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
- image = image.to(device)
-
- loss = model(image, caption)
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- metric_logger.update(loss=loss.item())
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-@torch.no_grad()
-def evaluate(model, data_loader, device, config):
- # evaluate
- model.eval()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- header = 'Caption generation:'
- print_freq = 10
-
- result = []
- for image, image_id in metric_logger.log_every(data_loader, print_freq, header):
-
- image = image.to(device)
-
- captions = model.generate(image, sample=False, num_beams=config['num_beams'], max_length=config['max_length'],
- min_length=config['min_length'])
-
- for caption, img_id in zip(captions, image_id):
- result.append({"image_id": img_id.item(), "caption": caption})
-
- return result
-
-
-def main(args, config):
- utils.init_distributed_mode(args)
-
- device = torch.device(args.device)
-
- # fix the seed for reproducibility
- seed = args.seed + utils.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- random.seed(seed)
- cudnn.benchmark = True
-
- #### Dataset ####
- print("Creating captioning dataset")
- train_dataset, val_dataset, test_dataset = create_dataset('caption_coco', config)
-
- if args.distributed:
- num_tasks = utils.get_world_size()
- global_rank = utils.get_rank()
- samplers = create_sampler([train_dataset,val_dataset,test_dataset], [True,False,False], num_tasks, global_rank)
- else:
- samplers = [None, None, None]
-
- train_loader, val_loader, test_loader = create_loader([train_dataset, val_dataset, test_dataset],samplers,
- batch_size=[config['batch_size']]*3,num_workers=[4,4,4],
- is_trains=[True, False, False], collate_fns=[None,None,None])
-
- #### Model ####
- print("Creating model")
- model = blip_decoder(pretrained=config['pretrained'], image_size=config['image_size'], vit=config['vit'],
- vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'],
- prompt=config['prompt'])
-
- model = model.to(device)
-
- model_without_ddp = model
- if args.distributed:
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- model_without_ddp = model.module
-
- optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-
- best = 0
- best_epoch = 0
-
- print("Start training")
- start_time = time.time()
- for epoch in range(0, config['max_epoch']):
- if not args.evaluate:
- if args.distributed:
- train_loader.sampler.set_epoch(epoch)
-
- cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-
- train_stats = train(model, train_loader, optimizer, epoch, device)
-
- val_result = evaluate(model_without_ddp, val_loader, device, config)
- val_result_file = save_result(val_result, args.result_dir, 'val_epoch%d'%epoch, remove_duplicate='image_id')
-
- test_result = evaluate(model_without_ddp, test_loader, device, config)
- test_result_file = save_result(test_result, args.result_dir, 'test_epoch%d'%epoch, remove_duplicate='image_id')
-
- if utils.is_main_process():
- coco_val = coco_caption_eval(config['coco_gt_root'],val_result_file,'val')
- coco_test = coco_caption_eval(config['coco_gt_root'],test_result_file,'test')
-
- if args.evaluate:
- log_stats = {**{f'val_{k}': v for k, v in coco_val.eval.items()},
- **{f'test_{k}': v for k, v in coco_test.eval.items()},
- }
- with open(os.path.join(args.output_dir, "evaluate.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
- else:
- save_obj = {
- 'model': model_without_ddp.state_dict(),
- 'optimizer': optimizer.state_dict(),
- 'config': config,
- 'epoch': epoch,
- }
-
- if coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4'] > best:
- best = coco_val.eval['CIDEr'] + coco_val.eval['Bleu_4']
- best_epoch = epoch
- torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth'))
-
- log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
- **{f'val_{k}': v for k, v in coco_val.eval.items()},
- **{f'test_{k}': v for k, v in coco_test.eval.items()},
- 'epoch': epoch,
- 'best_epoch': best_epoch,
- }
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
-
- if args.evaluate:
- break
- dist.barrier()
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
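-# Typical launch (assumed, following the upstream BLIP convention; adjust to your setup):
-#   python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate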
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='./configs/caption_coco.yaml')
- parser.add_argument('--output_dir', default='output/Caption_coco')
- parser.add_argument('--evaluate', action='store_true')
- parser.add_argument('--device', default='cuda')
- parser.add_argument('--seed', default=42, type=int)
- parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')
- parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
-    parser.add_argument('--distributed', default=True, type=bool)  # caveat: type=bool treats any non-empty string (even "False") as True
- args = parser.parse_args()
-
- config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
- args.result_dir = os.path.join(args.output_dir, 'result')
-
- Path(args.output_dir).mkdir(parents=True, exist_ok=True)
- Path(args.result_dir).mkdir(parents=True, exist_ok=True)
-
- yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))
-
- main(args, config)
\ No newline at end of file
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/apis/inference.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/apis/inference.py
deleted file mode 100644
index df7bd4b7316cd8fad4cd10dbdfcbfeba59d95bbc..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/apis/inference.py
+++ /dev/null
@@ -1,217 +0,0 @@
-import warnings
-
-import mmcv
-import numpy as np
-import torch
-# from mmcv.ops import RoIPool
-from mmcv.parallel import collate, scatter
-from mmcv.runner import load_checkpoint
-
-from mmdet.core import get_classes
-from mmdet.datasets import replace_ImageToTensor
-from mmdet.datasets.pipelines import Compose
-from mmdet.models import build_detector
-
-
-def init_detector(config, checkpoint=None, device='cuda:0', cfg_options=None):
- """Initialize a detector from config file.
-
- Args:
- config (str or :obj:`mmcv.Config`): Config file path or the config
- object.
- checkpoint (str, optional): Checkpoint path. If left as None, the model
-            will not load any weights.
-        device (str, optional): The device where the model will be put.
-            Default: 'cuda:0'.
- cfg_options (dict): Options to override some settings in the used
- config.
-
- Returns:
- nn.Module: The constructed detector.
- """
- if isinstance(config, str):
- config = mmcv.Config.fromfile(config)
- elif not isinstance(config, mmcv.Config):
- raise TypeError('config must be a filename or Config object, '
- f'but got {type(config)}')
- if cfg_options is not None:
- config.merge_from_dict(cfg_options)
- config.model.pretrained = None
- config.model.train_cfg = None
- model = build_detector(config.model, test_cfg=config.get('test_cfg'))
- if checkpoint is not None:
- map_loc = 'cpu' if device == 'cpu' else None
- checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)
- if 'CLASSES' in checkpoint.get('meta', {}):
- model.CLASSES = checkpoint['meta']['CLASSES']
- else:
- warnings.simplefilter('once')
- warnings.warn('Class names are not saved in the checkpoint\'s '
- 'meta data, use COCO classes by default.')
- model.CLASSES = get_classes('coco')
- model.cfg = config # save the config in the model for convenience
- model.to(device)
- model.eval()
- return model
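-
-# End-to-end usage sketch with the helpers defined in this module:
-#   model = init_detector('config.py', 'checkpoint.pth', device='cuda:0')
-#   result = inference_detector(model, 'demo.jpg')
-#   show_result_pyplot(model, 'demo.jpg', result)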
-
-
-class LoadImage(object):
- """Deprecated.
-
- A simple pipeline to load image.
- """
-
- def __call__(self, results):
- """Call function to load images into results.
-
- Args:
- results (dict): A result dict contains the file name
- of the image to be read.
- Returns:
- dict: ``results`` will be returned containing loaded image.
- """
- warnings.simplefilter('once')
- warnings.warn('`LoadImage` is deprecated and will be removed in '
- 'future releases. You may use `LoadImageFromWebcam` '
- 'from `mmdet.datasets.pipelines.` instead.')
- if isinstance(results['img'], str):
- results['filename'] = results['img']
- results['ori_filename'] = results['img']
- else:
- results['filename'] = None
- results['ori_filename'] = None
- img = mmcv.imread(results['img'])
- results['img'] = img
- results['img_fields'] = ['img']
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- return results
-
-
-def inference_detector(model, imgs):
- """Inference image(s) with the detector.
-
- Args:
- model (nn.Module): The loaded detector.
- imgs (str/ndarray or list[str/ndarray] or tuple[str/ndarray]):
- Either image files or loaded images.
-
- Returns:
- If imgs is a list or tuple, the same length list type results
- will be returned, otherwise return the detection results directly.
- """
-
- if isinstance(imgs, (list, tuple)):
- is_batch = True
- else:
- imgs = [imgs]
- is_batch = False
-
- cfg = model.cfg
- device = next(model.parameters()).device # model device
-
- if isinstance(imgs[0], np.ndarray):
- cfg = cfg.copy()
- # set loading pipeline type
- cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'
-
- cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
- test_pipeline = Compose(cfg.data.test.pipeline)
-
- datas = []
- for img in imgs:
- # prepare data
- if isinstance(img, np.ndarray):
- # directly add img
- data = dict(img=img)
- else:
- # add information into dict
- data = dict(img_info=dict(filename=img), img_prefix=None)
- # build the data pipeline
- data = test_pipeline(data)
- datas.append(data)
-
- data = collate(datas, samples_per_gpu=len(imgs))
- # just get the actual data from DataContainer
- data['img_metas'] = [img_metas.data[0] for img_metas in data['img_metas']]
- data['img'] = [img.data[0] for img in data['img']]
- if next(model.parameters()).is_cuda:
- # scatter to specified GPU
- data = scatter(data, [device])[0]
-    else:
-        from mmcv.ops import RoIPool  # import here so RoIPool is defined for the check below (top-level import is commented out)
-        for m in model.modules():
-            assert not isinstance(
-                m, RoIPool
-            ), 'CPU inference with RoIPool is not supported currently.'
-
- # forward the model
- with torch.no_grad():
- results = model(return_loss=False, rescale=True, **data)
-
- if not is_batch:
- return results[0]
- else:
- return results
-
-
-async def async_inference_detector(model, img):
- """Async inference image(s) with the detector.
-
- Args:
- model (nn.Module): The loaded detector.
- img (str | ndarray): Either image files or loaded images.
-
- Returns:
- Awaitable detection results.
- """
- cfg = model.cfg
- device = next(model.parameters()).device # model device
- # prepare data
- if isinstance(img, np.ndarray):
- # directly add img
- data = dict(img=img)
- cfg = cfg.copy()
- # set loading pipeline type
- cfg.data.test.pipeline[0].type = 'LoadImageFromWebcam'
- else:
- # add information into dict
- data = dict(img_info=dict(filename=img), img_prefix=None)
- # build the data pipeline
- test_pipeline = Compose(cfg.data.test.pipeline)
- data = test_pipeline(data)
- data = scatter(collate([data], samples_per_gpu=1), [device])[0]
-
- # We don't restore `torch.is_grad_enabled()` value during concurrent
- # inference since execution can overlap
- torch.set_grad_enabled(False)
- result = await model.aforward_test(rescale=True, **data)
- return result
-
-
-def show_result_pyplot(model,
- img,
- result,
- score_thr=0.3,
- title='result',
- wait_time=0):
- """Visualize the detection results on the image.
-
- Args:
- model (nn.Module): The loaded detector.
- img (str or np.ndarray): Image filename or loaded image.
- result (tuple[list] or list): The detection result, can be either
- (bbox, segm) or just bbox.
- score_thr (float): The threshold to visualize the bboxes and masks.
- title (str): Title of the pyplot figure.
- wait_time (float): Value of waitKey param.
- Default: 0.
- """
- if hasattr(model, 'module'):
- model = model.module
- model.show_result(
- img,
- result,
- score_thr=score_thr,
- show=True,
- wait_time=wait_time,
- win_name=title,
- bbox_color=(72, 101, 241),
- text_color=(72, 101, 241))
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmseg/core/seg/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmseg/core/seg/__init__.py
deleted file mode 100644
index 93bc129b685e4a3efca2cc891729981b2865900d..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmseg/core/seg/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .builder import build_pixel_sampler
-from .sampler import BasePixelSampler, OHEMPixelSampler
-
-__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler']
diff --git a/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/general.py b/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/general.py
deleted file mode 100644
index faf908f960bfbb7797260a5135827019781001a1..0000000000000000000000000000000000000000
--- a/spaces/Sa-m/YOLO-V7-Custom-Model-Pot-Hole-Detection/utils/general.py
+++ /dev/null
@@ -1,891 +0,0 @@
-# YOLOR general utils
-
-import glob
-import logging
-import math
-import os
-import platform
-import random
-import re
-import subprocess
-import time
-from pathlib import Path
-
-import cv2
-import numpy as np
-import pandas as pd
-import torch
-import torchvision
-import yaml
-
-from utils.google_utils import gsutil_getsize
-from utils.metrics import fitness
-from utils.torch_utils import init_torch_seeds
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-def set_logging(rank=-1):
- logging.basicConfig(
- format="%(message)s",
- level=logging.INFO if rank in [-1, 0] else logging.WARN)
-
-
-def init_seeds(seed=0):
- # Initialize random number generator (RNG) seeds
- random.seed(seed)
- np.random.seed(seed)
- init_torch_seeds(seed)
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def isdocker():
- # Is environment a Docker container
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def emojis(str=''):
- # Return platform-dependent emoji-safe version of string
- return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
-        socket.create_connection(("1.1.1.1", 443), 5)  # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status():
- # Recommend 'git pull' if code is out of date
- print(colorstr('github: '), end='')
- try:
- assert Path('.git').exists(), 'skipping check (not a git repository)'
- assert not isdocker(), 'skipping check (Docker image)'
- assert check_online(), 'skipping check (offline)'
-
- cmd = 'git fetch && git config --get remote.origin.url'
- url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url
- branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
- if n > 0:
- s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
- f"Use 'git pull' to update or 'git clone {url}' to download latest."
- else:
- s = f'up to date with {url} ✅'
- print(emojis(s)) # emoji-safe
- except Exception as e:
- print(e)
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- import pkg_resources as pkg
- prefix = colorstr('red', 'bold', 'requirements:')
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of packages updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- n += 1
- print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...")
- print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- print(emojis(s)) # emoji-safe
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not isdocker(), 'cv2.imshow() is disabled in Docker environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
- return False
-
-
-def check_file(file):
- # Search for file if not found
- if Path(file).is_file() or file == '':
- return file
- else:
- files = glob.glob('./**/' + file, recursive=True) # find file
- assert len(files), f'File Not Found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_dataset(dict):
- # Download dataset if not found locally
- val, s = dict.get('val'), dict.get('download')
- if val and len(val):
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])
- if s and len(s): # download script
- print('Downloading %s ...' % s)
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- torch.hub.download_url_to_file(s, f)
- r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip
- else: # bash script
- r = os.system(s)
- print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value
- else:
- raise Exception('Dataset not found.')
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
- return math.ceil(x / divisor) * divisor
-
-
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
-
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
-    classes = labels[:, 0].astype(int)  # labels = [class xywh]; np.int is removed in NumPy >= 1.24
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights)
-
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
-    class_counts = np.array([np.bincount(x[:, 0].astype(int), minlength=nc) for x in labels])
- image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)
- # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample
- return image_weights
-
-
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
-    x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
-def clip_coords(boxes, img_shape):
- # Clip bounding xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
-
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
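-
-# Illustrative call (made-up coordinates): for one box against a single candidate,
-#   bbox_iou(torch.tensor([0., 0., 10., 10.]), torch.tensor([[5., 5., 15., 15.]]), CIoU=True)
-# returns a one-element tensor holding the CIoU of the pair.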
-
-
-
-
-def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9):
- # Returns tsqrt_he IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # change iou into pow(iou+eps)
- # iou = inter / union
- iou = torch.pow(inter/union + eps, alpha)
- # beta = 2 * alpha
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal
- rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2)
- rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2)
- rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha_ciou = v / ((1 + eps) - inter / union + v)
- # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU
- return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- # c_area = cw * ch + eps # convex area
- # return iou - (c_area - union) / c_area # GIoU
- c_area = torch.max(cw * ch + eps, union) # convex area
- return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU
- else:
- return iou # torch.log(iou+eps) or iou
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def wh_iou(wh1, wh2):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def box_giou(box1, box2):
- """
- Return generalized intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
-        box1 (Tensor[N, 4]): first set of boxes
-        box2 (Tensor[M, 4]): second set of boxes
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- areai = whi[:, :, 0] * whi[:, :, 1]
-
- return iou - (areai - union) / areai
-
-
-def box_ciou(box1, box2, eps: float = 1e-7):
- """
- Return complete intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
-        box1 (Tensor[N, 4]): first set of boxes
-        box2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- w_pred = box1[:, None, 2] - box1[:, None, 0]
- h_pred = box1[:, None, 3] - box1[:, None, 1]
-
- w_gt = box2[:, 2] - box2[:, 0]
- h_gt = box2[:, 3] - box2[:, 1]
-
- v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
- with torch.no_grad():
- alpha = v / (1 - iou + v + eps)
- return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v
-
-
-def box_diou(box1, box2, eps: float = 1e-7):
- """
- Return distance intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
-        box1 (Tensor[N, 4]): first set of boxes
-        box2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- # The distance IoU is the IoU penalized by a normalized
- # distance between boxes' centers squared.
- return iou - (centers_distance_squared / diagonal_distance_squared)
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=()):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- if nc == 1:
-            x[:, 5:] = x[:, 4:5]  # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
-                                  # so there is no need to multiply.
- else:
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
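-
-# Usage sketch (assumed shapes): `prediction` is the raw model output of shape
-# (batch, num_boxes, 5 + num_classes); non_max_suppression(pred, conf_thres=0.25,
-# iou_thres=0.45) returns a list with one (n, 6) tensor [x1, y1, x2, y2, conf, cls] per image.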
-
-
-def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=(), kpt_label=False, nc=None, nkpt=None):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
- if nc is None:
- nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- if not kpt_label:
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
- else:
- kpts = x[:, 6:]
- conf, j = x[:, 5:6].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres]
-
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB")
-
-
-def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
- # Print mutation results to evolve.txt (for use with train.py --evolve)
- a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys
- b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c))
-
- if bucket:
- url = 'gs://%s/evolve.txt' % bucket
- if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0):
- os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local
-
- with open('evolve.txt', 'a') as f: # append result
- f.write(c + b + '\n')
- x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows
- x = x[np.argsort(-fitness(x))] # sort
- np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness
-
- # Save yaml
- for i, k in enumerate(hyp.keys()):
- hyp[k] = float(x[0, i + 7])
- with open(yaml_file, 'w') as f:
- results = tuple(x[0, :7])
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n')
- yaml.dump(hyp, f, sort_keys=False)
-
- if bucket:
- os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload
-
-
-def apply_classifier(x, model, img, im0):
- # applies a second stage classifier to yolo outputs
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for j, a in enumerate(d): # per item
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
- # cv2.imwrite('test%i.jpg' % j, cutout)
-
- im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255.0 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def increment_path(path, exist_ok=True, sep=''):
- # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc.
- path = Path(path) # os-agnostic
- if (path.exists() and exist_ok) or (not path.exists()):
- return str(path)
- else:
- dirs = glob.glob(f"{path}{sep}*") # similar paths
- matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
- i = [int(m.groups()[0]) for m in matches if m] # indices
- n = max(i) + 1 if i else 2 # increment number
- return f"{path}{sep}{n}" # update path
diff --git a/spaces/SaiRaam/AIAvatarchatbot/app.py b/spaces/SaiRaam/AIAvatarchatbot/app.py
deleted file mode 100644
index dd75b1a0cd946f286e82f9db7fb134d5a886a870..0000000000000000000000000000000000000000
--- a/spaces/SaiRaam/AIAvatarchatbot/app.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Ram's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.
-
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),  # temperature as a float, not a string
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/SeyedAli/Persian-Speech-Transcription/README.md b/spaces/SeyedAli/Persian-Speech-Transcription/README.md
deleted file mode 100644
index 369ab86792f7b95dd35d7a442aad52009b890f24..0000000000000000000000000000000000000000
--- a/spaces/SeyedAli/Persian-Speech-Transcription/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Persian Speech Transcription
-emoji: 🔊➡️📝
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/vision.cpp b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/vision.cpp
deleted file mode 100644
index c1f2c50c82909bbd5492c163d634af77a3ba1781..0000000000000000000000000000000000000000
--- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/vision.cpp
+++ /dev/null
@@ -1,58 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-#include "MsDeformAttn/ms_deform_attn.h"
-
-namespace groundingdino {
-
-#ifdef WITH_CUDA
-extern int get_cudart_version();
-#endif
-
-std::string get_cuda_version() {
-#ifdef WITH_CUDA
- std::ostringstream oss;
-
- // copied from
- // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231
- auto printCudaStyleVersion = [&](int v) {
- oss << (v / 1000) << "." << (v / 10 % 100);
- if (v % 10 != 0) {
- oss << "." << (v % 10);
- }
- };
- printCudaStyleVersion(get_cudart_version());
- return oss.str();
-#else
- return std::string("not available");
-#endif
-}
-
-// similar to
-// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp
-std::string get_compiler_version() {
- std::ostringstream ss;
-#if defined(__GNUC__)
-#ifndef __clang__
- { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; }
-#endif
-#endif
-
-#if defined(__clang_major__)
- {
- ss << "clang " << __clang_major__ << "." << __clang_minor__ << "."
- << __clang_patchlevel__;
- }
-#endif
-
-#if defined(_MSC_VER)
- { ss << "MSVC " << _MSC_FULL_VER; }
-#endif
- return ss.str();
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward");
- m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward");
-}
-
-} // namespace groundingdino
\ No newline at end of file
diff --git a/spaces/Skyler123/TangGPT/modules/llama_func.py b/spaces/Skyler123/TangGPT/modules/llama_func.py
deleted file mode 100644
index 9f4f799882b4e7c34aa8df815ebeb90ed822ba46..0000000000000000000000000000000000000000
--- a/spaces/Skyler123/TangGPT/modules/llama_func.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import os
-import hashlib
-import logging
-
-from llama_index import download_loader
-from llama_index import (
- Document,
- LLMPredictor,
- PromptHelper,
- QuestionAnswerPrompt,
- RefinePrompt,
-)
-import colorama
-import PyPDF2
-from tqdm import tqdm
-
-from modules.presets import *
-from modules.utils import *
-
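-# The index name is an MD5 digest of the uploaded files' contents, so re-uploading
-# identical files reuses the cached index stored under ./index/.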
-def get_index_name(file_src):
- file_paths = [x.name for x in file_src]
- file_paths.sort(key=lambda x: os.path.basename(x))
-
- md5_hash = hashlib.md5()
- for file_path in file_paths:
- with open(file_path, "rb") as f:
- while chunk := f.read(8192):
- md5_hash.update(chunk)
-
- return md5_hash.hexdigest()
-
-def block_split(text):
- blocks = []
- while len(text) > 0:
- blocks.append(Document(text[:1000]))
- text = text[1000:]
- return blocks
-
-def get_documents(file_src):
- documents = []
- logging.debug("Loading documents...")
- logging.debug(f"file_src: {file_src}")
- for file in file_src:
- filepath = file.name
- filename = os.path.basename(filepath)
- file_type = os.path.splitext(filepath)[1]
- logging.info(f"loading file: {filename}")
- if file_type == ".pdf":
- logging.debug("Loading PDF...")
- try:
- from modules.pdf_func import parse_pdf
- from modules.config import advance_docs
- two_column = advance_docs["pdf"].get("two_column", False)
- pdftext = parse_pdf(filepath, two_column).text
-            except Exception:  # fall back to plain PyPDF2 extraction if the custom parser fails
- pdftext = ""
- with open(filepath, 'rb') as pdfFileObj:
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
- for page in tqdm(pdfReader.pages):
- pdftext += page.extract_text()
- text_raw = pdftext
- elif file_type == ".docx":
- logging.debug("Loading Word...")
- DocxReader = download_loader("DocxReader")
- loader = DocxReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".epub":
- logging.debug("Loading EPUB...")
- EpubReader = download_loader("EpubReader")
- loader = EpubReader()
- text_raw = loader.load_data(file=filepath)[0].text
- elif file_type == ".xlsx":
- logging.debug("Loading Excel...")
- text_raw = excel_to_string(filepath)
- else:
- logging.debug("Loading text file...")
- with open(filepath, "r", encoding="utf-8") as f:
- text_raw = f.read()
- text = add_space(text_raw)
- # text = block_split(text)
- # documents += text
- documents += [Document(text)]
- logging.debug("Documents loaded.")
- return documents
-
-
-def construct_index(
- api_key,
- file_src,
- max_input_size=4096,
- num_outputs=5,
- max_chunk_overlap=20,
- chunk_size_limit=600,
- embedding_limit=None,
- separator=" "
-):
- from langchain.chat_models import ChatOpenAI
- from llama_index import GPTSimpleVectorIndex, ServiceContext
-
- os.environ["OPENAI_API_KEY"] = api_key
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
- embedding_limit = None if embedding_limit == 0 else embedding_limit
- separator = " " if separator == "" else separator
-
- llm_predictor = LLMPredictor(
- llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key)
- )
-    prompt_helper = PromptHelper(max_input_size=max_input_size, num_output=num_outputs, max_chunk_overlap=max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=chunk_size_limit, separator=separator)  # use the chunk_size_limit argument instead of a hard-coded 600
- index_name = get_index_name(file_src)
- if os.path.exists(f"./index/{index_name}.json"):
- logging.info("找到了缓存的索引文件,加载中……")
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
- else:
- try:
- documents = get_documents(file_src)
- logging.info("构建索引中……")
- with retrieve_proxy():
- service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit)
- index = GPTSimpleVectorIndex.from_documents(
- documents, service_context=service_context
- )
- logging.debug("索引构建完成!")
- os.makedirs("./index", exist_ok=True)
- index.save_to_disk(f"./index/{index_name}.json")
- logging.debug("索引已保存至本地!")
- return index
-
- except Exception as e:
- logging.error("索引构建失败!", e)
- print(e)
- return None
-
-
-def add_space(text):
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
- for cn_punc, en_punc in punctuations.items():
- text = text.replace(cn_punc, en_punc)
- return text
diff --git a/spaces/Souranil/VAE/utils.py b/spaces/Souranil/VAE/utils.py
deleted file mode 100644
index 68da576393b37ed9023afd9d0494e2731617a932..0000000000000000000000000000000000000000
--- a/spaces/Souranil/VAE/utils.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from pytorch_lightning import Trainer
-from torchvision.utils import save_image
-from models import vae_models
-from config import config
-from PIL import Image
-from pytorch_lightning.loggers import TensorBoardLogger
-import torch
-from torch.nn.functional import interpolate
-from torchvision.transforms import Resize, ToPILImage, Compose
-from torchvision.utils import make_grid
-
-def load_model(ckpt, model_type="vae"):
- model = vae_models[model_type].load_from_checkpoint(f"./saved_models/{ckpt}")
- model.eval()
- return model
-
-def parse_model_file_name(file_name):
- # Hard Coded Parsing based on the filenames that I use
- substrings = file_name.split(".")[0].split("_")
- name, alpha, dim = substrings[0], substrings[2], substrings[4]
- new_name = ""
- if name == "vae":
- new_name += "Vanilla VAE"
- new_name += f" | alpha={alpha}"
- new_name += f" | dim={dim}"
- return new_name
-
-def tensor_to_img(tsr):
- if tsr.ndim == 4:
- tsr = tsr.squeeze(0)
-
- transform = Compose([
- ToPILImage()
- ])
- img = transform(tsr)
- return img
-
-
-def resize_img(img, w, h):
- return img.resize((w, h))
-
-def canvas_to_tensor(canvas):
- """
- Convert Image of RGBA to single channel B/W and convert from numpy array
- to a PyTorch Tensor of [1,1,28,28]
- """
- img = canvas.image_data
- img = img[:, :, :-1] # Ignore alpha channel
- img = img.mean(axis=2)
- img = img/255
- img = img*2 - 1.
- img = torch.FloatTensor(img)
- tens = img.unsqueeze(0).unsqueeze(0)
- tens = interpolate(tens, (28, 28))
- return tens
-
-
-def export_to_onnx(ckpt):
- model = load_model(ckpt)
- filepath = "model.onnx"
- test_iter = iter(model.test_dataloader())
- sample, _ = next(test_iter)
- model.to_onnx(filepath, sample, export_params=True)
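For reference, a self-contained sketch of the canvas preprocessing performed by `canvas_to_tensor` above; the function name and the 280x280 RGBA input shape are illustrative assumptions, not part of the repo.

```python
import numpy as np
import torch
from torch.nn.functional import interpolate

def rgba_canvas_to_tensor(img: np.ndarray) -> torch.Tensor:
    # Hypothetical standalone version of the preprocessing above.
    img = img[:, :, :3]               # drop the alpha channel
    gray = img.mean(axis=2) / 255.0   # average RGB -> grayscale in [0, 1]
    gray = gray * 2 - 1.0             # rescale to [-1, 1], as in canvas_to_tensor
    tens = torch.FloatTensor(gray).unsqueeze(0).unsqueeze(0)  # shape [1, 1, H, W]
    return interpolate(tens, (28, 28))                        # downsample to 28x28

print(rgba_canvas_to_tensor(np.zeros((280, 280, 4))).shape)  # torch.Size([1, 1, 28, 28])
```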
diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/quantization/base.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/quantization/base.py
deleted file mode 100644
index a77fefb98e62a5bbc6385910261ffdde2ffa5a25..0000000000000000000000000000000000000000
--- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/quantization/base.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Base class for all quantizers.
-"""
-
-from dataclasses import dataclass, field
-import typing as tp
-
-import torch
-from torch import nn
-
-
-@dataclass
-class QuantizedResult:
- x: torch.Tensor
- codes: torch.Tensor
- bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item.
- penalty: tp.Optional[torch.Tensor] = None
- metrics: dict = field(default_factory=dict)
-
-
-class BaseQuantizer(nn.Module):
- """Base class for quantizers.
- """
-
- def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult:
- """
- Given input tensor x, returns first the quantized (or approximately quantized)
- representation along with quantized codes, bandwidth, and any penalty term for the loss.
- Finally, this returns a dict of metrics to update logging etc.
- Frame rate must be passed so that the bandwidth is properly computed.
- """
- raise NotImplementedError()
-
- def encode(self, x: torch.Tensor) -> torch.Tensor:
- """Encode a given input tensor with the specified sample rate at the given bandwidth."""
- raise NotImplementedError()
-
- def decode(self, codes: torch.Tensor) -> torch.Tensor:
- """Decode the given codes to the quantized representation."""
- raise NotImplementedError()
-
- @property
- def total_codebooks(self):
- """Total number of codebooks."""
- raise NotImplementedError()
-
- @property
- def num_codebooks(self):
- """Number of active codebooks."""
- raise NotImplementedError()
-
- def set_num_codebooks(self, n: int):
- """Set the number of active codebooks."""
- raise NotImplementedError()
-
-
-class DummyQuantizer(BaseQuantizer):
- """Fake quantizer that actually does not perform any quantization.
- """
- def __init__(self):
- super().__init__()
-
- def forward(self, x: torch.Tensor, frame_rate: int):
- q = x.unsqueeze(1)
- return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x))
-
- def encode(self, x: torch.Tensor) -> torch.Tensor:
- """Encode a given input tensor with the specified sample rate at the given bandwidth.
- In the case of the DummyQuantizer, the codes are actually identical
- to the input and resulting quantized representation as no quantization is done.
- """
- return x.unsqueeze(1)
-
- def decode(self, codes: torch.Tensor) -> torch.Tensor:
- """Decode the given codes to the quantized representation.
- In the case of the DummyQuantizer, the codes are actually identical
- to the input and resulting quantized representation as no quantization is done.
- """
- return codes.squeeze(1)
-
- @property
- def total_codebooks(self):
- """Total number of codebooks."""
- return 1
-
- @property
- def num_codebooks(self):
- """Total number of codebooks."""
- return self.total_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the number of active codebooks."""
- raise AttributeError("Cannot override the number of codebooks for the dummy quantizer")
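A short, hedged usage sketch of the quantizer contract above, exercising the `DummyQuantizer` defined in this file; the tensor shape is an illustrative assumption and the class is assumed to be in scope.

```python
import torch

# Assumes the DummyQuantizer class above is in scope.
x = torch.randn(4, 128, 50)  # (batch, dimension, frames) -- illustrative shape
quantizer = DummyQuantizer()
result = quantizer(x, frame_rate=50)

# The dummy "codes" are just the input with an extra codebook dimension,
# and encode/decode round-trip back to the original tensor.
assert torch.equal(result.codes, x.unsqueeze(1))
assert torch.equal(quantizer.decode(quantizer.encode(x)), x)

# The reported bandwidth assumes 32 bits per value at the given frame rate (kb/s per item).
print(result.bandwidth)
```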
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FpxImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FpxImagePlugin.py
deleted file mode 100644
index 2450c67e9a67530a4ad4fdcb1bbbfd39971ea484..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FpxImagePlugin.py
+++ /dev/null
@@ -1,253 +0,0 @@
-#
-# THIS IS WORK IN PROGRESS
-#
-# The Python Imaging Library.
-# $Id$
-#
-# FlashPix support for PIL
-#
-# History:
-# 97-01-25 fl Created (reads uncompressed RGB images only)
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1997.
-#
-# See the README file for information on usage and redistribution.
-#
-import olefile
-
-from . import Image, ImageFile
-from ._binary import i32le as i32
-
-# we map from colour field tuples to (mode, rawmode) descriptors
-MODES = {
- # opacity
- (0x00007FFE,): ("A", "L"),
- # monochrome
- (0x00010000,): ("L", "L"),
- (0x00018000, 0x00017FFE): ("RGBA", "LA"),
- # photo YCC
- (0x00020000, 0x00020001, 0x00020002): ("RGB", "YCC;P"),
- (0x00028000, 0x00028001, 0x00028002, 0x00027FFE): ("RGBA", "YCCA;P"),
- # standard RGB (NIFRGB)
- (0x00030000, 0x00030001, 0x00030002): ("RGB", "RGB"),
- (0x00038000, 0x00038001, 0x00038002, 0x00037FFE): ("RGBA", "RGBA"),
-}
-
-
-#
-# --------------------------------------------------------------------
-
-
-def _accept(prefix):
- return prefix[:8] == olefile.MAGIC
-
-
-##
-# Image plugin for the FlashPix images.
-
-
-class FpxImageFile(ImageFile.ImageFile):
- format = "FPX"
- format_description = "FlashPix"
-
- def _open(self):
- #
- # read the OLE directory and see if this is a likely
- # to be a FlashPix file
-
- try:
- self.ole = olefile.OleFileIO(self.fp)
- except OSError as e:
- msg = "not an FPX file; invalid OLE file"
- raise SyntaxError(msg) from e
-
- if self.ole.root.clsid != "56616700-C154-11CE-8553-00AA00A1F95B":
- msg = "not an FPX file; bad root CLSID"
- raise SyntaxError(msg)
-
- self._open_index(1)
-
- def _open_index(self, index=1):
- #
- # get the Image Contents Property Set
-
- prop = self.ole.getproperties(
- [f"Data Object Store {index:06d}", "\005Image Contents"]
- )
-
- # size (highest resolution)
-
- self._size = prop[0x1000002], prop[0x1000003]
-
- size = max(self.size)
- i = 1
- while size > 64:
- size = size / 2
- i += 1
- self.maxid = i - 1
-
- # mode. instead of using a single field for this, flashpix
- # requires you to specify the mode for each channel in each
- # resolution subimage, and leaves it to the decoder to make
- # sure that they all match. for now, we'll cheat and assume
- # that this is always the case.
-
- id = self.maxid << 16
-
- s = prop[0x2000002 | id]
-
- colors = []
- bands = i32(s, 4)
- if bands > 4:
- msg = "Invalid number of bands"
- raise OSError(msg)
- for i in range(bands):
- # note: for now, we ignore the "uncalibrated" flag
- colors.append(i32(s, 8 + i * 4) & 0x7FFFFFFF)
-
- self.mode, self.rawmode = MODES[tuple(colors)]
-
- # load JPEG tables, if any
- self.jpeg = {}
- for i in range(256):
- id = 0x3000001 | (i << 16)
- if id in prop:
- self.jpeg[i] = prop[id]
-
- self._open_subimage(1, self.maxid)
-
- def _open_subimage(self, index=1, subimage=0):
- #
- # setup tile descriptors for a given subimage
-
- stream = [
- f"Data Object Store {index:06d}",
- f"Resolution {subimage:04d}",
- "Subimage 0000 Header",
- ]
-
- fp = self.ole.openstream(stream)
-
- # skip prefix
- fp.read(28)
-
- # header stream
- s = fp.read(36)
-
- size = i32(s, 4), i32(s, 8)
- # tilecount = i32(s, 12)
- tilesize = i32(s, 16), i32(s, 20)
- # channels = i32(s, 24)
- offset = i32(s, 28)
- length = i32(s, 32)
-
- if size != self.size:
- msg = "subimage mismatch"
- raise OSError(msg)
-
- # get tile descriptors
- fp.seek(28 + offset)
- s = fp.read(i32(s, 12) * length)
-
- x = y = 0
- xsize, ysize = size
- xtile, ytile = tilesize
- self.tile = []
-
- for i in range(0, len(s), length):
- x1 = min(xsize, x + xtile)
- y1 = min(ysize, y + ytile)
-
- compression = i32(s, i + 8)
-
- if compression == 0:
- self.tile.append(
- (
- "raw",
- (x, y, x1, y1),
- i32(s, i) + 28,
- (self.rawmode,),
- )
- )
-
- elif compression == 1:
- # FIXME: the fill decoder is not implemented
- self.tile.append(
- (
- "fill",
- (x, y, x1, y1),
- i32(s, i) + 28,
- (self.rawmode, s[12:16]),
- )
- )
-
- elif compression == 2:
- internal_color_conversion = s[14]
- jpeg_tables = s[15]
- rawmode = self.rawmode
-
- if internal_color_conversion:
- # The image is stored as usual (usually YCbCr).
- if rawmode == "RGBA":
- # For "RGBA", data is stored as YCbCrA based on
- # negative RGB. The following trick works around
- # this problem :
- jpegmode, rawmode = "YCbCrK", "CMYK"
- else:
- jpegmode = None # let the decoder decide
-
- else:
- # The image is stored as defined by rawmode
- jpegmode = rawmode
-
- self.tile.append(
- (
- "jpeg",
- (x, y, x1, y1),
- i32(s, i) + 28,
- (rawmode, jpegmode),
- )
- )
-
- # FIXME: jpeg tables are tile dependent; the prefix
- # data must be placed in the tile descriptor itself!
-
- if jpeg_tables:
- self.tile_prefix = self.jpeg[jpeg_tables]
-
- else:
- msg = "unknown/invalid compression"
- raise OSError(msg)
-
- x = x + xtile
- if x >= xsize:
- x, y = 0, y + ytile
- if y >= ysize:
- break # isn't really required
-
- self.stream = stream
- self.fp = None
-
- def load(self):
- if not self.fp:
- self.fp = self.ole.openstream(self.stream[:2] + ["Subimage 0000 Data"])
-
- return ImageFile.ImageFile.load(self)
-
- def close(self):
- self.ole.close()
- super().close()
-
- def __exit__(self, *args):
- self.ole.close()
- super().__exit__()
-
-
-#
-# --------------------------------------------------------------------
-
-
-Image.register_open(FpxImageFile.format, FpxImageFile, _accept)
-
-Image.register_extension(FpxImageFile.format, ".fpx")
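As a small illustration (our own, not part of PIL), the resolution-pyramid index computed in `_open_index` above halves the longest side until it is at most 64 pixels:

```python
def max_subimage_index(width: int, height: int) -> int:
    # Mirrors the maxid computation in FpxImageFile._open_index above.
    size = max(width, height)
    levels = 1
    while size > 64:
        size = size / 2
        levels += 1
    return levels - 1

print(max_subimage_index(1024, 768))  # 4: halving 1024 four times reaches 64
```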
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageColor.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageColor.py
deleted file mode 100644
index e184ed68da37404397dfd45c4af08c2a8fb78ac0..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageColor.py
+++ /dev/null
@@ -1,305 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# map CSS3-style colour description strings to RGB
-#
-# History:
-# 2002-10-24 fl Added support for CSS-style color strings
-# 2002-12-15 fl Added RGBA support
-# 2004-03-27 fl Fixed remaining int() problems for Python 1.5.2
-# 2004-07-19 fl Fixed gray/grey spelling issues
-# 2009-03-05 fl Fixed rounding error in grayscale calculation
-#
-# Copyright (c) 2002-2004 by Secret Labs AB
-# Copyright (c) 2002-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import re
-
-from . import Image
-
-
-def getrgb(color):
- """
- Convert a color string to an RGB or RGBA tuple. If the string cannot be
- parsed, this function raises a :py:exc:`ValueError` exception.
-
- .. versionadded:: 1.1.4
-
- :param color: A color string
- :return: ``(red, green, blue[, alpha])``
- """
- if len(color) > 100:
- msg = "color specifier is too long"
- raise ValueError(msg)
- color = color.lower()
-
- rgb = colormap.get(color, None)
- if rgb:
- if isinstance(rgb, tuple):
- return rgb
- colormap[color] = rgb = getrgb(rgb)
- return rgb
-
- # check for known string formats
- if re.match("#[a-f0-9]{3}$", color):
- return int(color[1] * 2, 16), int(color[2] * 2, 16), int(color[3] * 2, 16)
-
- if re.match("#[a-f0-9]{4}$", color):
- return (
- int(color[1] * 2, 16),
- int(color[2] * 2, 16),
- int(color[3] * 2, 16),
- int(color[4] * 2, 16),
- )
-
- if re.match("#[a-f0-9]{6}$", color):
- return int(color[1:3], 16), int(color[3:5], 16), int(color[5:7], 16)
-
- if re.match("#[a-f0-9]{8}$", color):
- return (
- int(color[1:3], 16),
- int(color[3:5], 16),
- int(color[5:7], 16),
- int(color[7:9], 16),
- )
-
- m = re.match(r"rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color)
- if m:
- return int(m.group(1)), int(m.group(2)), int(m.group(3))
-
- m = re.match(r"rgb\(\s*(\d+)%\s*,\s*(\d+)%\s*,\s*(\d+)%\s*\)$", color)
- if m:
- return (
- int((int(m.group(1)) * 255) / 100.0 + 0.5),
- int((int(m.group(2)) * 255) / 100.0 + 0.5),
- int((int(m.group(3)) * 255) / 100.0 + 0.5),
- )
-
- m = re.match(
- r"hsl\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color
- )
- if m:
- from colorsys import hls_to_rgb
-
- rgb = hls_to_rgb(
- float(m.group(1)) / 360.0,
- float(m.group(3)) / 100.0,
- float(m.group(2)) / 100.0,
- )
- return (
- int(rgb[0] * 255 + 0.5),
- int(rgb[1] * 255 + 0.5),
- int(rgb[2] * 255 + 0.5),
- )
-
- m = re.match(
- r"hs[bv]\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color
- )
- if m:
- from colorsys import hsv_to_rgb
-
- rgb = hsv_to_rgb(
- float(m.group(1)) / 360.0,
- float(m.group(2)) / 100.0,
- float(m.group(3)) / 100.0,
- )
- return (
- int(rgb[0] * 255 + 0.5),
- int(rgb[1] * 255 + 0.5),
- int(rgb[2] * 255 + 0.5),
- )
-
- m = re.match(r"rgba\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color)
- if m:
- return int(m.group(1)), int(m.group(2)), int(m.group(3)), int(m.group(4))
- msg = f"unknown color specifier: {repr(color)}"
- raise ValueError(msg)
-
-
-def getcolor(color, mode):
- """
- Same as :py:func:`~PIL.ImageColor.getrgb`, but converts the RGB value to a
- greyscale value if ``mode`` is not color or a palette image. If the string
- cannot be parsed, this function raises a :py:exc:`ValueError` exception.
-
- .. versionadded:: 1.1.4
-
- :param color: A color string
- :param mode: Convert result to this mode
- :return: ``(graylevel[, alpha]) or (red, green, blue[, alpha])``
- """
- # same as getrgb, but converts the result to the given mode
- color, alpha = getrgb(color), 255
- if len(color) == 4:
- color, alpha = color[:3], color[3]
-
- if Image.getmodebase(mode) == "L":
- r, g, b = color
- # ITU-R Recommendation 601-2 for nonlinear RGB
- # scaled to 24 bits to match the convert's implementation.
- color = (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16
- if mode[-1] == "A":
- return color, alpha
- else:
- if mode[-1] == "A":
- return color + (alpha,)
- return color
-
-
-colormap = {
- # X11 colour table from https://drafts.csswg.org/css-color-4/, with
- # gray/grey spelling issues fixed. This is a superset of HTML 4.0
- # colour names used in CSS 1.
- "aliceblue": "#f0f8ff",
- "antiquewhite": "#faebd7",
- "aqua": "#00ffff",
- "aquamarine": "#7fffd4",
- "azure": "#f0ffff",
- "beige": "#f5f5dc",
- "bisque": "#ffe4c4",
- "black": "#000000",
- "blanchedalmond": "#ffebcd",
- "blue": "#0000ff",
- "blueviolet": "#8a2be2",
- "brown": "#a52a2a",
- "burlywood": "#deb887",
- "cadetblue": "#5f9ea0",
- "chartreuse": "#7fff00",
- "chocolate": "#d2691e",
- "coral": "#ff7f50",
- "cornflowerblue": "#6495ed",
- "cornsilk": "#fff8dc",
- "crimson": "#dc143c",
- "cyan": "#00ffff",
- "darkblue": "#00008b",
- "darkcyan": "#008b8b",
- "darkgoldenrod": "#b8860b",
- "darkgray": "#a9a9a9",
- "darkgrey": "#a9a9a9",
- "darkgreen": "#006400",
- "darkkhaki": "#bdb76b",
- "darkmagenta": "#8b008b",
- "darkolivegreen": "#556b2f",
- "darkorange": "#ff8c00",
- "darkorchid": "#9932cc",
- "darkred": "#8b0000",
- "darksalmon": "#e9967a",
- "darkseagreen": "#8fbc8f",
- "darkslateblue": "#483d8b",
- "darkslategray": "#2f4f4f",
- "darkslategrey": "#2f4f4f",
- "darkturquoise": "#00ced1",
- "darkviolet": "#9400d3",
- "deeppink": "#ff1493",
- "deepskyblue": "#00bfff",
- "dimgray": "#696969",
- "dimgrey": "#696969",
- "dodgerblue": "#1e90ff",
- "firebrick": "#b22222",
- "floralwhite": "#fffaf0",
- "forestgreen": "#228b22",
- "fuchsia": "#ff00ff",
- "gainsboro": "#dcdcdc",
- "ghostwhite": "#f8f8ff",
- "gold": "#ffd700",
- "goldenrod": "#daa520",
- "gray": "#808080",
- "grey": "#808080",
- "green": "#008000",
- "greenyellow": "#adff2f",
- "honeydew": "#f0fff0",
- "hotpink": "#ff69b4",
- "indianred": "#cd5c5c",
- "indigo": "#4b0082",
- "ivory": "#fffff0",
- "khaki": "#f0e68c",
- "lavender": "#e6e6fa",
- "lavenderblush": "#fff0f5",
- "lawngreen": "#7cfc00",
- "lemonchiffon": "#fffacd",
- "lightblue": "#add8e6",
- "lightcoral": "#f08080",
- "lightcyan": "#e0ffff",
- "lightgoldenrodyellow": "#fafad2",
- "lightgreen": "#90ee90",
- "lightgray": "#d3d3d3",
- "lightgrey": "#d3d3d3",
- "lightpink": "#ffb6c1",
- "lightsalmon": "#ffa07a",
- "lightseagreen": "#20b2aa",
- "lightskyblue": "#87cefa",
- "lightslategray": "#778899",
- "lightslategrey": "#778899",
- "lightsteelblue": "#b0c4de",
- "lightyellow": "#ffffe0",
- "lime": "#00ff00",
- "limegreen": "#32cd32",
- "linen": "#faf0e6",
- "magenta": "#ff00ff",
- "maroon": "#800000",
- "mediumaquamarine": "#66cdaa",
- "mediumblue": "#0000cd",
- "mediumorchid": "#ba55d3",
- "mediumpurple": "#9370db",
- "mediumseagreen": "#3cb371",
- "mediumslateblue": "#7b68ee",
- "mediumspringgreen": "#00fa9a",
- "mediumturquoise": "#48d1cc",
- "mediumvioletred": "#c71585",
- "midnightblue": "#191970",
- "mintcream": "#f5fffa",
- "mistyrose": "#ffe4e1",
- "moccasin": "#ffe4b5",
- "navajowhite": "#ffdead",
- "navy": "#000080",
- "oldlace": "#fdf5e6",
- "olive": "#808000",
- "olivedrab": "#6b8e23",
- "orange": "#ffa500",
- "orangered": "#ff4500",
- "orchid": "#da70d6",
- "palegoldenrod": "#eee8aa",
- "palegreen": "#98fb98",
- "paleturquoise": "#afeeee",
- "palevioletred": "#db7093",
- "papayawhip": "#ffefd5",
- "peachpuff": "#ffdab9",
- "peru": "#cd853f",
- "pink": "#ffc0cb",
- "plum": "#dda0dd",
- "powderblue": "#b0e0e6",
- "purple": "#800080",
- "rebeccapurple": "#663399",
- "red": "#ff0000",
- "rosybrown": "#bc8f8f",
- "royalblue": "#4169e1",
- "saddlebrown": "#8b4513",
- "salmon": "#fa8072",
- "sandybrown": "#f4a460",
- "seagreen": "#2e8b57",
- "seashell": "#fff5ee",
- "sienna": "#a0522d",
- "silver": "#c0c0c0",
- "skyblue": "#87ceeb",
- "slateblue": "#6a5acd",
- "slategray": "#708090",
- "slategrey": "#708090",
- "snow": "#fffafa",
- "springgreen": "#00ff7f",
- "steelblue": "#4682b4",
- "tan": "#d2b48c",
- "teal": "#008080",
- "thistle": "#d8bfd8",
- "tomato": "#ff6347",
- "turquoise": "#40e0d0",
- "violet": "#ee82ee",
- "wheat": "#f5deb3",
- "white": "#ffffff",
- "whitesmoke": "#f5f5f5",
- "yellow": "#ffff00",
- "yellowgreen": "#9acd32",
-}
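A quick usage sketch of the two public helpers above; `PIL.ImageColor` is the shipped module, and the expected values follow from the parsing rules and ITU-R 601-2 weights shown in the code.

```python
from PIL import ImageColor

print(ImageColor.getrgb("#ff8000"))             # (255, 128, 0)
print(ImageColor.getrgb("rgb(100%, 50%, 0%)"))  # (255, 128, 0) after the +0.5 rounding
print(ImageColor.getrgb("hsl(0, 100%, 50%)"))   # (255, 0, 0)
print(ImageColor.getcolor("red", "L"))          # 76, via the ITU-R 601-2 weights above
print(ImageColor.getcolor("red", "LA"))         # (76, 255) -- alpha defaults to 255
```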
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/audio/abstract_audio_tensor.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/audio/abstract_audio_tensor.py
deleted file mode 100644
index 213d69b2ee03ef895bbd63c6a78737471dc867f5..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/audio/abstract_audio_tensor.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import warnings
-from abc import ABC
-from typing import TYPE_CHECKING, Any, BinaryIO, Dict, TypeVar, Union
-
-from docarray.typing.tensor.abstract_tensor import AbstractTensor
-from docarray.utils._internal.misc import import_library, is_notebook
-
-if TYPE_CHECKING:
- from docarray.typing.bytes.audio_bytes import AudioBytes
-
-T = TypeVar('T', bound='AbstractAudioTensor')
-
-MAX_INT_16 = 2**15
-
-
-class AbstractAudioTensor(AbstractTensor, ABC):
- def to_bytes(self) -> 'AudioBytes':
- """
- Convert audio tensor to [`AudioBytes`][docarray.typing.AudioBytes].
- """
- from docarray.typing.bytes.audio_bytes import AudioBytes
-
- tensor = self.get_comp_backend().to_numpy(self)
-        tensor = (tensor * MAX_INT_16).astype('<i2')
-        return AudioBytes(tensor.tobytes())
-
-    def save(
-        self: 'T',
-        file_path: Union[str, BinaryIO],
-        format: str = 'wav',
-        frame_rate: int = 44100,
-        sample_width: int = 2,
-        pydub_args: Dict[str, Any] = {},
-    ) -> None:
- """
- Save audio tensor to an audio file. Mono/stereo is preserved.
-
- :param file_path: path to an audio file. If file is a string, open the file by
- that name, otherwise treat it as a file-like object.
- :param format: format for the audio file ('mp3', 'wav', 'raw', 'ogg' or other ffmpeg/avconv supported files)
- :param frame_rate: sampling frequency
- :param sample_width: sample width in bytes
- :param pydub_args: dictionary of additional arguments for pydub.AudioSegment.export function
- """
- pydub = import_library('pydub', raise_error=True) # noqa: F841
- from pydub import AudioSegment
-
- comp_backend = self.get_comp_backend()
- channels = 2 if comp_backend.n_dim(array=self) > 1 else 1 # type: ignore
-
- segment = AudioSegment(
- self.to_bytes(),
- frame_rate=frame_rate,
- sample_width=sample_width,
- channels=channels,
- )
- segment.export(file_path, format=format, **pydub_args)
-
- def display(self, rate=44100):
- """
- Play audio data from tensor in notebook.
- """
- if is_notebook():
- from IPython.display import Audio, display
-
- audio_np = self.get_comp_backend().to_numpy(self)
- display(Audio(audio_np, rate=rate))
- else:
- warnings.warn('Display of audio is only possible in a notebook.')
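A hedged, standalone sketch of the float-to-PCM conversion in `to_bytes` above (the helper name and test signal are ours): samples in [-1, 1] are scaled to the int16 range and serialized little-endian.

```python
import numpy as np

MAX_INT_16 = 2 ** 15

def float_audio_to_pcm16(samples: np.ndarray) -> bytes:
    # Mirrors the conversion in AbstractAudioTensor.to_bytes above.
    return (samples * MAX_INT_16).astype('<i2').tobytes()

tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # one second at 440 Hz
print(len(float_audio_to_pcm16(tone)))  # 88200 bytes: 2 bytes per sample
```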
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/video/video_torch_tensor.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/video/video_torch_tensor.py
deleted file mode 100644
index dd4c5a5dcd343554e9c60d4731d523c5215cda53..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/tensor/video/video_torch_tensor.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from typing import TYPE_CHECKING, Any, List, Tuple, Type, TypeVar, Union
-
-import numpy as np
-
-from docarray.typing.proto_register import _register_proto
-from docarray.typing.tensor.torch_tensor import TorchTensor, metaTorchAndNode
-from docarray.typing.tensor.video.video_tensor_mixin import VideoTensorMixin
-
-T = TypeVar('T', bound='VideoTorchTensor')
-
-if TYPE_CHECKING:
- from pydantic import BaseConfig
- from pydantic.fields import ModelField
-
-
-@_register_proto(proto_type_name='video_torch_tensor')
-class VideoTorchTensor(TorchTensor, VideoTensorMixin, metaclass=metaTorchAndNode):
- """
- Subclass of [`TorchTensor`][docarray.typing.TorchTensor], to represent a video tensor.
- Adds video-specific features to the tensor.
-
- ---
-
- ```python
- from typing import Optional
-
- import torch
-
- from docarray import BaseDoc
- from docarray.typing import VideoTorchTensor, VideoUrl
-
-
- class MyVideoDoc(BaseDoc):
- title: str
- url: Optional[VideoUrl]
- video_tensor: Optional[VideoTorchTensor]
-
-
- doc_1 = MyVideoDoc(
- title='my_first_video_doc',
- video_tensor=torch.randn(size=(100, 224, 224, 3)),
- )
- # doc_1.video_tensor.save(file_path='file_1.mp4')
-
- doc_2 = MyVideoDoc(
- title='my_second_video_doc',
- url='https://github.com/docarray/docarray/blob/main/tests/toydata/mov_bbb.mp4?raw=true',
- )
-
- doc_2.video_tensor = doc_2.url.load().video
- # doc_2.video_tensor.save(file_path='file_2.wav')
- ```
-
- ---
-
- """
-
- @classmethod
- def validate(
- cls: Type[T],
- value: Union[T, np.ndarray, List[Any], Tuple[Any], Any],
- field: 'ModelField',
- config: 'BaseConfig',
- ) -> T:
- tensor = super().validate(value=value, field=field, config=config)
- return cls.validate_shape(value=tensor)
diff --git a/spaces/Sygil/INE-dataset-explorer/Dockerfile b/spaces/Sygil/INE-dataset-explorer/Dockerfile
deleted file mode 100644
index 37eb3876eba25a2268335fc02ccc31c71ad01e8d..0000000000000000000000000000000000000000
--- a/spaces/Sygil/INE-dataset-explorer/Dockerfile
+++ /dev/null
@@ -1,32 +0,0 @@
-FROM datasetteproject/datasette:0.64.1
-
-# huggingface spaces run as user 1000
-RUN adduser hf-space --uid 1000 --disabled-password --gecos '' && \
- mkdir /home/hf-space/app && \
- chown hf-space: /home/hf-space/app
-WORKDIR /home/hf-space/app
-
-RUN datasette install datasette-configure-fts && \
- datasette install datasette-dashboards && \
- datasette install datasette-render-image-tags
-
-RUN apt-get update && \
- apt-get install -y --no-install-recommends git && \
- apt-get clean && \
- rm -rf /var/lib/apt && \
- rm -rf /var/lib/dpkg/info/*
-
-USER hf-space
-
-# spaces default port
-EXPOSE 7860
-ENTRYPOINT ["datasette", "--host=0.0.0.0", "--port=7860"]
-CMD ["."]
-
-ENV PYTHONPATH=/home/hf-space/app/src/
-
-COPY src src
-COPY metadata.json metadata.yml settings.json ./
-
-RUN src/import-git.sh && \
- datasette inspect *.db --inspect-file=inspect-data.json
diff --git a/spaces/TIMBOVILL/RVC-Noobie/app.py b/spaces/TIMBOVILL/RVC-Noobie/app.py
deleted file mode 100644
index df1afd32ee11f0d166e96260c382c98563b851f2..0000000000000000000000000000000000000000
--- a/spaces/TIMBOVILL/RVC-Noobie/app.py
+++ /dev/null
@@ -1,2080 +0,0 @@
-import subprocess, torch, os, traceback, sys, warnings, shutil, numpy as np
-from mega import Mega
-os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1"
-import threading
-from time import sleep
-from subprocess import Popen
-import faiss
-from random import shuffle
-import json, datetime, requests
-from gtts import gTTS
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-tmp = os.path.join(now_dir, "TEMP")
-shutil.rmtree(tmp, ignore_errors=True)
-shutil.rmtree("%s/runtime/Lib/site-packages/infer_pack" % (now_dir), ignore_errors=True)
-os.makedirs(tmp, exist_ok=True)
-os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True)
-os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True)
-os.environ["TEMP"] = tmp
-warnings.filterwarnings("ignore")
-torch.manual_seed(114514)
-from i18n import I18nAuto
-
-import signal
-
-import math
-
-from utils import load_audio, CSVutil
-
-global DoFormant, Quefrency, Timbre
-
-if not os.path.isdir('csvdb/'):
- os.makedirs('csvdb')
- frmnt, stp = open("csvdb/formanting.csv", 'w'), open("csvdb/stop.csv", 'w')
- frmnt.close()
- stp.close()
-
-try:
- DoFormant, Quefrency, Timbre = CSVutil('csvdb/formanting.csv', 'r', 'formanting')
- DoFormant = (
- lambda DoFormant: True if DoFormant.lower() == 'true' else (False if DoFormant.lower() == 'false' else DoFormant)
- )(DoFormant)
-except (ValueError, TypeError, IndexError):
- DoFormant, Quefrency, Timbre = False, 1.0, 1.0
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, Quefrency, Timbre)
-
-def download_models():
- # Download hubert base model if not present
- if not os.path.isfile('./hubert_base.pt'):
- response = requests.get('https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt')
-
- if response.status_code == 200:
- with open('./hubert_base.pt', 'wb') as f:
- f.write(response.content)
- print("Downloaded hubert base model file successfully. File saved to ./hubert_base.pt.")
- else:
- raise Exception("Failed to download hubert base model file. Status code: " + str(response.status_code) + ".")
-
- # Download rmvpe model if not present
- if not os.path.isfile('./rmvpe.pt'):
- response = requests.get('https://drive.usercontent.google.com/download?id=1Hkn4kNuVFRCNQwyxQFRtmzmMBGpQxptI&export=download&authuser=0&confirm=t&uuid=0b3a40de-465b-4c65-8c41-135b0b45c3f7&at=APZUnTV3lA3LnyTbeuduura6Dmi2:1693724254058')
-
- if response.status_code == 200:
- with open('./rmvpe.pt', 'wb') as f:
- f.write(response.content)
- print("Downloaded rmvpe model file successfully. File saved to ./rmvpe.pt.")
- else:
- raise Exception("Failed to download rmvpe model file. Status code: " + str(response.status_code) + ".")
-
-download_models()
-
-print("\n-------------------------------\nRVC v2 Easy GUI (Local Edition)\n-------------------------------\n")
-
-def formant_apply(qfrency, tmbre):
- Quefrency = qfrency
- Timbre = tmbre
- DoFormant = True
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre)
-
- return ({"value": Quefrency, "__type__": "update"}, {"value": Timbre, "__type__": "update"})
-
-def get_fshift_presets():
- fshift_presets_list = []
- for dirpath, _, filenames in os.walk("./formantshiftcfg/"):
- for filename in filenames:
- if filename.endswith(".txt"):
- fshift_presets_list.append(os.path.join(dirpath,filename).replace('\\','/'))
-
- if len(fshift_presets_list) > 0:
- return fshift_presets_list
- else:
- return ''
-
-
-
-def formant_enabled(cbox, qfrency, tmbre, frmntapply, formantpreset, formant_refresh_button):
-
- if (cbox):
-
- DoFormant = True
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre)
- #print(f"is checked? - {cbox}\ngot {DoFormant}")
-
- return (
- {"value": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- )
-
-
- else:
-
- DoFormant = False
- CSVutil('csvdb/formanting.csv', 'w+', 'formanting', DoFormant, qfrency, tmbre)
-
- #print(f"is checked? - {cbox}\ngot {DoFormant}")
- return (
- {"value": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- )
-
-
-
-def preset_apply(preset, qfer, tmbr):
- if str(preset) != '':
- with open(str(preset), 'r') as p:
- content = p.readlines()
- qfer, tmbr = content[0].split('\n')[0], content[1]
-
- formant_apply(qfer, tmbr)
- else:
- pass
- return ({"value": qfer, "__type__": "update"}, {"value": tmbr, "__type__": "update"})
-
-def update_fshift_presets(preset, qfrency, tmbre):
-
- qfrency, tmbre = preset_apply(preset, qfrency, tmbre)
-
- if (str(preset) != ''):
- with open(str(preset), 'r') as p:
- content = p.readlines()
- qfrency, tmbre = content[0].split('\n')[0], content[1]
-
- formant_apply(qfrency, tmbre)
- else:
- pass
- return (
- {"choices": get_fshift_presets(), "__type__": "update"},
- {"value": qfrency, "__type__": "update"},
- {"value": tmbre, "__type__": "update"},
- )
-
-i18n = I18nAuto()
-#i18n.print()
-# Check whether any NVIDIA GPUs are available for training and accelerated inference
-ngpu = torch.cuda.device_count()
-gpu_infos = []
-mem = []
-if (not torch.cuda.is_available()) or ngpu == 0:
- if_gpu_ok = False
-else:
- if_gpu_ok = False
- for i in range(ngpu):
- gpu_name = torch.cuda.get_device_name(i)
- if (
- "10" in gpu_name
- or "16" in gpu_name
- or "20" in gpu_name
- or "30" in gpu_name
- or "40" in gpu_name
- or "A2" in gpu_name.upper()
- or "A3" in gpu_name.upper()
- or "A4" in gpu_name.upper()
- or "P4" in gpu_name.upper()
- or "A50" in gpu_name.upper()
- or "A60" in gpu_name.upper()
- or "70" in gpu_name
- or "80" in gpu_name
- or "90" in gpu_name
- or "M4" in gpu_name.upper()
- or "T4" in gpu_name.upper()
- or "TITAN" in gpu_name.upper()
- ): # A10#A100#V100#A40#P40#M40#K80#A4500
-            if_gpu_ok = True  # at least one usable NVIDIA GPU
- gpu_infos.append("%s\t%s" % (i, gpu_name))
- mem.append(
- int(
- torch.cuda.get_device_properties(i).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- )
-if if_gpu_ok == True and len(gpu_infos) > 0:
- gpu_info = "\n".join(gpu_infos)
- default_batch_size = min(mem) // 2
-else:
- gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练")
- default_batch_size = 1
-gpus = "-".join([i[0] for i in gpu_infos])
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-import soundfile as sf
-from fairseq import checkpoint_utils
-import gradio as gr
-import logging
-from vc_infer_pipeline import VC
-from config import Config
-
-config = Config()
-# from trainset_preprocess_pipeline import PreProcess
-logging.getLogger("numba").setLevel(logging.WARNING)
-
-hubert_model = None
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-
-weight_root = "weights"
-index_root = "logs"
-names = []
-for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
-index_paths = []
-for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s/%s" % (root, name))
-
-
-
-def vc_single(
- sid,
- input_audio_path,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- #file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
-): # spk_item, input_audio0, vc_transform0,f0_file,f0method0
- global tgt_sr, net_g, vc, hubert_model, version
- if input_audio_path is None:
- return "You need to upload an audio", None
- f0_up_key = int(f0_up_key)
- try:
- audio = load_audio(input_audio_path, 16000, DoFormant, Quefrency, Timbre)
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
- if hubert_model == None:
- load_hubert()
- if_f0 = cpt.get("f0", 1)
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
-        )  # guard against a common user mistake: automatically swap "trained" for "added" in the index filename
- # file_big_npy = (
- # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- # )
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- sid,
- audio,
- input_audio_path,
- times,
- f0_up_key,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- crepe_hop_length,
- f0_file=f0_file,
- )
- if resample_sr >= 16000 and tgt_sr != resample_sr:
- tgt_sr = resample_sr
- index_info = (
- "Using index:%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % (
- index_info,
- times[0],
- times[1],
- times[2],
- ), (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
-
-
-def vc_multi(
- sid,
- dir_path,
- opt_root,
- paths,
- f0_up_key,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- format1,
- crepe_hop_length,
-):
- try:
- dir_path = (
- dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-        )  # strip stray spaces, quotes and newlines that users often copy along with the path
- opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- os.makedirs(opt_root, exist_ok=True)
- try:
- if dir_path != "":
- paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)]
- else:
- paths = [path.name for path in paths]
- except:
- traceback.print_exc()
- paths = [path.name for path in paths]
- infos = []
- for path in paths:
- info, opt = vc_single(
- sid,
- path,
- f0_up_key,
- None,
- f0_method,
- file_index,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length
- )
- if "Success" in info:
- try:
- tgt_sr, audio_opt = opt
- if format1 in ["wav", "flac"]:
- sf.write(
- "%s/%s.%s" % (opt_root, os.path.basename(path), format1),
- audio_opt,
- tgt_sr,
- )
- else:
- path = "%s/%s.wav" % (opt_root, os.path.basename(path))
- sf.write(
- path,
- audio_opt,
- tgt_sr,
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format1)
- )
- except:
- info += traceback.format_exc()
- infos.append("%s->%s" % (os.path.basename(path), info))
- yield "\n".join(infos)
- yield "\n".join(infos)
- except:
- yield traceback.format_exc()
-
-# Only one voice model can be loaded globally per tab
-def get_vc(sid):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
-        if hubert_model != None:  # because of polling, check whether sid switched from a loaded model to no model
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt
- hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-            ### without the juggling below, the memory is not cleaned up completely
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return {"visible": False, "__type__": "update"}
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return {"visible": False, "maximum": n_spk, "__type__": "update"}
-
-
-def change_choices():
- names = []
- for name in os.listdir(weight_root):
- if name.endswith(".pth"):
- names.append(name)
- index_paths = []
- for root, dirs, files in os.walk(index_root, topdown=False):
- for name in files:
- if name.endswith(".index") and "trained" not in name:
- index_paths.append("%s/%s" % (root, name))
- return {"choices": sorted(names), "__type__": "update"}, {
- "choices": sorted(index_paths),
- "__type__": "update",
- }
-
-
-def clean():
- return {"value": "", "__type__": "update"}
-
-
-sr_dict = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-def if_done(done, p):
- while 1:
- if p.poll() == None:
- sleep(0.5)
- else:
- break
- done[0] = True
-
-
-def if_done_multi(done, ps):
- while 1:
-        # poll() == None means the process has not finished
-        # keep waiting as long as any process is still running
- flag = 1
- for p in ps:
- if p.poll() == None:
- flag = 0
- sleep(0.5)
- break
- if flag == 1:
- break
- done[0] = True
-
-
-def preprocess_dataset(trainset_dir, exp_dir, sr, n_p):
- sr = sr_dict[sr]
- os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
- f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w")
- f.close()
- cmd = (
- config.python_cmd
- + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s "
- % (trainset_dir, sr, n_p, now_dir, exp_dir)
- + str(config.noparallel)
- )
- print(cmd)
- p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir
-    ### gradio only reads Popen output all at once after the run finishes, so stream progress by polling an extra log file on a timer
- done = [False]
- threading.Thread(
- target=if_done,
- args=(
- done,
- p,
- ),
- ).start()
- while 1:
- with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
- yield (f.read())
- sleep(1)
- if done[0] == True:
- break
- with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-
-# but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2])
-def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19, echl):
- gpus = gpus.split("-")
- os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True)
- f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w")
- f.close()
- if if_f0:
- cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s %s" % (
- now_dir,
- exp_dir,
- n_p,
- f0method,
- echl,
- )
- print(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir) # , stdin=PIPE, stdout=PIPE,stderr=PIPE
-        ### gradio only reads Popen output all at once after the run finishes, so stream progress by polling an extra log file on a timer
- done = [False]
- threading.Thread(
- target=if_done,
- args=(
- done,
- p,
- ),
- ).start()
- while 1:
- with open(
- "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r"
- ) as f:
- yield (f.read())
- sleep(1)
- if done[0] == True:
- break
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-    #### spawn a separate process for each part
- """
- n_part=int(sys.argv[1])
- i_part=int(sys.argv[2])
- i_gpu=sys.argv[3]
- exp_dir=sys.argv[4]
- os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu)
- """
- leng = len(gpus)
- ps = []
- for idx, n_g in enumerate(gpus):
- cmd = (
- config.python_cmd
- + " extract_feature_print.py %s %s %s %s %s/logs/%s %s"
- % (
- config.device,
- leng,
- idx,
- n_g,
- now_dir,
- exp_dir,
- version19,
- )
- )
- print(cmd)
- p = Popen(
- cmd, shell=True, cwd=now_dir
- ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
- ps.append(p)
-    ### gradio only reads Popen output all at once after the run finishes, so stream progress by polling an extra log file on a timer
- done = [False]
- threading.Thread(
- target=if_done_multi,
- args=(
- done,
- ps,
- ),
- ).start()
- while 1:
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- yield (f.read())
- sleep(1)
- if done[0] == True:
- break
- with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f:
- log = f.read()
- print(log)
- yield log
-
-
-def change_sr2(sr2, if_f0_3, version19):
- path_str = "" if version19 == "v1" else "_v2"
- f0_str = "f0" if if_f0_3 else ""
- if_pretrained_generator_exist = os.access("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if_pretrained_discriminator_exist = os.access("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if (if_pretrained_generator_exist == False):
- print("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- if (if_pretrained_discriminator_exist == False):
- print("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- return (
- ("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_generator_exist else "",
- ("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_discriminator_exist else "",
- {"visible": True, "__type__": "update"}
- )
-
-def change_version19(sr2, if_f0_3, version19):
- path_str = "" if version19 == "v1" else "_v2"
- f0_str = "f0" if if_f0_3 else ""
- if_pretrained_generator_exist = os.access("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if_pretrained_discriminator_exist = os.access("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), os.F_OK)
- if (if_pretrained_generator_exist == False):
- print("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- if (if_pretrained_discriminator_exist == False):
- print("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), "not exist, will not use pretrained model")
- return (
- ("pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_generator_exist else "",
- ("pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2)) if if_pretrained_discriminator_exist else "",
- )
-
-
-def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15
- path_str = "" if version19 == "v1" else "_v2"
- if_pretrained_generator_exist = os.access("pretrained%s/f0G%s.pth" % (path_str, sr2), os.F_OK)
- if_pretrained_discriminator_exist = os.access("pretrained%s/f0D%s.pth" % (path_str, sr2), os.F_OK)
- if (if_pretrained_generator_exist == False):
- print("pretrained%s/f0G%s.pth" % (path_str, sr2), "not exist, will not use pretrained model")
- if (if_pretrained_discriminator_exist == False):
- print("pretrained%s/f0D%s.pth" % (path_str, sr2), "not exist, will not use pretrained model")
- if if_f0_3:
- return (
- {"visible": True, "__type__": "update"},
- "pretrained%s/f0G%s.pth" % (path_str, sr2) if if_pretrained_generator_exist else "",
- "pretrained%s/f0D%s.pth" % (path_str, sr2) if if_pretrained_discriminator_exist else "",
- )
- return (
- {"visible": False, "__type__": "update"},
- ("pretrained%s/G%s.pth" % (path_str, sr2)) if if_pretrained_generator_exist else "",
- ("pretrained%s/D%s.pth" % (path_str, sr2)) if if_pretrained_discriminator_exist else "",
- )
-
-
-global log_interval
-
-
-def set_log_interval(exp_dir, batch_size12):
- log_interval = 1
-
- folder_path = os.path.join(exp_dir, "1_16k_wavs")
-
- if os.path.exists(folder_path) and os.path.isdir(folder_path):
- wav_files = [f for f in os.listdir(folder_path) if f.endswith(".wav")]
- if wav_files:
- sample_size = len(wav_files)
- log_interval = math.ceil(sample_size / batch_size12)
- if log_interval > 1:
- log_interval += 1
- return log_interval
-
-# but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16])
-def click_train(
- exp_dir1,
- sr2,
- if_f0_3,
- spk_id5,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
-):
- CSVutil('csvdb/stop.csv', 'w+', 'formanting', False)
-    # build the training filelist
- exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- os.makedirs(exp_dir, exist_ok=True)
- gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir)
- feature_dir = (
- "%s/3_feature256" % (exp_dir)
- if version19 == "v1"
- else "%s/3_feature768" % (exp_dir)
- )
-
- log_interval = set_log_interval(exp_dir, batch_size12)
-
- if if_f0_3:
- f0_dir = "%s/2a_f0" % (exp_dir)
- f0nsf_dir = "%s/2b-f0nsf" % (exp_dir)
- names = (
- set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
- & set([name.split(".")[0] for name in os.listdir(feature_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
- )
- else:
- names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
- [name.split(".")[0] for name in os.listdir(feature_dir)]
- )
- opt = []
- for name in names:
- if if_f0_3:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- f0_dir.replace("\\", "\\\\"),
- name,
- f0nsf_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- else:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- fea_dim = 256 if version19 == "v1" else 768
- if if_f0_3:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
- )
- else:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, spk_id5)
- )
- shuffle(opt)
- with open("%s/filelist.txt" % exp_dir, "w") as f:
- f.write("\n".join(opt))
- print("write filelist done")
-    # no config file needs to be generated here
- # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0"
- print("use gpus:", gpus16)
- if pretrained_G14 == "":
- print("no pretrained Generator")
- if pretrained_D15 == "":
- print("no pretrained Discriminator")
- if gpus16:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s -li %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- gpus16,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- log_interval,
- )
- )
- else:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s -li %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "\b",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "\b",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- log_interval,
- )
- )
- print(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- global PID
- PID = p.pid
- p.wait()
-    return ("Training finished. Check the console log or train.log under the experiment folder.", {"visible": False, "__type__": "update"}, {"visible": True, "__type__": "update"})
-
-
-# but4.click(train_index, [exp_dir1], info3)
-def train_index(exp_dir1, version19):
- exp_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- os.makedirs(exp_dir, exist_ok=True)
- feature_dir = (
- "%s/3_feature256" % (exp_dir)
- if version19 == "v1"
- else "%s/3_feature768" % (exp_dir)
- )
- if os.path.exists(feature_dir) == False:
-        return "Please run feature extraction first!"
- listdir_res = list(os.listdir(feature_dir))
- if len(listdir_res) == 0:
-        return "Please run feature extraction first!"
- npys = []
- for name in sorted(listdir_res):
- phone = np.load("%s/%s" % (feature_dir, name))
- npys.append(phone)
- big_npy = np.concatenate(npys, 0)
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
- np.save("%s/total_fea.npy" % exp_dir, big_npy)
- # n_ivf = big_npy.shape[0] // 39
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- infos = []
- infos.append("%s,%s" % (big_npy.shape, n_ivf))
- yield "\n".join(infos)
- index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
- # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf)
- infos.append("training")
- yield "\n".join(infos)
- index_ivf = faiss.extract_index_ivf(index) #
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
- index,
- "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19))
- infos.append("adding")
- yield "\n".join(infos)
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
- index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
- index,
- "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- infos.append(
-        "Successfully built index: added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
- )
- # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19))
- # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19))
- yield "\n".join(infos)
-
-
-# but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3)
-def train1key(
- exp_dir1,
- sr2,
- if_f0_3,
- trainset_dir4,
- spk_id5,
- np7,
- f0method8,
- save_epoch10,
- total_epoch11,
- batch_size12,
- if_save_latest13,
- pretrained_G14,
- pretrained_D15,
- gpus16,
- if_cache_gpu17,
- if_save_every_weights18,
- version19,
- echl
-):
- infos = []
-
- def get_info_str(strr):
- infos.append(strr)
- return "\n".join(infos)
-
- model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1)
- preprocess_log_path = "%s/preprocess.log" % model_log_dir
- extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir
- gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir
- feature_dir = (
- "%s/3_feature256" % model_log_dir
- if version19 == "v1"
- else "%s/3_feature768" % model_log_dir
- )
-
- os.makedirs(model_log_dir, exist_ok=True)
-    ######### step 1: preprocess the data
- open(preprocess_log_path, "w").close()
- cmd = (
- config.python_cmd
- + " trainset_preprocess_pipeline_print.py %s %s %s %s "
- % (trainset_dir4, sr_dict[sr2], np7, model_log_dir)
- + str(config.noparallel)
- )
- yield get_info_str(i18n("step1:正在处理数据"))
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True)
- p.wait()
- with open(preprocess_log_path, "r") as f:
- print(f.read())
-    ######### step 2a: extract pitch (f0)
- open(extract_f0_feature_log_path, "w")
- if if_f0_3:
-        yield get_info_str("step2a: extracting pitch")
- cmd = config.python_cmd + " extract_f0_print.py %s %s %s %s" % (
- model_log_dir,
- np7,
- f0method8,
- echl
- )
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- with open(extract_f0_feature_log_path, "r") as f:
- print(f.read())
- else:
- yield get_info_str(i18n("step2a:无需提取音高"))
-    ####### step 2b: extract features
- yield get_info_str(i18n("step2b:正在提取特征"))
- gpus = gpus16.split("-")
- leng = len(gpus)
- ps = []
- for idx, n_g in enumerate(gpus):
- cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % (
- config.device,
- leng,
- idx,
- n_g,
- model_log_dir,
- version19,
- )
- yield get_info_str(cmd)
- p = Popen(
- cmd, shell=True, cwd=now_dir
- ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir
- ps.append(p)
- for p in ps:
- p.wait()
- with open(extract_f0_feature_log_path, "r") as f:
- print(f.read())
-    ####### step 3a: train the model
- yield get_info_str(i18n("step3a:正在训练模型"))
-    # build the training filelist
- if if_f0_3:
- f0_dir = "%s/2a_f0" % model_log_dir
- f0nsf_dir = "%s/2b-f0nsf" % model_log_dir
- names = (
- set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)])
- & set([name.split(".")[0] for name in os.listdir(feature_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0_dir)])
- & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)])
- )
- else:
- names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set(
- [name.split(".")[0] for name in os.listdir(feature_dir)]
- )
- opt = []
- for name in names:
- if if_f0_3:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- f0_dir.replace("\\", "\\\\"),
- name,
- f0nsf_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- else:
- opt.append(
- "%s/%s.wav|%s/%s.npy|%s"
- % (
- gt_wavs_dir.replace("\\", "\\\\"),
- name,
- feature_dir.replace("\\", "\\\\"),
- name,
- spk_id5,
- )
- )
- fea_dim = 256 if version19 == "v1" else 768
- if if_f0_3:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5)
- )
- else:
- for _ in range(2):
- opt.append(
- "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s"
- % (now_dir, sr2, now_dir, fea_dim, spk_id5)
- )
- shuffle(opt)
- with open("%s/filelist.txt" % model_log_dir, "w") as f:
- f.write("\n".join(opt))
- yield get_info_str("write filelist done")
- if gpus16:
- cmd = (
- config.python_cmd
- +" train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- gpus16,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- )
- )
- else:
- cmd = (
- config.python_cmd
- + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s %s %s -l %s -c %s -sw %s -v %s"
- % (
- exp_dir1,
- sr2,
- 1 if if_f0_3 else 0,
- batch_size12,
- total_epoch11,
- save_epoch10,
- ("-pg %s" % pretrained_G14) if pretrained_G14 != "" else "",
- ("-pd %s" % pretrained_D15) if pretrained_D15 != "" else "",
- 1 if if_save_latest13 == True else 0,
- 1 if if_cache_gpu17 == True else 0,
- 1 if if_save_every_weights18 == True else 0,
- version19,
- )
- )
- yield get_info_str(cmd)
- p = Popen(cmd, shell=True, cwd=now_dir)
- p.wait()
- yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log"))
-    ####### step 3b: train the index
- npys = []
- listdir_res = list(os.listdir(feature_dir))
- for name in sorted(listdir_res):
- phone = np.load("%s/%s" % (feature_dir, name))
- npys.append(phone)
- big_npy = np.concatenate(npys, 0)
-
- big_npy_idx = np.arange(big_npy.shape[0])
- np.random.shuffle(big_npy_idx)
- big_npy = big_npy[big_npy_idx]
- np.save("%s/total_fea.npy" % model_log_dir, big_npy)
-
- # n_ivf = big_npy.shape[0] // 39
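-    # pick the number of IVF clusters: ~16*sqrt(N), capped at N // 39 so every cluster still gets enough training vectors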
- n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39)
- yield get_info_str("%s,%s" % (big_npy.shape, n_ivf))
- index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf)
- yield get_info_str("training index")
-    index_ivf = faiss.extract_index_ivf(index)  # grab the underlying IVF index so nprobe (clusters probed per query) can be set
- index_ivf.nprobe = 1
- index.train(big_npy)
- faiss.write_index(
- index,
- "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- yield get_info_str("adding index")
- batch_size_add = 8192
- for i in range(0, big_npy.shape[0], batch_size_add):
- index.add(big_npy[i : i + batch_size_add])
- faiss.write_index(
- index,
- "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19),
- )
- yield get_info_str(
- "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index"
- % (n_ivf, index_ivf.nprobe, exp_dir1, version19)
- )
- yield get_info_str(i18n("全流程结束!"))
-
-
-def whethercrepeornah(radio):
-    mango = radio in ('mangio-crepe', 'mangio-crepe-tiny')
-    return {"visible": mango, "__type__": "update"}
-
-# ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__])
-def change_info_(ckpt_path):
-    if not os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")):
-        return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
- try:
- with open(
- ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r"
- ) as f:
- info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1])
- sr, f0 = info["sample_rate"], info["if_f0"]
- version = "v2" if ("version" in info and info["version"] == "v2") else "v1"
- return sr, str(f0), version
- except:
- traceback.print_exc()
- return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"}
-
-
-from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM
-
-
-def export_onnx(ModelPath, ExportedPath, MoeVS=True):
- cpt = torch.load(ModelPath, map_location="cpu")
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
-    hidden_channels = 256 if cpt.get("version", "v1") == "v1" else 768  # cpt["config"][-2]; hidden_channels, 768-dim for v2 features
-
-    test_phone = torch.rand(1, 200, hidden_channels)  # hidden unit features
-    test_phone_lengths = torch.tensor([200]).long()  # hidden unit length (appears unused)
-    test_pitch = torch.randint(size=(1, 200), low=5, high=255)  # fundamental frequency (Hz)
-    test_pitchf = torch.rand(1, 200)  # NSF fundamental frequency
-    test_ds = torch.LongTensor([0])  # speaker ID
-    test_rnd = torch.rand(1, 192, 200)  # noise (adds a random factor)
-
-    device = "cpu"  # device used for export (does not affect later inference)
-
-
- net_g = SynthesizerTrnMsNSFsidM(
- *cpt["config"], is_half=False,version=cpt.get("version","v1")
-    )  # export in fp32 (supporting fp16 in C++ would require manually rearranging memory, so fp16 is skipped for now)
- net_g.load_state_dict(cpt["weight"], strict=False)
- input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
- output_names = [
- "audio",
- ]
-    # net_g.construct_spkmixmap(n_speaker): export with a multi-speaker mix track
- torch.onnx.export(
- net_g,
- (
- test_phone.to(device),
- test_phone_lengths.to(device),
- test_pitch.to(device),
- test_pitchf.to(device),
- test_ds.to(device),
- test_rnd.to(device),
- ),
- ExportedPath,
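-        # mark the time dimension of each input as dynamic so the exported graph accepts variable-length sequences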
- dynamic_axes={
- "phone": [1],
- "pitch": [1],
- "pitchf": [1],
- "rnd": [2],
- },
- do_constant_folding=False,
- opset_version=16,
- verbose=False,
- input_names=input_names,
- output_names=output_names,
- )
- return "Finished"
-
-#region RVC WebUI App
-
-def get_presets():
- data = None
- with open('../inference-presets.json', 'r') as file:
- data = json.load(file)
- preset_names = []
- for preset in data['presets']:
- preset_names.append(preset['name'])
-
- return preset_names
-
-def change_choices2():
- audio_files=[]
- for filename in os.listdir("./audios"):
- if filename.endswith(('.wav','.mp3','.ogg','.flac','.m4a','.aac','.mp4')):
- audio_files.append(os.path.join('./audios',filename).replace('\\', '/'))
- return {"choices": sorted(audio_files), "__type__": "update"}, {"__type__": "update"}
-
-audio_files=[]
-for filename in os.listdir("./audios"):
- if filename.endswith(('.wav','.mp3','.ogg','.flac','.m4a','.aac','.mp4')):
- audio_files.append(os.path.join('./audios',filename).replace('\\', '/'))
-
-def get_index():
- if check_for_name() != '':
- chosen_model=sorted(names)[0].split(".")[0]
- logs_path="./logs/"+chosen_model
- if os.path.exists(logs_path):
- for file in os.listdir(logs_path):
- if file.endswith(".index"):
- return os.path.join(logs_path, file)
- return ''
- else:
- return ''
-
-def get_indexes():
- indexes_list=[]
- for dirpath, dirnames, filenames in os.walk("./logs/"):
- for filename in filenames:
- if filename.endswith(".index"):
- indexes_list.append(os.path.join(dirpath,filename))
- if len(indexes_list) > 0:
- return indexes_list
- else:
- return ''
-
-def get_name():
- if len(audio_files) > 0:
- return sorted(audio_files)[0]
- else:
- return ''
-
-def save_to_wav(record_button):
- if record_button is None:
- pass
- else:
- path_to_file=record_button
- new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")+'.wav'
- new_path='./audios/'+new_name
- shutil.move(path_to_file,new_path)
- return new_path
-
-def save_to_wav2(dropbox):
- file_path=dropbox.name
- shutil.move(file_path,'./audios')
- return os.path.join('./audios',os.path.basename(file_path))
-
-def match_index(sid0):
- folder=sid0.split(".")[0]
- parent_dir="./logs/"+folder
-    if os.path.exists(parent_dir):
-        for filename in os.listdir(parent_dir):
-            if filename.endswith(".index"):
-                index_path = os.path.join(parent_dir, filename)
-                return index_path
-    # return an empty string when the folder is missing or contains no .index file
-    return ''
-
-def check_for_name():
- if len(names) > 0:
- return sorted(names)[0]
- else:
- return ''
-
-def download_from_url(url, model):
- if url == '':
- return "URL cannot be left empty."
- if model =='':
- return "You need to name your model. For example: My-Model"
- url = url.strip()
- zip_dirs = ["zips", "unzips"]
- for directory in zip_dirs:
- if os.path.exists(directory):
- shutil.rmtree(directory)
- os.makedirs("zips", exist_ok=True)
- os.makedirs("unzips", exist_ok=True)
- zipfile = model + '.zip'
- zipfile_path = './zips/' + zipfile
- try:
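-        # pick a downloader based on the URL: gdown for Google Drive, the Mega client for mega.nz, plain wget otherwise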
- if "drive.google.com" in url:
- subprocess.run(["gdown", url, "--fuzzy", "-O", zipfile_path])
- elif "mega.nz" in url:
- m = Mega()
- m.download_url(url, './zips')
- else:
- subprocess.run(["wget", url, "-O", zipfile_path])
- for filename in os.listdir("./zips"):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join("./zips/",filename)
- shutil.unpack_archive(zipfile_path, "./unzips", 'zip')
- else:
- return "No zipfile found."
- for root, dirs, files in os.walk('./unzips'):
- for file in files:
- file_path = os.path.join(root, file)
- if file.endswith(".index"):
- os.mkdir(f'./logs/{model}')
- shutil.copy2(file_path,f'./logs/{model}')
- elif "G_" not in file and "D_" not in file and file.endswith(".pth"):
- shutil.copy(file_path,f'./weights/{model}.pth')
- shutil.rmtree("zips")
- shutil.rmtree("unzips")
- return "Model imported, you can go back to the inference page!"
- except:
- return "An error occurred, try reinporting or using a different name, if not use a different model."
-def success_message(face):
- return f'{face.name} has been uploaded.', 'None'
-def mouth(size, face, voice, faces):
- if size == 'Half':
- size = 2
- else:
- size = 1
- if faces == 'None':
- character = face.name
- else:
- if faces == 'Ben Shapiro':
- character = '/content/wav2lip-HD/inputs/ben-shapiro-10.mp4'
- elif faces == 'Andrew Tate':
- character = '/content/wav2lip-HD/inputs/tate-7.mp4'
- command = "python inference.py " \
- "--checkpoint_path checkpoints/wav2lip.pth " \
- f"--face {character} " \
- f"--audio {voice} " \
- "--pads 0 20 0 0 " \
- "--outfile /content/wav2lip-HD/outputs/result.mp4 " \
- "--fps 24 " \
- f"--resize_factor {size}"
- process = subprocess.Popen(command, shell=True, cwd='/content/wav2lip-HD/Wav2Lip-master')
- stdout, stderr = process.communicate()
- return '/content/wav2lip-HD/outputs/result.mp4', 'Animation completed.'
-eleven_voices = ['Adam','Antoni','Josh','Arnold','Sam','Bella','Rachel','Domi','Elli']
-eleven_voices_ids=['pNInz6obpgDQGcFmaJgB','ErXwobaYiN019PkySvjV','TxGEqnHWrfWFTfGW9XjX','VR6AewLTigWG4xSOukaG','yoZ06aMxZJJ28mfd3POQ','EXAVITQu4vr4xnSDxMaL','21m00Tcm4TlvDq8ikWAM','AZnzlk1XvdvUeBnXmlld','MF3mGyEYCl7XYWbV9V6O']
-chosen_voice = dict(zip(eleven_voices, eleven_voices_ids))
-
-def stoptraining(mim):
- if int(mim) == 1:
- try:
- CSVutil('csvdb/stop.csv', 'w+', 'stop', 'True')
- os.kill(PID, signal.SIGTERM)
- except Exception as e:
- print(f"Couldn't click due to {e}")
- return (
- {"visible": False, "__type__": "update"},
- {"visible": True, "__type__": "update"},
- )
-
-
-def elevenTTS(xiapi, text, id, lang):
-    if xiapi != '' and id != '':
- choice = chosen_voice[id]
- CHUNK_SIZE = 1024
- url = f"https://api.elevenlabs.io/v1/text-to-speech/{choice}"
- headers = {
- "Accept": "audio/mpeg",
- "Content-Type": "application/json",
- "xi-api-key": xiapi
- }
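-        # English uses ElevenLabs' monolingual model; any other language falls back to the multilingual model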
- if lang == 'en':
- data = {
- "text": text,
- "model_id": "eleven_monolingual_v1",
- "voice_settings": {
- "stability": 0.5,
- "similarity_boost": 0.5
- }
- }
- else:
- data = {
- "text": text,
- "model_id": "eleven_multilingual_v1",
- "voice_settings": {
- "stability": 0.5,
- "similarity_boost": 0.5
- }
- }
-
- response = requests.post(url, json=data, headers=headers)
- with open('./temp_eleven.mp3', 'wb') as f:
- for chunk in response.iter_content(chunk_size=CHUNK_SIZE):
- if chunk:
- f.write(chunk)
- aud_path = save_to_wav('./temp_eleven.mp3')
- return aud_path, aud_path
- else:
- tts = gTTS(text, lang=lang)
- tts.save('./temp_gTTS.mp3')
- aud_path = save_to_wav('./temp_gTTS.mp3')
- return aud_path, aud_path
-
-def upload_to_dataset(files, dir):
- if dir == '':
- dir = './dataset'
- if not os.path.exists(dir):
- os.makedirs(dir)
- count = 0
- for file in files:
- path=file.name
- shutil.copy2(path,dir)
- count += 1
- return f' {count} files uploaded to {dir}.'
-
-def zip_downloader(model):
- if not os.path.exists(f'./weights/{model}.pth'):
- return {"__type__": "update"}, f'Make sure the Voice Name is correct. I could not find {model}.pth'
- index_found = False
- for file in os.listdir(f'./logs/{model}'):
- if file.endswith('.index') and 'added' in file:
- log_file = file
- index_found = True
- if index_found:
- return [f'./weights/{model}.pth', f'./logs/{model}/{log_file}'], "Done"
- else:
- return f'./weights/{model}.pth', "Could not find Index file."
-
-with gr.Blocks(theme=gr.themes.Base(), title='RVC Noobie 👶') as app:
- with gr.Tabs():
- with gr.TabItem("Inference"):
- gr.HTML("
RVC Noobie 👶
")
- gr.HTML(" Voice models can be found here on HuggingFace, on weights.gg or AI Hub: https://discord.gg/aihub ")
- gr.HTML("
Fork of the Huggingface port by Ilaria of the Rejekt Easy GUI which THAT is a fork of RVC v2 which THAT is a fork of RVC, dang thats a lot.
-              Please use this prompt template to get the desired generation results:
-
-              Prompt:
-              RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
-
-              Example: RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
-
-              Negative Prompt:
-              (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck
-
-              Have Fun & Enjoy
-              //THAFX
- {"" if prefix else ""}
-
- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🔥"}.
-
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- with gr.Row():
- prompt = gr.Textbox(label="Prompt", show_label=False,max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False)
- generate = gr.Button(value="Generate").style(rounded=(False, True, True, False))
-
- image_out = gr.Image(height=512)
- error_output = gr.Markdown()
-
- with gr.Column(scale=45):
- with gr.Tab("Options"):
- with gr.Group():
- neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image")
- auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (RAW photo,)", value=prefix, visible=prefix)
-
- with gr.Row():
- guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15)
- steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1)
-
- with gr.Row():
- width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8)
- height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8)
-
- seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
- with gr.Tab("Image to image"):
- with gr.Group():
- image = gr.Image(label="Image", height=256, tool="editor", type="pil")
- strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5)
-
- auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False)
-
- inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix]
- outputs = [image_out, error_output]
- prompt.submit(inference, inputs=inputs, outputs=outputs)
- generate.click(inference, inputs=inputs, outputs=outputs)
-
-
-
-demo.queue(concurrency_count=1)
-demo.launch()
diff --git a/spaces/TheWolf/DreamlikeArt-Diffusion-1.0/app.py b/spaces/TheWolf/DreamlikeArt-Diffusion-1.0/app.py
deleted file mode 100644
index 4f2b558d28de6ca42f9701b31ed4b7aeb61119cd..0000000000000000000000000000000000000000
--- a/spaces/TheWolf/DreamlikeArt-Diffusion-1.0/app.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import gradio as gr
-import os
-import sys
-from pathlib import Path
-import random
-import string
-import time
-from queue import Queue
-queue = Queue()
-
-text_gen=gr.Interface.load("spaces/Omnibus/MagicPrompt-Stable-Diffusion")
-proc5=gr.Interface.load("models/dreamlike-art/dreamlike-diffusion-1.0")
-
-
-import random
-
-def add_random_noise(prompt, noise_level=0.07):
-    noise_level = 0.01  # hard-coded: overrides whatever noise_level was passed in
- # Get the percentage of characters to add as noise
- percentage_noise = noise_level * 5
- # Get the number of characters to add as noise
- num_noise_chars = int(len(prompt) * (percentage_noise/100))
- # Get the indices of the characters to add noise to
- noise_indices = random.sample(range(len(prompt)), num_noise_chars)
- # Add noise to the selected characters
- prompt_list = list(prompt)
- for index in noise_indices:
- prompt_list[index] = random.choice(string.ascii_letters + string.punctuation)
- return "".join(prompt_list)
-
-queue_length_counter = 0
-
-def send_it8(inputs, noise_level, proc5=proc5):
- global queue_length_counter
- prompt_list = list(inputs)
- prompt_with_noise = "".join(prompt_list)
- if queue_length_counter >= 15:
- if not queue.empty():
- queue.queue.clear()
- queue_length_counter = 0
- output8 = proc5(prompt_with_noise)
- queue_length_counter += 1
- time.sleep(3)
- return output8
- time.sleep(1)
-
-
-
-def get_prompts(prompt_text):
- global queue_length_counter
- if queue_length_counter >= 15:
- if not queue.empty():
- queue.queue.clear()
- queue_length_counter = 0
- output = text_gen(prompt_text)
- queue_length_counter += 1
- time.sleep(3)
- return output
- time.sleep(1)
-
-
-with gr.Blocks() as myface:
- with gr.Row():
-
- input_text=gr.Textbox(label="Short Prompt")
- see_prompts=gr.Button("Magic Prompt")
- with gr.Row():
-
- prompt=gr.Textbox(label="Enter Prompt")
- noise_level=gr.Slider(minimum=0.1, maximum=3, step=0.1, label="Noise Level: Controls how much randomness is added to the input before it is sent to the model. Higher noise level produces more diverse outputs, while lower noise level produces similar outputs.")
- run=gr.Button("Generate")
-
- with gr.Row():
- output8=gr.Image(label="Dreamlike Diffusion 1.0")
-
-
- run.click(send_it8, inputs=[prompt, noise_level], outputs=[output8],api_name="predict")
- see_prompts.click(get_prompts, inputs=[input_text], outputs=[prompt])
-
-
-
-myface.queue(concurrency_count=8)
-myface.launch(enable_queue=True, inline=True)
-while True:
- if queue.qsize() >= 20:
- queue.queue.clear()
- time.sleep(30)
diff --git a/spaces/Tirendaz/background-remover/README.md b/spaces/Tirendaz/background-remover/README.md
deleted file mode 100644
index a3cff016e68e3e9c3fb88a9f20ab72901e761f39..0000000000000000000000000000000000000000
--- a/spaces/Tirendaz/background-remover/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Background Remover
-emoji: 🏢
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Tirendaz/pytorch_cat_vs_dog/app.py b/spaces/Tirendaz/pytorch_cat_vs_dog/app.py
deleted file mode 100644
index 27321cae76a30f0b1e0031bf83634864517c994a..0000000000000000000000000000000000000000
--- a/spaces/Tirendaz/pytorch_cat_vs_dog/app.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch import nn
-from torchvision import transforms
-import gradio as gr
-
-title = "PyTorch Cat vs Dog"
-description = "Classifying cats and dogs with Pytorch"
-article = "
", getHTML(var_dict,""), None, var_dict
-
-##### PREDICTION TEXT HTML
-def getHTML(var_dict,text,win=0):
- ### Which parts of the sentence have been guessed?
- guessed, not_guessed = "", ""
- text_words = text.split(" ")
- target_words = var_dict["target_sentence"].split(" ")
- for i,word in enumerate(text_words):
- if i < len(target_words) and word == target_words[i]: guessed += word + " "
- else: not_guessed += word + " "
- ### Display prediction
- if win!=1:
- html = "
"
-
-##### DRAWING PROCESSING & GAME STATE UPDATE
-def process_img(var_dict,img,title):
- # Makes sure that start_time is updates for the first game
- if var_dict["start_time"] == -1:
- var_dict["start_time"] = time.time()
- if (None is img):
- return getHTML(var_dict,"",win=0),"
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Autodata 3.39 Srpski Download Tp How to Get the Latest Version of the Car Diagnostic Program for Free.md b/spaces/bioriAsaeru/text-to-voice/Autodata 3.39 Srpski Download Tp How to Get the Latest Version of the Car Diagnostic Program for Free.md
deleted file mode 100644
index d63036e0c4a92d6b22ed4e7e01c603e73ae74acd..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Autodata 3.39 Srpski Download Tp How to Get the Latest Version of the Car Diagnostic Program for Free.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/Gaussian 09 Rev D 01 Em64t 43 A Comprehensive Guide to the Latest Release of the Quantum Chemistry Software.md b/spaces/bioriAsaeru/text-to-voice/Gaussian 09 Rev D 01 Em64t 43 A Comprehensive Guide to the Latest Release of the Quantum Chemistry Software.md
deleted file mode 100644
index a8bc298d382edeb5f67d7dc489fb3678602aa2d3..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Gaussian 09 Rev D 01 Em64t 43 A Comprehensive Guide to the Latest Release of the Quantum Chemistry Software.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- """,
- unsafe_allow_html=True,
-)
-
-mode = 'Models'
-
-if mode == 'Models':
- model = st.sidebar.selectbox(
- 'Select Model',
- list(models.keys()))
- masking_level = st.sidebar.selectbox('Masking Level:', ['Tokens', 'SubWords'])
- n_res = st.sidebar.number_input(
- 'Number Of Results',
- format='%d',
- value=5,
- min_value=1,
- max_value=100)
-
- model_tags = model.split('-')
- model_tags[0] = 'Model:' + model_tags[0]
-
- st.markdown(''.join([f'{tag}' for tag in model_tags]),unsafe_allow_html=True)
- st.markdown('___')
-
- unmasker, tokenize = load_model(model)
-
- input_text = st.text_input('Insert text you want to mask', '')
- if input_text:
- input_masked = None
- tokenized = tokenize(input_text)
- ids = tokenized['input_ids'].tolist()[0]
- subwords = unmasker.tokenizer.convert_ids_to_tokens(ids)
-
- if masking_level == 'Tokens':
- tokens = str(input_text).split()
- mask_idx = st.selectbox('Select token to mask:', [None] + list(range(len(tokens))), format_func=lambda i: tokens[i] if i else '')
- if mask_idx is not None:
- input_masked = ' '.join(token if i != mask_idx else '[MASK]' for i, token in enumerate(tokens))
- display_input = input_masked
- if masking_level == 'SubWords':
- tokens = subwords
- idx = st.selectbox('Select token to mask:', list(range(0,len(tokens)-1)), format_func=lambda i: tokens[i] if i else '')
- tokenized['input_ids'][0][idx] = unmasker.tokenizer.mask_token_id
- ids = tokenized['input_ids'].tolist()[0]
- display_input = ' '.join(unmasker.tokenizer.convert_ids_to_tokens(ids[1:-1]))
- if idx:
- input_masked = tokenized
-
- if input_masked:
- st.markdown('#### Input:')
- ids = tokenized['input_ids'].tolist()[0]
- subwords = unmasker.tokenizer.convert_ids_to_tokens(ids)
- st.markdown(f'
{display_input}
',
- unsafe_allow_html=True,
- )
- st.markdown('#### Outputs:')
- with st.spinner(f'Running {model_tags[0]} (may take a minute)...'):
- res = unmasker(input_masked, tokenized=masking_level == 'SubWords', top_k=n_res)
- if res:
- res = [{'Prediction':r['token_str'], 'Completed Sentence':r['sequence'].replace('[SEP]', '').replace('[CLS]', ''), 'Score':r['score']} for r in res]
- res_table = pd.DataFrame(res)
- st.table(res_table)
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/layers/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/layers/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/camileLDJ/allenai-cosmo-xl/app.py b/spaces/camileLDJ/allenai-cosmo-xl/app.py
deleted file mode 100644
index 8f9e4d818efbdb1722b979233262ac1bba876e73..0000000000000000000000000000000000000000
--- a/spaces/camileLDJ/allenai-cosmo-xl/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/allenai/cosmo-xl").launch()
\ No newline at end of file
diff --git a/spaces/cariai/somos-alpaca-es/README.md b/spaces/cariai/somos-alpaca-es/README.md
deleted file mode 100644
index e1f0b0bbf84a12842f95243dcc1016c18ab0fa48..0000000000000000000000000000000000000000
--- a/spaces/cariai/somos-alpaca-es/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Argilla Space Template
-emoji: 🏷️
-colorFrom: purple
-colorTo: red
-sdk: docker
-app_port: 6900
-fullWidth: true
-tags:
-- argilla
-duplicated_from: argilla/argilla-template-space
----
-
-This is the Argilla Space Template you can use to deploy and run your own instance of Argilla on the Hugging Face Hub, for labeling, fun, and active learning loops!
-
-Login with:
-
-user: argilla
-password: 1234
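-
-Once the Space is running, you can also point the Argilla Python client at it; a minimal sketch (the Space URL and API key below are placeholders, not values defined by this template):
-
-```python
-import argilla as rg
-
-# Hypothetical values: replace with your own Space URL and the API key configured for it
-# (the user/password above are only for the web UI login).
-rg.init(
-    api_url="https://YOUR-USERNAME-YOUR-SPACE.hf.space",
-    api_key="YOUR_API_KEY",
-)
-
-# From here you can push or pull datasets, e.g. rg.log(...) / rg.load("my-dataset").
-```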
\ No newline at end of file
diff --git a/spaces/cccc-c/web-ui-pub/_next/static/chunks/757de1a6.cd4299fbf5be8e3c.js b/spaces/cccc-c/web-ui-pub/_next/static/chunks/757de1a6.cd4299fbf5be8e3c.js
deleted file mode 100644
index c755934c21396fa0e8c7a365d438a544aa8b1592..0000000000000000000000000000000000000000
--- a/spaces/cccc-c/web-ui-pub/_next/static/chunks/757de1a6.cd4299fbf5be8e3c.js
+++ /dev/null
@@ -1 +0,0 @@
-"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[121],{25372:function(t,n,r){r.d(n,{VQF:function(){return i},mcF:function(){return o}});var e=r(83270);function i(t){return(0,e.w_)({tag:"svg",attr:{viewBox:"0 0 512 512"},child:[{tag:"path",attr:{fill:"none",strokeLinecap:"square",strokeMiterlimit:"10",strokeWidth:"44",d:"M416 128L192 384l-96-96"}}]})(t)}function o(t){return(0,e.w_)({tag:"svg",attr:{viewBox:"0 0 512 512"},child:[{tag:"rect",attr:{width:"336",height:"336",x:"128",y:"128",fill:"none",strokeLinejoin:"round",strokeWidth:"32",rx:"57",ry:"57"}},{tag:"path",attr:{fill:"none",strokeLinecap:"round",strokeLinejoin:"round",strokeWidth:"32",d:"M383.5 128l.5-24a56.16 56.16 0 00-56-56H112a64.19 64.19 0 00-64 64v216a56.16 56.16 0 0056 56h24"}}]})(t)}}}]);
\ No newline at end of file
diff --git a/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/LoggingHandler.py b/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/LoggingHandler.py
deleted file mode 100644
index 8f73660b57a3dfbf7dae0173f8d7af3a7e752112..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/music/representation_learning/sentence_transfo/sentence_transformers/LoggingHandler.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import logging
-import tqdm
-
-class LoggingHandler(logging.Handler):
- def __init__(self, level=logging.NOTSET):
- super().__init__(level)
-
- def emit(self, record):
- try:
- msg = self.format(record)
- tqdm.tqdm.write(msg)
- self.flush()
- except (KeyboardInterrupt, SystemExit):
- raise
- except:
- self.handleError(record)
-
-
-def install_logger(
- given_logger, level = logging.WARNING, fmt="%(levelname)s:%(name)s:%(message)s"
-):
- """ Configures the given logger; format, logging level, style, etc """
- import coloredlogs
-
- def add_notice_log_level():
- """ Creates a new 'notice' logging level """
- # inspired by:
- # https://stackoverflow.com/questions/2183233/how-to-add-a-custom-loglevel-to-pythons-logging-facility
- NOTICE_LEVEL_NUM = 25
- logging.addLevelName(NOTICE_LEVEL_NUM, "NOTICE")
-
- def notice(self, message, *args, **kws):
- if self.isEnabledFor(NOTICE_LEVEL_NUM):
- self._log(NOTICE_LEVEL_NUM, message, args, **kws)
-
- logging.Logger.notice = notice
-
- # Add an extra logging level above INFO and below WARNING
- add_notice_log_level()
-
- # More style info at:
- # https://coloredlogs.readthedocs.io/en/latest/api.html
- field_styles = coloredlogs.DEFAULT_FIELD_STYLES.copy()
- field_styles["asctime"] = {}
- level_styles = coloredlogs.DEFAULT_LEVEL_STYLES.copy()
- level_styles["debug"] = {"color": "white", "faint": True}
- level_styles["notice"] = {"color": "cyan", "bold": True}
-
- coloredlogs.install(
- logger=given_logger,
- level=level,
- use_chroot=False,
- fmt=fmt,
- level_styles=level_styles,
- field_styles=field_styles,
- )
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/python/openvino_inference.py b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/python/openvino_inference.py
deleted file mode 100644
index 00952880043c8b24c738324ee3f527aca7774f75..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/OpenVINO/python/openvino_inference.py
+++ /dev/null
@@ -1,156 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (C) 2018-2021 Intel Corporation
-# SPDX-License-Identifier: Apache-2.0
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import argparse
-import logging as log
-import os
-import sys
-
-import cv2
-import numpy as np
-
-from openvino.inference_engine import IECore
-
-from yolox.data.data_augment import preproc as preprocess
-from yolox.data.datasets import COCO_CLASSES
-from yolox.utils import mkdir, multiclass_nms, demo_postprocess, vis
-
-
-def parse_args() -> argparse.Namespace:
- """Parse and return command line arguments"""
- parser = argparse.ArgumentParser(add_help=False)
- args = parser.add_argument_group('Options')
- args.add_argument(
- '-h',
- '--help',
- action='help',
- help='Show this help message and exit.')
- args.add_argument(
- '-m',
- '--model',
- required=True,
- type=str,
- help='Required. Path to an .xml or .onnx file with a trained model.')
- args.add_argument(
- '-i',
- '--input',
- required=True,
- type=str,
- help='Required. Path to an image file.')
- args.add_argument(
- '-o',
- '--output_dir',
- type=str,
- default='demo_output',
- help='Path to your output dir.')
- args.add_argument(
- '-s',
- '--score_thr',
- type=float,
- default=0.3,
- help="Score threshould to visualize the result.")
- args.add_argument(
- '-d',
- '--device',
- default='CPU',
- type=str,
- help='Optional. Specify the target device to infer on; CPU, GPU, \
- MYRIAD, HDDL or HETERO: is acceptable. The sample will look \
- for a suitable plugin for device specified. Default value \
- is CPU.')
- args.add_argument(
- '--labels',
- default=None,
- type=str,
-        help='Optional. Path to a labels mapping file.')
- args.add_argument(
- '-nt',
- '--number_top',
- default=10,
- type=int,
- help='Optional. Number of top results.')
- return parser.parse_args()
-
-
-def main():
- log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)
- args = parse_args()
-
- # ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
- log.info('Creating Inference Engine')
- ie = IECore()
-
- # ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------
- log.info(f'Reading the network: {args.model}')
- # (.xml and .bin files) or (.onnx file)
- net = ie.read_network(model=args.model)
-
- if len(net.input_info) != 1:
- log.error('Sample supports only single input topologies')
- return -1
- if len(net.outputs) != 1:
- log.error('Sample supports only single output topologies')
- return -1
-
- # ---------------------------Step 3. Configure input & output----------------------------------------------------------
- log.info('Configuring input and output blobs')
- # Get names of input and output blobs
- input_blob = next(iter(net.input_info))
- out_blob = next(iter(net.outputs))
-
- # Set input and output precision manually
- net.input_info[input_blob].precision = 'FP32'
- net.outputs[out_blob].precision = 'FP16'
-
- # Get a number of classes recognized by a model
- num_of_classes = max(net.outputs[out_blob].shape)
-
- # ---------------------------Step 4. Loading model to the device-------------------------------------------------------
- log.info('Loading the model to the plugin')
- exec_net = ie.load_network(network=net, device_name=args.device)
-
- # ---------------------------Step 5. Create infer request--------------------------------------------------------------
- # load_network() method of the IECore class with a specified number of requests (default 1) returns an ExecutableNetwork
- # instance which stores infer requests. So you already created Infer requests in the previous step.
-
- # ---------------------------Step 6. Prepare input---------------------------------------------------------------------
- origin_img = cv2.imread(args.input)
- _, _, h, w = net.input_info[input_blob].input_data.shape
- image, ratio = preprocess(origin_img, (h, w))
-
- # ---------------------------Step 7. Do inference----------------------------------------------------------------------
- log.info('Starting inference in synchronous mode')
- res = exec_net.infer(inputs={input_blob: image})
-
- # ---------------------------Step 8. Process output--------------------------------------------------------------------
- res = res[out_blob]
-
- predictions = demo_postprocess(res, (h, w))[0]
-
- boxes = predictions[:, :4]
- scores = predictions[:, 4, None] * predictions[:, 5:]
-
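-    # convert (center_x, center_y, w, h) predictions to (x1, y1, x2, y2) corners, then undo the preprocessing resize via ratio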
- boxes_xyxy = np.ones_like(boxes)
- boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2]/2.
- boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3]/2.
- boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2]/2.
- boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3]/2.
- boxes_xyxy /= ratio
- dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.45, score_thr=0.1)
-
- if dets is not None:
- final_boxes = dets[:, :4]
- final_scores, final_cls_inds = dets[:, 4], dets[:, 5]
- origin_img = vis(origin_img, final_boxes, final_scores, final_cls_inds,
- conf=args.score_thr, class_names=COCO_CLASSES)
-
- mkdir(args.output_dir)
- output_path = os.path.join(args.output_dir, os.path.basename(args.input))
- cv2.imwrite(output_path, origin_img)
-
-
-if __name__ == '__main__':
- sys.exit(main())
diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/train/__init__.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/train/__init__.py
deleted file mode 100644
index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/open_flamingo/train/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/spaces/chendl/compositional_test/multimodal/tools/prepare_vg_with_box2.py b/spaces/chendl/compositional_test/multimodal/tools/prepare_vg_with_box2.py
deleted file mode 100644
index 2a96c40b32944d0f9e8da4c2205a446f6fc6d92f..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/tools/prepare_vg_with_box2.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import webdataset as wds
-import glob
-import os
-from tqdm import tqdm
-import orjson as json
-import itertools
-from PIL import Image
-import numpy as np
-from typing import List
-import cv2
-import random
-
-class Generator():
- def __init__(self, dataset_name):
- self.dataset_name = dataset_name
- self.is_end = False
-
-class CC3MGenerator(Generator):
- def __init__(self, root: str, dataset_name="cc3m"):
- super().__init__(dataset_name=dataset_name)
- self.tars = glob.glob(os.path.join(root, "cc3m_*", "*.tar"))
-
- def __len__(self):
- return 3000000
-
- def __iter__(self):
- for tar in self.tars:
- dataset = wds.WebDataset(tar).decode("pilrgb").to_tuple("jpg;png;jpeg", "txt")
- for data in dataset:
- yield [self.dataset_name] + list(data)
- self.is_end = True
-
-class CC12MGenerator(CC3MGenerator):
- def __init__(self, root: str):
- super().__init__(root, "cc12m")
- self.tars = glob.glob(os.path.join(root, "*.tar"))
-
- def __len__(self):
- return 12000000
-
-class COCOGenerator(Generator):
- def __init__(self, anno: str, image_dir):
- super().__init__(dataset_name="coco")
- data = json.loads(open(anno).read())
- self.annotations = data["annotations"]
- self.image_id_to_filename = {}
- for image in data["images"]:
- file_name = image["file_name"]
- image_id = image["id"]
- self.image_id_to_filename[image_id] = os.path.join(image_dir, file_name)
-
- def __len__(self):
- return len(self.annotations)
-
- def __iter__(self):
- for anno in self.annotations:
- image_id = anno["image_id"]
- caption = anno["caption"]
- try:
- image = Image.open(self.image_id_to_filename[image_id])
- except:
- continue
- yield [self.dataset_name, image, caption]
- self.is_end = True
-
-
-class KarpathyCOCOGenerator(Generator):
- def __init__(self, anno="/gpfs/u/home/LMCG/LMCGljnn/scratch/code/multimodal/tools/coco_karpathy_train.json", image_dir="/gpfs/u/home/LMCG/LMCGljnn/scratch/.cache/lavis/coco/images"):
- super().__init__(dataset_name="coco")
- data = json.loads(open(anno).read())
- self.annotations = data
- self.image_id_to_filename = {}
- for d in data:
- self.image_id_to_filename[d["image_id"]] = os.path.join(image_dir, d["image"])
-
- def __len__(self):
- return len(self.annotations)
-
- def __iter__(self):
- for anno in self.annotations:
- image_id = anno["image_id"]
- caption = anno["caption"]
-            try:
-                image = Image.open(self.image_id_to_filename[image_id])
-            except:
-                print(self.image_id_to_filename[image_id])
-                continue
- yield [self.dataset_name, image, caption]
- self.is_end = True
-
-
-class VisualGenomeGenerator(Generator):
- def __init__(self, root: str):
- super().__init__(dataset_name="vg")
- data = json.loads(open(os.path.join(root, "region_descriptions.json")).read())
- image_data = json.loads(open(os.path.join(root, "image_data.json")).read())
- self.image_id_to_filename = {}
- self.image_id_to_wh = {}
- for image in image_data:
- image_id = image["image_id"]
- subfolder, filename = image['url'].split("/")[-2:]
- self.image_id_to_filename[image_id] = os.path.join(root, subfolder, filename)
- self.image_id_to_wh[image_id] = (image["width"], image["height"])
- self.regions = []
- total = 0
- total_image = 0
- used_image = 0
- for xx in data:
- total_image += 1
- flag = False
- for region in xx['regions']:
- total += 1
- region_w = int(region["width"])
- region_h = int(region["height"])
- x = int(region["x"])
- y = int(region["y"])
- image_w = self.image_id_to_wh[region["image_id"]][0]
- image_h = self.image_id_to_wh[region["image_id"]][1]
- region_w /= image_w
- region_h /= image_h
- x /= image_w
- y /= image_h
- if region_w * region_h < 0.1:
- continue
- if " is" in region["phrase"] or " are" in region["phrase"]:
- continue
- region["norm_xywh"] = (x, y, region_w, region_h)
- self.regions.append(region)
- flag = True
- if flag:
- used_image += 1
- random.shuffle(self.regions)
- print("valid region", len(self.regions), total, len(self.regions) / total)
- print("valid image", used_image, total_image, used_image / total_image)
-
- def __len__(self):
- return len(self.regions)
-
- def __iter__(self):
- for region in self.regions:
- image_id = region["image_id"]
- phrase = region["phrase"]
- try:
- image = Image.open(self.image_id_to_filename[image_id])
- except:
- continue
- image = image.resize((224, 224))
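-            # map the normalized region (x, y, w, h) to pixel coordinates on the 224x224 resized image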
- x, y, region_w, region_h = region["norm_xywh"]
- x1 = int(x * 224)
- y1 = int(y * 224)
- x2 = int(x1 + region_w * 224)
- y2 = int(y1 + region_h * 224)
- # open_cv_image = np.array(image)
- # # Convert RGB to BGR
- # open_cv_image = open_cv_image[:, :, ::-1].copy()
- # open_cv_image = cv2.rectangle(open_cv_image, (x1, y1), (x2, y2), (255, 0, 0), 2)
- # cv2.imwrite("vg.jpg", open_cv_image)
- # import pdb; pdb.set_trace()
- yield [self.dataset_name, image, phrase, np.array([x1, y1, x2, y2]), image_id]
- self.is_end = True
-
-class ShuffleGenerator():
- def __init__(self, generators: List[Generator], p: List[int]):
- self.generators = generators
- self.p = list(np.array(p) / sum(p))
- self.ids = list(range(len(self.generators)))
- print("rebalance", self.ids, self.p)
-
- def __len__(self):
- return sum([len(g) for g in self.generators])
-
- def __iter__(self):
- while True:
- if len(self.ids) == 0:
- break
- id = np.random.choice(self.ids, p=self.p)
- gen = self.generators[id]
- if gen.is_end:
- print(gen.dataset_name, "is all done")
- del self.ids[id]
- del self.p[id]
- self.p = list(np.array(self.p) / sum(p))
- print("rebalance", self.ids, self.p)
- else:
- return iter(gen)
-
-
-if __name__ == "__main__":
- OUT_DIR = "/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/vg_withBox_wds"
- os.makedirs(OUT_DIR, exist_ok=True)
- # cc3m_generator = CC3MGenerator("/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/cc3m")
- # cc12m_generator = CC12MGenerator("/gpfs/u/home/LMCG/LMCGljnn/scratch-shared/junyan/raw/cc12m/tars")
- # coco_generator = KarpathyCOCOGenerator()
- visual_genome_generator = VisualGenomeGenerator("/gpfs/u/home/LMCG/LMCGljnn/scratch/datasets/raw/vg")
- # generators = [cc3m_generator, cc12m_generator, coco_generator, visual_genome_generator]
- # p = [len(generator) for generator in generators]
- # dataset = ShuffleGenerator(generators, p)
-
- with wds.ShardWriter(os.path.join(OUT_DIR, "%05d.tar"), maxcount=8500) as sink:
- sink.verbose = False
- pbar = tqdm(visual_genome_generator)
- for i, data in enumerate(pbar):
- dataset_name, image, caption, xyxy, image_id = data
- sink.write({"__key__": f"{dataset_name}_{i}_containBox", "jpg": image, "txt": caption, "xyxy.pyd": xyxy})
- if i % 200 == 0:
- tqdm.write(f"{caption} {xyxy}")
diff --git a/spaces/chinhon/fake_tweet_detector/app.py b/spaces/chinhon/fake_tweet_detector/app.py
deleted file mode 100644
index ebcdfe2626ecaafeea9dabd07483724ceedc33cd..0000000000000000000000000000000000000000
--- a/spaces/chinhon/fake_tweet_detector/app.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import gradio as gr
-import numpy as np
-import pandas as pd
-import re
-import shap
-
-from transformers import (
- AutoTokenizer,
- AutoModelForSequenceClassification,
- TextClassificationPipeline,
-)
-
-tokenizer = AutoTokenizer.from_pretrained("chinhon/fake_tweet_detect")
-
-model = AutoModelForSequenceClassification.from_pretrained("chinhon/fake_tweet_detect")
-
-tweet_detector = TextClassificationPipeline(model=model, tokenizer=tokenizer)
-
-# tweak the extent of text cleaning as you wish
-def clean_text(text):
- text = re.sub(r"http\S+", "", text)
- text = re.sub(r"\n", " ", text)
- text = re.sub(r"\'t", " not", text) # Change 't to 'not'
- text = re.sub(r"(@.*?)[\s]", " ", text) # Remove @name
- text = re.sub(r"$\d+\W+|\b\d+\b|\W+\d+$", " ", text) # remove digits
- text = re.sub(r"[^\w\s\#]", "", text) # remove special characters except hashtags
- text = text.strip(" ")
- text = re.sub(
- " +", " ", text
- ).strip() # get rid of multiple spaces and replace with a single
- return text
-
-def tweet_detect(text):
- data = [clean_text(text)]
- prediction = tweet_detector(data)
-
- pred_label = [x.get("label") for x in prediction]
-
- if pred_label == ["LABEL_1"]:
- return "Fake Tweet"
- elif pred_label == ["LABEL_0"]:
- return "Real Tweet"
-
-#Define Gradio interface
-gradio_ui = gr.Interface(
- fn=tweet_detect,
- title="Detect Fake Tweets",
- description="Enter a tweet and see if a Distilbert model can identify if it was written by state-backed trolls. DISCLAIMER: While the model was fine tuned on 100k real and troll tweets, and achieved high accuracy in my tests, its performance drops significantly against the day-to-day barrage of content on Twitter. As such, this app is intended as an example for understanding the limits of AI/ML in highly complex problems like fake media detection, and not as a final arbiter of whether someone's tweet is real or not.",
- inputs=gr.Textbox(lines=10, label="Paste tweet text here [English Only]"),
- outputs=gr.Label(type="auto", label="Prediction"),
- interpretation="shap",
- article="Details of the fine tuning and tests are in this Medium post: https://bit.ly/3tueP36",
-)
-
-gradio_ui.launch(enable_queue=True)
diff --git a/spaces/cihyFjudo/fairness-paper-search/....md b/spaces/cihyFjudo/fairness-paper-search/....md
deleted file mode 100644
index 55cd3ef1e0eae8b7492c19020b57ec002307699d..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/....md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
- proffessional cracking of any kind of software (CAD,CAM,CAE,EDA,GIS,PCB,FEA,FEM,CNC,CFD,PDS,3D,Optics etc.) designed for any kind of operating systems(Windows 95/98/ME/2000/XP, Linux, FreeBSD, OS/2, MAC OS etc.) - producing keygens, licenses for different proctection systems (FlexLM, SentinelLM, ElanLM, CrypKey, etc.) - producing emulators for any kind of dongles (SentinelPro, SentinelSuperPro, Hasp4, Hardlock, WIBU, Guardant, etc.) - quality reverse engineering (decompilation of programs, algorithms reconstruction, etc.) - any other reverse engineering services... Also we have an excellent relationship with all our customers and we always happy to help them solve their problems. Our skills and experince will help you make a valuable conribution to your business. All soft we offer have been completely cracked and tested carefully by expert in corresponding field. All programs are full copies of originals (not demo, evaluation, trial or educational version) including English or multilanguage tutorial and crack or license file that makes program work as registered software from the house. Here is just a part of our software list.For getting the additional information, view our software list and ordering programs just visit our WEBSITE:
>> Contact e-mail: sup...@qualityprosoft.com or quality...@yahoo.comHere is an example of the most popular offered latest software: DOWNLOAD ALLDATA v9.20.1002 full cracked DOWNLOAD ALTAIR HYPERWORKS V8.0 SR1 full cracked DOWNLOAD Altera QUARTUS II DSP Builder v7.2 full cracked DOWNLOAD Altera QUARTUS II Megacore IP Library v7.2 full cracked DOWNLOAD Altera QUARTUS II The Nios II EDS v7.2 full cracked DOWNLOAD Altera QUARTUS II v7.2 full cracked DOWNLOAD Altera QUARTUS II v7.2 Modelsim v6.1g full cracked DOWNLOAD Ansoft Hfss v11 full cracked DOWNLOAD Ansoft Maxwell 2D & 3D 11.1 full cracked DOWNLOAD ANSYS FLUENT V6.3 full cracked DOWNLOAD ANSYS PRODUCTS V11 SP1 full cracked DOWNLOAD ANSYS WORKBENCH V10.0 full cracked DOWNLOAD ARCHICAD 11 full cracked DOWNLOAD ArcInfo Desktop v9.2 SP2 full cracked DOWNLOAD AUTODESK ALIAS STUDIO V2008 full cracked DOWNLOAD Autodesk AutoCAD 2008 full cracked DOWNLOAD Autodesk AutoCAD Architecture 2008 full cracked DOWNLOAD Autodesk AutoCAD Electrical 2008 full cracked DOWNLOAD Autodesk AutoCAD LT 2008 full cracked DOWNLOAD Autodesk AutoCAD Mechanical 2008 full cracked DOWNLOAD AUTODESK AUTOCAD RASTER DESIGN V2008 full cracked DOWNLOAD Autodesk Inventor Professional 2008 full cracked DOWNLOAD AUTODESK MAX 2008 full cracked DOWNLOAD AUTODESK REVIT MAP V2008 full cracked DOWNLOAD Autodsys IntelliCAD v6.3 Pro Plus Edition full cracked DOWNLOAD Autodesk VIZ 2008 full cracked DOWNLOAD Avanti Synopsys Hspice 2007.09 full cracked DOWNLOAD Avid NewsCutter XP v6.7.5 full cracked DOWNLOAD Avid Xpress Pro v5.7.5 full cracked DOWNLOAD AWR Design Environment v7.51.3650 full cracked DOWNLOAD BOBCAD-CAM AND BOBART V20.6 full cracked DOWNLOAD Bentley Civil Extension for GEOPAK v08.08.02.81 full cracked DOWNLOAD Bentley Microstation GEOPAK Survey XM Edition v08.09.04.37 full cracked DOWNLOAD Bentley PowerCivil v08.09.04.37 for Powerdraft XM full cracked DOWNLOAD Bentley PowerSurvey v08.09.04.37 for Powerdraft XM full cracked DOWNLOAD BMW ETK v8.0 2007 full cracked DOWNLOAD CABINET VISION SOLID V4.0 full cracked DOWNLOAD CAD TRANSLATORS FOR CRANES NISA V15.1 full cracked DOWNLOAD CADlink SignLab 7.0 Revision 1 full cracked DOWNLOAD Cadence Allegro PCB v16.0 full cracked DOWNLOAD Calyx Point 5.3 full cracked DOWNLOAD Cadence OrCad V16 PCB Designer Suite with PSpice full cracked DOWNLOAD CD-adapco Star-CCM Plus and Cad Series v2.10 full cracked DOWNLOAD Cimatron Elite 7.1 full cracked DOWNLOAD CHIEF ARCHITECT X1.V11 full cracked DOWNLOAD CNCKAD V8.5 full cracked DOWNLOAD COADE CADWORX PLANT PRO V2008 full cracked DOWNLOAD COADE CADWORX STEEL PRO V2008 full cracked DOWNLOAD Comsol Multiphysics v3.3a full cracked DOWNLOAD CSI ETABS 9.16 full cracked DOWNLOAD CSI SAP 2000 V11.07 full cracked DOWNLOAD CST Studio Suite 2008 full cracked DOWNLOAD Dassault Systemes Catia V5R18 Sp2 full cracked DOWNLOAD DASYLab v10.0 full cracked DOWNLOAD DataCAD v12 full cracked DOWNLOAD Discreet 3dS MAX 9.0 full cracked DOWNLOAD Dolphin SMASH v5.9.3 full cracked DOWNLOAD Dolphin SoC GDS v6.0.1 full cracked DOWNLOAD Dynaform.5.6 full cracked DOWNLOAD EDS I-DEAS NX V11 M4 full cracked DOWNLOAD EDS SOLID EDGE V19 full cracked DOWNLOAD ESI Procast v2007 full cracked DOWNLOAD ESRI ArcGIS Desktop V9.1 full cracked DOWNLOAD Fastrip 8.0 full cracked DOWNLOAD FEKO V5.2 FOR 32BIT&64BIT full cracked DOWNLOAD Fintronic Super Finsim v9.2.9 full cracked DOWNLOAD FileMaker Pro 8.5 R3 full cracked DOWNLOAD FLOMERICS FLOTHERM V7.1 full cracked DOWNLOAD FORMZ RADIOZITY 6.5 full cracked DOWNLOAD GC-powerstation 7.1.4 
full cracked DOWNLOAD GENESYS 2007.03 SP1 full cracked DOWNLOAD GEOMAGIC ESHELL V8 full cracked DOWNLOAD GEOMAGIC STUDIO V9 SR3 full cracked DOWNLOAD GibbsCAM 2007 v8.7.7 full cracked DOWNLOAD Graphisoft ArchiCAD v11 Hotfix 1114 full cracked DOWNLOAD Husqvarna Viking 4D Professional full cracked DOWNLOAD IMSI Turbo FLOORPLAN Home and Landscape Pro v12 full cracked DOWNLOAD IMSI TurboCAD Professional v14 incl Symbol Packs full cracked DOWNLOAD Intel Cluster Toolkit Compiler Edition v3.1 full cracked DOWNLOAD Intel VTune Performance Analyzer v9.0.030 full cracked DOWNLOAD INVENSYS SIMSCI PROII V8.1 full cracked DOWNLOAD IronCAD v10.0 full cracked DOWNLOAD Janome 10000 Digitizer V1.0C full cracked DOWNLOAD Janome Customizer 10000 V2.2A full cracked DOWNLOAD Jewelcad v5.12 full cracked DOWNLOAD LEAPSOFT CONBOX V7.0.1 full cracked DOWNLOAD LEAPSOFT CONSPAN RATING V7.0.1 full cracked DOWNLOAD LEAPSOFT CONSYS V1.3.0 full cracked DOWNLOAD LEAPSOFT GEOMATH V7.0.0 full cracked DOWNLOAD LEAPSOFT RC-PIER V7.0.0 full cracked DOWNLOAD MASTERCAM X2 MR2 V11.2.35 full cracked DOWNLOAD MathCAD v14.0 full cracked DOWNLOAD Mathworks Matlab R2007b full cracked DOWNLOAD Mentor Graphics Precision RTL Synthesis v2007 full cracked DOWNLOAD MOLDFLOW DESIGN LINK V5.2 full cracked DOWNLOAD MSC FEA AFEA V2006 R1 full cracked DOWNLOAD MSC MD ADAMS 2007 R2 full cracked DOWNLOAD MSC Patran v2007 R1B full cracked DOWNLOAD MSC SimDesigner for Catia v5R17 R2 full cracked DOWNLOAD MSC SIMOFFICE R2.1 full cracked DOWNLOAD NI LabVIEW Embedded Development Module v2.5 full cracked DOWNLOAD NI LabVIEW v8.5 DSC Module Run Time System full cracked DOWNLOAD NI LabVIEW v8.5 PDA Module full cracked DOWNLOAD NI LabVIEW with Embedded Support v8.5 full cracked DOWNLOAD NI LabWindows CVI FDS v8.5 full cracked DOWNLOAD Nihon Unisys Dynavista v7.7 full cracked DOWNLOAD Onyx PosterShop 6.0 full cracked DOWNLOAD OPENMIND HYPERMILL V9.6 full cracked DOWNLOAD Pointwise Gridgen v16.0 R2 full cracked DOWNLOAD PTC PRO ENGINEER WILDFIRE v3.0 M100 full cracked DOWNLOAD Schrodinger Suite 2007 full cracked DOWNLOAD SDS/2 Suite V7.025 (C) Data Design System full cracked DOWNLOAD SIEMENS NX 5.0.2.2 full cracked DOWNLOAD Siemens SIMATIC S7 SCL v5.3 SP2 full cracked DOWNLOAD Siemens Simatic WinCC Connectivity Pack v6.2 full cracked DOWNLOAD SmartCode VNC Manager Enterprise v3.6.24.0 full cracked DOWNLOAD SmartDraw 2008 full cracked DOWNLOAD SOFTPLAN V13.33 full cracked DOWNLOAD SolidWorks 2008 full cracked DOWNLOAD SolidWorks 2008 Office Premium full cracked DOWNLOAD SURFWARE SURFCAM VELOCITY III SP1 full cracked DOWNLOAD SOLIDWORKS V2008 SP1 X86 full cracked DOWNLOAD Synapticad Allproducts v12.10a full cracked DOWNLOAD SynaptiCAD AllProducts v12.12d full cracked DOWNLOAD SynaptiCAD Tool Suite v12.10a full cracked DOWNLOAD Synplicity Certify v9.0 full cracked DOWNLOAD Synplicity Identify Ver 2.5 full cracked DOWNLOAD Synplicity Synplify Premier Ver 9.0 full cracked DOWNLOAD TEKLA XSTEEL v12 Structures full cracked DOWNLOAD TEKLA XSTEEL v13 Structures full cracked DOWNLOAD TESSEL HYPERDOC V4.51 full cracked DOWNLOAD THINK3 THINKDESIGN THINKID V2007 1.49 full cracked DOWNLOAD Think3 ThinkiD DesignXpressions v2007.1.106.36 full cracked DOWNLOAD UNIGRAPHICS NX V5.0.1.4 64BIT full cracked DOWNLOAD UNIGRAPHICS NX5 V5.0.2 full cracked DOWNLOAD UNIGRAPHICS SOLID EDGE V20 full cracked DOWNLOAD VectorWorks v12.5.0 R1 full cracked DOWNLOAD Vero VISI Series v15.0 full cracked DOWNLOAD Wilcom ES 2006 full cracked DOWNLOAD Xilinx PlanAhead v9.2.5 full cracked DOWNLOAD 
Zuken Cadstar v9.0 full crackedThis is not our complete software list. If you want to get access to our Software archive you should visit our website: >> Contact e-mail: sup...@qualityprosoft.com or quality...@yahoo.com ------------------------------------------------------------------------ View this thread: =38736
-
We today possess 307,094 downloads in the associate section. Get the FileFixation now for more detailed information! The phrase 'keygen' means a little system that can produce a cd key, activation number, permit code, serial amount, or registration number for a item of software. KeyGen can be a shortened term for Key Generator. A keygen is usually made available through crack groups free to download. When composing a keygen, the author will identify the protocol used in generating a legitimate cd essential. As soon as the criteria is determined they can after that integrate this into thé keygen.
Your search expression for Advisor Graphics Modelsim Se 10.4 A64 will return more accurate download results if you exclude using keywords like: crack, code, download, hack, serial, keygen, etc.Numerous downloads like Coach Graphics Modelsim Se 10.4 Times64 may furthermore consist of a serial number, cd key or keygen. If this can be the situation then it's generally incorporated in the complete crack download save itself.If you are usually still getting trouble finding Mentor Images Modelsim Se 10.4 A64 after simplifying your lookup term after that we extremely recommend making use of the alternate complete download websites (linked above).
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Davy Crockett E I Pirati Del Fiume Full Free Download __LINK__.md b/spaces/cihyFjudo/fairness-paper-search/Davy Crockett E I Pirati Del Fiume Full Free Download __LINK__.md
deleted file mode 100644
index d7fc7422c1c10064ef3c2ef4824bb00d3054fde4..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Davy Crockett E I Pirati Del Fiume Full Free Download __LINK__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Davy Crockett e i pirati del fiume full free download
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Download The Latest Version Of Whatsapp Messenger For Nokia C3 And Stay Connected With Your Friends And Family.md b/spaces/cihyFjudo/fairness-paper-search/Download The Latest Version Of Whatsapp Messenger For Nokia C3 And Stay Connected With Your Friends And Family.md
deleted file mode 100644
index fc2fbd65d607b9ff79bd1e857780adf995925213..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Download The Latest Version Of Whatsapp Messenger For Nokia C3 And Stay Connected With Your Friends And Family.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
whatsapp for c3 00 expand the limits of your phone with this download. Today mobile apps and high demand, and mobile apps developer are in short working with free mobile app development software to provide easy-to-use apps and helping their users to have rich and engaging apps that can be available on any mobile phone. It has great importance and has been steadily growing. This gives tools for a developer to write, test and deploy applications into the target platform environment. Some try to make their apps available, and try to make them work similarly, on all platforms. It provides the resources that are needed to start building mobile applications for Smartphone and Pocket PC devices.
-
Still, even after the Offical Announcement, People want to download and install the mobile app temporarily. For those people, we have gathered the information from the internet and provided here. As it is collected from the internet we are not sure whether these Whatsapp trick will work 100%. Give a try if it works then well & good. Please remember that these solutions are temporary. If not please update to the latest cheap mobile phone that support WhatsApp.
-
Download The Latest Version Of Whatsapp Messenger For Nokia C3
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Il download di Linvincibile ninja in HD italiano la storia di Cole il ninja bianco.md b/spaces/cihyFjudo/fairness-paper-search/Il download di Linvincibile ninja in HD italiano la storia di Cole il ninja bianco.md
deleted file mode 100644
index ba7a5103ad08bd717b11f0771925075a4b6799d6..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Il download di Linvincibile ninja in HD italiano la storia di Cole il ninja bianco.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/CurImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/CurImagePlugin.py
deleted file mode 100644
index 94efff3415679a5bf5b7038f9a1da15ebc6d04ca..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/CurImagePlugin.py
+++ /dev/null
@@ -1,75 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# Windows Cursor support for PIL
-#
-# notes:
-# uses BmpImagePlugin.py to read the bitmap data.
-#
-# history:
-# 96-05-27 fl Created
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1996.
-#
-# See the README file for information on usage and redistribution.
-#
-from . import BmpImagePlugin, Image
-from ._binary import i16le as i16
-from ._binary import i32le as i32
-
-#
-# --------------------------------------------------------------------
-
-
-def _accept(prefix):
- return prefix[:4] == b"\0\0\2\0"
-
-
-##
-# Image plugin for Windows Cursor files.
-
-
-class CurImageFile(BmpImagePlugin.BmpImageFile):
- format = "CUR"
- format_description = "Windows Cursor"
-
- def _open(self):
- offset = self.fp.tell()
-
- # check magic
- s = self.fp.read(6)
- if not _accept(s):
- msg = "not a CUR file"
- raise SyntaxError(msg)
-
- # pick the largest cursor in the file
- m = b""
- for i in range(i16(s, 4)):
- s = self.fp.read(16)
- if not m:
- m = s
- elif s[0] > m[0] and s[1] > m[1]:
- m = s
- if not m:
- msg = "No cursors were found"
- raise TypeError(msg)
-
- # load as bitmap
- self._bitmap(i32(m, 12) + offset)
-
- # patch up the bitmap height
- self._size = self.size[0], self.size[1] // 2
- d, e, o, a = self.tile[0]
- self.tile[0] = d, (0, 0) + self.size, o, a
-
- return
-
-
-#
-# --------------------------------------------------------------------
-
-Image.register_open(CurImageFile.format, CurImageFile, _accept)
-
-Image.register_extension(CurImageFile.format, ".cur")
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/__init__.py
deleted file mode 100644
index 156cb232a7aa80eee1526c7598f72043de10473f..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Empty __init__.py file to signal Python this directory is a package."""
diff --git a/spaces/codeparrot/code-generation-models/architectures/polycoder.md b/spaces/codeparrot/code-generation-models/architectures/polycoder.md
deleted file mode 100644
index 179a5b3db78adf29f9ad78beec9e46b601e1a6d3..0000000000000000000000000000000000000000
--- a/spaces/codeparrot/code-generation-models/architectures/polycoder.md
+++ /dev/null
@@ -1,14 +0,0 @@
-[PolyCoder](https://github.com/VHellendoorn/Code-LMs) uses the GPT-2 architecture, with a BPE tokenizer trained on a random 5% subset of the data (all languages) and a context length of 2048. To study the effect of model size, the model was trained in 3 different sizes.
-
-
-
-
-PolyCoder is currently being integrated into 🤗 `transformers`. In the meantime, it can be loaded by following the instructions in the original GitHub [repo](https://github.com/vhellendoorn/code-lms#models).
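-
-A minimal loading sketch with 🤗 `transformers`, assuming the converted checkpoints are available on the Hugging Face Hub under IDs such as `NinedayWang/PolyCoder-2.7B` (an assumption; the GitHub repo remains the authoritative reference for the official model names and any conversion steps):
-
-```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-checkpoint = "NinedayWang/PolyCoder-2.7B"  # assumed Hub ID; see the repo for the official one
-tokenizer = AutoTokenizer.from_pretrained(checkpoint)
-model = AutoModelForCausalLM.from_pretrained(checkpoint)
-
-# Complete a short code prompt with greedy decoding.
-prompt = "def binary_search(arr, target):"
-inputs = tokenizer(prompt, return_tensors="pt")
-outputs = model.generate(**inputs, max_new_tokens=64)
-print(tokenizer.decode(outputs[0]))
-```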
\ No newline at end of file
diff --git "a/spaces/codertoro/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py" "b/spaces/codertoro/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py"
deleted file mode 100644
index cee35479ab8c277dee9457aefc7a2848c4609371..0000000000000000000000000000000000000000
--- "a/spaces/codertoro/gpt-academic/crazy_functions/Latex\345\205\250\346\226\207\346\266\246\350\211\262.py"
+++ /dev/null
@@ -1,176 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-fast_debug = False
-
-class PaperFileGroup():
- def __init__(self):
- self.file_paths = []
- self.file_contents = []
- self.sp_file_contents = []
- self.sp_file_index = []
- self.sp_file_tag = []
-
- # count_token
- import tiktoken
- from toolbox import get_conf
- enc = tiktoken.encoding_for_model(*get_conf('LLM_MODEL'))
- def get_token_num(txt): return len(enc.encode(txt))
- self.get_token_num = get_token_num
-
- def run_file_split(self, max_token_limit=1900):
- """
-        Split long texts into smaller segments
- """
- for index, file_content in enumerate(self.file_contents):
- if self.get_token_num(file_content) < max_token_limit:
- self.sp_file_contents.append(file_content)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index])
- else:
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
- for j, segment in enumerate(segments):
- self.sp_file_contents.append(segment)
- self.sp_file_index.append(index)
- self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
-
- print('Segmentation: done')
-
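-# Illustrative note (not in the original file): with max_token_limit=1024 and the
-# tiktoken encoding configured via LLM_MODEL, a 300-token .tex file is kept as a
-# single fragment, while a 3000-token file is handed to
-# breakdown_txt_to_satisfy_token_limit_for_pdf and re-tagged as "<path>.part-0.tex",
-# "<path>.part-1.tex", ... before polishing.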
-def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
- import time, os, re
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-
-
-    # <-------- Read the LaTeX files and strip all comments ---------->
- pfg = PaperFileGroup()
-
- for index, fp in enumerate(file_manifest):
- with open(fp, 'r', encoding='utf-8') as f:
- file_content = f.read()
-            # regular expression matching LaTeX comments
-            comment_pattern = r'%.*'
-            # find comments with the regex and replace them with an empty string
-            clean_tex_content = re.sub(comment_pattern, '', file_content)
-            # record the text with comments stripped
- pfg.file_paths.append(fp)
- pfg.file_contents.append(clean_tex_content)
-
-    # <-------- Split LaTeX files that are too long ---------->
- pfg.run_file_split(max_token_limit=1024)
- n_split = len(pfg.sp_file_contents)
-
-    # <-------- Extract the abstract ---------->
-    # if language == 'en':
-    #     abs_extract_inputs = f"Please write an abstract for this paper"
-
-    # # single thread: fetch the paper's meta information
-    # paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
-    #     inputs=abs_extract_inputs,
-    #     inputs_show_user=f"Extracting abstract information.",
-    #     llm_kwargs=llm_kwargs,
-    #     chatbot=chatbot, history=[],
-    #     sys_prompt="Your job is to collect information from materials.",
-    #     )
-
-    # <-------- Start multi-threaded polishing ---------->
- if language == 'en':
- inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
- sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
- elif language == 'zh':
- inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
- f"\n\n{frag}" for frag in pfg.sp_file_contents]
- inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
- sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
-
-
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=inputs_array,
- inputs_show_user_array=inputs_show_user_array,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[""] for _ in range(n_split)],
- sys_prompt_array=sys_prompt_array,
-                                    max_workers=10, # the maximum degree of parallelism OpenAI allows
- scroller_max_len = 80
- )
-
-    # <-------- Collate the results and finish ---------->
-    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
-    res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
-    history = gpt_response_collection
-    chatbot.append((f"{fp}完成了吗?", res))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
-@CatchException
-def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    # basic information: what the plugin does, contributors
-    chatbot.append([
-        "函数插件功能?",
-        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # try to import dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-        return
-    history = []    # clear the history to avoid overflowing the input
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en')
-
-
-
-
-
-
-@CatchException
-def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    # basic information: what the plugin does, contributors
-    chatbot.append([
-        "函数插件功能?",
-        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # try to import dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-        return
-    history = []    # clear the history to avoid overflowing the input
- import glob, os
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
- yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh')
\ No newline at end of file
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_parse.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_parse.h
deleted file mode 100644
index 4ee863df6627007cabc7ae5200fc39e153bad036..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_parse.h
+++ /dev/null
@@ -1,136 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * H.264 decoder/parser shared code
- */
-
-#ifndef AVCODEC_H264_PARSE_H
-#define AVCODEC_H264_PARSE_H
-
-#include "config.h"
-
-#include <stdint.h>
-
-#include "libavutil/attributes.h"
-
-#include "get_bits.h"
-#include "h264_ps.h"
-
-#define MB_TYPE_REF0 MB_TYPE_ACPRED // dirty but it fits in 16 bit
-#define MB_TYPE_8x8DCT 0x01000000
-
-// This table must be here because scan8[constant] must be known at compiletime
-static const uint8_t scan8[16 * 3 + 3] = {
- 4 + 1 * 8, 5 + 1 * 8, 4 + 2 * 8, 5 + 2 * 8,
- 6 + 1 * 8, 7 + 1 * 8, 6 + 2 * 8, 7 + 2 * 8,
- 4 + 3 * 8, 5 + 3 * 8, 4 + 4 * 8, 5 + 4 * 8,
- 6 + 3 * 8, 7 + 3 * 8, 6 + 4 * 8, 7 + 4 * 8,
- 4 + 6 * 8, 5 + 6 * 8, 4 + 7 * 8, 5 + 7 * 8,
- 6 + 6 * 8, 7 + 6 * 8, 6 + 7 * 8, 7 + 7 * 8,
- 4 + 8 * 8, 5 + 8 * 8, 4 + 9 * 8, 5 + 9 * 8,
- 6 + 8 * 8, 7 + 8 * 8, 6 + 9 * 8, 7 + 9 * 8,
- 4 + 11 * 8, 5 + 11 * 8, 4 + 12 * 8, 5 + 12 * 8,
- 6 + 11 * 8, 7 + 11 * 8, 6 + 12 * 8, 7 + 12 * 8,
- 4 + 13 * 8, 5 + 13 * 8, 4 + 14 * 8, 5 + 14 * 8,
- 6 + 13 * 8, 7 + 13 * 8, 6 + 14 * 8, 7 + 14 * 8,
- 0 + 0 * 8, 0 + 5 * 8, 0 + 10 * 8
-};
-
-/**
- * Memory management control operation opcode.
- */
-typedef enum MMCOOpcode {
- MMCO_END = 0,
- MMCO_SHORT2UNUSED,
- MMCO_LONG2UNUSED,
- MMCO_SHORT2LONG,
- MMCO_SET_MAX_LONG,
- MMCO_RESET,
- MMCO_LONG,
-} MMCOOpcode;
-
-typedef struct H264PredWeightTable {
- int use_weight;
- int use_weight_chroma;
- int luma_log2_weight_denom;
- int chroma_log2_weight_denom;
- int luma_weight_flag[2]; ///< 7.4.3.2 luma_weight_lX_flag
- int chroma_weight_flag[2]; ///< 7.4.3.2 chroma_weight_lX_flag
- // The following 2 can be changed to int8_t but that causes a 10 CPU cycles speed loss
- int luma_weight[48][2][2];
- int chroma_weight[48][2][2][2];
- int implicit_weight[48][48][2];
-} H264PredWeightTable;
-
-typedef struct H264POCContext {
- int poc_lsb;
- int poc_msb;
- int delta_poc_bottom;
- int delta_poc[2];
- int frame_num;
- int prev_poc_msb; ///< poc_msb of the last reference pic for POC type 0
- int prev_poc_lsb; ///< poc_lsb of the last reference pic for POC type 0
- int frame_num_offset; ///< for POC type 2
- int prev_frame_num_offset; ///< for POC type 2
- int prev_frame_num; ///< frame_num of the last pic for POC type 1/2
-} H264POCContext;
-
-int ff_h264_pred_weight_table(GetBitContext *gb, const SPS *sps,
- const int *ref_count, int slice_type_nos,
- H264PredWeightTable *pwt,
- int picture_structure, void *logctx);
-
-/**
- * Check if the top & left blocks are available if needed & change the
- * dc mode so it only uses the available blocks.
- */
-int ff_h264_check_intra4x4_pred_mode(int8_t *pred_mode_cache, void *logctx,
- int top_samples_available, int left_samples_available);
-
-/**
- * Check if the top & left blocks are available if needed & change the
- * dc mode so it only uses the available blocks.
- */
-int ff_h264_check_intra_pred_mode(void *logctx, int top_samples_available,
- int left_samples_available,
- int mode, int is_chroma);
-
-int ff_h264_parse_ref_count(int *plist_count, int ref_count[2],
- GetBitContext *gb, const PPS *pps,
- int slice_type_nos, int picture_structure, void *logctx);
-
-int ff_h264_init_poc(int pic_field_poc[2], int *pic_poc,
- const SPS *sps, H264POCContext *poc,
- int picture_structure, int nal_ref_idc);
-
-int ff_h264_decode_extradata(const uint8_t *data, int size, H264ParamSets *ps,
- int *is_avc, int *nal_length_size,
- int err_recognition, void *logctx);
-
-static av_always_inline uint32_t pack16to32(unsigned a, unsigned b)
-{
-#if HAVE_BIGENDIAN
- return (b & 0xFFFF) + (a << 16);
-#else
- return (a & 0xFFFF) + (b << 16);
-#endif
-}
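-
-/* Worked example (illustrative, not part of the original header): on a
- * little-endian build the #else branch is used, so pack16to32(0x1234, 0x5678)
- * evaluates (0x1234 & 0xFFFF) + (0x5678 << 16) = 0x56781234, i.e. 'a' lands in
- * the low half-word; on a big-endian build the operands are swapped and the
- * same call yields 0x12345678. */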
-
-#endif /* AVCODEC_H264_PARSE_H */
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/idcinvideo.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/idcinvideo.c
deleted file mode 100644
index f6b8b3cd697120b2642e0b672d6d44f48f3473dd..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/idcinvideo.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/*
- * id Quake II CIN Video Decoder
- * Copyright (C) 2003 The FFmpeg project
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * id Quake II Cin Video Decoder by Dr. Tim Ferguson
- * For more information about the id CIN format, visit:
- * http://www.csse.monash.edu.au/~timf/
- *
- * This video decoder outputs PAL8 colorspace data. Interacting with this
- * decoder is a little involved. During initialization, the demuxer must
- * transmit the 65536-byte Huffman table(s) to the decoder via extradata.
- * Then, whenever a palette change is encountered while demuxing the file,
- * the demuxer must use the same extradata space to transmit an
- * AVPaletteControl structure.
- *
- * id CIN video is a purely Huffman-coded, intraframe-only codec. It achieves
- * a little more compression by exploiting the fact that adjacent pixels
- * tend to be similar.
- *
- * Note that this decoder could use libavcodec's optimized VLC facilities
- * rather than naive, tree-based Huffman decoding. However, there are 256
- * Huffman tables. Plus, the VLC bit coding order is right -> left instead
- * of left -> right, so all of the bits would have to be reversed. Further,
- * the original Quake II implementation likely used a similar naive
- * decoding algorithm and it worked fine on much lower spec machines.
- */
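-
-/* Illustrative note (not in the original source): "right -> left" bit order means
- * each byte is consumed LSB-first, as in idcin_decode_vlcs() below, where the
- * tree walk tests (v & 0x01) and then shifts v right; e.g. the byte 0xB2
- * (binary 10110010) yields the bit sequence 0,1,0,0,1,1,0,1. */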
-
-#include <stdio.h>
-#include <string.h>
-
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "decode.h"
-#include "libavutil/internal.h"
-
-#define HUFFMAN_TABLE_SIZE 64 * 1024
-#define HUF_TOKENS 256
-#define PALETTE_COUNT 256
-
-typedef struct hnode {
- int count;
- unsigned char used;
- int children[2];
-} hnode;
-
-typedef struct IdcinContext {
-
- AVCodecContext *avctx;
-
- const unsigned char *buf;
- int size;
-
- hnode huff_nodes[256][HUF_TOKENS*2];
- int num_huff_nodes[256];
-
- uint32_t pal[256];
-} IdcinContext;
-
-/**
- * Find the lowest probability node in a Huffman table, and mark it as
- * being assigned to a higher probability.
- * @return the node index of the lowest unused node, or -1 if all nodes
- * are used.
- */
-static int huff_smallest_node(hnode *hnodes, int num_hnodes) {
- int i;
- int best, best_node;
-
- best = 99999999;
- best_node = -1;
- for(i = 0; i < num_hnodes; i++) {
- if(hnodes[i].used)
- continue;
- if(!hnodes[i].count)
- continue;
- if(hnodes[i].count < best) {
- best = hnodes[i].count;
- best_node = i;
- }
- }
-
- if(best_node == -1)
- return -1;
- hnodes[best_node].used = 1;
- return best_node;
-}
-
-/*
- * Build the Huffman tree using the generated/loaded probabilities histogram.
- *
- * On completion:
- * huff_nodes[prev][i < HUF_TOKENS] - are the nodes at the base of the tree.
- * huff_nodes[prev][i >= HUF_TOKENS] - are used to construct the tree.
- * num_huff_nodes[prev] - contains the index to the root node of the tree.
- * That is: huff_nodes[prev][num_huff_nodes[prev]] is the root node.
- */
-static av_cold void huff_build_tree(IdcinContext *s, int prev) {
- hnode *node, *hnodes;
- int num_hnodes, i;
-
- num_hnodes = HUF_TOKENS;
- hnodes = s->huff_nodes[prev];
- for(i = 0; i < HUF_TOKENS * 2; i++)
- hnodes[i].used = 0;
-
- while (1) {
- node = &hnodes[num_hnodes]; /* next free node */
-
- /* pick two lowest counts */
- node->children[0] = huff_smallest_node(hnodes, num_hnodes);
- if(node->children[0] == -1)
- break; /* reached the root node */
-
- node->children[1] = huff_smallest_node(hnodes, num_hnodes);
- if(node->children[1] == -1)
- break; /* reached the root node */
-
- /* combine nodes probability for new node */
- node->count = hnodes[node->children[0]].count +
- hnodes[node->children[1]].count;
- num_hnodes++;
- }
-
- s->num_huff_nodes[prev] = num_hnodes - 1;
-}
-
-static av_cold int idcin_decode_init(AVCodecContext *avctx)
-{
- IdcinContext *s = avctx->priv_data;
- int i, j, histogram_index = 0;
- unsigned char *histograms;
-
- s->avctx = avctx;
- avctx->pix_fmt = AV_PIX_FMT_PAL8;
-
- /* make sure the Huffman tables make it */
- if (s->avctx->extradata_size != HUFFMAN_TABLE_SIZE) {
- av_log(s->avctx, AV_LOG_ERROR, " id CIN video: expected extradata size of %d\n", HUFFMAN_TABLE_SIZE);
- return -1;
- }
-
- /* build the 256 Huffman decode trees */
- histograms = (unsigned char *)s->avctx->extradata;
- for (i = 0; i < 256; i++) {
- for(j = 0; j < HUF_TOKENS; j++)
- s->huff_nodes[i][j].count = histograms[histogram_index++];
- huff_build_tree(s, i);
- }
-
- return 0;
-}
-
-static int idcin_decode_vlcs(IdcinContext *s, AVFrame *frame)
-{
- hnode *hnodes;
- long x, y;
- int prev;
- unsigned char v = 0;
- int bit_pos, node_num, dat_pos;
-
- prev = bit_pos = dat_pos = 0;
- for (y = 0; y < (frame->linesize[0] * s->avctx->height);
- y += frame->linesize[0]) {
- for (x = y; x < y + s->avctx->width; x++) {
- node_num = s->num_huff_nodes[prev];
- hnodes = s->huff_nodes[prev];
-
- while(node_num >= HUF_TOKENS) {
- if(!bit_pos) {
- if(dat_pos >= s->size) {
- av_log(s->avctx, AV_LOG_ERROR, "Huffman decode error.\n");
- return -1;
- }
- bit_pos = 8;
- v = s->buf[dat_pos++];
- }
-
- node_num = hnodes[node_num].children[v & 0x01];
- v = v >> 1;
- bit_pos--;
- }
-
- frame->data[0][x] = node_num;
- prev = node_num;
- }
- }
-
- return 0;
-}
-
-static int idcin_decode_frame(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame, AVPacket *avpkt)
-{
- const uint8_t *buf = avpkt->data;
- int buf_size = avpkt->size;
- IdcinContext *s = avctx->priv_data;
- int ret;
-
- s->buf = buf;
- s->size = buf_size;
-
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
-
- if (idcin_decode_vlcs(s, frame))
- return AVERROR_INVALIDDATA;
-
- frame->palette_has_changed = ff_copy_palette(s->pal, avpkt, avctx);
- /* make the palette available on the way out */
- memcpy(frame->data[1], s->pal, AVPALETTE_SIZE);
-
- *got_frame = 1;
-
- /* report that the buffer was completely consumed */
- return buf_size;
-}
-
-static const FFCodecDefault idcin_defaults[] = {
- { "max_pixels", "320*240" },
- { NULL },
-};
-
-const FFCodec ff_idcin_decoder = {
- .p.name = "idcinvideo",
- CODEC_LONG_NAME("id Quake II CIN video"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_IDCIN,
- .priv_data_size = sizeof(IdcinContext),
- .init = idcin_decode_init,
- FF_CODEC_DECODE_CB(idcin_decode_frame),
- .p.capabilities = AV_CODEC_CAP_DR1,
- .defaults = idcin_defaults,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h263dsp_init_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h263dsp_init_mips.c
deleted file mode 100644
index 829b10b2517603645cac1a1c8fa2ad751f25f9ba..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/h263dsp_init_mips.c
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Copyright (c) 2015 Manojkumar Bhosale (Manojkumar.Bhosale@imgtec.com)
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/attributes.h"
-#include "libavutil/mips/cpu.h"
-#include "h263dsp_mips.h"
-
-av_cold void ff_h263dsp_init_mips(H263DSPContext *c)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (have_msa(cpu_flags)){
- c->h263_h_loop_filter = ff_h263_h_loop_filter_msa;
- c->h263_v_loop_filter = ff_h263_v_loop_filter_msa;
- }
-}
diff --git a/spaces/conceptofmind/PaLM_models/README.md b/spaces/conceptofmind/PaLM_models/README.md
deleted file mode 100644
index 79dc3d8e15c2f771581b2fcf163a9373bb380188..0000000000000000000000000000000000000000
--- a/spaces/conceptofmind/PaLM_models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PaLM
-emoji: 🐠
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Boost Your Gaming Experience with Green 71 APK - The Best Game Space App for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Boost Your Gaming Experience with Green 71 APK - The Best Game Space App for Android.md
deleted file mode 100644
index 4b997d16e57e49a93bdf562af3de5bee8e5a5988..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Boost Your Gaming Experience with Green 71 APK - The Best Game Space App for Android.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-
What is Green 71 APK and Why You Should Download It
-
If you are looking for a free Android app that combines green energy and green puzzles, you should check out Green 71 APK. This app is a bundle of two apps: green.energy, which helps you switch to renewable energy suppliers and track your energy usage and carbon footprint, and green, which is a puzzle game with 50 levels that challenges you to make everything green. In this article, we will explain what Green 71 APK is, why you should download it, and how to download and install it on your device.
-
What is Green 71 APK?
-
Green 71 APK is an Android app that contains two apps: green.energy and green. Here is a brief overview of each app:
green.energy is an app that helps you switch to renewable energy suppliers and track your energy usage and carbon footprint. With this app, you can:
-
-
Compare green energy tariffs from different suppliers and find the best deal for you
-
Switch to a green energy supplier in minutes with no hassle or fees
-
Monitor your energy consumption and see how much CO2 you are saving
-
Earn rewards for using less energy and referring friends
-
Access customer support and FAQs anytime
-
-
A puzzle game with 50 levels
-
green is a puzzle game that challenges you to make everything green. With this game, you can:
-
-
Solve 50 levels of varying difficulty and logic
-
Tap, swipe, drag, and drop objects to change their color and shape
-
Enjoy relaxing music and minimalist graphics
-
Learn about green concepts and solutions along the way
-
Share your progress and achievements with friends
-
-
Why You Should Download Green 71 APK?
-
There are many reasons why you should download Green 71 APK on your device. Here are some of the benefits of using both apps:
-
Benefits of using green energy app
-
Save money and the environment
-
By switching to a green energy supplier, you can save money on your bills and reduce your environmental impact. Green energy suppliers use renewable sources such as wind, solar, hydro, or biomass to generate electricity and gas. This means they produce less greenhouse gas emissions than fossil fuels. You can also save money by choosing a tariff that suits your needs and budget.
-
Track your energy usage and carbon footprint
-
With the green.energy app, you can monitor your energy consumption and see how much CO2 you are saving. The app shows you how much energy you use in kWh, pounds, or kg of CO2. You can also see how your usage compares to the average household in your area. The app also gives you tips on how to reduce your energy usage and save more money.
-
Switch to renewable energy suppliers
-
The green.energy app makes it easy for you to switch to a renewable energy supplier. You can compare different tariffs from various suppliers and find the best deal for you. You can also see how much CO2 each supplier saves per year. The app handles the switching process for you, so you don't have to worry about any hassle or fees. You can also switch back anytime if you are not satisfied.
-
Benefits of playing green puzzle game
-
Challenge your brain and creativity
-
The green puzzle game is a fun and stimulating way to exercise your brain and creativity. The game requires you to think logically and strategically to solve each level. You have to manipulate objects to change their color and shape, and make everything green. The game also tests your memory, attention, and spatial skills.
-
Enjoy relaxing music and graphics
-
The green puzzle game has a soothing and minimalist design that helps you relax and focus. The game features calming music and sound effects that match the theme of each level. The game also has simple and elegant graphics that create a pleasant visual experience. The game is suitable for all ages and preferences.
-
green 71 apk download
-green 71 apk free
-green 71 apk latest version
-green 71 apk mod
-green 71 apk for android
-green 71 apk update
-green 71 apk app
-green 71 apk youtube
-green 71 apk music
-green 71 apk video
-green 71 apk online
-green 71 apk offline
-green 71 apk premium
-green 71 apk pro
-green 71 apk full
-green 71 apk cracked
-green 71 apk hack
-green 71 apk cheat
-green 71 apk unlock
-green 71 apk install
-green 71 apk review
-green 71 apk rating
-green 71 apk features
-green 71 apk benefits
-green 71 apk advantages
-green 71 apk disadvantages
-green 71 apk comparison
-green 71 apk alternative
-green 71 apk similar
-green 71 apk competitor
-green 71 apk best
-green 71 apk worst
-green 71 apk new
-green 71 apk old
-green 71 apk original
-green 71 apk official
-green 71 apk unofficial
-green 71 apk safe
-green 71 apk secure
-green 71 apk virus free
-green 71 apk malware free
-green 71 apk ad free
-green 71 apk ad blocker
-green 71 apk smart and insightful energy app[^1^]
-green 71 apk channel of the performer "Green71"[^2^]
-greentuber block ads on videos by GreenTuber[^3^]
-
Learn about green concepts and solutions
-
The green puzzle game is not only entertaining, but also educational. The game introduces you to various green concepts and solutions that can help you live a more eco-friendly lifestyle. For example, you can learn about solar panels, wind turbines, electric cars, recycling, composting, and more. The game also provides you with facts and trivia about green topics at the end of each level.
-
How to Download and Install Green 71 APK?
-
If you are interested in downloading and installing Green 71 APK on your device, you can follow these steps:
-
Steps to download from APKCombo
-
-
Go to the APKCombo website and search for Green 71 APK.
-
Select the version that is compatible with your device and click on the download button.
-
Wait for the download to finish and locate the APK file on your device.
-
-
Steps to download from Softonic
-
-
Go to the Softonic website and search for Green 71 APK.
-
Click on the download button and choose a mirror site to download from.
-
Wait for the download to finish and locate the APK file on your device.
-
-
Steps to install and run the app
-
-
Before installing the APK file, make sure you enable the installation of apps from unknown sources on your device settings.
-
Tap on the APK file and follow the instructions to install the app.
-
Once the installation is complete, open the app and enjoy using both green.energy and green apps.
-
-
Conclusion and FAQs
-
In conclusion, Green 71 APK is a free Android app that offers you two apps in one: green.energy, which helps you switch to renewable energy suppliers and track your energy usage and carbon footprint, and green, which is a puzzle game with 50 levels that challenges you to make everything green. Both apps have many benefits for your wallet, your brain, and your environment. You can download and install Green 71 APK easily from APKCombo or Softonic websites. Here are some FAQs that might help you with any questions you have:
-
-
| Question | Answer |
| --- | --- |
| Is Green 71 APK safe to use? | Yes, Green 71 APK is safe to use as long as you download it from trusted sources such as APKCombo or Softonic. However, you should always scan any APK file with antivirus software before installing it on your device. |
| Is Green 71 APK free to use? | Yes, Green 71 APK is free to use and does not require any subscription or registration. However, some features of the green.energy app may require you to sign up with a green energy supplier or pay for a tariff. |
| Is Green 71 APK compatible with my device? | Green 71 APK is compatible with most Android devices that run on Android 4.1 or higher. However, you should always check the version of the app before downloading it and make sure it matches your device specifications. |
| How can I update Green 71 APK? | You can update Green 71 APK by downloading the latest version from the APKCombo or Softonic websites. You can also check for updates within the app settings. |
| How can I contact the developers of Green 71 APK? | You can contact the developers of Green 71 APK by sending an email to support@green.energy or support@green.com. You can also visit their websites or follow them on social media for more information and feedback. |
-
I hope you found this article helpful and informative. If you did, please share it with your friends and family who might be interested in green energy and green puzzles. Thank you for reading and have a great day!
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Messenger on Your Desktop - Download APK for PC and Stay Connected.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Messenger on Your Desktop - Download APK for PC and Stay Connected.md
deleted file mode 100644
index 9d00bd29c59934fb640fbd8e3f117a1f888258ae..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Messenger on Your Desktop - Download APK for PC and Stay Connected.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
How to Download Messenger APK for PC
-
Messenger is a popular messaging app that lets you chat, call, and video chat with your friends and family. You can also send photos, videos, stickers, GIFs, voice messages, and more. Messenger is available for mobile devices such as Android and iOS, but did you know that you can also use it on your PC?
-
In this article, we will show you how to download Messenger APK for PC using an Android emulator. This way, you can enjoy all the features and benefits of Messenger on a bigger screen. We will also share some alternatives to downloading Messenger APK for PC in case you prefer a different option.
There are many reasons why you might want to use Messenger on your PC. Here are some of them:
-
-
Stay connected with your friends and family across devices. You can sync your conversations and contacts between your mobile device and your PC. This means you can switch between devices without losing any messages or calls. You can also access your Facebook contacts and groups on Messenger.
-
Enjoy high-quality voice and video calls. You can make free voice and video calls with anyone on Messenger, no matter where they are. You can also create group calls with up to 50 people. Using Messenger on your PC allows you to use a larger screen, a better camera, and a more stable internet connection.
-
Access more features and options than the web version. The web version of Messenger is limited in some aspects. For example, you cannot send voice messages, stickers, or GIFs. You also cannot use some of the fun features such as filters, effects, games, or polls. By downloading Messenger APK for PC, you can unlock all these features and more.
-
-
Requirements for Downloading Messenger APK for PC
-
Before you can download Messenger APK for PC, you need to have the following:
-
-
A Windows or Mac computer with an internet connection. You need a computer that meets the minimum system requirements of the Android emulator software that you will use. You also need a reliable internet connection to download and install the emulator and the app.
-
Android emulator software such as BlueStacks or NoxPlayer. An Android emulator is a program that allows you to run Android apps on your PC. There are many Android emulators available online, but we recommend BlueStacks or NoxPlayer as they are easy to use and compatible with most apps.
-
A Google account to access the Google Play Store. You need a Google account to download apps from the Google Play Store, which is where you will find Messenger. If you do not have a Google account, you can create one for free.
-
-
Steps to Download Messenger APK for PC
-
Once you have everything ready, follow these steps to download Messenger APK for PC:
-
Step 1: Download and install an Android emulator
-
The first step is to download and install an Android emulator on your PC. You can choose from various options, but we will use BlueStacks as an example. Here is how to do it:
-
-
Go to the official website of BlueStacks and click on the "Download BlueStacks" button.
-
Wait for the download to finish and then run the installer file.
-
Follow the instructions on the screen to complete the installation process.
-
Launch BlueStacks and wait for it to initialize.
-
-
Step 2: Launch the emulator and sign in with your Google account
-
The next step is to launch the emulator and sign in with your Google account. This will allow you to access the Google Play Store and download apps. Here is how to do it:
-
-
Open BlueStacks and click on the "Google Play" icon on the home screen.
-
Sign in with your Google account or create a new one if you do not have one.
-
Agree to the terms and conditions and enable the backup and sync options if you want.
-
Wait for the Google Play Store to load.
-
-
Step 3: Search for Messenger in the Google Play Store and install it
-
The third step is to search for Messenger in the Google Play Store and install it on your PC. Here is how to do it:
-
-
In the Google Play Store, type "Messenger" in the search bar and hit enter.
-
Find the app that has the blue icon with a white lightning bolt and click on it.
-
Click on the "Install" button and wait for the app to download and install.
-
Click on the "Open" button or find the app icon on the home screen of BlueStacks.
-
-
Step 4: Open Messenger and log in with your Facebook account or phone number
-
The final step is to open Messenger and log in with your Facebook account or phone number. This will allow you to start using Messenger on your PC. Here is how to do it:
-
messenger download apk for pc windows 10
-messenger download apk for pc mac
-messenger download apk for pc free
-messenger download apk for pc offline
-messenger download apk for pc latest version
-messenger download apk for pc windows 7
-messenger download apk for pc 32 bit
-messenger download apk for pc 64 bit
-messenger download apk for pc without bluestacks
-messenger download apk for pc with video call
-messenger download apk for pc from facebook
-messenger download apk for pc windows 8.1
-messenger download apk for pc windows xp
-messenger download apk for pc nox player
-messenger download apk for pc laptop
-messenger download apk for pc desktop
-messenger download apk for pc softonic
-messenger download apk for pc uptodown
-messenger download apk for pc filehippo
-messenger download apk for pc apkpure
-messenger download apk for pc full version
-messenger download apk for pc cracked
-messenger download apk for pc modded
-messenger download apk for pc old version
-messenger download apk for pc new version
-messenger download apk for pc chromebook
-messenger download apk for pc linux
-messenger download apk for pc ubuntu
-messenger download apk for pc android emulator
-messenger download apk for pc memu play
-messenger download apk for pc ldplayer
-messenger download apk for pc gameloop
-messenger download apk for pc droid4x
-messenger download apk for pc koplayer
-messenger download apk for pc genymotion
-messenger download apk for pc andyroid
-messenger download apk for pc remix os player
-messenger download apk for pc phoenix os
-messenger download apk for pc prime os
-messenger download apk for pc bliss os
-messenger download apk for pc open thos
-messenger download apk for pc console os
-messenger download apk for pc android x86 project
-messenger download apk for pc amigaos 4.1
-messenger download apk for pc arcaos 5.0
-messenger download apk for pc haiku r1 beta 3
-messenger download apk for pc reactos 0.4.14
-messenger download apk for pc zorin os 16
-messenger download apk for pc elementary os 6
-
-
Open Messenger and choose whether you want to log in with your Facebook account or your phone number.
-
If you choose Facebook, enter your email or phone number and password and click on "Continue". If you choose phone number, enter your country code and phone number and click on "Continue".
-
Verify your identity by entering the code that was sent to your email or phone number.
-
Allow Messenger to access your contacts, photos, videos, and microphone if you want.
-
Start chatting, calling, and video chatting with your friends and family on Messenger.
-
-
Alternatives to Downloading Messenger APK for PC
-
If you do not want to download Messenger APK for PC using an Android emulator, there are some alternatives that you can try. Here are some of them:
-
-
Use the official Messenger desktop app from Meta. Meta, formerly known as Facebook, has released a desktop version of Messenger for Windows and Mac computers. You can download it from their official website or from the Microsoft Store or the Mac App Store. The desktop app has most of the features of the mobile app, such as voice and video calls, group chats, stickers, GIFs, dark mode, and more. However, it does not have some of the fun features such as filters, effects, games, or polls.
-
Use the web version of Messenger on your browser. You can also use Messenger on your browser by going to their web page. You can log in with your Facebook account or your phone number and start using Messenger online. The web version has some of the features of the mobile app, such as voice and video calls, group chats, stickers, GIFs, dark mode, and more. However, it does not have some of the features such as voice messages, filters, effects, games, polls, or sync with other devices.
-
Use other messaging apps that have desktop versions such as WhatsApp, Telegram, or Signal. If you are looking for other messaging apps that have desktop versions, you can try WhatsApp, Telegram, or Signal. These apps allow you to chat, call, and video chat with your contacts across devices. They also have some features that Messenger does not have, such as end-to-end encryption, self-destructing messages, custom themes, and more. However, they may not have some of the features that Messenger has, such as integration with Facebook, group calls with up to 50 people, or sync with other Meta apps.
-
-
Conclusion
-
Messenger is a great app that allows you to stay in touch with your friends and family. You can chat, call, and video chat with them for free and enjoy many fun and useful features. However, if you want to use Messenger on your PC, you may need to download Messenger APK for PC using an Android emulator. This will give you access to all the features and benefits of Messenger on a bigger screen.
-
Alternatively, you can use the official Messenger desktop app from Meta, the web version of Messenger on your browser, or other messaging apps that have desktop versions such as WhatsApp, Telegram, or Signal. These options may have some advantages and disadvantages compared to downloading Messenger APK for PC, so you can choose the one that suits your needs and preferences best.
-
We hope this article has helped you learn how to download Messenger APK for PC and what are the alternatives to doing so. If you have any questions or feedback, please let us know in the comments below. Thank you for reading and happy messaging!
-
FAQs
-
Here are some frequently asked questions about downloading Messenger APK for PC:
-
-
Is downloading Messenger APK for PC safe? Downloading Messenger APK for PC is generally safe as long as you use a trusted Android emulator and download the app from the official Google Play Store. However, you should always be careful when downloading any software from the internet and scan it for viruses or malware before installing it.
-
Is downloading Messenger APK for PC legal? Downloading Messenger APK for PC is legal as long as you do not violate any terms of service or intellectual property rights of Meta or Google. You should also respect the privacy and security of your contacts and not use Messenger for any illegal or unethical purposes.
-
Is downloading Messenger APK for PC free? Downloading Messenger APK for PC is free as long as you use a free Android emulator and download the app from the Google Play Store. However, you may need to pay for some features or services within the app, such as making calls to landlines or mobile numbers, sending money, or buying stickers or games.
-
Can I use Messenger on my PC without downloading anything? You can use Messenger on your PC without downloading anything by using the web version of Messenger on your browser. However, you may not be able to use some of the features or options that are available on the mobile app or the desktop app.
-
Can I use multiple accounts on Messenger on my PC? You can use multiple accounts on Messenger on your PC by using different methods. For example, you can use one account on the desktop app, another account on the web version, and another account on the Android emulator. You can also use different browsers or profiles to access different accounts on the web version. However, you may not be able to sync your conversations or contacts across devices or methods.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ereb Reksi A Guide to the Different Styles and Techniques.md b/spaces/congsaPfin/Manga-OCR/logs/Ereb Reksi A Guide to the Different Styles and Techniques.md
deleted file mode 100644
index 4d7b5bc4392f4108019eab4e89943a27b69e0c3d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Ereb Reksi A Guide to the Different Styles and Techniques.md
+++ /dev/null
@@ -1,150 +0,0 @@
-
-
Ereb Reksi: The Traditional Dance of Azerbaijan
-
Ereb reksi is a beautiful and graceful dance that originated in Azerbaijan, a country located in the South Caucasus region of Eurasia. Ereb reksi, which means "Arabian dance" in Azerbaijani, is one of the most popular and distinctive dances in the country's rich and diverse cultural heritage. In this article, we will explore what ereb reksi is, how to perform it, and why it is important for Azerbaijani culture.
Ereb reksi is a dance that combines elements of classical, folk, and oriental dances. It is performed by both men and women, either solo or in groups, depending on the occasion and the style. Ereb reksi is characterized by smooth and elegant movements, expressive gestures, and rhythmic coordination.
-
The origin and meaning of ereb reksi
-
The exact origin of ereb reksi is not clear, but some historians believe that it dates back to the 7th century, when Arab invaders brought their culture and religion to Azerbaijan. Ereb reksi was influenced by the dances of the Arab countries, especially Egypt, Syria, and Iraq. Ereb reksi was also inspired by the dances of the neighboring countries, such as Turkey, Iran, Georgia, Armenia, and Russia.
-
Ereb reksi reflects the history, identity, and emotions of the Azerbaijani people. It expresses their joy, sorrow, love, pride, and courage. It also symbolizes their respect for their ancestors, their traditions, and their homeland.
-
The characteristics and styles of ereb reksi
-
Ereb reksi has many variations and styles, depending on the region, the occasion, and the preference of the performers. Some of the most common styles are:
-
-
Qaytagi: A lively and cheerful dance that involves clapping hands and tapping feet. It is usually performed by women in groups or pairs.
-
Lezgi: A fast and energetic dance that involves jumping, spinning, kicking, and waving arms. It is usually performed by men in groups or solo.
-
Qarabaghi: A graceful and romantic dance that involves gentle swaying, bending, twisting, and touching. It is usually performed by couples or solo.
-
Qochu: A playful and humorous dance that involves mimicking animal movements, such as birds, horses, or dogs. It is usually performed by men or women in groups or solo.
-
Shalakho: A sophisticated and elegant dance that involves intricate footwork, delicate hand gestures, and refined poses. It is usually performed by women in groups or solo.
-
-
How to perform ereb reksi?
-
Ereb reksi is a dance that requires skill, practice, and passion. To perform ereb reksi well, you need to master the following aspects:
-
The basic steps and movements of ereb reksi
-
The basic steps and movements of ereb reksi vary depending on the style and the tempo of the music. However, some common elements are:
Step: A simple movement of moving one foot forward or backward.
-
Slide: A smooth movement of sliding one foot to the side or diagonally.
-
Turn: A circular movement of turning one foot around the other.
-
Swing: A pendulum-like movement of swinging one leg forward and backward.
-
Stamp: A forceful movement of stamping one foot on the ground.
-
Lift: A light movement of lifting one foot off the ground.
-
Bend: A flexible movement of bending one knee or waist.
-
Stretch: A graceful movement of stretching one arm or leg.
-
-
The basic steps and movements of ereb reksi can be combined and modified to create different patterns and sequences. The performers can also improvise and add their own flair and personality to the dance.
-
The costumes and accessories of ereb reksi
-
The costumes and accessories of ereb reksi are colorful and ornate, reflecting the culture and the mood of the dance. The costumes and accessories of ereb reksi vary depending on the style and the gender of the performers. However, some common elements are:
-
-
| Costume element | Men | Women |
| --- | --- | --- |
| Headwear | A hat or a turban, often decorated with feathers, beads, or coins. | A scarf or a veil, often covering the hair and the face partially or completely. |
| Top | A shirt or a jacket, often embroidered with patterns or sequins. | A blouse or a dress, often with long sleeves and a low neckline. |
| Bottom | A pair of pants or a skirt, often loose and wide. | A pair of pants or a skirt, often tight and long. |
| Footwear | A pair of shoes or boots, often with pointed toes or heels. | A pair of shoes or sandals, often with straps or buckles. |
| Accessories | A belt, a sash, a sword, a dagger, or a cane. | A necklace, a bracelet, a ring, an earring, or a brooch. |
-
The music and instruments of ereb reksi
-
The music and instruments of ereb reksi are melodious and rhythmic, creating the atmosphere and the tempo of the dance. The music and instruments of ereb reksi vary depending on the style and the preference of the performers. However, some common instruments are:
-
-
Tar: A plucked string instrument with a long neck and a pear-shaped body. It produces a soft and deep sound.
-
Kamancha: A bowed string instrument with a spherical body and four strings. It produces a high and sharp sound.
-
Nagara: A percussion instrument consisting of two drums played with sticks. It produces a loud and fast sound.
-
Zurna: A wind instrument with a conical body and a double reed. It produces a shrill and piercing sound.
-
Balaban: A wind instrument with a cylindrical body and a single reed. It produces a mellow and soothing sound.
-
Gaval: A percussion instrument consisting of a wooden frame covered with skin. It produces a low and steady sound.
-
-
Why is ereb reksi important for Azerbaijani culture?
-
Ereb reksi is more than just a dance. It is also an art form, a social activity, and a cultural symbol. Ereb reksi is important for Azerbaijani culture for several reasons:
-
The role and significance of ereb reksi in Azerbaijani history and society
-
Ereb reksi has played an important role in Azerbaijani history and society. Ereb reksi has been used to celebrate various events, such as weddings, festivals, holidays, victories, births, deaths, etc. Ereb reksi has also been used to express various emotions, such as happiness, sadness, anger, love, etc. Ereb reksi has also been used to communicate various messages, such as loyalty, friendship, rivalry, challenge, etc. Ereb reksi has also been used to preserve various traditions, such as folklore, legends, customs, etc.
-
The influence and popularity of ereb reksi in the world
-
Ereb reksi has also gained influence and popularity around the world. It has been introduced to other countries through cultural exchanges, diplomatic relations, tourism, and the media, and it is admired and appreciated by people from many different backgrounds and regions. It has inspired and influenced many other dances and art forms, and performances of ereb reksi have won numerous awards and recognitions at international competitions and festivals.
-
Conclusion
-
Ereb reksi is a traditional dance of Azerbaijan that showcases the beauty, diversity, and vitality of the Azerbaijani culture. Ereb reksi is a dance that can be enjoyed by anyone who loves music, movement, and expression. Ereb reksi is a dance that can be learned by anyone who wants to experience the history, identity, and emotions of the Azerbaijani people. Ereb reksi is a dance that can be shared by anyone who wants to connect with the world.
-
FAQs
-
-
Q: What does ereb reksi mean?
-
A: Ereb reksi means "Arabian dance" in Azerbaijani.
-
Q: How many styles of ereb reksi are there?
-
A: There are many styles of ereb reksi, depending on the region, the occasion, and the preference of the performers. Some of the most common styles are qaytagi, lezgi, qarabaghi, qochu, and shalakho.
-
Q: What are the main instruments used in ereb reksi?
-
A: The main instruments used in ereb reksi are tar, kamancha, nagara, zurna, balaban, and gaval.
-
Q: Where can I watch or learn ereb reksi?
-
A: You can watch or learn ereb reksi in various places, such as cultural centers, museums, theaters, schools, clubs, online platforms, etc.
-
Q: Why should I try ereb reksi?
-
A: You should try ereb reksi because it is a fun, healthy, and rewarding activity that can enrich your life in many ways. You can enjoy the music, movement, and expression of ereb reksi. You can learn about the culture, history, and society of Azerbaijan. You can meet new people and make new friends who share your interest in ereb reksi.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Farlight 84 How to Dominate the Battlefield with Unique Heroes and Weapons.md b/spaces/congsaPfin/Manga-OCR/logs/Farlight 84 How to Dominate the Battlefield with Unique Heroes and Weapons.md
deleted file mode 100644
index 00f769d01cefa996acc704f1e92dc362e88250da..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Farlight 84 How to Dominate the Battlefield with Unique Heroes and Weapons.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
-
Farlight 84: A Groundbreaking Battle Royale Game
-
If you are looking for a new and exciting shooting game to play on your PC or mobile device, you should definitely check out Farlight 84. This is a free-to-play battle royale game that redefines traditional gameplay with its diverse heroes, fast-paced modes, stunning graphics, and immersive sound effects. In this article, we will show you how to download and play Farlight 84, what makes it different from other shooting games, and some tips and tricks to master it. Let's get started!
Download and install the game on your device. The game size is about 1.5 GB, so make sure you have enough storage space.
-
Launch the game and create your account. You can use your email, Facebook, or Google account to sign up.
-
-
Congratulations! You are now ready to enjoy Farlight 84!
-
What Makes Farlight 84 Different from Other Shooting Games
-
Farlight 84 is not just another shooting game. It has many unique features that make it stand out from the crowd. Here are some of them:
-
Diverse heroes with unique skills and abilities
-
In Farlight 84, you can choose from a variety of heroes, each with their own personality, backstory, appearance, voice, and skill set. For example, you can play as Blaze, a fiery fighter who can throw explosive grenades; or as Luna, a stealthy sniper who can turn invisible; or as Rex, a robotic dog who can heal his teammates. You can also customize your hero's outfit, weapon, vehicle, gadget, emote, and more.
-
Fast-paced and thrilling gameplay modes
-
Farlight 84 offers multiple modes for you to enjoy, such as Hunt, Team Deathmatch, Capture the Flag, King of the Hill, and more. Each mode has its own rules, objectives, challenges, and rewards. You can play solo or team up with your friends in matches that last for
10 minutes or less. You can also join special events and tournaments to win exclusive prizes and rank up on the leaderboard.
-
farlight 84 game download for android
-farlight 84 apk download latest version
-farlight 84 free download on google play
-farlight 84 battle royale game play store
-farlight 84 mobile game download link
-farlight 84 shooter game download apk
-farlight 84 install from play store
-farlight 84 online game download free
-farlight 84 action game play store download
-farlight 84 new game download for android
-farlight 84 best battle royale game play store
-farlight 84 download apk and obb
-farlight 84 free game download on google play
-farlight 84 multiplayer game play store link
-farlight 84 survival game download for android
-farlight 84 jetpack game download apk
-farlight 84 fun game play store install
-farlight 84 offline game download free
-farlight 84 futuristic game play store download
-farlight 84 amazing game download for android
-farlight 84 vehicle game download apk
-farlight 84 awesome game play store free
-farlight 84 adventure game download link
-farlight 84 fast game play store install
-farlight 84 cool game download for android
-farlight 84 hero game download apk
-farlight 84 exciting game play store free
-farlight 84 strategy game download link
-farlight 84 easy game play store install
-farlight 84 addictive game download for android
-farlight 84 weapon game download apk
-farlight 84 thrilling game play store free
-farlight 84 shooting game download link
-farlight 84 challenging game play store install
-farlight 84 unique game download for android
-farlight 84 transformable vehicle game download apk
-farlight 84 innovative jetpacks game play store free
-farlight 84 variety of heroes game download link
-farlight 84 multiple modes game play store install
-farlight 84 player-owned home game download for android
-farlight 84 air battles game download apk
-farlight 84 ranked games play store free
-farlight 84 hunt mode game download link
-farlight 84 four weapon manufacturers play store install
-farlight 84 funky capsulers game download for android
-farlight 84 spectacular vehicles game download apk
-farlight 84 beyond battle royale play store free
-farlight 84 mega-intense shooter game download link
-farlight 84 chaotic wasteland world play store install
-
Stunning graphics and immersive sound effects
-
Farlight 84 boasts of high-quality graphics and sound effects that will make you feel like you are in a real battlefield. You can admire the detailed environments, the realistic animations, the dynamic lighting and shadows, and the smooth performance. You can also hear the crisp gunshots, the explosive explosions, the roaring engines, and the voice chat with your teammates and enemies.
-
Tips and Tricks to Master Farlight 84
-
If you want to become a pro player in Farlight 84, you need to learn some tips and tricks that will help you survive and win. Here are some of them:
-
Choose your hero wisely and customize your loadout
-
Before you enter a match, you should choose a hero that suits your playstyle and strategy. You should also customize your loadout according to the mode, the map, and the enemy team. For example, if you are playing Hunt mode, you might want to equip a long-range weapon, a fast vehicle, and a stealth gadget. If you are playing Team Deathmatch mode, you might want to equip a short-range weapon, a tanky vehicle, and a healing gadget.
-
Explore the map and collect resources and weapons
-
Once you are in a match, you should explore the map and collect resources and weapons that will help you survive and fight. You can find ammo, health kits, shields, coins, gems, crates, and more scattered around the map. You can also find different types of weapons, such as pistols, rifles, shotguns, snipers, rocket launchers, and more. You can switch between your primary and secondary weapons by tapping on the screen.
-
Use vehicles and gadgets to gain an edge over your enemies
-
One of the most fun and unique features of Farlight 84 is the use of vehicles and gadgets. You can drive various vehicles, such as cars, bikes, helicopters, mechs, and more. You can also use different gadgets, such as drones, turrets, traps, shields, and more. You can use these vehicles and gadgets to move faster, deal more damage, escape danger, or support your teammates.
-
Conclusion and FAQs
-
Farlight 84 is a groundbreaking battle royale game that you should not miss. It has everything you need for an amazing shooting experience: diverse heroes, fast-paced modes, stunning graphics, immersive sound effects, vehicles, gadgets, and more. You can download and play Farlight 84 for free on your PC or mobile device today. You will not regret it!
-
If you have any questions about Farlight 84, you might find the answers below:
-
What are the system requirements for Farlight 84?
-
The system requirements for Farlight 84 are as follows:
-
| Platform | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| PC | OS: Windows 7/8/10; CPU: Intel Core i5-4460 or equivalent; RAM: 8 GB; GPU: NVIDIA GeForce GTX 960 or equivalent; Disk Space: 5 GB | OS: Windows 10; CPU: Intel Core i7-7700K or equivalent; RAM: 16 GB; GPU: NVIDIA GeForce GTX 1060 or equivalent; Disk Space: 10 GB |
| Mobile | OS: Android 5.0 or higher; CPU: Snapdragon 625 or equivalent; RAM: 2 GB; Disk Space: 1.5 GB | OS: Android 8.0 or higher; CPU: Snapdragon 845 or equivalent; RAM: 4 GB; Disk Space: 2 GB |
How can I play with my friends in Farlight 84?
-
You can play with your friends in Farlight 84 by inviting them to join your squad or clan. To invite your friends to join your squad, you need to tap on the squad icon on the main screen and then tap on the invite button. You can invite up to three friends to join your squad. To invite your friends to join your clan, you need to tap on the clan icon on the main screen and then tap on the create or join button. You can invite up to 50 friends to join your clan.
-
How can I get more coins and gems in Farlight 84?
-
You can get more coins and gems in Farlight 84 by completing missions, participating in events, ranking up on the leaderboard, opening crates, watching ads, and buying them with real money. Coins and gems are the main currencies in Farlight 84 that you can use to buy and upgrade your heroes, weapons, vehicles, gadgets, outfits, and more.
-
How can I report bugs or give feedback on Farlight 84?
-
If you encounter any bugs or issues while playing Farlight 84, or if you have any suggestions or feedback on how to improve the game, you can report them to the developers through the game's official support and community channels. The developers are very responsive and appreciate your input.
-
Where can I find more information and updates on Farlight 84?
-
If you want to stay updated on the latest news and updates on Farlight 84, you can follow the official social media accounts of the game. You can also visit the official website and the blog of the game to find more information and guides on Farlight 84.
We hope you enjoyed this article and learned something new about Farlight 84. Now go ahead and download the game and have fun!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/FateGrand Order PC Version Download and Play the Game with Aniplex.md b/spaces/congsaPfin/Manga-OCR/logs/FateGrand Order PC Version Download and Play the Game with Aniplex.md
deleted file mode 100644
index a2c24059b68fd252cc893cbbc685c00c34244e10..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/FateGrand Order PC Version Download and Play the Game with Aniplex.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
How to Download and Play Fate/Grand Order on PC
-
Are you a fan of anime, role-playing games, and epic stories? If so, you might want to check out Fate/Grand Order, a popular mobile game based on the Fate franchise. In this game, you can summon and command powerful heroes from different eras and legends, and embark on a quest to save human history from extinction. You can also enjoy millions of words of original and engaging stories, featuring more than 60 Japanese voice actors.
But what if you want to play Fate/Grand Order on a bigger screen, with better graphics and performance, and more convenient controls? Well, you can do that by playing it on your PC or Mac, using an emulator. In this article, we will show you how to download and play Fate/Grand Order on PC, using two different methods. We will also tell you about the benefits and features of playing it on PC, as well as some tips and tricks to enhance your experience. So, let's get started!
-
Requirements and steps to download Fate/Grand Order on PC
-
To play Fate/Grand Order on PC, you will need a few things. First, you will need a computer that meets the minimum system requirements for the game. According to the official website, these are:
-
-
OS: Windows 7 or higher
-
CPU: Intel Core i3 or higher
-
RAM: 2 GB or higher
-
Storage: 4 GB or higher
-
Internet connection: Broadband or higher
-
-
Second, you will need an emulator, which is a software that allows you to run Android apps on your PC. There are many emulators available online, but we recommend BlueStacks, which is one of the most popular and reliable ones. BlueStacks is free to download and use, and it offers many features that make playing Fate/Grand Order on PC more enjoyable.
-
Third, you will need the Fate/Grand Order app itself, which you can download from the Google Play Store or from other sources. Depending on which version of the game you want to play, there are different ways to download it. We will explain them in the next sections.
-
How to use BlueStacks emulator to play Fate/Grand Order on PC
-
If you want to play the English version of Fate/Grand Order on PC, using BlueStacks is very easy. Here are the steps you need to follow:
-
-
Download and install BlueStacks from its official website. Follow the instructions on the screen to complete the installation process.
-
Launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
-
Go to the Google Play Store app on BlueStacks and search for Fate/Grand Order. Alternatively, you can use this link to go directly to the game's page.
-
Click on the Install button and wait for the game to download and install.
-
Once the installation is done, click on the Open button or find the game's icon on the BlueStacks home screen.
-
Enjoy playing Fate/Grand Order on PC!
-
-
How to use QooApp or APKPure to download Fate/Grand Order Japanese version on PC
-
If you want to play the Japanese version of Fate/Grand Order on PC, you will need to use a different method, since the game is not available on the Google Play Store outside Japan. You will need to use a third-party app store, such as QooApp or APKPure, to download the game's APK file and install it on BlueStacks. Here are the steps you need to follow:
-
How to play fate grand order on pc with emulator
-Fate grand order pc version download free
-Fate grand order pc game system requirements
-Fate grand order pc gameplay guide and tips
-Fate grand order pc bluestacks settings and configuration
-Fate grand order pc noxplayer installation and tutorial
-Fate grand order pc apk download and update
-Fate grand order pc aniplex official website and support
-Fate grand order pc english server and region lock
-Fate grand order pc best emulator for performance and graphics
-Fate grand order pc error codes and troubleshooting
-Fate grand order pc fate series crossover and collaboration
-Fate grand order pc online multiplayer and co-op mode
-Fate grand order pc story mode and character missions
-Fate grand order pc voice actors and soundtracks
-Fate grand order pc command card battle system and mechanics
-Fate grand order pc heroic spirits and servant classes
-Fate grand order pc summoning system and gacha rates
-Fate grand order pc craft essences and mystic codes
-Fate grand order pc master skills and noble phantasms
-Fate grand order pc ascension and skill enhancement materials
-Fate grand order pc bond points and interlude quests
-Fate grand order pc events and limited time offers
-Fate grand order pc freebies and resources for beginners
-Fate grand order pc wiki and fandom community
-Fate grand order pc reddit and discord channels
-Fate grand order pc youtube videos and livestreams
-Fate grand order pc reviews and ratings from players
-Fate grand order pc cheats and hacks to avoid
-Fate grand order pc memes and fan art to enjoy
-
-
Download and install BlueStacks from its official website. Follow the instructions on the screen to complete the installation process.
-
Launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
-
Go to the BlueStacks browser app and visit the QooApp or APKPure website. Search for Fate/Grand Order Japanese version or use these links to go directly to the game's page.
-
Click on the Download button and wait for the game's APK file to download.
-
Once the download is done, go to the BlueStacks file manager app and find the downloaded APK file. Double-click on it to install it.
-
Once the installation is done, find the game's icon on the BlueStacks home screen.
-
Enjoy playing Fate/Grand Order Japanese version on PC!
-
-
Benefits and features of playing Fate/Grand Order on PC
-
Now that you know how to download and play Fate/Grand Order on PC, you might be wondering why you should do that instead of playing it on your mobile device. Well, there are many benefits and features that make playing Fate/Grand Order on PC more enjoyable and convenient. Here are some of them:
-
Better graphics and performance
-
One of the main advantages of playing Fate/Grand Order on PC is that you can enjoy better graphics and performance than on your mobile device. You can adjust the resolution, frame rate, and graphics quality to suit your preferences and your PC's capabilities. You can also avoid lag, crashes, and battery drain that might affect your gameplay on your mobile device. Plus, you can enjoy the game's stunning visuals, animations, and effects on a bigger screen, which will enhance your immersion and enjoyment.
-
Multi-instance and multi-language support
-
Another benefit of playing Fate/Grand Order on PC is that you can use the multi-instance feature of BlueStacks to play multiple versions of the game at the same time. For example, you can play the English version and the Japanese version simultaneously, without having to switch accounts or devices. You can also play with different accounts or servers, or try different strategies and scenarios. This way, you can experience more content and features of the game, and have more fun.
-
Moreover, you can use the multi-language support of BlueStacks to play Fate/Grand Order in different languages, such as English, Japanese, Chinese, Korean, French, Spanish, German, Portuguese, Russian, Thai, Indonesian, Vietnamese, Turkish, Arabic, Italian, Polish, Dutch, Swedish, Norwegian, Danish, Finnish, and more. You can also change the language of the game's voice and text separately, to suit your preferences. This way, you can enjoy the game in your native language, or learn a new language while playing.
-
Customizable controls and macros
-
A third benefit of playing Fate/Grand Order on PC is that you can customize the controls and macros to your liking. You can use the keyboard and mouse, or a gamepad, to play the game more comfortably and efficiently. You can also use the BlueStacks keymapping tool to assign different keys or buttons to different actions, such as summoning, attacking, using skills, and more. You can also create and use macros, which are sequences of commands that can be executed automatically with a single keystroke. For example, you can create a macro to perform a specific combo, or to repeat a certain task, such as farming or leveling up. This way, you can save time and effort, and optimize your gameplay.
-
Tips and tricks to enjoy Fate/Grand Order on PC
-
Finally, we will share with you some tips and tricks to help you enjoy Fate/Grand Order on PC even more. Here are some of them:
-
How to optimize your settings and gameplay
-
To ensure that you have the best possible experience playing Fate/Grand Order on PC, you should optimize your settings and gameplay accordingly. Here are some suggestions:
-
-
Adjust the resolution, frame rate, and graphics quality to match your PC's capabilities and your preferences. You can do this from the BlueStacks settings menu or from the game's settings menu.
-
Enable the high FPS mode and the eco mode from the BlueStacks settings menu to improve the performance and reduce the CPU usage of the game.
-
Disable the notifications and sounds from the BlueStacks settings menu to avoid distractions and interruptions while playing.
-
Use the sync feature of BlueStacks to sync your game data across different devices and platforms. You can do this from the BlueStacks cloud menu or from the game's settings menu.
-
Use the backup and restore feature of BlueStacks to backup your game data and restore it in case of any issues or errors. You can do this from the BlueStacks cloud menu or from the game's settings menu.
-
Use the screenshot and video recording feature of BlueStacks to capture your gameplay moments and share them with your friends or on social media. You can do this from the BlueStacks toolbar or by using the predefined keys.
-
-
How to access the official website and social media accounts
-
To stay updated with the latest news, events, and updates of Fate/Grand Order, you should visit the official website and follow the official social media accounts. You can also find useful information, guides, tips, fan art, memes, and more on these platforms. The main official channels are the official website, Facebook, Twitter, Instagram, YouTube, Reddit, and Discord.
-
-
How to get free resources and bonuses
-
To make your gameplay more enjoyable and rewarding, you should take advantage of the free resources and bonuses that Fate/Grand Order offers. Here are some ways to get them:
-
-
Log in daily to get free Saint Quartz, Friend Points, Summon Tickets, Mana Prisms, Golden Fruits, QP, EXP Cards, Fou Cards, Materials, Gems, and more.
-
Complete the daily quests, weekly missions, main quests, free quests, interludes, rank up quests, events, challenges, and achievements to get more rewards.
-
Participate in the gacha banners, friend point summons, rare prism shop, mana prism shop, and other shops to get more servants, craft essences, and items.
-
Use the exchange tickets and codes to get more options and customization for your servants and craft essences.
-
Join the community and social media platforms to get more tips, guides, help, and fun from other players and fans.
-
-
Conclusion
-
Fate/Grand Order is a fantastic game that combines anime, role-playing, and epic stories. It is a game that you can enjoy for hours and hours, without getting bored or tired. However, if you want to have an even better experience, you should try playing it on PC, using an emulator like BlueStacks. By doing so, you can enjoy better graphics and performance, multi-instance and multi-language support, customizable controls and macros, and more. You can also access the official website and social media accounts, and get free resources and bonuses, to stay updated and rewarded. So, what are you waiting for? Download Fate/Grand Order on PC today and join the millions of players who are saving human history!
-
FAQs
-
Here are some frequently asked questions about Fate/Grand Order on PC:
-
Q: Is Fate/Grand Order free to play?
-
A: Yes, Fate/Grand Order is free to play. You can download and play the game without spending any money. However, the game also offers in-app purchases that can enhance your gameplay, such as Saint Quartz, which can be used to summon more servants and craft essences.
-
Q: Is Fate/Grand Order compatible with Windows 10?
-
A: Yes, Fate/Grand Order is compatible with Windows 10. You can play the game on your Windows 10 PC or laptop, using an emulator like BlueStacks. You can also play the game on other versions of Windows, such as Windows 7 or 8.
-
Q: How can I transfer my Fate/Grand Order account from my mobile device to my PC?
-
A: To transfer your Fate/Grand Order account from your mobile device to your PC, you need to use the transfer code feature of the game. You can find this feature in the game's settings menu. You need to generate a transfer code and a password on your mobile device, and then enter them on your PC. You can only use the transfer code once, so make sure you don't lose it or share it with anyone else.
-
Q: How can I update Fate/Grand Order on PC?
-
A: To update Fate/Grand Order on PC, you need to use the same method that you used to download the game. If you downloaded the game from the Google Play Store, you can update it from there. If you downloaded the game from QooApp or APKPure, you need to download the latest version of the game's APK file and install it on BlueStacks.
-
Q: How can I contact the customer support of Fate/Grand Order?
-
A: To contact the customer support of Fate/Grand Order, you need to use the inquiry form feature of the game. You can find this feature in the game's settings menu. You need to fill out the form with your details and your issue or question, and then submit it. You will receive a reply from the customer support team within a few days.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Kick the Buddy 2 Mod Apk Experience the Thrill of Destruction with All Weapons.md b/spaces/congsaPfin/Manga-OCR/logs/Kick the Buddy 2 Mod Apk Experience the Thrill of Destruction with All Weapons.md
deleted file mode 100644
index 6d6e39791c75de3af47d39bef37f9849b1f7fbd5..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Kick the Buddy 2 Mod Apk Experience the Thrill of Destruction with All Weapons.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
Kick the Buddy 2 Mod Apk Unlocked All Weapons 2022: A Fun and Stress-Relieving Game
-
Introduction
-
Do you ever feel stressed, angry, or bored? Do you want to unleash your creativity and have some fun? If yes, then you should try Kick the Buddy 2, a hilarious and addictive game that lets you torture a ragdoll buddy in various ways. You can use different weapons, tools, objects, and even animals to make him suffer and laugh at his reactions. But wait, there's more! You can also enjoy the game with unlimited resources and features by downloading the mod apk version of Kick the Buddy 2. In this article, we will tell you everything you need to know about this amazing game and how to get the mod apk version for free.
-
kick the buddy 2 mod apk unlocked all weapons 2022
Kick the Buddy 2 is a sequel to the popular game Kick the Buddy, which was released in 2018 by Playgendary. It is a casual and simulation game that allows you to interact with a ragdoll buddy in various ways. You can hit him, shoot him, explode him, freeze him, burn him, and do anything you can imagine. You can also customize his appearance, clothes, and accessories to make him look more funny or scary. The game has no rules or limits, so you can play as long as you want and have fun.
-
Why do you need the mod apk version?
-
While Kick the Buddy 2 is free to play, it also has some in-app purchases and ads that may limit your enjoyment. For example, you may need to spend real money to buy more coins and gems, which are used to unlock new weapons, items, and features. You may also have to watch ads to get some rewards or access some modes. However, if you don't want to spend any money or waste any time on ads, you can download the mod apk version of Kick the Buddy 2. The mod apk version is a modified version of the original game that gives you unlimited coins and gems, unlocked all weapons and items, removed all ads, and added some extra features. With the mod apk version, you can enjoy the game without any restrictions or interruptions.
-
Features of Kick the Buddy 2 Mod Apk
-
Variety of destroying weapons
-
One of the best features of Kick the Buddy 2 is that it offers a huge variety of weapons and items that you can use to destroy your buddy. You can choose from guns, knives, bombs, rockets, grenades, lasers, swords, hammers, axes, chainsaws, scissors, needles, and many more. You can also use some unconventional weapons like dinosaurs, sharks, piranhas, bees, spiders, snakes, cacti, fireworks, balloons, magnets, and even a black hole. The mod apk version unlocks all these weapons and items for free so that you can experiment with them and see how they affect your buddy.
-
Impressive graphics and sounds
-
Kick the Buddy 2 has impressive graphics and sounds that make the game more realistic and enjoyable. The game has colorful and detailed graphics that show every detail of your buddy's body and expressions. You can also see the effects of your actions on him like blood splashes, bruises, cuts, burns, etc. The game also has funny and lively sounds that match your buddy's reactions and emotions. You can hear him scream, cry, laugh, beg
or insult you depending on what you do to him. The game also has some background music and sound effects that add to the atmosphere of the game.
-
kick the buddy 2 mod apk unlimited money and gold 2022
-kick the buddy 2 mod apk download latest version 2022
-kick the buddy 2 mod apk free shopping and no ads 2022
-kick the buddy 2 mod apk all characters and outfits unlocked 2022
-kick the buddy 2 mod apk android 1 and rexdl 2022
-kick the buddy 2 mod apk hack cheats and tips 2022
-kick the buddy 2 mod apk offline and online mode 2022
-kick the buddy 2 mod apk fun with physics and destruction 2022
-kick the buddy 2 mod apk best weapons and tools 2022
-kick the buddy 2 mod apk how to install and play 2022
-kick the buddy 2 mod apk review and rating 2022
-kick the buddy 2 mod apk new features and updates 2022
-kick the buddy 2 mod apk for pc and laptop 2022
-kick the buddy 2 mod apk for ios and iphone 2022
-kick the buddy 2 mod apk for windows and mac 2022
-kick the buddy 2 mod apk original vs modified version comparison 2022
-kick the buddy 2 mod apk safe and secure download link 2022
-kick the buddy 2 mod apk full game and premium access 2022
-kick the buddy 2 mod apk unlimited rockets and grenades 2022
-kick the buddy 2 mod apk unlimited diamonds and coins 2022
-kick the buddy 2 mod apk unlimited health and energy 2022
-kick the buddy 2 mod apk unlimited bombs and fireworks 2022
-kick the buddy 2 mod apk unlimited knives and swords 2022
-kick the buddy 2 mod apk unlimited guns and rifles 2022
-kick the buddy 2 mod apk unlimited lasers and plasma weapons 2022
-kick the buddy 2 mod apk unlimited animals and pets weapons 2022
-kick the buddy 2 mod apk unlimited food and drinks weapons 2022
-kick the buddy 2 mod apk unlimited sports and games weapons 2022
-kick the buddy 2 mod apk unlimited machines and vehicles weapons 2022
-kick the buddy 2 mod apk unlimited musical instruments weapons 2022
-kick the buddy 2 mod apk unlimited plants and flowers weapons
-
Fun loving challenges
-
Kick the Buddy 2 is not just a mindless game where you torture your buddy for no reason. It also has some fun loving challenges that test your skills and creativity. You can complete various tasks and missions that require you to use specific weapons, items, or methods to destroy your buddy. For example, you may have to make him fly, freeze him, electrocute him, or make him explode. You can also earn rewards and achievements for completing these challenges. The mod apk version gives you access to all the challenges and modes in the game so that you can have more fun and variety.
-
Learn physics laws
-
Kick the Buddy 2 is not only a fun and stress-relieving game, but also an educational one. It can help you learn some basic physics laws and principles by playing with your buddy. You can see how different forces, motions, energies, and reactions work in the game. You can also experiment with different objects and materials and see how they interact with each other and with your buddy. For example, you can see how gravity, friction, elasticity, buoyancy, magnetism, and other phenomena affect your buddy's movements and behaviors. The game can help you develop your logical thinking and problem-solving skills as well.
-
Suitable for all age groups
-
Kick the Buddy 2 is a game that can be enjoyed by anyone regardless of their age or gender. It is suitable for kids, teens, adults, and even seniors who want to have some fun and relax. The game has a simple and intuitive interface that anyone can use easily. The game also has a parental control option that allows parents to restrict some features or content that may be inappropriate for younger players. The game is also compatible with most devices and platforms, so you can play it anytime and anywhere.
-
How to download and install Kick the Buddy 2 Mod Apk
-
If you want to download and install Kick the Buddy 2 Mod Apk on your device, you need to follow these simple steps:
-
Step 1: Enable unknown sources
-
Before you can install any mod apk file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings > security > unknown sources and toggle it on.
-
Step 2: Download the mod apk file
-
Next, you need to download the mod apk file of Kick the Buddy 2 from a reliable source. You can use this link to download the latest version of the mod apk file for free. The file size is about 100 MB, so make sure you have enough space on your device.
-
Step 3: Install the mod apk file
-
After downloading the mod apk file, locate it in your device's file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for a few seconds until the installation is complete.
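If you prefer to run the install from a computer instead of tapping the file on your phone, the same step can be scripted. The sketch below is only an illustration, not part of the official instructions: it assumes the Android platform-tools (adb) are installed on the computer, USB debugging is enabled on the phone, and the file name is a made-up placeholder.

```python
import subprocess
from pathlib import Path

# Placeholder file name -- replace it with the APK you actually downloaded.
apk = Path.home() / "Downloads" / "kick_the_buddy_2_mod.apk"

# Requires a phone connected over USB with USB debugging enabled and the
# adb tool from the Android platform-tools available on the PATH.
# The -r flag reinstalls the app if an older version is already present.
subprocess.run(["adb", "install", "-r", str(apk)], check=True)
```

Either route ends with the same result, so use whichever is more convenient for you.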
-
Step 4: Enjoy the game
-
Once the installation is done, you can launch the game from your app drawer or home screen and enjoy it with unlimited resources and features.
-
Conclusion
-
Kick the Buddy 2 is a fun and stress-relieving game that lets you interact with a ragdoll buddy in various ways. You can use different weapons, items, objects, and animals to make him suffer and laugh at his reactions. You can also enjoy the game with unlimited resources and features by downloading the mod apk version of Kick the Buddy 2. The mod apk version gives you unlimited coins and gems, unlocked all weapons and items, removed all ads, and added some extra features. With the mod apk version, you can enjoy the game without any restrictions or interruptions.
-
If you are looking for a game that can help you relax, have fun, and learn something new, then Kick the Buddy 2 is the perfect choice for you. Download it now and start kicking your buddy!
-
Frequently Asked Questions
-
Here are some of the most common questions that people ask about Kick the Buddy 2 Mod Apk:
-
-
Is Kick the Buddy 2 Mod Apk safe to use?
-
Yes, Kick the Buddy 2 Mod Apk is safe to use as long as you download it from a trusted source like this one. The mod apk file has been tested for viruses and malware and does not contain any harmful or malicious code. However, you should always be careful when installing any mod apk file on your device and make sure you have a backup of your data in case something goes wrong.
-
Does Kick the Buddy 2 Mod Apk require root access?
-
No, Kick the Buddy 2 Mod Apk does not require root access to work on your device. You can install and play it without rooting your device or modifying any system settings. However, if you have a rooted device, you can also use the mod apk file without any problems.
-
Can I play Kick the Buddy 2 Mod Apk online with other players?
-
No, Kick the Buddy 2 Mod Apk is an offline game that does not support online multiplayer mode. You can only play it solo on your device and enjoy it at your own pace. However, you can still share your screenshots and videos of your gameplay with your friends and family on social media platforms.
-
Can I update Kick the Buddy 2 Mod Apk to the latest version?
-
Yes, you can update Kick the Buddy 2 Mod Apk to the latest version whenever it is available. However, you may need to download and install the new mod apk file from the same source as before. You may also need to uninstall the previous mod apk file before installing the new one. You can also check this website regularly for the latest updates and news about Kick the Buddy 2 Mod Apk.
-
What if I have any issues or questions about Kick the Buddy 2 Mod Apk?
-
If you have any issues or questions about Kick the Buddy 2 Mod Apk, you can contact us through this website or leave a comment below. We will try our best to help you and answer your queries as soon as possible. You can also check the FAQ section of this website for more information and tips about Kick the Buddy 2 Mod Apk.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/My Talking Angela APK 4.4.2.570 - The Latest Version of the Popular Virtual Pet Game.md b/spaces/congsaPfin/Manga-OCR/logs/My Talking Angela APK 4.4.2.570 - The Latest Version of the Popular Virtual Pet Game.md
deleted file mode 100644
index c3d1a8bd75cda497b832a95aba28d515b92e6b82..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/My Talking Angela APK 4.4.2.570 - The Latest Version of the Popular Virtual Pet Game.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
My Talking Angela 4.4 4 APK: A Fun and Interactive Simulation Game for Cat Lovers
-
If you love cats and want to have a virtual pet that you can take care of, dress up, and play with, then you should try My Talking Angela 4.4 4 APK. This is a simulation game that lets you interact and have fun with a very cute and funny cat named Angela. You can feed her, bathe her, brush her teeth, make her sleep, and watch her grow from a kitten to an adult. You can also customize her appearance, wardrobe, and home with hundreds of different items and accessories. You can even talk to her and she will repeat what you say in a hilarious voice.
My Talking Angela is a simulation game developed by Outfit7 Limited, the same company that created the popular My Talking Tom series. It was first released in 2014 and has since been downloaded over 500 million times on Google Play Store. It is rated 4.5 out of 5 stars by more than 12 million users.
-
Features of My Talking Angela
-
My Talking Angela has many features that make it an entertaining and engaging game for cat lovers of all ages. Some of the features are:
-
-
You can adopt Angela as your own virtual pet and take care of her daily needs.
-
You can play mini-games with Angela and earn coins that you can use to buy items and upgrades.
-
You can collect stickers and swap them with other players around the world.
-
You can explore different locations and discover new things with Angela.
-
You can record videos of your interactions with Angela and share them with your friends on social media.
-
-
How to download and install My Talking Angela 4.4 4 APK
-
If you want to download and install My Talking Angela 4.4 4 APK on your Android device, you can follow these simple steps:
-
-
Go to this link and click on the "Download APK" button.
-
Wait for the download to finish and then open the file.
-
Allow the installation of apps from unknown sources if prompted.
-
Follow the instructions on the screen and enjoy the game.
-
-
Why you should play My Talking Angela 4.4 4 APK
-
My Talking Angela 4.4 4 APK is not just a game, but also a friend that you can bond with and have fun with. There are many reasons why you should play this game, such as:
-
Benefits of playing My Talking Angela
-
Playing My Talking Angela can have positive effects on your mood, creativity, and cognitive skills. Some of the benefits are:
-
my talking angela mod apk 4.4 4 unlimited money
-my talking angela 4.4 4 apk download for android
-my talking angela hack apk 4.4 4 free download
-my talking angela old version 4.4 4 apk
-my talking angela apk 4.4 4 rexdl
-my talking angela latest version 4.4 4 apk
-my talking angela apk pure 4.4 4
-my talking angela mod apk revdl 4.4 4
-my talking angela apk mirror 4.4 4
-my talking angela mod apk happymod 4.4 4
-my talking angela apk uptodown 4.4 4
-my talking angela mod apk android 1 4.4 4
-my talking angela apk mod menu 4.4 4
-my talking angela apk obb 4.4 4
-my talking angela mod apk unlimited diamonds and coins 2023 version 4.4 4
-my talking angela full unlocked apk 4.4 4
-my talking angela offline apk 4.4 4
-my talking angela pro apk 4.4 4
-my talking angela vip mod apk download latest version (2023) v.5.2.1.1515 (mod, unlimited money) for android free download (updated) - apkgodl.com/my-talking-angela-mod-apk/
-my talking angela mega mod apk download for pc windows (10/8/7/xp) - appsforwindowspc.com/my-talking-angela-mega-mod-apk-download-for-pc-windows/
-my talking angela modded apk download ios - iosninja.io/ipa-library/download-my-talking-angela-hack-ipa-ios
-my talking angela cracked apk download for mac - macupdate.com/app/mac/61376/my-talking-angela
-my talking angela premium apk download for fire tablet - apkpure.com/my-talking-angela/com.outfit7.mytalkingangelafree/download?from=details%2Fversion&fid=b%2Fapk%2FY29tLm91dGZpdDcubXl0YWxraW5nYW5nZWxhZnJlZV8xMDI0MDUwMjdfYzEwMjYyNzQ%3D&version_code=102405027&hl=en_US&source=fire-tablets
-my talking angela unlimited everything apk download for chromebook - chrome.google.com/webstore/detail/my-talking-angela/bhjgjgkldfjnbegdjnhcnclgjcfdfihm?hl=en-US
-my talking angela hacked version apk download for laptop - bluestacks.com/apps/casual/my-talking-angela-on-pc.html
-my talking angela cheat codes apk download for kindle - kindlefireworld.net/apk/my-talking-angela-cheat-codes/
-my talking angela generator apk download for smart tv - apkpure.com/my-talking-angela/com.outfit7.mytalkingangelafree/download?from=details%2Fversion&fid=b%2Fapk%2FY29tLm91dGZpdDcubXl0YWxraW5nYW5nZWxhZnJlZV8xMDI0MDUwMjdfYzEwMjYyNzQ%3D&version_code=102405027&hl=en_US&source=smart-tv
-my talking angela glitch apk download for nintendo switch - switcher.gg/games/nintendo-switch/my-talking-angela/
-my talking angela online hack tool apk download for ps5 - ps5emulator.org/games/my-talking-angela/
-my talking angela diamond generator no human verification apk download for xbox one - xboxoneemulator.org/games/my-talking-angela/
-my talking angela unlimited coins and diamonds no survey no password no download no root no jailbreak apk - apkmody.io/games/my-talking-angela-mod-apk.html
-how to get free diamonds in my talking angela without downloading apps or doing surveys or verification or watching ads or buying them or using lucky patcher or game guardian or root explorer or es file explorer or cheat engine or sb game hacker or xmodgames or freedom app or cree
-
-
You can express your personality and style by dressing up Angela and decorating her home.
-
You can improve your communication skills by talking to Angela and listening to her responses.
-
You can enhance your memory and concentration by playing mini-games that challenge your brain.
-
You can relax and unwind by watching Angela's funny antics and listening to her soothing music.
-
You can learn new things by exploring different places and collecting stickers with facts about them.
-
-
Tips and tricks for playing My Talking Angela
-
If you want to make the most out of your experience with My Talking Angela, you can use some tips and tricks that will help you progress faster and have more fun. Some of the tips are:
-
-
You can get free coins by watching ads, completing tasks, or spinning the wheel.
-
You can get free diamonds by leveling up, logging in daily, or completing achievements. You can also earn additional coins, diamonds, and items by watching ads or completing tasks.
-
How can I talk to Angela?
-
You can talk to Angela by tapping on the microphone icon on the bottom right corner of the screen. You can then speak to her and she will repeat what you say in a funny voice. You can also type your message and she will read it out loud.
-
What are the mini-games in My Talking Angela?
-
There are many mini-games that you can play with Angela, such as:
-
-
Happy Connect: Connect matching icons and clear the board.
-
Bubble Shooter: Pop bubbles and collect stars.
-
Brick Breaker: Break bricks and avoid obstacles.
-
Tiny Puzzles: Solve puzzles and reveal pictures.
-
Angela's Dream World: Explore Angela's dreams and collect stars.
-
-
How can I collect stickers in My Talking Angela?
-
You can collect stickers by opening chests that appear randomly on the screen. You can also buy chests with coins or diamonds. You can swap stickers with other players by tapping on the album icon on the bottom left corner of the screen. You can also view your sticker collection and learn facts about different places and animals.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Angry Birds Classic APK 2016.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Angry Birds Classic APK 2016.md
deleted file mode 100644
index 49ceab48ca030245a622ea503a6df37a74095595..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Angry Birds Classic APK 2016.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
Angry Birds Classic APK 2016: A Fun and Addictive Game for Everyone
-
If you are looking for a game that can keep you entertained for hours, you might want to check out Angry Birds Classic APK 2016. This is a casual puzzle game that has been downloaded over 100 million times on Google Play Store and has received many positive reviews from players and critics alike. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, why you should play it, and some tips and tricks to help you master it.
Angry Birds Classic is a game that was developed by Rovio Entertainment in 2009 and has become one of the most successful games in the mobile market. The game series focuses on a flock of colorful angry birds who try to save their eggs from green-colored pigs who have stolen them. The game involves using a slingshot to launch the birds at the pigs' structures, with the aim of destroying them and eliminating all the pigs on the screen. The game features challenging physics-based gameplay and hours of replay value. Each level requires logic, skill, and force to solve.
-
As you progress through the game, you will encounter different types of birds with unique powers and abilities that can help you overcome various obstacles and enemies. For example, the yellow bird can speed up in mid-air, the black bird can explode like a bomb, and the white bird can drop an egg bomb. You will also face different types of pigs with different behaviors and defenses. Some pigs wear helmets or armor, some pigs hide inside structures or behind objects, and some pigs are bigger and tougher than others. You will need to use strategy and creativity to find the best way to defeat them all.
-
How to Download and Install Angry Birds Classic APK 2016?
-
If you want to play Angry Birds Classic APK 2016 on your Android device, you will need to follow these simple steps:
-
-
Go to this link to download the APK file of Angry Birds Classic.
-
Once the download is complete, tap on the file to open it.
-
You may need to enable unknown sources in your device settings to install apps from outside the Google Play Store.
-
Follow the instructions on the screen to install the app.
-
Launch the app and enjoy playing Angry Birds Classic APK 2016!
-
-
Note: You will need Android version 4.1 or higher to run this app. You may also need internet connectivity for some features of the game.
Why Play Angry Birds Classic APK 2016?
-
There are many reasons why you should play Angry Birds Classic APK 2016, especially if you are a fan of the original game or the Angry Birds franchise in general. Here are some of them:
-
-
It has more levels than any other Angry Birds game. Angry Birds Classic APK 2016 has over 680 levels across 15 episodes, each with a different theme and challenge. You can play the classic episodes such as Poached Eggs, Mighty Hoax, and Danger Above, or the newer ones such as Jurassic Pork, Birdday Party, and Surf and Turf. You can also play special episodes based on holidays, seasons, and movies, such as Hogs and Kisses, Wreck the Halls, and Rio. There is always something new and exciting to discover in this game.
-
It has more power-ups than any other Angry Birds game. Angry Birds Classic APK 2016 has six different power-ups that you can use to boost your birds' destructive strength and make the game more fun and easy. You can use the Wingman to unleash a giant bird that can smash through anything, the Power Potion to turn any bird into a giant bird, the Boombox to blast away the pigs with a loud speaker, the Birdquake to shake the ground and make the pigs fall, the King Sling to fling your birds with more speed and power, and the Sling Scope to aim your birds with precision. You can buy these power-ups with coins or watch ads to get them for free.
-
It has better graphics than the original game. Angry Birds Classic APK 2016 has improved graphics that make the game look more colorful and vibrant. The birds, pigs, structures, backgrounds, and effects are all more detailed and realistic. The game also runs smoothly on most devices without lagging or crashing.
-
It has regular updates that keep the game fresh and interesting. Angry Birds Classic APK 2016 is constantly updated by Rovio Entertainment to add new levels, features, fixes, and improvements. The game also celebrates various events and occasions by adding special content and offers. For example, in 2019, Rovio Entertainment re-released the original Angry Birds games as Rovio Classics, which included Angry Birds Classic APK 2016 with some changes and enhancements.
-
-
These are just some of the reasons why you should play Angry Birds Classic APK 2016. If you want to experience the fun and addictive gameplay of one of the most popular games in history, download it now and start slinging those birds!
-
angry birds classic apk 2016 download free
-angry birds classic apk 2016 mod unlimited
-angry birds classic apk 2016 latest version
-angry birds classic apk 2016 offline play
-angry birds classic apk 2016 for android
-angry birds classic apk 2016 update features
-angry birds classic apk 2016 no ads
-angry birds classic apk 2016 full unlocked
-angry birds classic apk 2016 original episodes
-angry birds classic apk 2016 mighty league
-angry birds classic apk 2016 powerups boosters
-angry birds classic apk 2016 slingshot gameplay
-angry birds classic apk 2016 rovio entertainment
-angry birds classic apk 2016 review ratings
-angry birds classic apk 2016 installation guide
-angry birds classic apk 2016 tips tricks cheats
-angry birds classic apk 2016 best levels
-angry birds classic apk 2016 fun and satisfying
-angry birds classic apk 2016 compatible devices
-angry birds classic apk 2016 system requirements
-angry birds classic apk 2016 bugs fixes
-angry birds classic apk 2016 support contact
-angry birds classic apk 2016 alternatives similar
-angry birds classic apk 2016 gameplay videos
-angry birds classic apk 2016 screenshots images
-angry birds classic apk 2016 fan community forum
-angry birds classic apk 2016 news updates events
-angry birds classic apk 2016 merchandise store
-angry birds classic apk 2016 history development
-angry birds classic apk 2016 awards achievements
Tips and Tricks to Master Angry Birds Classic APK 2016
-
Playing Angry Birds Classic APK 2016 can be easy or hard, depending on how you approach each level and how you use your birds and power-ups. If you want to master the game and get three stars on every level, you might want to follow these tips and tricks:
-
-
Know your birds and their abilities. Each bird has a different color, shape, size, and ability that can affect how they fly, bounce, and damage the pigs and structures. For example, the red bird is the basic bird that does not have any special ability, the blue bird can split into three smaller birds, and the green bird can boomerang back to hit targets from behind. You should learn how to use each bird effectively and choose the best one for each situation.
-
Aim carefully and adjust your angle and power. The slingshot is the main tool that you use to launch your birds at the pigs. You can drag your finger on the screen to adjust the angle and power of your shot. You should aim carefully and try to hit the weak points of the structures or the pigs directly. You can also use the Sling Scope power-up to see the trajectory of your shot and make more accurate shots.
-
Use the environment to your advantage. The levels in Angry Birds Classic APK 2016 are not just flat surfaces with pigs and structures on them. They also have various elements and objects that can help or hinder your progress. For example, some levels have TNT crates, rocks, ice blocks, balloons, fans, magnets, and other items that can explode, fall, break, float, blow, or attract when hit by your birds or debris. You should use these elements to create chain reactions and cause more damage to the pigs and structures.
-
Collect golden eggs and unlock bonus levels. Golden eggs are hidden items that can be found in some levels or by completing certain tasks. For example, you can find a golden egg by tapping on the sun in the main menu, by getting three stars on all levels in an episode, or by hitting a hidden object in a level. When you collect a golden egg, you can unlock a bonus level that has a different gameplay or theme than the regular levels. These bonus levels can be fun and challenging to play.
-
-
Reviews and Ratings of Angry Birds Classic APK 2016
-
Angry Birds Classic APK 2016 has received many reviews and ratings from players and critics who have played the game. Most of them are positive and praise the game for its fun and addictive gameplay, its variety and content, its graphics and sound, and its updates and improvements. Here are some examples of what they have said:
-
| Source | Review | Rating |
| --- | --- | --- |
| Google Play Store | "This game is awesome! I love how there are so many levels and episodes to play. The graphics are great and the sound effects are hilarious. The power-ups are very helpful and make the game more fun. I also like how they update the game regularly with new content and fixes. This is one of my favorite games ever!" | 5 stars |
| App Store | "Angry Birds is a classic game that never gets old. I have been playing it since it came out in 2009 and I still enjoy it today. The game is simple but challenging, with lots of levels to beat and stars to collect. The game also has a lot of humor and personality, with cute characters and funny animations. The game is well-made and runs smoothly on my device." | 4 stars |
| IGN | "Angry Birds is one of those games that transcends its genre and appeals to everyone. It's a casual game that anyone can pick up and play, but it also has enough depth and strategy to keep hardcore gamers hooked. The game is well-designed and polished, with excellent physics-based gameplay and colorful graphics. The game also has a lot of content and replay value, with hundreds of levels to complete and secrets to discover." | 9/10 |
| Metacritic | "Angry Birds is a phenomenon that deserves all the praise it gets. It's a game that combines simplicity with complexity, fun with frustration, addiction with satisfaction. It's a game that tests your skills, patience, logic, creativity, and luck. It's a game that makes you laugh, cry, cheer, and scream. It's a game that you will never get tired of playing." | 91/100 |
As you can see, Angry Birds Classic APK 2016 has received mostly positive reviews and ratings from various sources. The game has an average rating of 4.4 out of 5 stars on Google Play Store and 4.6 out of 5 stars on App Store. The game also has a Metascore of 91 out of 100 on Metacritic and a score of 9 out of 10 on IGN. These scores indicate that the game is highly rated and recommended by many players and critics.
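To compare those ratings directly, it helps to put them on a single 0-100 scale. The snippet below is just a quick illustration using the figures quoted in this article; it does not fetch live store data.

```python
# Scores quoted above, each paired with the maximum of its own scale.
ratings = {
    "Google Play Store": (4.4, 5),
    "App Store": (4.6, 5),
    "IGN": (9, 10),
    "Metacritic": (91, 100),
}

for source, (score, out_of) in ratings.items():
    print(f"{source}: {score / out_of * 100:.0f}/100")
```

All four sources land between 88 and 92 on that scale, which backs up the claim that the game is rated around 9 out of 10 almost everywhere.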
-
Conclusion
-
In conclusion, Angry Birds Classic APK 2016 is a fun and addictive game that everyone can enjoy. The game has a simple but challenging gameplay, a variety of levels and episodes, a lot of power-ups and features, better graphics than the original game, and regular updates that keep the game fresh and interesting. The game also has many positive reviews and ratings from players and critics who have played the game. If you are looking for a game that can keep you entertained for hours, you should download Angry Birds Classic APK 2016 and start slinging those birds!
-
We hope that this article has given you some useful information and tips about Angry Birds Classic APK 2016. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy gaming!
-
FAQs
-
Here are some frequently asked questions about Angry Birds Classic APK 2016:
-
-
Is Angry Birds Classic APK 2016 free to play?
-
Yes, Angry Birds Classic APK 2016 is free to download and play on your Android device. However, the game may contain ads and in-app purchases that can enhance your gaming experience. You can disable these features in your device settings if you wish.
-
How can I get more coins in Angry Birds Classic APK 2016?
-
You can get more coins in Angry Birds Classic APK 2016 by completing levels, getting three stars, collecting golden eggs, watching ads, or buying them with real money. You can use coins to buy power-ups or unlock episodes.
-
How can I get rid of the ads in Angry Birds Classic APK 2016?
-
You can get rid of the ads in Angry Birds Classic APK 2016 by buying the ad-free version of the game for $0.99 or by turning off your internet connection while playing the game. However, some features of the game may require internet connectivity to work properly.
-
What is the difference between Angry Birds Classic APK 2016 and the original Angry Birds game?
-
Angry Birds Classic APK 2016 is an updated version of the original Angry Birds game that was released in 2009. The main difference is that Angry Birds Classic APK 2016 has more levels, power-ups, graphics, and updates than the original game. The gameplay and characters are mostly the same.
-
Is Angry Birds Classic APK 2016 safe to download and install?
-
Yes, Angry Birds Classic APK 2016 is safe to download and install on your Android device. The app has been scanned by various antivirus programs and has no viruses or malware. However, you should always download apps from trusted sources and check the permissions before installing them.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Top 10 Attack on Titan 3D Offline Games for Android That You Should Try.md b/spaces/congsaPfin/Manga-OCR/logs/Top 10 Attack on Titan 3D Offline Games for Android That You Should Try.md
deleted file mode 100644
index e9ee12b0b177772ca98ea0a707539151375bc7d3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Top 10 Attack on Titan 3D Offline Games for Android That You Should Try.md
+++ /dev/null
@@ -1,155 +0,0 @@
-
-
How to Download Game Attack on Titan 3D Android Offline
-
If you are a fan of anime and manga, you have probably heard of Attack on Titan, a popular series that depicts a world where humanity is under threat from giant humanoid creatures called Titans. The series has spawned several adaptations, including a 3D action game that lets you play as your favorite characters and fight against the Titans using omni-directional mobility gear.
-
The game is available for Android devices, and it offers an immersive and thrilling experience that will keep you hooked for hours. But what if you want to play the game offline, without an internet connection? Maybe you want to save your data plan, or you are in a place where Wi-Fi is not available. Or maybe you just want to enjoy the game without any interruptions or ads.
In this article, we will show you how to download game Attack on Titan 3D Android offline, so you can play it anytime and anywhere you want. We will also give you some tips and tricks for playing the game offline, so you can have more fun and challenge. Let's get started!
-
Requirements for Downloading and Playing the Game
-
Before you download game Attack on Titan 3D Android offline, you need to make sure that your device meets the minimum and recommended specifications for running the game smoothly. Here are the requirements according to the official website:
-
| Specification | Minimum | Recommended |
| --- | --- | --- |
| RAM | 2 GB | 4 GB or more |
| Storage | 500 MB | 1 GB or more |
| OS Version | Android 5.0 (Lollipop) | Android 8.0 (Oreo) or higher |
| Graphics | Mali-T720 MP2 or equivalent | Mali-G72 MP12 or higher |
| Processor | Quad-core 1.3 GHz or equivalent | Octa-core 2.0 GHz or higher |
| Battery | 3000 mAh or more | 4000 mAh or more |
-
If your device does not meet these requirements, you may experience lagging, crashing, or other issues while playing the game offline.
-
Sources for Downloading the Game
-
Official Source
-
The easiest and safest way to download game Attack on Titan 3D Android offline is to use the official source, which is the Google Play Store. You can find the game by searching for "Attack on Titan 3D" or by following [this link]. The game is free to download and play, but it may contain some in-app purchases or ads.
-
To download the game from the Google Play Store, you need to have a Google account and a stable internet connection. You also need to have enough storage space on your device. Here are the steps to download the game from the official source:
-
-
Open the Google Play Store app on your device, or visit [the website] on your browser.
-
Search for "Attack on Titan 3D" or use [this link] to go directly to the game page.
-
Tap on the "Install" button and wait for the download to complete.
-
Once the download is finished, tap on the "Open" button to launch the game.
-
-
Alternative Sources
-
If you cannot access the Google Play Store, or you want to download game Attack on Titan 3D Android offline from other sources, you can also use APK files or third-party websites. APK files are Android application packages that contain all the files and data needed to install and run an app. Third-party websites are online platforms that offer APK files or direct downloads of apps and games.
-
However, you should be careful when using alternative sources, as they may not be safe or reliable. Some APK files or websites may contain malware or viruses that can harm your device or steal your personal information. Some may also offer outdated or modified versions of the game that may not work properly or have bugs. Therefore, you should always check the reviews, ratings, and permissions of any APK file or website before downloading anything.
-
Here are some of the alternative sources that you can use to download game Attack on Titan 3D Android offline:
-
-
-
[APKPure]: This is a popular website that offers APK files of various apps and games, including Attack on Titan 3D. You can download the latest version of the game from [this link], or browse through older versions if you want. The website claims to verify and scan all the APK files for security and performance.
-
[APKMonk]: This is another website that provides APK files of different apps and games, including Attack on Titan 3D. You can download the latest version of the game from [this link], or choose from previous versions if you prefer. The website also claims to check and test all the APK files for safety and quality.
-
[Uptodown]: This is a website that offers direct downloads of apps and games for Android devices, including Attack on Titan 3D. You can download the latest version of the game from [this link], or select from older versions if you wish. The website also claims to scan and review all the apps and games for security and functionality.
-
-
Steps for Installing and Running the Game
-
Installation from Official Source
-
If you downloaded game Attack on Titan 3D Android offline from the Google Play Store, you do not need to do anything else to install it. The game will automatically install itself on your device after the download is complete. However, you may need to grant some permissions or choose some settings before playing the game. Here are some of the things you may need to do:
-
-
Allow access to photos, media, and files: This permission is needed for the game to save your progress and data on your device. To allow this permission, go to Settings > Apps > Attack on Titan 3D > Permissions > Storage > Allow.
-
Choose your language: The game supports multiple languages, such as English, Japanese, Chinese, Korean, etc. To choose your language, go to Settings > Language & input > Language > Select your preferred language.
-
Adjust your graphics: The game allows you to adjust your graphics quality according to your device's performance. To adjust your graphics, go to Settings > Graphics > Choose between Low, Medium, High, or Ultra.
-
-
Installation from Alternative Sources
-
If you downloaded game Attack on Titan 3D Android offline from an alternative source, such as an APK file or a third-party website, you need to follow some extra steps to install it on your device. Here are some of the steps you need to take:
-
-
Enable unknown sources: This setting is needed for your device to accept and install apps from sources other than the Google Play Store. To enable this setting, go to Settings > Security > Unknown sources > Turn on.
-
Verify the file: Before installing the APK file, you should verify that it is safe and authentic. You can use a tool like [VirusTotal] to scan the file for any malware or viruses. You can also check the file size, name, and signature to make sure it matches the original game. A short checksum sketch follows this list.
-
Install the file: Once you have verified the file, you can install it on your device by tapping on it and following the instructions. You may need to grant some permissions or choose some settings as mentioned above.
-
-
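If the download source publishes a checksum for its APK, comparing it against the file you actually received is a quick extra check before installing. The snippet below is a minimal Python sketch of that comparison step; the APK filename is purely illustrative and not a file referenced by this article.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the printed value with the checksum listed by the download source, if any.
print(sha256_of("attack-on-titan-3d.apk"))
```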
Running the Game Offline
-
After installing the game on your device, you can run it offline without an internet connection. However, you may need to do some things before playing the game offline. Here are some of the things you may need to do:
-
-
Turn off Wi-Fi or mobile data: To play the game offline, you need to make sure that your device is not connected to any network. To turn off Wi-Fi or mobile data, go to Settings > Wi-Fi or Settings > Data usage > Turn off.
-
Choose offline mode: When you launch the game, you will see a screen that asks you to choose between online mode or offline mode. To play the game offline, you need to select offline mode. This will allow you to play the game without any ads or interruptions.
-
Download additional data: The first time you play the game offline, you may need to download some additional data, such as maps, characters, or missions. This will require an internet connection, so make sure you do this before going offline. To download additional data, go to Settings > Download > Select the data you want to download.
-
-
Tips and Tricks for Playing the Game Offline
-
Playing game Attack on Titan 3D Android offline can be fun and challenging, but it can also be frustrating if you encounter some problems or difficulties. Here are some tips and tricks that can help you enjoy the game offline more:
-
-
Save your progress: The game does not automatically save your progress when you play offline, so you need to manually save it every time you finish a level or a mission. To save your progress, go to Settings > Save > Choose a slot > Confirm.
-
Unlock features: Some features of the game, such as new characters, weapons, or modes, are only available when you play online. However, you can unlock them when you play offline by completing certain tasks or achievements. To see what tasks or achievements you need to complete, go to Settings > Achievements > Select a feature > View details.
-
Avoid bugs: The game may have some bugs or glitches when you play offline, such as freezing, crashing, or missing textures. To avoid these issues, you should update the game regularly, clear the cache and data of the game, and restart your device before playing. To update the game, go to Google Play Store > My apps & games > Attack on Titan 3D > Update. To clear the cache and data of the game, go to Settings > Apps > Attack on Titan 3D > Storage > Clear cache and Clear data.
-
-
Conclusion
-
In this article, we have shown you how to download game Attack on Titan 3D Android offline, so you can play it anytime and anywhere you want. We have also given you some tips and tricks for playing the game offline, so you can have more fun and challenge. We hope that this article has been helpful and informative for you.
-
If you are a fan of Attack on Titan, you should definitely try this game offline. It will give you a chance to experience the thrilling and immersive action of fighting against the Titans using omni-directional mobility gear. You will also be able to play as your favorite characters and explore different maps and missions.
-
So what are you waiting for? Download game Attack on Titan 3D Android offline today and enjoy!
-
FAQs
-
Here are some of the frequently asked questions about downloading and playing game Attack on Titan 3D Android offline:
-
-
Q: Is the game free to download and play? A: Yes, the game is free to download and play from any source. However, it may contain some in-app purchases or ads that require an internet connection.
-
Q: How much storage space does the game take? A: The game takes about 500 MB of storage space on your device. However, this may vary depending on your device model and OS version.
-
Q: Can I play the game with other players offline? A: No, the game does not support multiplayer mode offline. You can only play the game with other players online, using Wi-Fi or mobile data.
-
Q: What are the best characters and weapons to use in the game? A: The game allows you to choose from different characters and weapons, each with their own strengths and weaknesses. The best character and weapon to use depends on your personal preference and play style. However, some of the most popular choices are Eren, Mikasa, Levi, and Armin for characters, and blades, guns, and thunder spears for weapons.
-
Q: How can I get more coins and gems in the game? A: Coins and gems are the currencies of the game, which you can use to buy or upgrade items, such as weapons, costumes, or skills. You can get more coins and gems by playing the game online, completing missions, watching ads, or making in-app purchases.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/WE 2016 KONAMI APK - The Latest Version of the Popular Soccer Game.md b/spaces/congsaPfin/Manga-OCR/logs/WE 2016 KONAMI APK - The Latest Version of the Popular Soccer Game.md
deleted file mode 100644
index 1c93646a6c8e5ed6938da61ad4918f12094fa141..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/WE 2016 KONAMI APK - The Latest Version of the Popular Soccer Game.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
How to Download and Play WE 2016 Konami APK on Android
-
If you are a soccer fan who loves playing soccer games on your Android device, you might have heard of WE 2016 Konami APK. This is a modified version of PES 2012, a popular soccer simulation game developed by Konami. WE 2016 Konami APK has updated rosters, kits, stadiums, graphics, and gameplay to match the 2016 season. WE 2016 Konami APK is one of the best soccer games for Android users because it offers a realistic and immersive soccer experience with high-quality graphics, smooth gameplay, and a large selection of teams and players. In this article, we will show you how to download and install WE 2016 Konami APK on your Android device and enjoy playing this amazing game.
What is WE 2016 Konami APK and Why It Is Popular Among Soccer Fans
-
WE 2016 Konami APK is a modified version of PES 2012, a soccer simulation game developed by Konami. PES 2012 was released in 2011 for various platforms, including Android. However, the game was not updated regularly and became outdated as new seasons and players emerged. Therefore, some fans decided to modify the game and create WE 2016 Konami APK, which has updated rosters, kits, stadiums, graphics, and gameplay to match the 2016 season. WE 2016 Konami APK is not an official game from Konami, but a fan-made game that uses the PES 2012 engine and data.
-
WE 2016 Konami APK is popular among soccer fans because it offers a realistic and immersive soccer experience on Android devices. The game has high-quality graphics and sound effects that enhance the visual and auditory appeal of the game. The game also has smooth and responsive gameplay that makes it easy and fun to play. The game has various modes, such as exhibition, league, cup, training, and multiplayer that offer different challenges and rewards. The game also has a large selection of teams and players from various leagues and countries around the world. The game also has licensed kits, logos, names, and faces for most of the teams and players that add to the authenticity of the game.
-
What Are the Features and Benefits of Playing WE 2016 Konami APK on Android
-
WE 2016 Konami APK has many features and benefits that make it a great soccer game for Android users. Some of the features and benefits are:
-
High-quality graphics and sound effects
-
WE 2016 Konami APK has improved graphics and sound effects that create a lifelike soccer environment. The game has realistic animations, shadows, lighting, and textures that make the players and stadiums look more detailed and dynamic. The game also has authentic commentary, crowd noises, and stadium sounds that add to the atmosphere of the game.
-
Smooth and responsive gameplay
-
WE 2016 Konami APK has smooth and responsive gameplay that makes it easy and fun to play. The game has intuitive controls, accurate physics, and smart AI that ensure a satisfying soccer experience. The game also has various modes, such as exhibition, league, cup, training, and multiplayer that offer different challenges and rewards. You can play solo or with your friends online or offline.
-
Large selection of teams and players
-
WE 2016 Konami APK has a large selection of teams and players that cater to different preferences and tastes. The game has over 200 teams and over 2000 players from various leagues and countries around the world. You can choose your favorite team or create your own custom team with your favorite players. You can also edit the kits, logos, names, and faces of the teams and players to suit your liking.
-
-
How to Download and Install WE 2016 Konami APK on Android Devices
-
Downloading and installing WE 2016 Konami APK on Android devices is easy and simple if you follow these steps:
-
Step 1: Enable unknown sources on your device
-
To install WE 2016 Konami APK on your device, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 2: Download WE 2016 Konami APK file from a trusted source
-
To download WE 2016 Konami APK file, you need to find a trusted source that offers the latest version of the game. One of the trusted sources is WE2012_1.0.1.apk, which is hosted on Google Drive. To download the file, click on the link above and then click on Download or Save to your device.
-
Step 3: Install WE 2016 Konami APK file on your device
-
To install WE 2016 Konami APK file on your device, you need to locate the file in your device storage. To locate the file, go to File Manager > Downloads or wherever you saved the file. To install the file, tap on it and then follow the instructions on the screen. You may need to grant some permissions to the app to run properly. Once the installation is complete, you can launch the game from your app drawer or home screen.
-
Conclusion
-
WE 2016 Konami APK is a fantastic soccer game for Android users who want to enjoy a realistic and immersive soccer experience on their devices. The game has high-quality graphics, smooth gameplay, and a large selection of teams and players that make it one of the best soccer games for Android. To download and install WE 2016 Konami APK on your device, you just need to follow the simple steps we have outlined in this article. We hope you have fun playing WE 2016 Konami APK on your Android device.
-
FAQs
-
Here are some of the frequently asked questions about WE 2016 Konami APK:
-
Is WE 2016 Konami APK safe to download and install?
-
Yes, WE 2016 Konami APK is safe to download and install as long as you get it from a trusted source like the one we have provided in this article. However, you should always be careful when downloading and installing apps from unknown sources and scan them for viruses or malware before installing them.
-
Is WE 2016 Konami APK compatible with all Android devices?
-
No, WE 2016 Konami APK may not be compatible with all Android devices. The game requires Android 4.0 or higher and at least 1 GB of RAM to run smoothly. If your device does not meet these requirements, you may experience some lag or crashes while playing the game.
-
How can I update WE 2016 Konami APK to the latest version?
-
To update WE 2016 Konami APK to the latest version, you need to check for updates from the source where you downloaded the game. If there is a new version available, you can download and install it over the existing one. However, you should always back up your game data before updating to avoid losing your progress or settings.
-
How can I fix WE 2016 Konami APK not working or crashing?
-
If you encounter any problems with WE 2016 Konami APK not working or crashing, you can try some of these solutions:
-
-
Clear the cache and data of the game from your device settings
-
Restart your device and launch the game again
-
Uninstall and reinstall the game from a trusted source
-
Check your internet connection and make sure it is stable and fast
-
Contact the developer or support team of the game for further assistance
-
-
How can I contact the developer or support team of WE 2016 Konami APK?
-
If you have any questions, feedback, or suggestions about WE 2016 Konami APK, you can contact the developer or support team of the game by visiting their official website [WE2012.com] or sending them an email at [we2012@gmail.com]. You can also follow them on their social media accounts [Facebook] and [Twitter] for more updates and news about the game.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/onnx_validate.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/onnx_validate.py
deleted file mode 100644
index ab3e4fb141b6ef660dcc5b447fd9f368a2ea19a0..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/onnx_validate.py
+++ /dev/null
@@ -1,112 +0,0 @@
-""" ONNX-runtime validation script
-
-This script was created to verify accuracy and performance of exported ONNX
-models running with the onnxruntime. It utilizes the PyTorch dataloader/processing
-pipeline for a fair comparison against the originals.
-
-Copyright 2020 Ross Wightman
-"""
-import argparse
-import numpy as np
-import onnxruntime
-from data import create_loader, resolve_data_config, Dataset
-from utils import AverageMeter
-import time
-
-parser = argparse.ArgumentParser(description='ONNX Runtime ImageNet Validation')
-parser.add_argument('data', metavar='DIR',
- help='path to dataset')
-parser.add_argument('--onnx-input', default='', type=str, metavar='PATH',
- help='path to onnx model/weights file')
-parser.add_argument('--onnx-output-opt', default='', type=str, metavar='PATH',
- help='path to output optimized onnx graph')
-parser.add_argument('--profile', action='store_true', default=False,
- help='Enable profiler output.')
-parser.add_argument('-j', '--workers', default=2, type=int, metavar='N',
- help='number of data loading workers (default: 2)')
-parser.add_argument('-b', '--batch-size', default=256, type=int,
- metavar='N', help='mini-batch size (default: 256)')
-parser.add_argument('--img-size', default=None, type=int,
- metavar='N', help='Input image dimension, uses model default if empty')
-parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN',
- help='Override mean pixel value of dataset')
-parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD',
-                    help='Override std deviation of dataset')
-parser.add_argument('--crop-pct', type=float, default=None, metavar='PCT',
- help='Override default crop pct of 0.875')
-parser.add_argument('--interpolation', default='', type=str, metavar='NAME',
- help='Image resize interpolation type (overrides model)')
-parser.add_argument('--tf-preprocessing', dest='tf_preprocessing', action='store_true',
-                    help='use tensorflow mnasnet preprocessing')
-parser.add_argument('--print-freq', '-p', default=10, type=int,
- metavar='N', help='print frequency (default: 10)')
-
-
-def main():
- args = parser.parse_args()
- args.gpu_id = 0
-
- # Set graph optimization level
- sess_options = onnxruntime.SessionOptions()
- sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
- if args.profile:
- sess_options.enable_profiling = True
- if args.onnx_output_opt:
- sess_options.optimized_model_filepath = args.onnx_output_opt
-
- session = onnxruntime.InferenceSession(args.onnx_input, sess_options)
-
- data_config = resolve_data_config(None, args)
- loader = create_loader(
- Dataset(args.data, load_bytes=args.tf_preprocessing),
- input_size=data_config['input_size'],
- batch_size=args.batch_size,
- use_prefetcher=False,
- interpolation=data_config['interpolation'],
- mean=data_config['mean'],
- std=data_config['std'],
- num_workers=args.workers,
- crop_pct=data_config['crop_pct'],
- tensorflow_preprocessing=args.tf_preprocessing)
-
- input_name = session.get_inputs()[0].name
-
- batch_time = AverageMeter()
- top1 = AverageMeter()
- top5 = AverageMeter()
- end = time.time()
- for i, (input, target) in enumerate(loader):
- # run the net and return prediction
- output = session.run([], {input_name: input.data.numpy()})
- output = output[0]
-
- # measure accuracy and record loss
- prec1, prec5 = accuracy_np(output, target.numpy())
- top1.update(prec1.item(), input.size(0))
- top5.update(prec5.item(), input.size(0))
-
- # measure elapsed time
- batch_time.update(time.time() - end)
- end = time.time()
-
- if i % args.print_freq == 0:
- print('Test: [{0}/{1}]\t'
- 'Time {batch_time.val:.3f} ({batch_time.avg:.3f}, {rate_avg:.3f}/s, {ms_avg:.3f} ms/sample) \t'
- 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
- 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
- i, len(loader), batch_time=batch_time, rate_avg=input.size(0) / batch_time.avg,
- ms_avg=100 * batch_time.avg / input.size(0), top1=top1, top5=top5))
-
- print(' * Prec@1 {top1.avg:.3f} ({top1a:.3f}) Prec@5 {top5.avg:.3f} ({top5a:.3f})'.format(
- top1=top1, top1a=100-top1.avg, top5=top5, top5a=100.-top5.avg))
-
-
-def accuracy_np(output, target):
- max_indices = np.argsort(output, axis=1)[:, ::-1]
- top5 = 100 * np.equal(max_indices[:, :5], target[:, np.newaxis]).sum(axis=1).mean()
- top1 = 100 * np.equal(max_indices[:, 0], target).mean()
- return top1, top5
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/drive.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/drive.py
deleted file mode 100644
index 06e8ff606e0d2a4514ec8b7d2c6c436a32efcbf4..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/drive.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'DRIVEDataset'
-data_root = 'data/DRIVE'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (584, 565)
-crop_size = (64, 64)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type='RepeatDataset',
- times=40000,
- dataset=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
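-# Note: this _base_ dataset fragment is meant to be composed into a full config; a typical
-# (illustrative) way to load such a config is mmcv's `Config.fromfile("path/to/full_config.py")`.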
diff --git a/spaces/corpvs/test/README.md b/spaces/corpvs/test/README.md
deleted file mode 100644
index 696f1d212c887ffdf9d1b2ee594ec4475a2095f0..0000000000000000000000000000000000000000
--- a/spaces/corpvs/test/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Test
-emoji: 🌍
-colorFrom: red
-colorTo: pink
-sdk: static
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/data/conditional_builder/objects_center_points.py b/spaces/cvlab/zero123-live/taming-transformers/taming/data/conditional_builder/objects_center_points.py
deleted file mode 100644
index 9a480329cc47fb38a7b8729d424e092b77d40749..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123-live/taming-transformers/taming/data/conditional_builder/objects_center_points.py
+++ /dev/null
@@ -1,168 +0,0 @@
-import math
-import random
-import warnings
-from itertools import cycle
-from typing import List, Optional, Tuple, Callable
-
-from PIL import Image as pil_image, ImageDraw as pil_img_draw, ImageFont
-from more_itertools.recipes import grouper
-from taming.data.conditional_builder.utils import COLOR_PALETTE, WHITE, GRAY_75, BLACK, FULL_CROP, filter_annotations, \
- additional_parameters_string, horizontally_flip_bbox, pad_list, get_circle_size, get_plot_font_size, \
- absolute_bbox, rescale_annotations
-from taming.data.helper_types import BoundingBox, Annotation
-from taming.data.image_transforms import convert_pil_to_tensor
-from torch import LongTensor, Tensor
-
-
-class ObjectsCenterPointsConditionalBuilder:
- def __init__(self, no_object_classes: int, no_max_objects: int, no_tokens: int, encode_crop: bool,
- use_group_parameter: bool, use_additional_parameters: bool):
- self.no_object_classes = no_object_classes
- self.no_max_objects = no_max_objects
- self.no_tokens = no_tokens
- self.encode_crop = encode_crop
- self.no_sections = int(math.sqrt(self.no_tokens))
- self.use_group_parameter = use_group_parameter
- self.use_additional_parameters = use_additional_parameters
-
- @property
- def none(self) -> int:
- return self.no_tokens - 1
-
- @property
- def object_descriptor_length(self) -> int:
- return 2
-
- @property
- def embedding_dim(self) -> int:
- extra_length = 2 if self.encode_crop else 0
- return self.no_max_objects * self.object_descriptor_length + extra_length
-
- def tokenize_coordinates(self, x: float, y: float) -> int:
- """
- Express 2d coordinates with one number.
- Example: assume self.no_tokens = 16, then no_sections = 4:
- 0 0 0 0
- 0 0 # 0
- 0 0 0 0
- 0 0 0 x
- Then the # position corresponds to token 6, the x position to token 15.
- @param x: float in [0, 1]
- @param y: float in [0, 1]
- @return: discrete tokenized coordinate
- """
- x_discrete = int(round(x * (self.no_sections - 1)))
- y_discrete = int(round(y * (self.no_sections - 1)))
- return y_discrete * self.no_sections + x_discrete
-
- def coordinates_from_token(self, token: int) -> (float, float):
- x = token % self.no_sections
- y = token // self.no_sections
- return x / (self.no_sections - 1), y / (self.no_sections - 1)
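-    # Note: this is the (lossy) inverse of tokenize_coordinates; e.g. with no_tokens == 16
-    # (no_sections == 4), token 6 maps back to (2/3, 1/3).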
-
- def bbox_from_token_pair(self, token1: int, token2: int) -> BoundingBox:
- x0, y0 = self.coordinates_from_token(token1)
- x1, y1 = self.coordinates_from_token(token2)
- return x0, y0, x1 - x0, y1 - y0
-
- def token_pair_from_bbox(self, bbox: BoundingBox) -> Tuple[int, int]:
- return self.tokenize_coordinates(bbox[0], bbox[1]), \
- self.tokenize_coordinates(bbox[0] + bbox[2], bbox[1] + bbox[3])
-
- def inverse_build(self, conditional: LongTensor) \
- -> Tuple[List[Tuple[int, Tuple[float, float]]], Optional[BoundingBox]]:
- conditional_list = conditional.tolist()
- crop_coordinates = None
- if self.encode_crop:
- crop_coordinates = self.bbox_from_token_pair(conditional_list[-2], conditional_list[-1])
- conditional_list = conditional_list[:-2]
- table_of_content = grouper(conditional_list, self.object_descriptor_length)
- assert conditional.shape[0] == self.embedding_dim
- return [
- (object_tuple[0], self.coordinates_from_token(object_tuple[1]))
- for object_tuple in table_of_content if object_tuple[0] != self.none
- ], crop_coordinates
-
- def plot(self, conditional: LongTensor, label_for_category_no: Callable[[int], str], figure_size: Tuple[int, int],
- line_width: int = 3, font_size: Optional[int] = None) -> Tensor:
- plot = pil_image.new('RGB', figure_size, WHITE)
- draw = pil_img_draw.Draw(plot)
- circle_size = get_circle_size(figure_size)
- font = ImageFont.truetype('/usr/share/fonts/truetype/lato/Lato-Regular.ttf',
- size=get_plot_font_size(font_size, figure_size))
- width, height = plot.size
- description, crop_coordinates = self.inverse_build(conditional)
- for (representation, (x, y)), color in zip(description, cycle(COLOR_PALETTE)):
- x_abs, y_abs = x * width, y * height
- ann = self.representation_to_annotation(representation)
- label = label_for_category_no(ann.category_no) + ' ' + additional_parameters_string(ann)
- ellipse_bbox = [x_abs - circle_size, y_abs - circle_size, x_abs + circle_size, y_abs + circle_size]
- draw.ellipse(ellipse_bbox, fill=color, width=0)
- draw.text((x_abs, y_abs), label, anchor='md', fill=BLACK, font=font)
- if crop_coordinates is not None:
- draw.rectangle(absolute_bbox(crop_coordinates, width, height), outline=GRAY_75, width=line_width)
- return convert_pil_to_tensor(plot) / 127.5 - 1.
-
- def object_representation(self, annotation: Annotation) -> int:
- modifier = 0
- if self.use_group_parameter:
- modifier |= 1 * (annotation.is_group_of is True)
- if self.use_additional_parameters:
- modifier |= 2 * (annotation.is_occluded is True)
- modifier |= 4 * (annotation.is_depiction is True)
- modifier |= 8 * (annotation.is_inside is True)
- return annotation.category_no + self.no_object_classes * modifier
-
- def representation_to_annotation(self, representation: int) -> Annotation:
- category_no = representation % self.no_object_classes
- modifier = representation // self.no_object_classes
- # noinspection PyTypeChecker
- return Annotation(
- area=None, image_id=None, bbox=None, category_id=None, id=None, source=None, confidence=None,
- category_no=category_no,
- is_group_of=bool((modifier & 1) * self.use_group_parameter),
- is_occluded=bool((modifier & 2) * self.use_additional_parameters),
- is_depiction=bool((modifier & 4) * self.use_additional_parameters),
- is_inside=bool((modifier & 8) * self.use_additional_parameters)
- )
-
- def _crop_encoder(self, crop_coordinates: BoundingBox) -> List[int]:
- return list(self.token_pair_from_bbox(crop_coordinates))
-
- def _make_object_descriptors(self, annotations: List[Annotation]) -> List[Tuple[int, ...]]:
- object_tuples = [
- (self.object_representation(a),
- self.tokenize_coordinates(a.bbox[0] + a.bbox[2] / 2, a.bbox[1] + a.bbox[3] / 2))
- for a in annotations
- ]
- empty_tuple = (self.none, self.none)
- object_tuples = pad_list(object_tuples, empty_tuple, self.no_max_objects)
- return object_tuples
-
- def build(self, annotations: List, crop_coordinates: Optional[BoundingBox] = None, horizontal_flip: bool = False) \
- -> LongTensor:
- if len(annotations) == 0:
- warnings.warn('Did not receive any annotations.')
- if len(annotations) > self.no_max_objects:
- warnings.warn('Received more annotations than allowed.')
- annotations = annotations[:self.no_max_objects]
-
- if not crop_coordinates:
- crop_coordinates = FULL_CROP
-
- random.shuffle(annotations)
- annotations = filter_annotations(annotations, crop_coordinates)
- if self.encode_crop:
- annotations = rescale_annotations(annotations, FULL_CROP, horizontal_flip)
- if horizontal_flip:
- crop_coordinates = horizontally_flip_bbox(crop_coordinates)
- extra = self._crop_encoder(crop_coordinates)
- else:
- annotations = rescale_annotations(annotations, crop_coordinates, horizontal_flip)
- extra = []
-
- object_tuples = self._make_object_descriptors(annotations)
- flattened = [token for tuple_ in object_tuples for token in tuple_] + extra
- assert len(flattened) == self.embedding_dim
- assert all(0 <= value < self.no_tokens for value in flattened)
- return LongTensor(flattened)
diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/data/custom.py b/spaces/cvlab/zero123-live/taming-transformers/taming/data/custom.py
deleted file mode 100644
index 33f302a4b55ba1e8ec282ec3292b6263c06dfb91..0000000000000000000000000000000000000000
--- a/spaces/cvlab/zero123-live/taming-transformers/taming/data/custom.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import os
-import numpy as np
-import albumentations
-from torch.utils.data import Dataset
-
-from taming.data.base import ImagePaths, NumpyPaths, ConcatDatasetWithIndex
-
-
-class CustomBase(Dataset):
- def __init__(self, *args, **kwargs):
- super().__init__()
- self.data = None
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- example = self.data[i]
- return example
-
-
-
-class CustomTrain(CustomBase):
- def __init__(self, size, training_images_list_file):
- super().__init__()
- with open(training_images_list_file, "r") as f:
- paths = f.read().splitlines()
- self.data = ImagePaths(paths=paths, size=size, random_crop=False)
-
-
-class CustomTest(CustomBase):
- def __init__(self, size, test_images_list_file):
- super().__init__()
- with open(test_images_list_file, "r") as f:
- paths = f.read().splitlines()
- self.data = ImagePaths(paths=paths, size=size, random_crop=False)
-
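-# Example usage (the list file path is illustrative; each line of that file is an image path):
-#   train_ds = CustomTrain(size=256, training_images_list_file="data/train_images.txt")
-#   example = train_ds[0]  # a dict built by ImagePaths for the first image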
-
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ufoLib/errors.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ufoLib/errors.py
deleted file mode 100644
index e05dd438b430708aac5163ebfde74ffb0501fbd1..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ufoLib/errors.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from __future__ import annotations
-
-
-class UFOLibError(Exception):
- pass
-
-
-class UnsupportedUFOFormat(UFOLibError):
- pass
-
-
-class GlifLibError(UFOLibError):
- def _add_note(self, note: str) -> None:
- # Loose backport of PEP 678 until we only support Python 3.11+, used for
- # adding additional context to errors.
- # TODO: Replace with https://docs.python.org/3.11/library/exceptions.html#BaseException.add_note
- (message, *rest) = self.args
- self.args = ((message + "\n" + note), *rest)
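-        # Illustrative effect: GlifLibError("bad glyph")._add_note("while reading a.glif")
-        # rewrites args[0] to "bad glyph\nwhile reading a.glif".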
-
-
-class UnsupportedGLIFFormat(GlifLibError):
- pass
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/defaults.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/defaults.py
deleted file mode 100644
index 638cad3d2d8907330bde56e2b76c9b185c523b45..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/defaults.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import typing as t
-
-from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401
-from .tests import TESTS as DEFAULT_TESTS # noqa: F401
-from .utils import Cycler
-from .utils import generate_lorem_ipsum
-from .utils import Joiner
-from .utils import Namespace
-
-if t.TYPE_CHECKING:
- import typing_extensions as te
-
-# defaults for the parser / lexer
-BLOCK_START_STRING = "{%"
-BLOCK_END_STRING = "%}"
-VARIABLE_START_STRING = "{{"
-VARIABLE_END_STRING = "}}"
-COMMENT_START_STRING = "{#"
-COMMENT_END_STRING = "#}"
-LINE_STATEMENT_PREFIX: t.Optional[str] = None
-LINE_COMMENT_PREFIX: t.Optional[str] = None
-TRIM_BLOCKS = False
-LSTRIP_BLOCKS = False
-NEWLINE_SEQUENCE: "te.Literal['\\n', '\\r\\n', '\\r']" = "\n"
-KEEP_TRAILING_NEWLINE = False
-
-# default filters, tests and namespace
-
-DEFAULT_NAMESPACE = {
- "range": range,
- "dict": dict,
- "lipsum": generate_lorem_ipsum,
- "cycler": Cycler,
- "joiner": Joiner,
- "namespace": Namespace,
-}
-
-# default policies
-DEFAULT_POLICIES: t.Dict[str, t.Any] = {
- "compiler.ascii_str": True,
- "urlize.rel": "noopener",
- "urlize.target": None,
- "urlize.extra_schemes": None,
- "truncate.leeway": 5,
- "json.dumps_function": None,
- "json.dumps_kwargs": {"sort_keys": True},
- "ext.i18n.trimmed": False,
-}
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/common/entities.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/common/entities.py
deleted file mode 100644
index 6bb2d343c2338de4232378bf98d6c034fbe808c0..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/common/entities.py
+++ /dev/null
@@ -1,4 +0,0 @@
-"""HTML5 entities map: { name -> characters }."""
-import html.entities
-
-entities = {name.rstrip(";"): chars for name, chars in html.entities.html5.items()}
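-# Illustrative lookups: entities["amp"] == "&" and entities["copy"] == "©".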
diff --git a/spaces/dddmiku/vits-uma-genshin-honkai/text/cleaners.py b/spaces/dddmiku/vits-uma-genshin-honkai/text/cleaners.py
deleted file mode 100644
index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000
--- a/spaces/dddmiku/vits-uma-genshin-honkai/text/cleaners.py
+++ /dev/null
@@ -1,475 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-'''
-Cleaners are transformations that run over the input text at both training and eval time.
-
-Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
-hyperparameter. Some cleaners are English-specific. You'll typically want to use:
- 1. "english_cleaners" for English text
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
- the symbols in symbols.py to match your data).
-'''
-
-import re
-from unidecode import unidecode
-import pyopenjtalk
-from jamo import h2j, j2hcj
-from pypinyin import lazy_pinyin, BOPOMOFO
-import jieba, cn2an
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# Regular expression matching whitespace:
-_whitespace_re = re.compile(r'\s+')
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (regular expression, replacement) pairs for abbreviations:
-_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
- ('mrs', 'misess'),
- ('mr', 'mister'),
- ('dr', 'doctor'),
- ('st', 'saint'),
- ('co', 'company'),
- ('jr', 'junior'),
- ('maj', 'major'),
- ('gen', 'general'),
- ('drs', 'doctors'),
- ('rev', 'reverend'),
- ('lt', 'lieutenant'),
- ('hon', 'honorable'),
- ('sgt', 'sergeant'),
- ('capt', 'captain'),
- ('esq', 'esquire'),
- ('ltd', 'limited'),
- ('col', 'colonel'),
- ('ft', 'fort'),
-]]
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-
-def expand_abbreviations(text):
- for regex, replacement in _abbreviations:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def lowercase(text):
- return text.lower()
-
-
-def collapse_whitespace(text):
- return re.sub(_whitespace_re, ' ', text)
-
-
-def convert_to_ascii(text):
- return unidecode(text)
-
-
-def japanese_to_romaji_with_accent(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = ''
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- if text!='':
- text+=' '
- labels = pyopenjtalk.extract_fullcontext(sentence)
- for n, label in enumerate(labels):
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
- if phoneme not in ['sil','pau']:
- text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q')
- else:
- continue
- n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']:
- a2_next=-1
- else:
- a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
- # Accent phrase boundary
- if a3 == 1 and a2_next == 1:
- text += ' '
- # Falling
- elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras:
- text += '↓'
- # Rising
- elif a2 == 1 and a2_next == 2:
- text += '↑'
-    def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- return sample
-
- def set_timesteps(
- self, num_inference_steps: int, sampling_eps: float = None, device: Union[str, torch.device] = None
- ):
- """
- Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- sampling_eps (`float`, optional):
- final timestep value (overrides value given at Scheduler instantiation).
-
- """
- sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps
-
- self.timesteps = torch.linspace(1, sampling_eps, num_inference_steps, device=device)
-
- def set_sigmas(
- self, num_inference_steps: int, sigma_min: float = None, sigma_max: float = None, sampling_eps: float = None
- ):
- """
- Sets the noise scales used for the diffusion chain. Supporting function to be run before inference.
-
- The sigmas control the weight of the `drift` and `diffusion` components of sample update.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- sigma_min (`float`, optional):
- initial noise scale value (overrides value given at Scheduler instantiation).
- sigma_max (`float`, optional):
- final noise scale value (overrides value given at Scheduler instantiation).
- sampling_eps (`float`, optional):
- final timestep value (overrides value given at Scheduler instantiation).
-
- """
- sigma_min = sigma_min if sigma_min is not None else self.config.sigma_min
- sigma_max = sigma_max if sigma_max is not None else self.config.sigma_max
- sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps
- if self.timesteps is None:
- self.set_timesteps(num_inference_steps, sampling_eps)
-
- self.sigmas = sigma_min * (sigma_max / sigma_min) ** (self.timesteps / sampling_eps)
- self.discrete_sigmas = torch.exp(torch.linspace(math.log(sigma_min), math.log(sigma_max), num_inference_steps))
- self.sigmas = torch.tensor([sigma_min * (sigma_max / sigma_min) ** t for t in self.timesteps])
-
- def get_adjacent_sigma(self, timesteps, t):
- return torch.where(
- timesteps == 0,
- torch.zeros_like(t.to(timesteps.device)),
- self.discrete_sigmas[timesteps - 1].to(timesteps.device),
- )
-
- def step_pred(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[SdeVeOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- generator: random number generator.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`: [`~schedulers.scheduling_sde_ve.SdeVeOutput`] if
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- if self.timesteps is None:
- raise ValueError(
- "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
- )
-
- timestep = timestep * torch.ones(
- sample.shape[0], device=sample.device
- ) # torch.repeat_interleave(timestep, sample.shape[0])
- timesteps = (timestep * (len(self.timesteps) - 1)).long()
-
- # mps requires indices to be in the same device, so we use cpu as is the default with cuda
- timesteps = timesteps.to(self.discrete_sigmas.device)
-
- sigma = self.discrete_sigmas[timesteps].to(sample.device)
- adjacent_sigma = self.get_adjacent_sigma(timesteps, timestep).to(sample.device)
- drift = torch.zeros_like(sample)
- diffusion = (sigma**2 - adjacent_sigma**2) ** 0.5
-
- # equation 6 in the paper: the model_output modeled by the network is grad_x log pt(x)
- # also equation 47 shows the analog from SDE models to ancestral sampling methods
- diffusion = diffusion.flatten()
- while len(diffusion.shape) < len(sample.shape):
- diffusion = diffusion.unsqueeze(-1)
- drift = drift - diffusion**2 * model_output
-
- # equation 6: sample noise for the diffusion term of
- noise = randn_tensor(
- sample.shape, layout=sample.layout, generator=generator, device=sample.device, dtype=sample.dtype
- )
- prev_sample_mean = sample - drift # subtract because `dt` is a small negative timestep
- # TODO is the variable diffusion the correct scaling term for the noise?
- prev_sample = prev_sample_mean + diffusion * noise # add impact of diffusion field g
-
- if not return_dict:
- return (prev_sample, prev_sample_mean)
-
- return SdeVeOutput(prev_sample=prev_sample, prev_sample_mean=prev_sample_mean)
-
- def step_correct(
- self,
- model_output: torch.FloatTensor,
- sample: torch.FloatTensor,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[SchedulerOutput, Tuple]:
- """
- Correct the predicted sample based on the output model_output of the network. This is often run repeatedly
- after making the prediction for the previous timestep.
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- generator: random number generator.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_sde_ve.SdeVeOutput`] or `tuple`: [`~schedulers.scheduling_sde_ve.SdeVeOutput`] if
- `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
-
- """
- if self.timesteps is None:
- raise ValueError(
- "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
- )
-
- # For small batch sizes, the paper "suggest replacing norm(z) with sqrt(d), where d is the dim. of z"
- # sample noise for correction
- noise = randn_tensor(sample.shape, layout=sample.layout, generator=generator).to(sample.device)
-
- # compute step size from the model_output, the noise, and the snr
- grad_norm = torch.norm(model_output.reshape(model_output.shape[0], -1), dim=-1).mean()
- noise_norm = torch.norm(noise.reshape(noise.shape[0], -1), dim=-1).mean()
- step_size = (self.config.snr * noise_norm / grad_norm) ** 2 * 2
- step_size = step_size * torch.ones(sample.shape[0]).to(sample.device)
- # self.repeat_scalar(step_size, sample.shape[0])
-
- # compute corrected sample: model_output term and noise term
- step_size = step_size.flatten()
- while len(step_size.shape) < len(sample.shape):
- step_size = step_size.unsqueeze(-1)
- prev_sample_mean = sample + step_size * model_output
- prev_sample = prev_sample_mean + ((step_size * 2) ** 0.5) * noise
-
- if not return_dict:
- return (prev_sample,)
-
- return SchedulerOutput(prev_sample=prev_sample)
-
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.FloatTensor,
- ) -> torch.FloatTensor:
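-        # VE-SDE forward perturbation: x_t = x_0 + sigma_t * z with z ~ N(0, I), where sigma_t
-        # is read from the precomputed discrete_sigmas at the given timesteps.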
- # Make sure sigmas and timesteps have the same device and dtype as original_samples
- timesteps = timesteps.to(original_samples.device)
- sigmas = self.discrete_sigmas.to(original_samples.device)[timesteps]
- noise = torch.randn_like(original_samples) * sigmas[:, None, None, None]
- noisy_samples = noise + original_samples
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
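
For context, a minimal predictor-corrector sampling sketch around the scheduler above. The `unet(sample, sigma_t).sample` score-model interface and the `init_noise_sigma`, `sigmas`, and `set_sigmas` attributes are assumptions based on the surrounding diffusers conventions rather than code shown here; the real `ScoreSdeVePipeline` performs the equivalent loop.

```python
# Hypothetical VE-SDE predictor-corrector loop (sketch only, not the pipeline code).
import torch

def sample_sde_ve(unet, scheduler, shape, num_inference_steps=2000, correct_steps=1,
                  generator=None, device="cpu"):
    scheduler.set_timesteps(num_inference_steps)
    scheduler.set_sigmas(num_inference_steps)  # assumed companion of set_timesteps

    # start from pure noise scaled by the largest sigma (assumed init_noise_sigma attribute)
    sample = torch.randn(shape, generator=generator, device=device) * scheduler.init_noise_sigma

    for i, t in enumerate(scheduler.timesteps):
        sigma_t = scheduler.sigmas[i] * torch.ones(shape[0], device=device)

        # corrector: Langevin-style updates with the SNR-derived step size (step_correct)
        for _ in range(correct_steps):
            score = unet(sample, sigma_t).sample          # assumed model interface
            sample = scheduler.step_correct(score, sample, generator=generator).prev_sample

        # predictor: reverse-SDE / ancestral step (step_pred, equation 6)
        score = unet(sample, sigma_t).sample
        out = scheduler.step_pred(score, t, sample, generator=generator)
        sample, sample_mean = out.prev_sample, out.prev_sample_mean

    # the noise-free mean of the last step is typically returned as the output
    return sample_mean
```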
diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/dance_diffusion/__init__.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/dance_diffusion/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/memory/longterm_memory.py b/spaces/deepwisdom/MetaGPT/metagpt/memory/longterm_memory.py
deleted file mode 100644
index 041d335acbac81ef5cd98aa158aa70600d62dec7..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/memory/longterm_memory.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Desc : the implement of Long-term memory
-@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation.
-"""
-
-from metagpt.logs import logger
-from metagpt.memory import Memory
-from metagpt.memory.memory_storage import MemoryStorage
-from metagpt.schema import Message
-
-
-class LongTermMemory(Memory):
- """
-    The Long-term memory for Roles
-    - recovers memory when the role starts up
-    - updates memory when it changes
- """
-
- def __init__(self):
- self.memory_storage: MemoryStorage = MemoryStorage()
- super(LongTermMemory, self).__init__()
- self.rc = None # RoleContext
- self.msg_from_recover = False
-
- def recover_memory(self, role_id: str, rc: "RoleContext"):
- messages = self.memory_storage.recover_memory(role_id)
- self.rc = rc
- if not self.memory_storage.is_initialized:
-            logger.warning(f"It may be the first time to run Agent {role_id}; the long-term memory is empty")
- else:
- logger.warning(
-                f"Agent {role_id} has existing memory storage with {len(messages)} messages " f"and has recovered them."
- )
- self.msg_from_recover = True
- self.add_batch(messages)
- self.msg_from_recover = False
-
- def add(self, message: Message, **kwargs):
- super(LongTermMemory, self).add(message)
- for action in self.rc.watch:
- if message.cause_by == action and not self.msg_from_recover:
-                # currently, only the role's watched messages are added to its memory_storage,
-                # and messages replayed during recovery are not added again
- self.memory_storage.add(message, **kwargs)
-
- def remember(self, observed: list[Message], k=0) -> list[Message]:
- """
-        Remember the k most similar memories from the observed Messages; return all of them when k=0.
-        1. collect the short-term memory (stm) news
-        2. integrate the stm news with the long-term memory (ltm) news
- """
-        stm_news = super(LongTermMemory, self).remember(observed, k=k)  # short-term memory news
- if not self.memory_storage.is_initialized:
- # memory_storage hasn't initialized, use default `remember` to get stm_news
- return stm_news
-
- ltm_news: list[Message] = []
- for mem in stm_news:
- # integrate stm & ltm
- mem_searched = self.memory_storage.search(mem)
- if len(mem_searched) > 0:
- ltm_news.append(mem)
- return ltm_news[-k:]
-
- def delete(self, message: Message):
- super(LongTermMemory, self).delete(message)
- # TODO delete message in memory_storage
-
- def clear(self):
- super(LongTermMemory, self).clear()
- self.memory_storage.clean()
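
For orientation, a hypothetical way to exercise `LongTermMemory` directly; the `RoleContext` stand-in and the string form of `cause_by`/`watch` are illustrative assumptions, since real usage goes through MetaGPT's `Role`.

```python
# Hypothetical driver for LongTermMemory (illustration only; normally wired up by Role).
from metagpt.memory.longterm_memory import LongTermMemory
from metagpt.schema import Message

class FakeRoleContext:
    """Stand-in for RoleContext: only the `watch` attribute is read by add()."""
    watch = {"WritePRD"}  # assumed format; the real watch set holds action identifiers

ltm = LongTermMemory()
ltm.recover_memory(role_id="product_manager", rc=FakeRoleContext())

# A new observation is kept in short-term memory and, because its cause_by is
# watched, also persisted to memory_storage.
ltm.add(Message(content="Draft the PRD for feature X", cause_by="WritePRD"))

# remember() keeps only the short-term news that is similar to something
# already in long-term storage; k=0 returns every match.
matches = ltm.remember([Message(content="PRD for feature X", cause_by="WritePRD")], k=0)
```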
diff --git a/spaces/deven367/yt-video-annotator-hf/README.md b/spaces/deven367/yt-video-annotator-hf/README.md
deleted file mode 100644
index 542a02da8454c9a531c97edd43c0aaa1e61db7dd..0000000000000000000000000000000000000000
--- a/spaces/deven367/yt-video-annotator-hf/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Yt Video Annotator
-emoji: 🐢
-colorFrom: green
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
----
-
-## Try the app on hf-spaces
-
-You can find the deployed app → [**here**](https://huggingface.co/spaces/deven367/yt-video-annotator/)
-
-> **Note**
-> Inference in the app is slow because it runs on a CPU; if you have a GPU on your local system, the app will run a lot faster.
-
-## Installation
-
-1. Create a virtual env with the environment manager of your choice
-2. Activate the environment
-3. Install the dependencies using ```pip install -e .```
-4. To run the app locally in your terminal, type `run_app`
-
-## Contributing
-
-Issues and PRs are welcome. If you want me to implement a feature, create a Feature Request in the issues, and I'll try my best to implement it.
diff --git a/spaces/diacanFperku/AutoGPT/Adguard Premium 6.3.1399.4073 [REPACK] Crack.md b/spaces/diacanFperku/AutoGPT/Adguard Premium 6.3.1399.4073 [REPACK] Crack.md
deleted file mode 100644
index 2841111c947f31ac733852c8f7d18f5553cf7c69..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Adguard Premium 6.3.1399.4073 [REPACK] Crack.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
Adguard Premium 6.3.1399.4073 Crack: A Powerful Ad Blocker for Windows
-
Adguard Premium 6.3.1399.4073 Crack is a software that blocks ads, trackers, malware, and phishing websites on your Windows PC. It also protects your privacy and speeds up your browsing experience. Adguard Premium 6.3.1399.4073 Crack is the latest version of the popular ad blocker that was released on September 21, 2018.
-
Adguard Premium 6.3.1399.4073 Crack has many features and benefits, such as:
It blocks all types of ads, including banners, pop-ups, video ads, and social media ads.
-
It filters out malicious websites and prevents you from accessing them.
-
It prevents online tracking by hiding your IP address and deleting cookies.
-
It allows you to customize your filtering preferences and whitelist websites that you trust.
-
It works with any browser and does not require any additional extensions or plugins.
-
It has a user-friendly interface and a low system impact.
-
-
If you want to enjoy a clean and safe web browsing experience, you can download Adguard Premium 6.3.1399.4073 Crack from the following sources[^1^] [^2^]. However, please note that using cracked software is illegal and may harm your computer or expose you to security risks. Therefore, we recommend that you purchase a legitimate license from the official website of Adguard.
Ad blocking has many benefits for users, websites, and the internet as a whole. Here are some of the main advantages of using an ad blocker:
-
-
It protects your privacy. Ad blockers prevent ad servers from tracking your online activities and collecting your personal data. This way, you can avoid targeted ads, identity theft, and unwanted profiling by third parties.[^1^] [^2^]
-
It stops malware and phishing attacks. Ad blockers filter out malicious ads that can infect your device with malware, ransomware, spyware, or other threats. They also block phishing ads that try to trick you into revealing your sensitive information or downloading harmful software.[^1^] [^2^]
-
It makes browsing faster and saves battery life. Ad blockers reduce the amount of data and resources consumed by ads, especially bloated ones like images and videos. This makes web pages load faster, saves your bandwidth and data usage, and extends your battery life.[^1^] [^4^]
-
It gives you clean, orderly pages. Ad blockers remove clutter and distractions from web pages, making them look neater and more organized. You can focus on the content you want to read or watch without being annoyed by pop-ups, banners, or auto-playing videos.[^1^] [^4^]
-
It keeps your kids safe and sane. Ad blockers can help you protect your children from inappropriate or harmful ads that may expose them to violence, sex, drugs, gambling, or other adult content. They can also prevent ads from interrupting their learning or entertainment activities.[^1^]
-
-
As you can see, ad blocking is not only a matter of convenience but also a matter of security and privacy. By using an ad blocker, you can improve your web browsing experience and take control of what you see online.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/American Pie Reunion 720p Br Rip Torrents WORK.md b/spaces/diacanFperku/AutoGPT/American Pie Reunion 720p Br Rip Torrents WORK.md
deleted file mode 100644
index 3836f029c0749718022ad694f4ee27de4c068cc4..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/American Pie Reunion 720p Br Rip Torrents WORK.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-
-ony
-
-This was the news that broke the LA-set rom-com’s initial buzz that summer, with Brie and Jason confessing their love for one another. They are released in the winter, but Variety said the movie will play in select theaters on Sept. The soap opera stars have been accused of badgering a hapless indie rocker (Menahem Golan), who wrote a song for them, to turn the tune into a big hit, and were later linked to the producer who’s also accused of badgering him.
-
-Donald Trump enjoys reading and writing. The original posted on Facebook by two members of the Trump campaign. “I asked them to do a speech that they would endorse with me.” “I’m not supposed to do that,” Trump said on Fox News, noting he’s already received "strong endorsements" from various conservative and religious groups.
-
-“I don’t think he’s giving up,” the U. The first lady, Melania, 34, arrived to the event as a surprise to the audience. "The White House has been open to ideas for months.Q:
-
-Modifying a git repository on a server to match my local repository
-
-I have a git repository (on a server) with a handful of changes that I want to make. However, when I clone the repository to my local machine, the files don't match. Is there a way to modify the repository on the server and have the changes reflected on my local repository?
-
-Thanks!
-
-A:
-
-Add --bare to your clone command:
-
-git clone --bare myRepo.git myRepo
-
-If the --bare was not there, there was probably no server-side repository.
-
-Q:
-
-BindingModel - AttributeError:'str' object has no attribute'save'
-
-I am trying to save an object. I have given the object an id: str.
-
-class BasePayment(models.Model):
-
- registration = models.ForeignKey(User)
-
- user = models.ForeignKey(User)
-
- paid = models.BooleanField(default=False)
-
- price = models.DecimalField(max_digits=5, decimal_places=2)
-
- name = models.CharField(max_length= 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Amy Winehouse Back To Black Album Download 53 !!LINK!!.md b/spaces/diacanFperku/AutoGPT/Amy Winehouse Back To Black Album Download 53 !!LINK!!.md
deleted file mode 100644
index e4bb197bc1b7d72398cbe8bffadb444b0ed7434a..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Amy Winehouse Back To Black Album Download 53 !!LINK!!.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
Amy Winehouse Back To Black Album Download 53: How to Enjoy This Soulful Masterpiece
-
Amy Winehouse was one of the most talented and influential singers of the 21st century. Her second and final album, Back To Black, released in 2006, was a critical and commercial success that showcased her unique voice and style. The album won five Grammy Awards, including Best Pop Vocal Album, Best Female Pop Vocal Performance, Record of the Year, Song of the Year, and Best New Artist. The album also sold over 20 million copies worldwide and is considered one of the best albums of all time.
-
Back To Black is a soulful and emotional album that explores themes of love, loss, addiction, and regret. The album features 11 tracks that are influenced by various genres, such as R&B, jazz, blues, soul, and rock. The album also features collaborations with producers Mark Ronson and Salaam Remi, as well as musicians such as The Dap-Kings and Amy's own band. Some of the most popular songs from the album are Rehab, You Know I'm No Good, Back To Black, Tears Dry On Their Own, and Love Is A Losing Game.
If you are a fan of Amy Winehouse or soul music in general, you might want to download and listen to Back To Black on your device. However, you might be wondering how to do that and where to find the best quality version of the album. Well, don't worry, because we have got you covered. In this article, we will show you how to download Back To Black in high-resolution audio and enjoy this amazing album online.
-
What is High-Resolution Audio?
-
High-resolution audio (or hi-res audio) is a term that refers to audio files that have a higher quality than standard CD or MP3 formats. Hi-res audio files have a higher sampling rate and bit depth than standard formats, which means they capture more details and nuances of the sound. Hi-res audio files also have a higher dynamic range and frequency response than standard formats, which means they can reproduce a wider range of sounds and volumes.
-
Hi-res audio files can offer a more immersive and realistic listening experience than standard formats. They can make you feel like you are in the studio or at the concert with the artist. They can also reveal subtle details and textures that you might miss in lower quality formats.
-
However, hi-res audio files also have some drawbacks. They are larger in size than standard formats, which means they take up more space on your device and require more bandwidth to stream or download. They also require compatible devices and software to play them properly. Not all devices or software can support hi-res audio files.
-
How to Download Back To Black in High-Resolution Audio?
-
If you want to download Back To Black in high-resolution audio, you will need to find a website that offers this option. There are many websites that claim to have hi-res audio files for download, but not all of them are reliable or safe. Some of them might have broken links, low-quality files, annoying ads, or even malware. That's why you need to be careful and do some research before clicking on any link.
-
To help you out, we have compiled a list of the top 10 websites that have hi-res audio files for Back To Black available for download. These websites are:
-
-
TIDAL: This is one of the most popular and trusted websites for streaming and downloading hi-res audio files. TIDAL offers over 70 million songs in various genres and formats, including MQA (Master Quality Authenticated), which is a technology that delivers studio-quality sound in a smaller file size. You can find Back To Black on TIDAL by searching for its title or following this link: https://tidal.com/browse/album/77661290. This is a direct rip from the DVD that has been upscaled to 1080p. It has the original explicit version with 11 tracks.
-
Qobuz: This is another website that specializes in streaming and downloading hi-res audio files. Qobuz offers over 70 million songs in various genres and formats, including FLAC (Free Lossless Audio Codec), which is a format that preserves the original quality of the sound without any compression or loss. You can find Back To Black on Qobuz by searching for its title or following this link: https://www.qobuz.com/us-en/album/back-to-black-amy-winehouse/0060251713041. This is a high-quality version with 11 tracks.
-
HDtracks: This is a website that focuses on downloading hi-res audio files only. HDtracks offers over 30 million songs in various genres and formats, including WAV (Waveform Audio File Format), a format that stores uncompressed raw audio data.
-
Where to Stream and Download Back To Black Online?
-
If you prefer to stream or download Back To Black online instead of buying a physical copy, you have plenty of options to choose from. There are many websites and platforms that offer streaming and downloading services for music, including hi-res audio files. However, not all of them have the same quality, price, or availability of Back To Black. That's why you need to compare and contrast them before deciding which one to use.
-
-
To help you out, we have compiled a list of the top 10 websites and platforms that offer streaming and downloading services for Back To Black online. These are:
-
-
Spotify: This is one of the most popular and widely used platforms for streaming music online. Spotify offers over 70 million songs in various genres and formats, including MP3 (320kbps) and Ogg Vorbis (320kbps). You can find Back To Black on Spotify by searching for its title or following this link: https://open.spotify.com/album/6wQyf9wLwTnGQFXl0tYnXz. This is a standard version with 11 tracks.
-
Apple Music: This is another platform that specializes in streaming music online. Apple Music offers over 75 million songs in various genres and formats, including AAC (256kbps). You can find Back To Black on Apple Music by searching for its title or following this link: https://music.apple.com/us/album/back-to-black/1440857781. This is a deluxe version with 18 tracks.
-
Amazon Music: This is a platform that offers both streaming and downloading services for music online. Amazon Music offers over 70 million songs in various genres and formats, including MP3 (320kbps) and FLAC (16-bit/44.1kHz). You can find Back To Black on Amazon Music by searching for its title or following this link: https://www.amazon.com/Back-Black-Amy-Winehouse/dp/B000KCHZ1K. This is a standard version with 11 tracks.
-
Deezer: This is a platform that focuses on streaming music online. Deezer offers over 73 million songs in various genres and formats, including MP3 (320kbps) and FLAC (16-bit/44.1kHz). You can find Back To Black on Deezer by searching for its title or following this link: https://www.deezer.com/us/album/121006. This is a standard version with 11 tracks.
-
YouTube Music: This is a platform that allows you to stream music online from YouTube videos. YouTube Music offers over 70 million songs in various genres and formats, including AAC (128kbps) and Opus (160kbps). You can find Back To Black on YouTube Music by searching for its title or following this link: https://music.youtube.com/playlist?list=OLAK5uy_kZqJ7Z8o6k9j0yQmJY0g8F5i4xNl7aCgM. This is a standard version with 11 tracks.
-
-
These are some of the websites and platforms that offer streaming and downloading services for Back To Black online. However, there are many more options that you can explore, such as Pandora, Tidal, Qobuz, HDtracks, etc. You can choose any of them according to your preference and budget.
-
How to Enjoy Back To Black Online?
-
Once you have chosen your preferred website or platform to stream or download Back To Black online, you are ready to enjoy this amazing album on your device. However, there are some tips and tricks that you can follow to enhance your listening experience even more. Here are some of them:
-
-
Use a good pair of headphones or speakers: The quality of your headphones or speakers can make a big difference in how you hear the music. A good pair of headphones or speakers can deliver clear, balanced, and detailed sound that can bring out the best of the album. They can also isolate the noise from your surroundings and immerse you in the music.
-
Adjust the volume and equalizer settings: The volume and equalizer settings can affect how you perceive the music as well. A good volume level can make you feel the emotions and energy of the album without hurting your ears. A good equalizer setting can enhance the bass, treble, or midrange frequencies of the album according to your taste.
-
Read the lyrics and liner notes: The lyrics and liner notes of the album can give you more insight into the meaning and context of the songs. They can also help you appreciate the creativity and craftsmanship of Amy Winehouse as a singer-songwriter. You can find the lyrics and liner notes of Back To Black online on various websites, such as Genius, AZLyrics, AllMusic, etc.
-
Watch the music videos and documentaries: The music videos and documentaries of the album can add more visual and narrative elements to the music. They can also show you different aspects of Amy Winehouse's personality, style, and life story. You can find the music videos and documentaries of Back To Black online on various platforms, such as YouTube, Netflix, Hulu, etc.
-
-
These are some of the tips and tricks that you can follow to enjoy Back To Black online on your device. However, the most important thing is to enjoy the music with an open mind and heart. Back To Black is a masterpiece of soul music that showcases Amy Winehouse's talent and legacy. It is an album that will make you feel every emotion possible: joy, sadness, anger, love, and more.
-
How Can You Support the Amy Winehouse Foundation?
-
The Amy Winehouse Foundation is a registered charity in England and Wales that was set up in memory of Amy Winehouse after her death in 2011. The foundation's mission is to support, inform and inspire young people to build their self-esteem and resilience, so they can flourish. The foundation works on various projects and programmes that aim to prevent the effects of drug and alcohol misuse, as well as support young people with mental health issues, homelessness, and other challenges.
-
Some of the projects and programmes that the foundation runs or supports are:
-
-
Resilience work with young people: This is a programme that delivers drug and alcohol education and prevention workshops to schools and other youth settings across the UK. The programme uses peer educators who share their own experiences of overcoming substance misuse and related issues. The programme also provides training and support for teachers and parents.
-
Amy's Place: This is a recovery housing project for young women aged 18-30 who are recovering from drug and alcohol addiction. The project provides safe and supportive accommodation, as well as access to education, training, employment, and other services. The project aims to help the women rebuild their lives and achieve their goals.
-
Music therapy for children: This is a project that provides music therapy sessions to children with life-limiting or life-threatening conditions at hospices across the UK. The project uses music as a tool to help the children express their emotions, cope with pain, and improve their wellbeing.
-
Recovery pathways programme for young people: This is a programme that supports young people aged 16-24 who are in recovery from substance misuse or mental health issues. The programme offers mentoring, coaching, peer support, and access to education, training, employment, and volunteering opportunities. The programme aims to help the young people develop their skills, confidence, and aspirations.
-
Music projects outside the UK: This is a project that supports music-related initiatives for disadvantaged young people in countries such as Jamaica, Ireland, Thailand, and South Africa. The project partners with local organisations that use music as a way to engage, empower, and educate the young people.
-
-
If you want to support the Amy Winehouse Foundation and its work, you can do so in various ways. You can:
-
-
Donate: You can make a one-off or regular donation to the foundation via its website or other platforms such as JustGiving or PayPal. You can also donate by text or by cheque. Your donation will help fund the foundation's projects and programmes.
-
Fundraise: You can organise your own fundraising event or activity for the foundation, such as a bake sale, a quiz night, a sponsored run, or a gig. You can also join an existing fundraising event or challenge, such as a marathon or a skydive. You can register your fundraiser on the foundation's website or on JustGiving.
-
Volunteer: You can offer your time and skills to the foundation as a volunteer. You can help with events, administration, social media, research, or other tasks. You can also become a peer educator or a mentor for the foundation's programmes. You can apply to volunteer on the foundation's website.
-
Shop: You can buy merchandise from the foundation's online shop or from its partner retailers. You can find t-shirts, hoodies, bags, mugs, books, CDs, vinyls, and other items featuring Amy Winehouse's image or logo. A percentage of the sales will go to the foundation.
-
Raise awareness: You can spread the word about the foundation and its work by following its social media accounts, sharing its posts, signing up for its newsletter, or telling your friends and family about it. You can also learn more about the issues that affect young people and how you can help them.
-
-
These are some of the ways that you can support the Amy Winehouse Foundation and become a part of Amy's legacy. By supporting the foundation's work, you will help thousands of young people to feel supported and informed, so that they are better able to manage their emotional wellbeing and make informed choices around things that can affect their lives.
-
Conclusion
-
Amy Winehouse was a brilliant singer-songwriter who left behind a legacy of soulful and powerful music. Her album Back To Black is a masterpiece that showcases her talent and emotion. If you want to download Back To Black in high-resolution audio and enjoy this amazing album online, you can follow the tips and tricks that we have shared in this article. You can also watch some of the documentaries that have been made about Amy Winehouse's life and music, and learn more about her story and impact. And if you want to support the Amy Winehouse Foundation and its work with young people, you can donate, fundraise, volunteer, shop, or raise awareness.
-
We hope this article was helpful and informative for you. If you liked it, please share it with your friends and family who might be interested in downloading Back To Black online. And if you have any questions or feedback, please leave them in the comments section below. Thank you for reading!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Bloodstained.Curse.of.the.Moon.RIP-Unleashed Download ((TOP)) For Computer.md b/spaces/diacanFperku/AutoGPT/Bloodstained.Curse.of.the.Moon.RIP-Unleashed Download ((TOP)) For Computer.md
deleted file mode 100644
index 708182baf6c6518b246be6d28854021486b606c2..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Bloodstained.Curse.of.the.Moon.RIP-Unleashed Download ((TOP)) For Computer.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
And in terms of overhead, the GameCube can technically run off a couple of AAA game cartridges. Well, at least, this is what I found out when I tried out a GameCube on my computer to see what kind of games I could play on it and discovered the processor of a GameCube has a clock speed of 32MHz (0.0625ms). This is apparently not good enough to run any major game, and it can only run small 2D games.
-
Bloodstained.Curse.of.the.Moon.RIP-Unleashed Download For Computer
Games can be added to your game library from the Redeem Codes section of the PlayStation Store. Redeem the code, download and then add to your game library. You can select up to 10 game titles at a time. NOTE: Online multiplayer functionality and item storage will be removed for members.
-
The developers at Otome Altered Dimensions, creators of the very influential Otome game, created a self-contained soundtrack that they have finished developing for their Kickstarter funded game, Bloodstained: Curse of the Moon. Now there is a download available for the soundtrack, appropriately called the Return of the Lunar Witches. It includes 42 mina of music drawn from the Bloodstained: Curse of the Moon soundtrack!
-
Games with Gold is a promotional service that rewards loyal subscribers to the Xbox Live Gold service. Twice a month, subscribers receive a selection of Xbox One games and Xbox 360 games to download and play, free of charge. This offer does not extend to the free Xbox Live subscription; only Xbox Live Gold members have access to the free games.
-
Bloodstained is available on multiple platforms, including the Steam platform. Many of them require special client to be downloaded. We'll see below which of them are actually the Windows version of the game. We have not tested every platform and all games available in the store have not been tested. Note: This wiki is no longer maintained. Please check out the other documentation in the wiki top-level directory, or check the current project information in our WikiProject. Bloodstained.Curse.of.the.Moon.RIP-Unleashed Download For Computer
It was in 2011 when I started hearing about this title, and I was absolutely hyped about it. However, this was a title that no one ever got the chance to play. I was only able to play it in 2015. This particular version of the game is pretty old, but I think the devs are currently still supporting this title and there are plans to make a sequel. The full version of the game can be bought on PC, Mac, and Linux and contains both the console version and the Steam version of the game. The only difference is that the console version does not have an online multiplayer mode.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Desktop Mascot Engine Download Now !!EXCLUSIVE!!.md b/spaces/diacanFperku/AutoGPT/Desktop Mascot Engine Download Now !!EXCLUSIVE!!.md
deleted file mode 100644
index e39e3c29068a444fbce8fd584cb3e1ea82ab21cd..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Desktop Mascot Engine Download Now !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-sonic 06 rom download SONIC THE HEDGEHOG is the fan-made PC remake of the ... via Unity Engine, and a demo is currently available for you to download. ... AM8 development team to develop a game featuring a mascot for the company. 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Download [BETTER] Terjemahan Financial Accounting Ifrs Edition Weygandt Kieso.md b/spaces/diacanFperku/AutoGPT/Download [BETTER] Terjemahan Financial Accounting Ifrs Edition Weygandt Kieso.md
deleted file mode 100644
index 78e908ce0ab2823327e7ac135df98f676f85e805..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Download [BETTER] Terjemahan Financial Accounting Ifrs Edition Weygandt Kieso.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Download Terjemahan Intermediate Accounting Kieso Bab 15 13 ... Tahukah Anda buku intermediate accounting IFRS edition karya Weygandt Kimmel . ... Intermediate Kieso Bab 7 Financial Accounting and Accounting Standards Kieso Ifrs ... 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Mad Money Torrent.md b/spaces/diacanFperku/AutoGPT/Mad Money Torrent.md
deleted file mode 100644
index 01acfab35739b7e9604ec067bcecd8d191c20d52..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mad Money Torrent.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
Mad Money: A Comedy Heist Movie Starring Diane Keaton and Queen Latifah
-
Mad Money is a 2008 American comedy film directed by Callie Khouri and starring Diane Keaton, Queen Latifah, Katie Holmes, and Ted Danson. It is loosely based on the 2001 British film Hot Money.
The film follows three female employees of the Federal Reserve Bank of Kansas City who hatch a plan to steal money that is about to be destroyed. The film received mixed reviews from critics and grossed $26.4 million worldwide.
-
If you are looking for a fun and light-hearted movie to watch, you can download Mad Money from various torrent sites. However, you should be careful when downloading torrents, as some of them may contain malware or viruses. You should also use a VPN to protect your privacy and avoid legal troubles.
-
Here are some of the best torrent sites where you can find Mad Money:
-
-
YTS: This site offers high-quality movies in small file sizes. You can find Mad Money in 720p and 1080p resolutions[^1^].
-
YTSDDL: This site is similar to YTS, but it also provides direct download links and magnet links for Mad Money[^2^].
-
The Pirate Bay: This site is one of the most popular and resilient torrent sites in the world. You can find Mad Money in various formats and qualities[^4^].
-
Zooqle: This site is a newcomer in the torrent scene, but it has a large collection of movies and TV shows. You can find Mad Money in 720p and 1080p resolutions[^4^].
-
-
Before you download Mad Money, you may want to check out its IMDb page[^3^] for more information about the cast, plot, trivia, and ratings.
Mad Money is a movie that appeals to fans of comedy and heist genres. It has a witty and fast-paced script, a talented and charismatic cast, and a clever twist at the end. The movie also explores some themes such as greed, friendship, and empowerment.
-
The movie was released in January 2008 and received mixed reviews from critics. Some praised the performances of the lead actresses and the humor of the film, while others criticized the plot holes, the lack of realism, and the wasted potential of the premise. The movie has a 21% rating on Rotten Tomatoes and a 5.8/10 rating on IMDb.
-
-
Mad Money is not a masterpiece, but it is an entertaining and enjoyable movie that will make you laugh and root for the protagonists. If you are looking for a fun way to spend an hour and a half, you can download Mad Money from one of the torrent sites mentioned above.
Mad Money is a movie that has some memorable quotes and scenes. For example, one of the funniest moments is when Bridget (Diane Keaton) tries to explain to her husband Don (Ted Danson) why she needs to buy a shredder. She says: "I'm starting a small business. I'm going to shred cheese."
-
Another hilarious scene is when Nina (Queen Latifah) and Jackie (Katie Holmes) have to distract a security guard named Barry (Roger Cross) who has a crush on Nina. They pretend to be interested in his hobbies and ask him questions like: "Do you like Star Trek or Star Wars better?" and "Do you think aliens are real?"
-
The movie also has some positive messages about female empowerment and friendship. The three women bond over their common goal and support each other through their difficulties. They also show courage and intelligence in executing their plan and outsmarting their enemies.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mard Movies Torrent.md b/spaces/diacanFperku/AutoGPT/Mard Movies Torrent.md
deleted file mode 100644
index 7b28846424c010923bb5b24eb9bb5f35edb7c250..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mard Movies Torrent.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
You have to have this kind of website to make money. In short, they provide us with the film so we can pay for the movies we watch. But it's important to be aware that these people are not who they say they are. There is no guarantee that this website is safe, and it is 100% illegal. People can also get into trouble for downloading movies, which is an offense. We have to pay for movies, so we watch them all the time. And so, when we are not at home, we watch them on the internet.
-
This website is not safe. These people will try to make money from us by providing the movie for free. They can do this by charging us money to watch the movies, by using our personal information in an inappropriate manner, or by using our computer to perform illegal actions. There is a risk that our personal information could be used by these people, and if it is, it is not clear whether we can get our money or our data back. There is a risk that we may be charged with a crime by the police. We cannot know what the risks are. It is up to us to decide whether we want to download the movie from this website.
They say that they provide us with the film for free. They are not to be trusted. The creators of the website can make money from us. If we download movies from them, then we are responsible for the use of our personal information. At the end of the day, if they are not safe, then it is best that we don't download the movie from the website.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Music Wars Empire FM Download] [Torrent] __HOT__.md b/spaces/diacanFperku/AutoGPT/Music Wars Empire FM Download] [Torrent] __HOT__.md
deleted file mode 100644
index bca3eebb51af7aa1260b58e44bf804cf614615ee..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Music Wars Empire FM Download] [Torrent] __HOT__.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
- . . much more, including battling in the Mainland, East and West Coast Armies, storming across the islands, building your empire, and waging war for world domination . . . thanks to the new Empire Engine.
-
-The only question is, will you settle for second best when you can have it all with Music Wars Empire: FM?
-
-> Music Wars Empire: FM
-
-> Play the Music Wars Game
-
-> Turn up the volume on the music industry!
-
-Music Wars Empire: FM
-
-Is available for free on all platforms, including: Android, iOS, Windows Phone, Xbox, Web, and now BlackBerry.
-
-Includes
-
-* High-quality music for sale
-
-* Direct Link to royalty-free tracks
-
-* Browse by Artist or by Album
-
-* Play full albums and single tracks, without having to go through artists
-
-* Read, write and delete notes
-
-* Browse the music by Genre
-
-* Browse the music by Mood
-
-* Add and share music
-
-* Sync music with other Music Wars users
-
-* Play music and listen to music along to your games
-
-* Add music to playlists
-
-* Download music to play offline
-
-* Direct Link to the Empire Engine
-
-* Full offline support
-
-* Save your music
-
-* Support for all web browsers and mobile platforms
-
-* Standard/Epic/Legendary Music Packs
-
-* Diverse set of music instruments
-
-* More than 800 music tracks
-
-* Full integration with iOS music player
-
-* Add music to your desktop using the new Music Wars HD iOS app (coming soon)
-
-* Unlimited disk space
-
-* Music Wars support
-
-* Battle in the Mainland, East and West Coast Armies, storming across the islands, building your empire, and waging war for world domination
-
-* Empire Engine: fight battles, make alliances, and forge the future of music
-
-* Free music for Android, iOS, and BlackBerry
-
-* Use your devices' native music player
-
-* Multiple account support (iOS)
-
-* Voice commands
-
-* Support for current and legacy music packs
-
-* Full offline support
-
-* Beautiful graphics
-
-* Fight with the music
-
-* Many more features
-
-* Android version coming soon
-
-* Apple Watch support
-
-* iOS version coming soon
-
-* Blackberry version 4fefd39f24
-
-
-
diff --git a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/commons.py b/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/commons.py
deleted file mode 100644
index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/commons.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
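
A toy smoke test of a few of the helpers above, assuming this file is importable as `commons` (shapes are arbitrary):

```python
# Toy illustration of sequence_mask / add_timing_signal_1d / rand_slice_segments.
import torch
from commons import sequence_mask, add_timing_signal_1d, rand_slice_segments

lengths = torch.tensor([3, 5])
print(sequence_mask(lengths, max_length=5).int())
# tensor([[1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1]])

x = torch.randn(2, 8, 10)                                   # [batch, channels, time]
x = add_timing_signal_1d(x)                                 # add the sinusoidal position signal
segments, ids_str = rand_slice_segments(x, segment_size=4)  # random 4-frame window per item
print(segments.shape)                                       # torch.Size([2, 8, 4])
```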
diff --git a/spaces/doluvor/faster-whisper-webui/README.md b/spaces/doluvor/faster-whisper-webui/README.md
deleted file mode 100644
index b3c10b031523b15b2b5efc4df4b1c40121ca0bea..0000000000000000000000000000000000000000
--- a/spaces/doluvor/faster-whisper-webui/README.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-title: Faster Whisper Webui
-emoji: 🚀
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: aadnk/faster-whisper-webui
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-# Running Locally
-
-To run this program locally, first install Python 3.9+ and Git. Then install PyTorch 1.10.1+ and all the other dependencies:
-```
-pip install -r requirements.txt
-```
-
-You can find detailed instructions for how to install this on Windows 10/11 [here (PDF)](docs/windows/install_win10_win11.pdf).
-
-Finally, run the full version (no audio length restrictions) of the app with parallel CPU/GPU enabled:
-```
-python app.py --input_audio_max_duration -1 --server_name 127.0.0.1 --auto_parallel True
-```
-
-You can also run the CLI interface, which is similar to Whisper's own CLI but also supports the following additional arguments:
-```
-python cli.py \
-[--vad {none,silero-vad,silero-vad-skip-gaps,silero-vad-expand-into-gaps,periodic-vad}] \
-[--vad_merge_window VAD_MERGE_WINDOW] \
-[--vad_max_merge_size VAD_MAX_MERGE_SIZE] \
-[--vad_padding VAD_PADDING] \
-[--vad_prompt_window VAD_PROMPT_WINDOW]
-[--vad_cpu_cores NUMBER_OF_CORES]
-[--vad_parallel_devices COMMA_DELIMITED_DEVICES]
-[--auto_parallel BOOLEAN]
-```
-In addition, you may also use URL's in addition to file paths as input.
-```
-python cli.py --model large --vad silero-vad --language Japanese "https://www.youtube.com/watch?v=4cICErqqRSM"
-```
-
-Rather than supplying arguments to `app.py` or `cli.py`, you can also use the configuration file [config.json5](config.json5). See that file for more information.
-If you want to use a different configuration file, you can use the `WHISPER_WEBUI_CONFIG` environment variable to specify the path to another file.
-
-### Multiple Files
-
-You can upload multiple files either through the "Upload files" option, or as a playlist on YouTube.
-Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section.
-When more than one file is processed, the UI will also generate a "All_Output" zip file containing all the text output files.
-
-## Whisper Implementation
-
-You can choose between using `whisper` or `faster-whisper`. [Faster Whisper](https://github.com/guillaumekln/faster-whisper) is a drop-in replacement for the
-default Whisper that achieves up to a 4x speedup and a 2x reduction in memory usage.
-
-You can install the requirements for a specific Whisper implementation in `requirements-fasterWhisper.txt`
-or `requirements-whisper.txt`:
-```
-pip install -r requirements-fasterWhisper.txt
-```
-And then run the App or the CLI with the `--whisper_implementation faster-whisper` flag:
-```
-python app.py --whisper_implementation faster-whisper --input_audio_max_duration -1 --server_name 127.0.0.1 --auto_parallel True
-```
-You can also select the whisper implementation in `config.json5`:
-```json5
-{
- "whisper_implementation": "faster-whisper"
-}
-```
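
For reference, the `faster-whisper` package can also be used directly from Python outside this WebUI; a minimal sketch (the model size, device, and file name below are placeholders):

```python
# Minimal direct use of faster-whisper (sketch; adjust device/compute_type to your setup).
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")

# transcribe() yields segments lazily and returns language-detection info
segments, info = model.transcribe("audio.mp3", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```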
-### GPU Acceleration
-
-In order to use GPU acceleration with Faster Whisper, both CUDA 11.2 and cuDNN 8 must be installed. You may want to install it in a virtual environment like Anaconda.
-
-## Google Colab
-
-You can also run this Web UI directly on [Google Colab](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing), if you haven't got a GPU powerful enough to run the larger models.
-
-See the [colab documentation](docs/colab.md) for more information.
-
-## Parallel Execution
-
-You can also run both the Web-UI or the CLI on multiple GPUs in parallel, using the `vad_parallel_devices` option. This takes a comma-delimited list of
-device IDs (0, 1, etc.) that Whisper should be distributed to and run on concurrently:
-```
-python cli.py --model large --vad silero-vad --language Japanese \
---vad_parallel_devices 0,1 "https://www.youtube.com/watch?v=4cICErqqRSM"
-```
-
-Note that this requires a VAD to function properly; otherwise only the first GPU will be used. You could use `periodic-vad` to avoid taking the hit
-of running Silero VAD, at a slight cost to accuracy.
-
-This is achieved by creating N child processes (where N is the number of selected devices), where Whisper is run concurrently. In `app.py`, you can also
-set the `vad_process_timeout` option. This configures the number of seconds until a process is killed due to inactivity, freeing RAM and video memory.
-The default value is 30 minutes.
-
-```
-python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600
-```
-
-To execute the Silero VAD itself in parallel, use the `vad_cpu_cores` option:
-```
-python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600 --vad_cpu_cores 4
-```
-
-You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to always free video memory after a period of time.
-
-### Auto Parallel
-
-You can also set `auto_parallel` to `True`. This will set `vad_parallel_devices` to use all the GPU devices on the system, and `vad_cpu_cores` to be equal to the number of
-cores (up to 8):
-```
-python app.py --input_audio_max_duration -1 --auto_parallel True
-```
-
-# Docker
-
-To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU.
-Then either use the GitLab hosted container below, or check out this repository and build an image:
-```
-sudo docker build -t whisper-webui:1 .
-```
-
-You can then start the WebUI with GPU support like so:
-```
-sudo docker run -d --gpus=all -p 7860:7860 whisper-webui:1
-```
-
-Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and are fine with running it on the CPU only:
-```
-sudo docker run -d -p 7860:7860 whisper-webui:1
-```
-
-# GitLab Docker Registry
-
-This Docker container is also hosted on GitLab:
-
-```
-sudo docker run -d --gpus=all -p 7860:7860 registry.gitlab.com/aadnk/whisper-webui:latest
-```
-
-## Custom Arguments
-
-You can also pass custom arguments to `app.py` in the Docker container, for instance to be able to use all the GPUs in parallel (replace administrator with your user):
-```
-sudo docker run -d --gpus all -p 7860:7860 \
---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
---mount type=bind,source=/home/administrator/.cache/huggingface,target=/root/.cache/huggingface \
---restart=on-failure:15 registry.gitlab.com/aadnk/whisper-webui:latest \
-app.py --input_audio_max_duration -1 --server_name 0.0.0.0 --auto_parallel True \
---default_vad silero-vad --default_model_name large
-```
-
-You can also call `cli.py` the same way:
-```
-sudo docker run --gpus all \
---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
---mount type=bind,source=/home/administrator/.cache/huggingface,target=/root/.cache/huggingface \
---mount type=bind,source=${PWD},target=/app/data \
-registry.gitlab.com/aadnk/whisper-webui:latest \
-cli.py --model large --auto_parallel True --vad silero-vad \
---output_dir /app/data /app/data/YOUR-FILE-HERE.mp4
-```
-
-## Caching
-
-Note that the models themselves are currently not included in the Docker images, and will be downloaded on demand.
-To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally)
-prepopulate the directory with the different Whisper models.
-```
-sudo docker run -d --gpus=all -p 7860:7860 \
---mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
-registry.gitlab.com/aadnk/whisper-webui:latest
-```
\ No newline at end of file
diff --git a/spaces/dongyi/MMFS/data/deprecated/patch_data.py b/spaces/dongyi/MMFS/data/deprecated/patch_data.py
deleted file mode 100644
index 4f893e78cd8554dbe223930e22fcd0db7610512a..0000000000000000000000000000000000000000
--- a/spaces/dongyi/MMFS/data/deprecated/patch_data.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import random
-import torch
-
-def load_patches(patch_batch_size, batch_size, patch_size, num_patch, diff_patch, index, data, transforms, return_dict):
- if patch_size > 0:
- assert (patch_batch_size % batch_size == 0), \
- "patch_batch_size is not divisible by batch_size."
- if 'paired_A' in return_dict or 'paired_B' in return_dict:
- if not diff_patch:
- # load patch from current image
- patchA = return_dict['paired_A'].clone()
- patchB = return_dict['paired_B'].clone()
- else:
- # load patch from a different image
- pathA = data['paired_A_path'][(index + 1) % len(data['paired_A_path'])]
- pathB = data['paired_B_path'][(index + 1) % len(data['paired_B_path'])]
- patchA, patchB = transforms['paired'](pathA, pathB)
- else:
- if not diff_patch:
- # load patch from current image
- patchA = return_dict['unpaired_A'].clone()
- patchB = return_dict['unpaired_B'].clone()
- else:
- # load patch from a different image
- pathA = data['unpaired_A_path'][(index + 1) % len(data['unpaired_A_path'])]
- pathB = data['unpaired_B_path'][(index + 1) % len(data['unpaired_B_path'])]
- patchA, patchB = transforms['unpaired'](pathA, pathB)
-
- # crop patch
- patchAs = []
- patchBs = []
- _, h, w = patchA.size()
-
- for _ in range(num_patch):
- r = random.randint(0, h - patch_size - 1)
- c = random.randint(0, w - patch_size - 1)
- patchAs.append(patchA[:, r:r + patch_size, c:c + patch_size])
- patchBs.append(patchB[:, r:r + patch_size, c:c + patch_size])
-
- patchAs = torch.cat(patchAs, 0)
- patchBs = torch.cat(patchBs, 0)
-
- return_dict['patch_A'] = patchAs
- return_dict['patch_B'] = patchBs
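
A hypothetical driver showing the shapes `load_patches` expects and produces; the tensors, paths, and the stand-in transform below are placeholders, not the dataset code that normally calls this function, and the import path is assumed from this repo layout.

```python
# Hypothetical call to load_patches with placeholder data.
import torch
from data.deprecated.patch_data import load_patches  # path assumed from the repo layout

def fake_paired_transform(path_a, path_b):
    # stand-in for the real paired image loader/augmenter
    return torch.rand(3, 256, 256), torch.rand(3, 256, 256)

return_dict = {"paired_A": torch.rand(3, 256, 256), "paired_B": torch.rand(3, 256, 256)}
data = {"paired_A_path": ["a0.png", "a1.png"], "paired_B_path": ["b0.png", "b1.png"]}
transforms = {"paired": fake_paired_transform, "unpaired": fake_paired_transform}

load_patches(patch_batch_size=4, batch_size=1, patch_size=64, num_patch=4,
             diff_patch=False, index=0, data=data, transforms=transforms,
             return_dict=return_dict)

# four 64x64 crops per image, concatenated along the channel dimension
print(return_dict["patch_A"].shape)  # torch.Size([12, 64, 64])
```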
diff --git a/spaces/dukai289/learning_streamlit/pages/3_Layout.py b/spaces/dukai289/learning_streamlit/pages/3_Layout.py
deleted file mode 100644
index ea229ad57bca14046663854c7a3877c919ebaa8d..0000000000000000000000000000000000000000
--- a/spaces/dukai289/learning_streamlit/pages/3_Layout.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import streamlit as st
-import time
-
-
-
-st.markdown('# Layout')
-
-
-st.markdown('## 1. st.sidebar')
-st.sidebar.markdown('## 1. st.sidebar')
-code = '''
- import streamlit as st
-
- # Add a selectbox to the sidebar:
- add_selectbox = st.sidebar.selectbox(
- 'How would you like to be contacted?',
- ('Email', 'Home phone', 'Mobile phone')
- )
-
- # Add a slider to the sidebar:
- add_slider = st.sidebar.slider(
- 'Select a range of values',
- 0.0, 100.0, (25.0, 75.0)
- )
- '''
-st.code(code)
-
-add_selectbox = st.sidebar.selectbox(
- 'How would you like to be contacted?',
- ('Email', 'Home phone', 'Mobile phone')
- )
-
-add_slider = st.sidebar.slider(
- 'Select a range of values',
- 0.0, 100.0, (25.0, 75.0)
- )
-
-
-
-st.divider()
-st.markdown('## 2. st.columns')
-st.sidebar.markdown('## 2. st.columns')
-code = '''
- import streamlit as st
-
- left_column, right_column = st.columns(2)
- # You can use a column just like st.sidebar:
- left_column.button('Press me!')
-
- # Or even better, call Streamlit functions inside a "with" block:
- with right_column:
- chosen = st.radio(
- 'Sorting hat',
- ("Gryffindor", "Ravenclaw", "Hufflepuff", "Slytherin")
- )
- st.write(f"You are in {chosen} house!")
- '''
-st.code(code)
-
-left_column, right_column = st.columns(2)
-left_column.button('Press me!')
-
-with right_column:
- chosen = st.radio(
- 'Radio -- Sorting hat',
- ("Gryffindor", "Ravenclaw", "Hufflepuff", "Slytherin")
- )
- st.write(f"You are in {chosen} house!")
-
-
-
-st.divider()
-st.markdown('## 3. st.expander')
-st.sidebar.markdown('## 3. st.expander')
-
-code = '''
- import streamlit as st
-
- st.bar_chart({"data": [1, 5, 2, 6, 2, 1]})
-
- with st.expander("See explanation"):
- st.write('\'\'
- The chart above shows some numbers I picked for you.
- I rolled actual dice for these, so they're *guaranteed* to
- be random.
- '\'\')
- st.image("https://static.streamlit.io/examples/dice.jpg")
- '''
-st.code(code)
-
-st.bar_chart({"data": [1, 5, 2, 6, 2, 1]})
-
-with st.expander("See explanation"):
- st.write('''
- The chart above shows some numbers I picked for you.
- I rolled actual dice for these, so they're *guaranteed* to
- be random.
- ''')
- st.image("https://static.streamlit.io/examples/dice.jpg")
-
-
-
-st.divider()
-st.markdown('## 4. st.progress')
-st.sidebar.markdown('## 4. st.progress')
-code = '''
- import streamlit as st
- import time
-
- 'Starting a long computation...'
-
- # Add a placeholder
- latest_iteration = st.empty()
- bar = st.progress(0)
-
- for i in range(100):
- # Update the progress bar with each iteration.
- latest_iteration.text(f'Iteration {i+1}')
- bar.progress(i + 1)
- time.sleep(0.1)
-
- '...and now we\'re done!'
-'''
-
-st.code(code)
-
-'Starting a long computation...'
-
-latest_iteration = st.empty()
-bar = st.progress(0)
-
-for i in range(100):
- latest_iteration.text(f'Iteration {i+1}')
- bar.progress(i + 1)
- time.sleep(0.1)
-
-'...and now we\'re done!'
\ No newline at end of file
diff --git a/spaces/dwolfe66/text-generation-webui-space/api-example-stream.py b/spaces/dwolfe66/text-generation-webui-space/api-example-stream.py
deleted file mode 100644
index a5ed420252fdceab73cc26d83a7b87f60981ec95..0000000000000000000000000000000000000000
--- a/spaces/dwolfe66/text-generation-webui-space/api-example-stream.py
+++ /dev/null
@@ -1,90 +0,0 @@
-'''
-
-Contributed by SagsMug. Thank you SagsMug.
-https://github.com/oobabooga/text-generation-webui/pull/175
-
-'''
-
-import asyncio
-import json
-import random
-import string
-
-import websockets
-
-
-def random_hash():
- letters = string.ascii_lowercase + string.digits
- return ''.join(random.choice(letters) for i in range(9))
-
-async def run(context):
- server = "127.0.0.1"
- params = {
- 'max_new_tokens': 200,
- 'do_sample': True,
- 'temperature': 0.5,
- 'top_p': 0.9,
- 'typical_p': 1,
- 'repetition_penalty': 1.05,
- 'top_k': 0,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- }
- session = random_hash()
-
- async with websockets.connect(f"ws://{server}:7860/queue/join") as websocket:
- while content := json.loads(await websocket.recv()):
-            # Python 3.10+ match syntax; replace with if/elif on older versions
- match content["msg"]:
- case "send_hash":
- await websocket.send(json.dumps({
- "session_hash": session,
- "fn_index": 7
- }))
- case "estimation":
- pass
- case "send_data":
- await websocket.send(json.dumps({
- "session_hash": session,
- "fn_index": 7,
- "data": [
- context,
- params['max_new_tokens'],
- params['do_sample'],
- params['temperature'],
- params['top_p'],
- params['typical_p'],
- params['repetition_penalty'],
- params['top_k'],
- params['min_length'],
- params['no_repeat_ngram_size'],
- params['num_beams'],
- params['penalty_alpha'],
- params['length_penalty'],
- params['early_stopping'],
- ]
- }))
- case "process_starts":
- pass
- case "process_generating" | "process_completed":
- yield content["output"]["data"][0]
- # You can search for your desired end indicator and
- # stop generation by closing the websocket here
- if (content["msg"] == "process_completed"):
- break
-
-prompt = "What I would like to say is the following: "
-
-async def get_result():
- async for response in run(prompt):
- # Print intermediate steps
- print(response)
-
- # Print final result
- print(response)
-
-asyncio.run(get_result())
diff --git a/spaces/easrng/text-to-emoji/app.py b/spaces/easrng/text-to-emoji/app.py
deleted file mode 100644
index 88372c0871512daa65dac4f0ea92ea359cd6fefb..0000000000000000000000000000000000000000
--- a/spaces/easrng/text-to-emoji/app.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import emoji_data_python
-import pickle
-from tqdm import tqdm
-from sentence_transformers import SentenceTransformer
-import numpy as np
-model = SentenceTransformer('all-mpnet-base-v2')
-try:
- with open('embeddings_list.pkl', 'rb') as f:
- embeddings_list = pickle.load(f)
-except Exception:  # cache file missing or unreadable; recompute embeddings below
- embeddings_list = []
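-# only embed emojis whose codepoints are not already in the cached list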
-emojis_to_compute = [e for e in emoji_data_python.emoji_data if e.unified not in [e[0] for e in embeddings_list]]
-if emojis_to_compute:
- for e in tqdm(emojis_to_compute, desc='Computing embeddings'):
- strings = [n.replace('_', ' ').strip() for n in e.short_names] + [e.name.lower()]
- for s in strings:
- embedding = model.encode(s)
- embeddings_list.append((e.unified, embedding))
- with open('embeddings_list.pkl', 'wb') as f:
- pickle.dump(embeddings_list, f)
-def closest_emoji(text):
-    text_embedding = model.encode(text)
-    best_unified = None
-    best_distance = np.inf
-    for unified, emoji_embedding in embeddings_list:
-        distance = np.linalg.norm(text_embedding - emoji_embedding)
-        if distance < best_distance:
-            best_distance = distance
-            best_unified = unified
-    return emoji_data_python.unified_to_char(best_unified)
-import gradio as gr
-emoji_input = gr.inputs.Textbox(label='text in')
-emoji_output = gr.outputs.Textbox(label='emoji out')
-iface = gr.Interface(fn=closest_emoji, inputs=emoji_input, outputs=emoji_output,
- title='text to emoji')
-iface.launch()
\ No newline at end of file
diff --git a/spaces/editing-images/project/static/js/index.js b/spaces/editing-images/project/static/js/index.js
deleted file mode 100644
index 8daed0c67f71a02472f1c35c474aa0888974a6ec..0000000000000000000000000000000000000000
--- a/spaces/editing-images/project/static/js/index.js
+++ /dev/null
@@ -1,78 +0,0 @@
-window.HELP_IMPROVE_VIDEOJS = false;
-
-var INTERP_BASE = "./static/interpolation/stacked";
-var NUM_INTERP_FRAMES = 240;
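-// Preload all interpolation frames up front so the slider can scrub without waiting on the network.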
-
-var interp_images = [];
-function preloadInterpolationImages() {
- for (var i = 0; i < NUM_INTERP_FRAMES; i++) {
- var path = INTERP_BASE + '/' + String(i).padStart(6, '0') + '.jpg';
- interp_images[i] = new Image();
- interp_images[i].src = path;
- }
-}
-
-function setInterpolationImage(i) {
- var image = interp_images[i];
- image.ondragstart = function() { return false; };
- image.oncontextmenu = function() { return false; };
- $('#interpolation-image-wrapper').empty().append(image);
-}
-
-
-$(document).ready(function() {
- // Check for click events on the navbar burger icon
- $(".navbar-burger").click(function() {
- // Toggle the "is-active" class on both the "navbar-burger" and the "navbar-menu"
- $(".navbar-burger").toggleClass("is-active");
- $(".navbar-menu").toggleClass("is-active");
-
- });
-
- var options = {
- slidesToScroll: 1,
- slidesToShow: 3,
- loop: true,
- infinite: true,
- autoplay: false,
- autoplaySpeed: 3000,
- }
-
- // Initialize all div with carousel class
- var carousels = bulmaCarousel.attach('.carousel', options);
-
- // Loop on each carousel initialized
- for(var i = 0; i < carousels.length; i++) {
- // Add listener to event
- carousels[i].on('before:show', state => {
- console.log(state);
- });
- }
-
- // Access to bulmaCarousel instance of an element
- var element = document.querySelector('#my-element');
- if (element && element.bulmaCarousel) {
- // bulmaCarousel instance is available as element.bulmaCarousel
- element.bulmaCarousel.on('before-show', function(state) {
- console.log(state);
- });
- }
-
- /*var player = document.getElementById('interpolation-video');
- player.addEventListener('loadedmetadata', function() {
- $('#interpolation-slider').on('input', function(event) {
- console.log(this.value, player.duration);
- player.currentTime = player.duration / 100 * this.value;
- })
- }, false);*/
- preloadInterpolationImages();
-
- $('#interpolation-slider').on('input', function(event) {
- setInterpolationImage(this.value);
- });
- setInterpolationImage(0);
- $('#interpolation-slider').prop('max', NUM_INTERP_FRAMES - 1);
-
- bulmaSlider.attach();
-
-})
diff --git a/spaces/elijahcilfone/dreambooth-training/README.md b/spaces/elijahcilfone/dreambooth-training/README.md
deleted file mode 100644
index 7992460a04508d8ab5187f8899f2aae7680a2cda..0000000000000000000000000000000000000000
--- a/spaces/elijahcilfone/dreambooth-training/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Dreambooth Training
-emoji: 🌖
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.10.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: akhaliq/dreambooth-training
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ennov8ion/stablediffusion-models/README.md b/spaces/ennov8ion/stablediffusion-models/README.md
deleted file mode 100644
index f051d73649aa2f5d89029420185485c2df6fc7ee..0000000000000000000000000000000000000000
--- a/spaces/ennov8ion/stablediffusion-models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Maximum Multiplier
-emoji: 🛕🛕
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: blueorigin6/Scifiart-Models
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/eson/tokenizer-arena/vocab/chatglm2_6b/__init__.py b/spaces/eson/tokenizer-arena/vocab/chatglm2_6b/__init__.py
deleted file mode 100644
index 519f78936ee733e25f54ad2baa09c3e162d7d3b0..0000000000000000000000000000000000000000
--- a/spaces/eson/tokenizer-arena/vocab/chatglm2_6b/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from transformers import AutoTokenizer
-tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
\ No newline at end of file
diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" "b/spaces/f2api/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
deleted file mode 100644
index cbda23b83d759e6a3a4da5847c37ddff662daab2..0000000000000000000000000000000000000000
--- "a/spaces/f2api/gpt-academic/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
+++ /dev/null
@@ -1,166 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-import re
-import unicodedata
-fast_debug = False
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
-def is_paragraph_break(match):
-    """
-    Decide from the given regex match whether a newline marks a paragraph break.
-    If the character before the newline ends a sentence (period, exclamation mark, question mark)
-    and the next character is uppercase, the newline is more likely to be a paragraph break.
-    The length of the preceding content is also used to check that the paragraph is long enough.
-    """
-    prev_char, next_char = match.groups()
-
-    # sentence-ending punctuation
-    sentence_endings = ".!?"
-
-    # minimum paragraph length threshold
-    min_paragraph_length = 140
-
-    if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
-        return "\n\n"
-    else:
-        return " "
-
-def normalize_text(text):
-    """
-    Normalize the text by converting ligatures and other special glyphs to their basic forms.
-    For example, the ligature "fi" is converted to "f" and "i".
-    """
-    # normalize the text and decompose ligatures
-    normalized_text = unicodedata.normalize("NFKD", text)
-
-    # strip the remaining non-ASCII characters
-    cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
-
-    return cleaned_text
-
-def clean_text(raw_text):
-    """
-    Clean and reformat the raw text extracted from a PDF.
-    1. Normalize the raw text.
-    2. Rejoin words that were hyphenated across line breaks.
-    3. Use a heuristic to decide whether each newline is a paragraph break and replace it accordingly.
-    """
-    # normalize the text
-    normalized_text = normalize_text(raw_text)
-
-    # rejoin words hyphenated across line breaks
-    text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
-
-    # find newlines that sit between two non-whitespace characters
-    newlines = re.compile(r'(\S)\n(\S)')
-
-    # replace each such newline with a space or a paragraph break according to the heuristic
-    final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
-
-    return final_text.strip()
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, glob, os, fitz
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- with fitz.open(fp) as doc:
- file_content = ""
- for page in doc:
- file_content += page.get_text()
- file_content = clean_text(file_content)
- print(file_content)
-
- prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
- i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=[],
- sys_prompt="总结文章。"
-            ) # with a timeout countdown
-
-
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
-            yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
- if not fast_debug: time.sleep(2)
-
- all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
- i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=history,
- sys_prompt="总结文章。"
-        ) # with a timeout countdown
-
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say); history.append(gpt_say)
-        yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
-        yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
-
-
-@CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- import glob, os
-
-    # Basic info: what the plugin does and who contributed it
- chatbot.append([
- "函数插件功能?",
- "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import fitz
-    except ImportError:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Clear the history to avoid overflowing the model input
- history = []
-
-    # Check the input argument; exit immediately if no valid path was given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Collect the list of files to process
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
- # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
-    # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Start the actual task
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
diff --git a/spaces/facebook/MusicGen/tests/common_utils/temp_utils.py b/spaces/facebook/MusicGen/tests/common_utils/temp_utils.py
deleted file mode 100644
index b45d896836799edcf1fee271409b390b3b6e4127..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/tests/common_utils/temp_utils.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import tempfile
-
-
-class TempDirMixin:
- """Mixin to provide easy access to temp dir.
- """
-
- temp_dir_ = None
-
- @classmethod
- def get_base_temp_dir(cls):
- # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory.
- # this is handy for debugging.
- key = "AUDIOCRAFT_TEST_DIR"
- if key in os.environ:
- return os.environ[key]
- if cls.temp_dir_ is None:
- cls.temp_dir_ = tempfile.TemporaryDirectory()
- return cls.temp_dir_.name
-
- @classmethod
- def tearDownClass(cls):
- if cls.temp_dir_ is not None:
- try:
- cls.temp_dir_.cleanup()
- cls.temp_dir_ = None
- except PermissionError:
-                # On Windows there is a known issue with `shutil.rmtree`,
- # which fails intermittently.
- # https://github.com/python/cpython/issues/74168
- # Following the above thread, we ignore it.
- pass
- super().tearDownClass()
-
- @property
- def id(self):
- return self.__class__.__name__
-
- def get_temp_path(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(os.path.dirname(path), exist_ok=True)
- return path
-
- def get_temp_dir(self, *paths):
- temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
- path = os.path.join(temp_dir, *paths)
- os.makedirs(path, exist_ok=True)
- return path
diff --git a/spaces/falterWliame/Face_Mask_Detection/Love Pyaar Ka Punchnama 2 Full Movie Download 720p Hd.md b/spaces/falterWliame/Face_Mask_Detection/Love Pyaar Ka Punchnama 2 Full Movie Download 720p Hd.md
deleted file mode 100644
index 8a3a9c948f1305a7ee7e4c4b3d6456b53728c700..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Love Pyaar Ka Punchnama 2 Full Movie Download 720p Hd.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
love Pyaar Ka Punchnama 2 full movie download 720p hd
-
- 4d29de3e1b
-
-
-
diff --git a/spaces/fatiXbelha/sd/Download Driver for Sharp AR-6020V Compatible with Windows 10 8 7 and Server.md b/spaces/fatiXbelha/sd/Download Driver for Sharp AR-6020V Compatible with Windows 10 8 7 and Server.md
deleted file mode 100644
index 6163d600544e5f9240312037978ab8a734cb9d4e..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Driver for Sharp AR-6020V Compatible with Windows 10 8 7 and Server.md
+++ /dev/null
@@ -1,143 +0,0 @@
-
-
How to Download Driver for Sharp AR-6020V
-
If you have a sharp ar-6020v printer, you might want to know how to download and install the latest driver for it. A driver is a software component that allows your computer and your printer to communicate with each other. Updating your driver can improve your printer's performance, compatibility, security, and stability. In this article, we will show you how to download driver for sharp ar-6020v printer in two easy methods. We will also explain what are the benefits of updating drivers, what are the common driver issues and solutions, and how to avoid potential risks.
-
What Is a Driver and Why Do You Need to Update It?
-
A driver is a software component that contains the code that enables your computer and your printer to communicate with each other. Without a driver, your computer cannot use any of the features of your printer, such as printing, scanning, copying, etc.
Updating your driver can provide several advantages for your system, such as:
-
-
Fixing bugs and errors that cause malfunctions
-
Improving compatibility with new features and standards
-
Optimizing resource usage for speed and efficiency
-
Protecting from vulnerabilities and security threats
-
Enhancing the quality and functionality of your printer
-
-
However, updating your driver also involves some risks, such as:
-
-
Installing incompatible or corrupted drivers that cause system instability or crashes
-
Downloading drivers from untrusted sources that contain malware or viruses
-
Overwriting or deleting important system files or settings
-
Voiding the warranty or support of your printer manufacturer
-
-
What Are the Common Driver Issues and Solutions?
-
Some of the common driver issues that you may encounter on your Windows computer are:
-
-
Your printer is not recognized or detected by your computer
-
Your printer is not working properly or showing error messages
-
Your printer is printing slowly or with poor quality
-
Your printer is consuming too much ink or paper
-
Your printer is incompatible with some applications or devices
-
-
To fix these driver issues, you can try the following solutions:
-
-
Use Device Manager to update, uninstall, or reinstall your driver
-
Download and install the latest version of your driver from the official website of your printer manufacturer
-
Use Windows Update to check for any available updates for your driver
-
Use a reliable third-party software tool to scan, download, and install drivers automatically
-
Contact your printer manufacturer or customer support for further assistance
-
-
How to Download Driver for Sharp AR-6020V Printer?
-
There are two methods that you can use to download driver for sharp ar-6020v printer. You can either use Device Manager or download and install the driver from the official website.
-
download driver for sharp ar-6020v windows 10
-download driver for sharp ar-6020v windows 7
-download driver for sharp ar-6020v windows 8.1
-download driver for sharp ar-6020v windows server 2016
-download driver for sharp ar-6020v windows server 2012
-download driver for sharp ar-6020v windows server 2008
-download driver for sharp ar-6020v mac os x
-download driver for sharp ar-6020v pcl6
-download driver for sharp ar-6020v ps
-download driver for sharp ar-6020v ppd
-download driver for sharp ar-6020v splc
-download driver for sharp ar-6020v twain
-download driver for sharp ar-6020v button manager
-download driver for sharp ar-6020v software
-download driver for sharp ar-6020v firmware
-download driver for sharp ar-6020v manual
-download driver for sharp ar-6020v brochure
-download driver for sharp ar-6020v specifications
-download driver for sharp ar-6020v features
-download driver for sharp ar-6020v reviews
-download driver for sharp ar-6020v price
-download driver for sharp ar-6020v toner
-download driver for sharp ar-6020v drum
-download driver for sharp ar-6020v scanner
-download driver for sharp ar-6020v printer
-download driver for sharp ar-6020v copier
-download driver for sharp ar-6020v mfp
-download driver for sharp ar-6020v a3
-download driver for sharp ar-6020v black and white
-download driver for sharp ar 6020n v
-
Method 1: Use Device Manager
-
Device Manager is a built-in tool in Windows that allows you to manage and update your device drivers. To use Device Manager to download driver for sharp ar-6020v printer, follow these steps:
-
-
Right-click on the Start menu button and select Device Manager.
-
Expand the Printers category and find your sharp ar-6020v printer.
-
Right-click on your printer and select Update driver.
-
Select Search automatically for updated driver software.
Wait for Windows to search and install the driver for your printer.
-
Restart your computer and test your printer.
-
-
If Windows cannot find the driver for your printer, you can try the second method.
-
Method 2: Download and Install the Driver from the Official Website
-
You can also download and install the driver for sharp ar-6020v printer from the official website of Sharp. To do this, follow these steps:
Select your region and country from the drop-down menus.
-
Click on Support and then Drivers & Software.
-
Type in sharp ar-6020v in the search box and click on Search.
-
Select your operating system and language from the drop-down menus.
-
Click on Download to download the driver file to your computer.
-
Open the downloaded file and follow the on-screen instructions to install the driver.
-
Restart your computer and test your printer.
-
-
Conclusion
-
In this article, we have shown you how to download driver for sharp ar-6020v printer in two easy methods. You can either use Device Manager or download and install the driver from the official website. Updating your driver can improve your printer's performance, compatibility, security, and stability. However, you should also be careful of the risks involved in updating drivers, such as installing incompatible or corrupted drivers, downloading drivers from untrusted sources, overwriting or deleting important system files or settings, or voiding the warranty or support of your printer manufacturer. Here are some tips and warnings for updating drivers:
-
-
Always backup your system and data before updating drivers.
-
Always download drivers from the official website of your printer manufacturer or a trusted source.
-
Always check the compatibility and specifications of your driver before installing it.
-
Always scan your downloaded files for malware or viruses before opening them.
-
Always follow the instructions and guidelines provided by your printer manufacturer or customer support.
-
-
We hope this article has helped you learn how to download driver for sharp ar-6020v printer. If you have any feedback or questions, please feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
What is sharp ar-6020v printer?
-
Sharp ar-6020v printer is a multifunctional device that can print, copy, and scan documents. It has a print speed of up to 20 pages per minute, a paper capacity of up to 1100 sheets, and a resolution of up to 600 x 600 dpi. It also supports duplex printing, network printing, USB printing, and mobile printing. It is suitable for small to medium-sized offices that need a reliable and efficient printer.
-
How to check the driver version of sharp ar-6020v printer?
-
To check the driver version of sharp ar-6020v printer, you can use Device Manager. Here are the steps:
-
-
Right-click on the Start menu button and select Device Manager.
-
Expand the Printers category and find your sharp ar-6020v printer.
Right-click on your printer and select Properties.
-
Click on the Driver tab and check the Driver Version and Driver Date.
-
-
If the driver version is outdated or incorrect, you can update it using one of the methods mentioned above.
-
How to uninstall the driver of sharp ar-6020v printer?
-
To uninstall the driver of sharp ar-6020v printer, you can use Device Manager. Here are the steps:
-
-
Right-click on the Start menu button and select Device Manager.
-
Expand the Printers category and find your sharp ar-6020v printer.
-
Right-click on your printer and select Uninstall device.
-
Check the box that says Delete the driver software for this device and click on Uninstall.
-
Restart your computer and remove any remaining files or folders related to the driver.
-
-
If you want to reinstall the driver, you can use one of the methods mentioned above.
-
How to fix driver errors on Windows?
-
If you encounter any driver errors on Windows, such as blue screen of death, device not working properly, device not recognized, etc., you can try the following solutions:
-
-
Run Windows Troubleshooter to diagnose and fix common problems.
-
Use System Restore to undo any recent changes that may have caused the error.
-
Use Safe Mode to start your computer with minimal drivers and services.
-
Use System File Checker to scan and repair corrupted system files.
-
Use Disk Cleanup to delete any temporary or unnecessary files that may interfere with the driver.
-
-
How to update drivers automatically without risks?
-
If you want to update drivers automatically without risks, you can use a reliable third-party software tool that can scan, download, and install drivers for you. Some of the features that you should look for in such a tool are:
-
-
Compatibility with your operating system and device model
-
Access to a large database of official and up-to-date drivers
-
Ability to backup and restore drivers in case of any problems
-
Security and privacy protection from malware or viruses
If you are a fan of flight simulation games, you might have heard of Infinite Flight Simulator, one of the most comprehensive and realistic flight simulator apps on mobile devices. Infinite Flight Simulator lets you explore high-definition scenery in regions from around the world, fly dozens of aircraft with detailed cockpits and animations, plan your flights with real-world navigation data, and join thousands of other pilots and air traffic controllers in a global multiplayer experience. However, if you want to unlock all the features and content that Infinite Flight Simulator has to offer, you will need to subscribe to Infinite Flight Pro, which can be quite expensive for some users. That's why some people might want to download a mod game version of Infinite Flight Simulator, which can give them access to everything for free or with some extra benefits. But what is a mod game and how can you download it? In this article, we will explain what a mod game is, how to download a mod game version of Infinite Flight Simulator for Android and iOS devices, what are the benefits and risks of doing so, and some tips and tricks for playing it.
A mod game is a modified version of an original game that changes or adds some aspects of the gameplay, graphics, sound, or content. Mod games are usually created by fans or developers who want to improve or customize their gaming experience. Some examples of mod games are Minecraft with different textures, skins, or maps; GTA with new vehicles, weapons, or missions; or Pokemon with new regions, characters, or types. Mod games can be fun and exciting, as they can offer new challenges, features, or options that are not available in the original game. However, mod games are also unofficial and unauthorized, which means they can have some drawbacks or risks that we will discuss later.
-
How to Download Mod Game Infinite Flight Simulator
-
If you want to download a mod game version of Infinite Flight Simulator, you will need to follow these steps:
-
-
Find a reliable source for downloading the mod game file. There are many websites that offer mod games for various apps, but not all of them are safe or trustworthy. You should do some research and read some reviews before choosing a site. Some popular sites for downloading mod games are Nexus Mods and CurseForge.
-
Download the mod game file from the source. The file will usually be in APK format for Android devices or IPA format for iOS devices. You might need to enable unknown sources or trust the developer in your device settings before installing the file.
-
Install the mod game file on your device. You might need to uninstall the original Infinite Flight Simulator app first if you have it. Then, follow the instructions on the screen to complete the installation process.
-
Launch the mod game app and enjoy your flight simulation experience. You should be able to access all the features and content that Infinite Flight Simulator has to offer without paying for a subscription or with some extra benefits depending on the mod game version you downloaded.
-
-
Benefits of Downloading Mod Game Infinite Flight Simulator
-
By downloading a mod game version of Infinite Flight Simulator , you can enjoy some benefits that can enhance your flight simulation experience. Some of these benefits are:
-
-
You can access all the regions, aircraft, and features that Infinite Flight Simulator has to offer without paying for a subscription. This means you can fly anywhere in the world, choose from over 80 aircraft with realistic models and physics, and use advanced flight planning, navigation, and instrumentation tools.
-
You can customize your gameplay with some extra options or settings that are not available in the original game. For example, you can adjust the weather, time, or traffic conditions, enable or disable realistic damage or fuel consumption, or change the camera or control modes.
-
You can have more fun and challenge with some new additions or modifications that are not present in the original game. For example, you can fly with new liveries, sounds, or animations, try out new missions, scenarios, or events, or compete with other players in leaderboards or rankings.
-
-
Risks and Challenges of Downloading Mod Game Infinite Flight Simulator
-
However, downloading a mod game version of Infinite Flight Simulator also comes with some risks and challenges that you should be aware of. Some of these risks and challenges are:
-
-
You might violate the terms of service or the intellectual property rights of the original game developer. This means you might face some legal consequences or penalties for using an unauthorized or modified version of the game.
-
You might compromise the security or performance of your device. This means you might expose your device to malware, viruses, or spyware that can harm your data or system, or cause your device to crash, freeze, or lag.
-
You might lose some features or functionality of the original game. This means you might not be able to update the game to the latest version, access the official online services or support, or sync your progress or achievements with other devices or platforms.
-
-
Tips and Tricks for Playing Mod Game Infinite Flight Simulator
-
If you decide to download a mod game version of Infinite Flight Simulator, here are some tips and tricks that can help you make the most out of your mod game experience:
-
download mod game infinite flight simulator apk
-download mod game infinite flight simulator latest version
-download mod game infinite flight simulator unlocked all planes
-download mod game infinite flight simulator for android
-download mod game infinite flight simulator free
-download mod game infinite flight simulator pro
-download mod game infinite flight simulator offline
-download mod game infinite flight simulator 2023
-download mod game infinite flight simulator global
-download mod game infinite flight simulator with multiplayer
-download mod game infinite flight simulator full
-download mod game infinite flight simulator hack
-download mod game infinite flight simulator premium
-download mod game infinite flight simulator 23.2.1
-download mod game infinite flight simulator apkdone[^1^]
-download mod game infinite flight simulator realistic
-download mod game infinite flight simulator 3d
-download mod game infinite flight simulator hd
-download mod game infinite flight simulator no root
-download mod game infinite flight simulator unlimited money
-download mod game infinite flight simulator apk pure
-download mod game infinite flight simulator rexdl
-download mod game infinite flight simulator revdl
-download mod game infinite flight simulator apk mirror
-download mod game infinite flight simulator apk obb
-download mod game infinite flight simulator for pc
-download mod game infinite flight simulator for ios
-download mod game infinite flight simulator for windows 10
-download mod game infinite flight simulator for mac
-download mod game infinite flight simulator for laptop
-download mod game infinite flight simulator online
-download mod game infinite flight simulator update
-download mod game infinite flight simulator new planes
-download mod game infinite flight simulator new airports
-download mod game infinite flight simulator new features
-download mod game infinite flight simulator cheats
-download mod game infinite flight simulator tips and tricks
-download mod game infinite flight simulator tutorial
-download mod game infinite flight simulator guide
-download mod game infinite flight simulator review
-how to download mod game infinite flight simulator
-where to download mod game infinite flight simulator
-best site to download mod game infinite flight simulator
-best app to download mod game infinite flight simulator
-best way to download mod game infinite flight simulator
-why to download mod game infinite flight simulator
-what is the best mod for the Infinite Flight Simulator Game?
-
-
Choose a reputable and reliable source for downloading the mod game file. As we mentioned before, not all websites that offer mod games are safe or trustworthy. You should do some research and read some reviews before choosing a site. You should also scan the file for any malware or viruses before installing it on your device.
-
Backup your data and system before installing the mod game file. In case something goes wrong or you want to revert to the original game version, you should have a backup of your data and system that you can restore easily. You should also uninstall the original game app first if you have it to avoid any conflicts or errors.
-
Follow the instructions carefully when installing and launching the mod game file. You should follow the steps that we outlined earlier when downloading and installing the mod game file on your device. You should also check if you need to enable unknown sources or trust the developer in your device settings before installing the file. You should also launch the mod game app from your device's home screen and not from the file manager.
-
Enjoy your flight simulation experience with moderation and caution. You should have fun and explore all the features and content that Infinite Flight Simulator has to offer with a mod game version. However, you should also be respectful and responsible when playing online with other players or controllers. You should also be careful not to damage or overheat your device by playing for too long or at high settings.
-
-
Conclusion
-
Infinite Flight Simulator is one of the best flight simulator apps on mobile devices that offers a realistic and comprehensive flight simulation experience. However, if you want to unlock all the features and content that Infinite Flight Simulator has to offer without paying for a subscription or with some extra benefits, you might want to download a mod game version of Infinite Flight Simulator. A mod game is a modified version of an original game that changes or adds some aspects of the gameplay, graphics, sound, or content. Downloading a mod game version of Infinite Flight Simulator can be fun and exciting, as it can offer new challenges, features, or options that are not available in the original game. However, downloading a mod game version of Infinite Flight Simulator also comes with some risks and challenges, such as violating the terms of service or the intellectual property rights of the original game developer, compromising the security or performance of your device, or losing some features or functionality of the original game. Therefore, you should be careful and cautious when downloading and playing a mod game version of Infinite Flight Simulator. You should also follow some tips and tricks that we shared in this article to help you make the most out of your mod game experience.
-
FAQs
-
Here are some frequently asked questions and answers about downloading mod game infinite flight simulator:
-
-
What is Infinite Flight Pro and how much does it cost?
-Infinite Flight Pro is a subscription service that gives you access to all the regions, aircraft, and features that Infinite Flight Simulator has to offer. It also allows you to join the global multiplayer mode and fly with other pilots and controllers. Infinite Flight Pro costs $9.99 per month, $49.99 per six months, or $79.99 per year.
-
Is downloading a mod game version of Infinite Flight Simulator legal?
-Downloading a mod game version of Infinite Flight Simulator is not legal, as it violates the terms of service and the intellectual property rights of the original game developer. You might face some legal consequences or penalties for using an unauthorized or modified version of the game.
-
Is downloading a mod game version of Infinite Flight Simulator safe?
-Downloading a mod game version of Infinite Flight Simulator is not safe, as it might compromise the security or performance of your device. You might expose your device to malware, viruses, or spyware that can harm your data or system, or cause your device to crash, freeze, or lag.
-
Can I update a mod game version of Infinite Flight Simulator?
-You cannot update a mod game version of Infinite Flight Simulator to the latest version, as it might not be compatible with the original game or the mod game source. You might also lose some features or functionality of the original game or the mod game by updating it.
-
Can I play online with a mod game version of Infinite Flight Simulator?
-You can play online with a mod game version of Infinite Flight Simulator, but you might not be able to access the official online services or support, or sync your progress or achievements with other devices or platforms. You might also encounter some issues or errors when playing online with other players or controllers who are using the original game or a different mod game version.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/DragGAN/drag_gan.py b/spaces/fffiloni/DragGAN/drag_gan.py
deleted file mode 100644
index a1f939fb09c3082bae26bc0a9f893c6d3095e58b..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/DragGAN/drag_gan.py
+++ /dev/null
@@ -1,238 +0,0 @@
-import copy
-import os
-import random
-import urllib.request
-
-import torch
-import torch.nn.functional as FF
-import torch.optim
-from torchvision import utils
-from tqdm import tqdm
-
-from stylegan2.model import Generator
-
-
-class DownloadProgressBar(tqdm):
- def update_to(self, b=1, bsize=1, tsize=None):
- if tsize is not None:
- self.total = tsize
- self.update(b * bsize - self.n)
-
-
-def get_path(base_path):
- BASE_DIR = os.path.join('checkpoints')
-
- save_path = os.path.join(BASE_DIR, base_path)
- if not os.path.exists(save_path):
- url = f"https://huggingface.co/aaronb/StyleGAN2/resolve/main/{base_path}"
- print(f'{base_path} not found')
- print('Try to download from huggingface: ', url)
- os.makedirs(os.path.dirname(save_path), exist_ok=True)
- download_url(url, save_path)
- print('Downloaded to ', save_path)
- return save_path
-
-
-def download_url(url, output_path):
- with DownloadProgressBar(unit='B', unit_scale=True,
- miniters=1, desc=url.split('/')[-1]) as t:
- urllib.request.urlretrieve(url, filename=output_path, reporthook=t.update_to)
-
-
-class CustomGenerator(Generator):
- def prepare(
- self,
- styles,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- styles = [self.style(s) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, f"noise_{i}") for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.n_latent - 1)
-
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- return latent, noise
-
- def generate(
- self,
- latent,
- noise,
- ):
- out = self.input(latent)
- out = self.conv1(out, latent[:, 0], noise=noise[0])
-
- skip = self.to_rgb1(out, latent[:, 1])
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- out = conv2(out, latent[:, i + 1], noise=noise2)
- skip = to_rgb(out, latent[:, i + 2], skip)
- if out.shape[-1] == 256: F = out
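-            # keep the 256x256 intermediate feature map; DragGAN uses it for motion supervision and point tracking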
- i += 2
-
- image = skip
- F = FF.interpolate(F, image.shape[-2:], mode='bilinear')
- return image, F
-
-
-def stylegan2(
- size=512,
- channel_multiplier=2,
- latent=512,
- n_mlp=8,
- ckpt='stylegan2-ffhq-config-f.pt'
-):
- g_ema = CustomGenerator(size, latent, n_mlp, channel_multiplier=channel_multiplier)
- checkpoint = torch.load(get_path(ckpt))
- g_ema.load_state_dict(checkpoint["g_ema"], strict=False)
- g_ema.requires_grad_(False)
- g_ema.eval()
- return g_ema
-
-
-def bilinear_interpolate_torch(im, y, x):
- """
- im : B,C,H,W
- y : 1,numPoints -- pixel location y float
- x : 1,numPOints -- pixel location y float
- """
-
- x0 = torch.floor(x).long()
- x1 = x0 + 1
-
- y0 = torch.floor(y).long()
- y1 = y0 + 1
-
- wa = (x1.float() - x) * (y1.float() - y)
- wb = (x1.float() - x) * (y - y0.float())
- wc = (x - x0.float()) * (y1.float() - y)
- wd = (x - x0.float()) * (y - y0.float())
- # Instead of clamp
- x1 = x1 - torch.floor(x1 / im.shape[3]).int()
- y1 = y1 - torch.floor(y1 / im.shape[2]).int()
- Ia = im[:, :, y0, x0]
- Ib = im[:, :, y1, x0]
- Ic = im[:, :, y0, x1]
- Id = im[:, :, y1, x1]
-
- return Ia * wa + Ib * wb + Ic * wc + Id * wd
-
-
-def drag_gan(g_ema, latent: torch.Tensor, noise, F, handle_points, target_points, mask, max_iters=1000):
- handle_points0 = copy.deepcopy(handle_points)
- n = len(handle_points)
- r1, r2, lam, d = 3, 12, 20, 1
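-    # r1: motion-supervision neighbourhood radius, r2: point-tracking search radius,
-    # lam: weight on keeping features unchanged where mask == 0 (d is not used below)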
-
- def neighbor(x, y, d):
- points = []
- for i in range(x - d, x + d):
- for j in range(y - d, y + d):
- points.append(torch.tensor([i, j]).float().cuda())
- return points
-
- F0 = F.detach().clone()
-
- latent_trainable = latent[:, :6, :].detach().clone().requires_grad_(True)
- latent_untrainable = latent[:, 6:, :].detach().clone().requires_grad_(False)
- optimizer = torch.optim.Adam([latent_trainable], lr=2e-3)
- for iter in range(max_iters):
- for s in range(1):
- optimizer.zero_grad()
- latent = torch.cat([latent_trainable, latent_untrainable], dim=1)
- sample2, F2 = g_ema.generate(latent, noise)
-
- # motion supervision
- loss = 0
- for i in range(n):
- pi, ti = handle_points[i], target_points[i]
- di = (ti - pi) / torch.sum((ti - pi)**2)
-
- for qi in neighbor(int(pi[0]), int(pi[1]), r1):
- # f1 = F[..., int(qi[0]), int(qi[1])]
- # f2 = F2[..., int(qi[0] + di[0]), int(qi[1] + di[1])]
- f1 = bilinear_interpolate_torch(F2, qi[0], qi[1]).detach()
- f2 = bilinear_interpolate_torch(F2, qi[0] + di[0], qi[1] + di[1])
- loss += FF.l1_loss(f2, f1)
-
- loss += ((F2 - F0) * (1 - mask)).abs().mean() * lam
-
- loss.backward()
- optimizer.step()
-
- # point tracking
- with torch.no_grad():
- sample2, F2 = g_ema.generate(latent, noise)
- for i in range(n):
- pi = handle_points0[i]
- # f = F0[..., int(pi[0]), int(pi[1])]
- f0 = bilinear_interpolate_torch(F0, pi[0], pi[1])
- minv = 1e9
- minx = 1e9
- miny = 1e9
- for qi in neighbor(int(handle_points[i][0]), int(handle_points[i][1]), r2):
- # f2 = F2[..., int(qi[0]), int(qi[1])]
- try:
- f2 = bilinear_interpolate_torch(F2, qi[0], qi[1])
- except:
- import ipdb
- ipdb.set_trace()
- v = torch.norm(f2 - f0, p=1)
- if v < minv:
- minv = v
- minx = int(qi[0])
- miny = int(qi[1])
- handle_points[i][0] = minx
- handle_points[i][1] = miny
-
- F = F2.detach().clone()
- if iter % 1 == 0:
- print(iter, loss.item(), handle_points, target_points)
- # p = handle_points[0].int()
- # sample2[0, :, p[0] - 5:p[0] + 5, p[1] - 5:p[1] + 5] = sample2[0, :, p[0] - 5:p[0] + 5, p[1] - 5:p[1] + 5] * 0
- # t = target_points[0].int()
- # sample2[0, :, t[0] - 5:t[0] + 5, t[1] - 5:t[1] + 5] = sample2[0, :, t[0] - 5:t[0] + 5, t[1] - 5:t[1] + 5] * 255
-
- # sample2[0, :, 210, 134] = sample2[0, :, 210, 134] * 0
- # utils.save_image(sample2, "test2.png", normalize=True, range=(-1, 1))
-
- yield sample2, latent, F2, handle_points
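-
-# Rough usage sketch (not part of the original file; names and shapes below are assumptions):
-#   g_ema = stylegan2().cuda()          # loads the FFHQ checkpoint; CUDA is assumed, since neighbor() builds .cuda() points
-#   latent, noise = g_ema.prepare([torch.randn(1, 512).cuda()])
-#   image, F = g_ema.generate(latent, noise)
-#   for image, latent, F, handles in drag_gan(g_ema, latent, noise, F,
-#                                             handle_points, target_points, mask):
-#       ...  # handle_points/target_points: lists of (y, x) tensors; mask: 1x1xHxW tensor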
diff --git a/spaces/fffiloni/Music_Source_Separation/separate_scripts/separate.py b/spaces/fffiloni/Music_Source_Separation/separate_scripts/separate.py
deleted file mode 100644
index 8aee07387b53ea6d143bd7b44efeee0f367cb797..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Music_Source_Separation/separate_scripts/separate.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import sys
-sys.path.append('.')
-import argparse
-import time
-
-import librosa
-import soundfile
-
-from bytesep.inference import SeparatorWrapper
-
-sample_rate = 44100 # Must be 44100 when using the downloaded checkpoints.
-
-
-def separate(args):
-
- audio_path = args.audio_path
- source_type = args.source_type
- device = "cpu" # "cuda" | "cpu"
-
- # Load audio.
- audio, fs = librosa.load(audio_path, sr=sample_rate, mono=False)
-
- if audio.ndim == 1:
- audio = audio[None, :]
- # (2, segment_samples)
-
- # separator
- separator = SeparatorWrapper(
- source_type=source_type,
- model=None,
- checkpoint_path=None,
- device=device,
- )
-
- t1 = time.time()
-
- # Separate.
- sep_wav = separator.separate(audio)
-
- sep_time = time.time() - t1
-
- # Write out audio
- sep_audio_path = 'sep_{}.wav'.format(source_type)
-
- soundfile.write(file=sep_audio_path, data=sep_wav.T, samplerate=sample_rate)
-
- print("Write out to {}".format(sep_audio_path))
- print("Time: {:.3f}".format(sep_time))
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--audio_path',
- type=str,
- default="resources/vocals_accompaniment_10s.mp3",
- help="Audio path",
- )
- parser.add_argument(
- '--source_type',
- type=str,
- choices=['vocals', 'accompaniment'],
- default="accompaniment",
- help="Source type to be separated.",
- )
-
- args = parser.parse_args()
-
- separate(args)
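-
-    # Example invocation (paths are illustrative):
-    #   python separate_scripts/separate.py --audio_path song.mp3 --source_type vocals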
diff --git a/spaces/fisehara/openai-whisper-base/README.md b/spaces/fisehara/openai-whisper-base/README.md
deleted file mode 100644
index 2532d9236ff647e37d3a93e29eb156a3580b77a4..0000000000000000000000000000000000000000
--- a/spaces/fisehara/openai-whisper-base/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Openai Whisper Base
-emoji: 📚
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/florim/MedGPT/autogpt/memory/no_memory.py b/spaces/florim/MedGPT/autogpt/memory/no_memory.py
deleted file mode 100644
index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/autogpt/memory/no_memory.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""A class that does not store any data. This is the default memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-from autogpt.memory.base import MemoryProviderSingleton
-
-
-class NoMemory(MemoryProviderSingleton):
- """
- A class that does not store any data. This is the default memory provider.
- """
-
- def __init__(self, cfg):
- """
- Initializes the NoMemory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- pass
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory. No action is taken in NoMemory.
-
- Args:
- data: The data to add.
-
- Returns: An empty string.
- """
- return ""
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
-
- Returns: None
- """
- return None
-
- def clear(self) -> str:
- """
- Clears the memory. No action is taken in NoMemory.
-
- Returns: An empty string.
- """
- return ""
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: None
- """
- return None
-
- def get_stats(self):
- """
- Returns: An empty dictionary as there are no stats in NoMemory.
- """
- return {}
diff --git a/spaces/freddyaboulton/all_demos_3/demos/reverse_audio/run.py b/spaces/freddyaboulton/all_demos_3/demos/reverse_audio/run.py
deleted file mode 100644
index f58e82f855e2a39cb38f37870e5322e2bcea1363..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/all_demos_3/demos/reverse_audio/run.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import os
-
-import numpy as np
-
-import gradio as gr
-
-
-def reverse_audio(audio):
- sr, data = audio
- return (sr, np.flipud(data))
-
-
-demo = gr.Interface(fn=reverse_audio,
- inputs="microphone",
- outputs="audio",
- examples=[
- os.path.join(os.path.dirname(__file__), "audio/cantina.wav"),
- os.path.join(os.path.dirname(__file__), "audio/recording1.wav")
- ], cache_examples=True)
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/freddyaboulton/fastapi-request/app.py b/spaces/freddyaboulton/fastapi-request/app.py
deleted file mode 100644
index 115713b54f37b876eeb32ffe9008b5927588fcce..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/fastapi-request/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import gradio as gr
-import fastapi
-
-def echo(request: gr.Request):
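-    # gr.Request exposes the underlying request metadata, e.g. the headers and the client address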
- return request.headers, request.client.host
-
-with gr.Blocks() as demo:
- with gr.Column():
- with gr.Row():
- with_q = gr.Button(value="Print request with queue")
- with gr.Row():
- with_q_headers = gr.JSON()
- with_q_host = gr.Textbox()
- with gr.Column():
- with gr.Row():
- without_q = gr.Button(value="print request without queue")
- with gr.Row():
- without_q_headers = gr.JSON()
- without_q_host = gr.Textbox()
- with_q.click(echo, inputs=None, outputs=[with_q_headers, with_q_host], queue=True)
- without_q.click(echo, inputs=None, outputs=[without_q_headers, without_q_host], queue=False)
-
-demo.queue().launch()
diff --git a/spaces/freddyaboulton/structured-data-classification/README.md b/spaces/freddyaboulton/structured-data-classification/README.md
deleted file mode 100644
index 147a7231e98433dbf167b75bb11a086479063b11..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/structured-data-classification/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Structured Data Classification
-emoji: 🧮
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.3
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/fun-research/FC-CLIP/datasets/prepare_ade20k_sem_seg.py b/spaces/fun-research/FC-CLIP/datasets/prepare_ade20k_sem_seg.py
deleted file mode 100644
index b0edfeb340edaff45beb14b3f9438aef2c65e78f..0000000000000000000000000000000000000000
--- a/spaces/fun-research/FC-CLIP/datasets/prepare_ade20k_sem_seg.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-from pathlib import Path
-
-import numpy as np
-import tqdm
-from PIL import Image
-
-
-def convert(input, output):
- img = np.asarray(Image.open(input))
- assert img.dtype == np.uint8
- img = img - 1 # 0 (ignore) becomes 255. others are shifted by 1
- Image.fromarray(img).save(output)
-
-
-if __name__ == "__main__":
- dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) / "ADEChallengeData2016"
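-    # DETECTRON2_DATASETS can override the dataset root; it defaults to ./datasets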
- for name in ["training", "validation"]:
- annotation_dir = dataset_dir / "annotations" / name
- output_dir = dataset_dir / "annotations_detectron2" / name
- output_dir.mkdir(parents=True, exist_ok=True)
- for file in tqdm.tqdm(list(annotation_dir.iterdir())):
- output_file = output_dir / file.name
- convert(file, output_file)
diff --git a/spaces/g4f/freegpt-webui/g4f/__init__.py b/spaces/g4f/freegpt-webui/g4f/__init__.py
deleted file mode 100644
index a0b4bac6aa4de9c0449095a3874c2cb9716169d7..0000000000000000000000000000000000000000
--- a/spaces/g4f/freegpt-webui/g4f/__init__.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import sys
-from . import Provider
-from g4f.models import Model, ModelUtils
-
-
-class ChatCompletion:
- @staticmethod
- def create(model: Model.model or str, messages: list, provider: Provider.Provider = None, stream: bool = False, auth: str = False, **kwargs):
- kwargs['auth'] = auth
-
- if provider and provider.needs_auth and not auth:
- print(
- f'ValueError: {provider.__name__} requires authentication (use auth="cookie or token or jwt ..." param)', file=sys.stderr)
- sys.exit(1)
-
- try:
- if isinstance(model, str):
- try:
- model = ModelUtils.convert[model]
- except KeyError:
- raise Exception(f'The model: {model} does not exist')
-
- engine = model.best_provider if not provider else provider
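-            # fall back to the model's default provider when none is given explicitly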
-
- if not engine.supports_stream and stream == True:
- print(
- f"ValueError: {engine.__name__} does not support 'stream' argument", file=sys.stderr)
- sys.exit(1)
-
- print(f'Using {engine.__name__} provider')
-
- return (engine._create_completion(model.name, messages, stream, **kwargs)
- if stream else ''.join(engine._create_completion(model.name, messages, stream, **kwargs)))
- except TypeError as e:
- print(e)
- arg: str = str(e).split("'")[1]
- print(
- f"ValueError: {engine.__name__} does not support '{arg}' argument", file=sys.stderr)
- sys.exit(1)
diff --git a/spaces/gagan3012/T5-Summarization/src/visualization/__init__.py b/spaces/gagan3012/T5-Summarization/src/visualization/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/giswqs/Streamlit/apps/heatmap.py b/spaces/giswqs/Streamlit/apps/heatmap.py
deleted file mode 100644
index ae8d1fe0558f9303d3d0a69fa1da82fda5902b09..0000000000000000000000000000000000000000
--- a/spaces/giswqs/Streamlit/apps/heatmap.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import streamlit as st
-import leafmap.foliumap as leafmap
-
-
-def app():
-
- st.title('Heatmaps')
-
- filepath = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_cities.csv"
- m = leafmap.Map(tiles="stamentoner")
- m.add_heatmap(
- filepath,
- latitude="latitude",
- longitude="longitude",
- value="pop_max",
- name="Heat map",
- radius=20,
- )
- m.to_streamlit(width=700, height=500)
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Firmware Myway RNEG 40.03 R4030 How to Update Your PeugeotCitroen Navigation System.md b/spaces/gotiQspiryo/whisper-ui/examples/Firmware Myway RNEG 40.03 R4030 How to Update Your PeugeotCitroen Navigation System.md
deleted file mode 100644
index 2f563f86e9c8af4925233aea3e85290e86bfec89..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Firmware Myway RNEG 40.03 R4030 How to Update Your PeugeotCitroen Navigation System.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Malathi Teacher Full Pdf 32 EXCLUSIVE.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Malathi Teacher Full Pdf 32 EXCLUSIVE.md
deleted file mode 100644
index e7c648c948860bc9b6aac6b4ccd9e62afb716353..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Malathi Teacher Full Pdf 32 EXCLUSIVE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
The purpose of the study is to examine the usefulness of the conceptual-based teaching approach for the learning of teacher trainers. It was a mixed-methods study; data were collected through observation, interviews, and a survey, and analyzed using a content-analysis approach. The quantitative analysis showed that most teachers used the conceptual-based teaching approach, which was confirmed by the interview and survey data. The qualitative data showed that the approach was useful to teacher trainers in a variety of ways: most were able to apply it to teach in different ways, which resulted in learning through teaching, and for the most part they learned to use it while conducting training and mentoring activities. It was therefore concluded that the conceptual-based teaching approach is useful for teacher trainers' learning.
-
This investigation explores how the use of online games in a training program for teacher trainers enhances their critical-thinking skills. The training program was developed using guidelines from the Ministry of Education and was implemented by a specialized education agency in Singapore. The participants were thirteen (13) teacher trainers, trained and certified by the agency, from ten countries of the ASEAN region. The research instrument was an observation checklist used to collect data in two training sessions, and the data were analyzed using a content-analysis approach.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Autodesk Fusion 360 2.0 Crack Full Keygen Free UPD 2020.md b/spaces/inreVtussa/clothingai/Examples/Autodesk Fusion 360 2.0 Crack Full Keygen Free UPD 2020.md
deleted file mode 100644
index 68f08164b5ec694079829d2fe3a6426429c6f415..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Autodesk Fusion 360 2.0 Crack Full Keygen Free UPD 2020.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
How to Download and Install Autodesk Fusion 360 2.0 Crack Full Keygen Free 2020
-
Autodesk Fusion 360 is a powerful and versatile software that allows you to design, simulate, and manufacture products in 3D. It combines CAD, CAM, CAE, and PCB tools in a cloud-based platform that works on both Mac and PC. Whether you are a hobbyist, a student, or a professional, you can use Fusion 360 to create anything from simple models to complex assemblies.
-
However, Fusion 360 is not a free software. You need to pay a monthly or yearly subscription fee to access its full features and capabilities. If you are looking for a way to get Fusion 360 for free, you might be tempted to download a cracked version of the software from the internet. But is it worth it? What are the risks and consequences of using a cracked version of Fusion 360?
-
In this article, we will explain why you should avoid downloading and installing Autodesk Fusion 360 2.0 Crack Full Keygen Free 2020, and what are the best alternatives to get Fusion 360 legally and safely.
-
Why You Should Not Use Autodesk Fusion 360 2.0 Crack Full Keygen Free 2020
-
Downloading and installing Autodesk Fusion 360 2.0 Crack Full Keygen Free 2020 might seem like a good idea if you want to save money and enjoy the benefits of Fusion 360 without paying anything. However, there are many reasons why you should not do it. Here are some of them:
-
-
It is illegal. Using a cracked version of Fusion 360 violates the terms of service and the intellectual property rights of Autodesk. You could face legal actions or penalties if you are caught using or distributing a cracked version of Fusion 360.
-
It is unsafe. Downloading a cracked version of Fusion 360 from an unknown source exposes your computer to malware, viruses, spyware, ransomware, or other harmful programs that could damage your system or steal your personal information. You could also lose your data or compromise your privacy if you use a cracked version of Fusion 360 that connects to the cloud.
-
It is unreliable. Using a cracked version of Fusion 360 means that you will not receive any updates, bug fixes, security patches, or new features from Autodesk. You could also experience errors, crashes, compatibility issues, or performance problems that could affect your work or projects.
-
It is unethical. Using a cracked version of Fusion 360 means that you are not supporting the developers and creators of the software. You are also depriving yourself of the opportunity to learn from the official resources, tutorials, forums, and community of Fusion 360 users.
-
-
As you can see, using Autodesk Fusion 360 2.0 Crack Full Keygen Free 2020 is not worth the risk or the hassle. You could end up wasting your time and money, or damaging your reputation, by using a cracked version of Fusion 360.
-
How to Get Fusion 360 Legally and Safely
-
If you want to use Fusion 360 for your personal or professional projects, there are better ways to get it legally and safely than downloading a cracked version of the software. Here are some of them:
-
-
Get a free trial. Autodesk offers a free trial of Fusion 360 for up to 30 days[^1^]. You can use this option to test out the features and functions of Fusion 360 before deciding whether to subscribe or not. To get the free trial, you need to create an Autodesk account and download the software from the official website.
-
Get an educational license. If you are a student or an educator at a qualified academic institution[^1^], you can get a free educational license of Fusion 360 for up to three years[^1^]. You can use this option to learn and teach with Fusion 360 for your academic projects. To get the educational license, you need to verify your eligibility with Autodesk and download the software from the official website.
-
Get a personal use license.
- d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/ivntl/MMS/app.py b/spaces/ivntl/MMS/app.py
deleted file mode 100644
index 8c4417d7dc96aee4b41e77cc1664b083ef5c2e69..0000000000000000000000000000000000000000
--- a/spaces/ivntl/MMS/app.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import gradio as gr
-import librosa
-from asr import transcribe, ASR_EXAMPLES, ASR_LANGUAGES, ASR_NOTE
-from tts import synthesize, TTS_EXAMPLES, TTS_LANGUAGES
-from lid import identify, LID_EXAMPLES
-
-
-demo = gr.Blocks()
-
-mms_select_source_trans = gr.Radio(
- ["Record from Mic", "Upload audio"],
- label="Audio input",
- value="Record from Mic",
-)
-mms_mic_source_trans = gr.Audio(source="microphone", type="filepath", label="Use mic")
-mms_upload_source_trans = gr.Audio(
- source="upload", type="filepath", label="Upload file", visible=False
-)
-mms_transcribe = gr.Interface(
- fn=transcribe,
- inputs=[
- mms_select_source_trans,
- mms_mic_source_trans,
- mms_upload_source_trans,
- gr.Dropdown(
- [f"{k} ({v})" for k, v in ASR_LANGUAGES.items()],
- label="Language",
- value="eng English",
- ),
- # gr.Checkbox(label="Use Language Model (if available)", default=True),
- ],
- outputs="text",
- examples=ASR_EXAMPLES,
- title="Speech-to-text",
- description=(
- "Transcribe audio from a microphone or input file in your desired language."
- ),
- article=ASR_NOTE,
- allow_flagging="never",
-)
-
-mms_synthesize = gr.Interface(
- fn=synthesize,
- inputs=[
- gr.Text(label="Input text"),
- gr.Dropdown(
- [f"{k} ({v})" for k, v in TTS_LANGUAGES.items()],
- label="Language",
- value="eng English",
- ),
- gr.Slider(minimum=0.1, maximum=4.0, value=1.0, step=0.1, label="Speed"),
- ],
- outputs=[
- gr.Audio(label="Generated Audio", type="numpy"),
- gr.Text(label="Filtered text after removing OOVs"),
- ],
- examples=TTS_EXAMPLES,
- title="Text-to-speech",
- description=("Generate audio in your desired language from input text."),
- allow_flagging="never",
-)
-
-mms_select_source_iden = gr.Radio(
- ["Record from Mic", "Upload audio"],
- label="Audio input",
- value="Record from Mic",
-)
-mms_mic_source_iden = gr.Audio(source="microphone", type="filepath", label="Use mic")
-mms_upload_source_iden = gr.Audio(
- source="upload", type="filepath", label="Upload file", visible=False
-)
-mms_identify = gr.Interface(
- fn=identify,
- inputs=[
- mms_select_source_iden,
- mms_mic_source_iden,
- mms_upload_source_iden,
- ],
- outputs=gr.Label(num_top_classes=10),
- examples=LID_EXAMPLES,
- title="Language Identification",
-    description=("Identify the language of input audio."),
- allow_flagging="never",
-)
-
-tabbed_interface = gr.TabbedInterface(
- [mms_transcribe, mms_synthesize, mms_identify],
- ["Speech-to-text", "Text-to-speech", "Language Identification"],
-)
-
-with gr.Blocks() as demo:
- gr.Markdown(
-        "MMS: Scaling Speech Technology to 1000+ languages demo. See our blog post and paper."
- )
- gr.HTML(
-        """Click on the appropriate tab to explore Speech-to-text (ASR), Text-to-speech (TTS) and Language identification (LID) demos."""
- )
- gr.HTML(
-        """Duplicate this Space for more control and no queue."""
- )
-
- tabbed_interface.render()
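-    # Toggle between the microphone and upload widgets depending on the selected audio source.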
- mms_select_source_trans.change(
- lambda x: [
- gr.update(visible=True if x == "Record from Mic" else False),
- gr.update(visible=True if x == "Upload audio" else False),
- ],
- inputs=[mms_select_source_trans],
- outputs=[mms_mic_source_trans, mms_upload_source_trans],
- queue=False,
- )
- mms_select_source_iden.change(
- lambda x: [
- gr.update(visible=True if x == "Record from Mic" else False),
- gr.update(visible=True if x == "Upload audio" else False),
- ],
- inputs=[mms_select_source_iden],
- outputs=[mms_mic_source_iden, mms_upload_source_iden],
- queue=False,
- )
- gr.HTML(
- """
-
- """
- )
-
-demo.queue(concurrency_count=3)
-demo.launch()
diff --git a/spaces/jacinthes/PubMed-fact-checker/GPTHelper.py b/spaces/jacinthes/PubMed-fact-checker/GPTHelper.py
deleted file mode 100644
index be100b0ebb605d24e534c1e385e18a90067c7868..0000000000000000000000000000000000000000
--- a/spaces/jacinthes/PubMed-fact-checker/GPTHelper.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import openai
-from time import time
-import os
-import logging
-import streamlit as st
-
-openai.api_key = st.secrets['openai_API_key']
-
-
-def open_file(filepath):
- with open(filepath, 'r', encoding='utf-8') as file:
- return file.read()
-
-
-def gpt35_rephrase(fact):
- # Dynamically generate the prompt to rephrase the fact as a PubMed query using GPT3.5
-    prompt = open_file('prompts/gpt35_rephrase.txt').replace('<<FACT>>', fact)  # '<<FACT>>' placeholder token assumed; the original marker was garbled to '<>'
- try:
- response = openai.Completion.create(
- model='text-davinci-003',
- prompt=prompt,
- max_tokens=250,
- temperature=0
- )
- response = response['choices'][0]['text'].strip()
- filename = '%s_gpt3.txt' % time()
-
- # Create the logs folder if it does not exist
- if not os.path.exists('gpt3_rephrase_logs'):
- os.makedirs('gpt3_rephrase_logs')
-
- # Save the whole prompt and the response so that we can inspect it when necessary
- with open('gpt3_rephrase_logs/%s' % filename, 'w', encoding="utf-8") as outfile:
- outfile.write('PROMPT:\n\n' + prompt + '\n\n###############\n\nRESPONSE:\n\n' + response)
-
- return response
-
- except Exception as e:
- logging.error('Error communicating with OpenAI (rephrase): ', exc_info=e)
-
-
-def gpt35_check_fact(evidence, fact):
- # Dynamically generate the prompt to check the fact against the given PubMed article conclusion/abstract
-    prompt = open_file('prompts/gpt35_fact_check.txt').replace('<<EVIDENCE>>', evidence).replace('<<FACT>>', fact)  # distinct placeholder tokens assumed; the original markers were garbled to '<>', which would make the second replace a no-op
- try:
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=prompt,
- max_tokens=3, # Don't need more for Entails/Contradicts/Undetermined
- temperature=0
- )
- response = response['choices'][0]['text'].strip()
- response = response.replace('.', '')
- filename = '%s_gpt3.txt' % time()
-
- if not os.path.exists('gpt3_factchecking_logs'):
- os.makedirs('gpt3_factchecking_logs')
-
- with open('gpt3_factchecking_logs/%s' % filename, 'w', encoding='utf-8') as outfile:
- outfile.write('PROMPT:\n\n' + prompt + '\n\n###############\n\nRESPONSE:\n\n' + response)
-
- return response
-
- except Exception as e:
- logging.error('Error communicating with OpenAI (check_fact): ', exc_info=e)
-
-
-def gpt35_turbo_rephrase(fact):
- # Dynamically generate the prompt to rephrase the fact as a PubMed query using GPT3.5 turbo - lower cost than 3.5
-    prompt = open_file('prompts/gpt35_rephrase.txt').replace('<<FACT>>', fact)  # '<<FACT>>' placeholder token assumed (see gpt35_rephrase above)
- try:
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {'role': 'user',
- 'content': prompt}
- ]
- )
- response = response['choices'][0]['message']['content'].strip()
- filename = '%s_gpt3.txt' % time()
-
- if not os.path.exists('gpt35_rephrase_logs'):
- os.makedirs('gpt35_rephrase_logs')
-
- with open('gpt35_rephrase_logs/%s' % filename, 'w', encoding="utf-8") as outfile:
- outfile.write('PROMPT:\n\n' + prompt + '\n\n###############\n\nRESPONSE:\n\n' + response)
-
- return response
-
- except Exception as e:
- logging.error('Error communicating with OpenAI (gpt35_rephrase): ', exc_info=e)
diff --git a/spaces/jamesnzeex/resale_HDB_price_prediction_model/utils.py b/spaces/jamesnzeex/resale_HDB_price_prediction_model/utils.py
deleted file mode 100644
index c3347d1dca0383246edb31abcb3e45dcfdc45cd2..0000000000000000000000000000000000000000
--- a/spaces/jamesnzeex/resale_HDB_price_prediction_model/utils.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import json
-import pickle
-import requests
-import geopy.distance
-
-import numpy as np
-import pandas as pd
-
-from sklearn.metrics import mean_squared_error, mean_absolute_error
-from sklearn.model_selection import train_test_split
-from sklearn.pipeline import make_pipeline
-from sklearn.preprocessing import StandardScaler
-from sklearn.ensemble import RandomForestRegressor
-
-def get_data_data_gov(url):
- query_string = url
- resp = requests.get(query_string)
- data = json.loads(resp.content) # convert from json to python dict
- print('Number of records:', len(data.get('result').get('records')))
- data = data['result']['records']
- df = pd.DataFrame.from_dict(data).drop(['_id'], axis=1)
- # print(df.isnull().sum())
- # print(df.dtypes)
- return df
-
-def get_data_singstat_price_index(url):
- query_string = url
- resp = requests.get(query_string)
- data = json.loads(resp.content) # convert from json to python dict
- print('Number of records:', len(data.get('Data').get('row')[0].get('columns')))
- df = pd.DataFrame.from_dict(data.get('Data').get('row')[0].get('columns'))
- # print(df.isnull().sum())
- # print(df.dtypes)
- return df
-
-def get_data_one_map(address, is_LAT_LONG_only=False):
- query_string = 'https://developers.onemap.sg/commonapi/search?searchVal='+str(address)+'&returnGeom=Y&getAddrDetails=Y'
- resp = requests.get(query_string)
- data = json.loads(resp.content) # convert from json to python dict
- if data['found'] != 0 and is_LAT_LONG_only:
- data = data['results'][0]
- data = (data['LATITUDE'], data['LONGITUDE'])
- elif data['found'] != 0:
- data = data['results'][0]
- else:
- data = None
- return data
-
-def distance_to_city(address):
- hdb_coordinates = get_data_one_map(address, is_LAT_LONG_only = True)
- return geopy.distance.great_circle((1.29293672227779, 103.852585580366), hdb_coordinates).km
-
-def distance_to_nearest_MRT_station(address):
- hdb_coordinates = get_data_one_map(address, is_LAT_LONG_only = True)
- df_MRT = pd.read_pickle('./df_MRT.pkl')
- MRT_coordinates = df_MRT.T.iloc[-1,:].tolist()
- dist = []
- for coordinates in MRT_coordinates:
- dist.append(geopy.distance.great_circle(hdb_coordinates, coordinates).km)
- return min(dist)
-
-def month_to_quarter(x):
- year = int(x.split('-')[0])
- month = int(x.split('-')[1])
- if month <= 3:
- month = '1Q'
- elif month <= 6:
- month = '2Q'
- elif month <= 9:
- month = '3Q'
- else:
- month = '4Q'
- return (str(year) + '-' + str(month))
-
-def get_update(data_year):
- ### DATA EXTRACTION AND PREPROCESSING ###
- df_raw_data = get_data_data_gov('https://data.gov.sg/api/action/datastore_search?resource_id=f1765b54-a209-4718-8d38-a39237f502b3&limit=1000000')
- df_raw_data['address'] = df_raw_data['block'] + ' ' + df_raw_data['street_name']
- df_raw_data['quarter'] = df_raw_data['month'].apply(month_to_quarter)
- df_raw_data['year_sold'] = df_raw_data['month'].apply((lambda x: int(x.split('-')[0])))
- df_raw_data['month_sold'] = df_raw_data['month'].apply((lambda x: int(x.split('-')[1])))
- df_raw_data['remaining_lease_years'] = df_raw_data['remaining_lease'].apply(lambda x: int(x.split()[0]))
- df_raw_data['floor_area_sqft'] = round((df_raw_data['floor_area_sqm'].astype(float))*10.764)
-
- df_price_index = get_data_singstat_price_index('https://tablebuilder.singstat.gov.sg/api/table/tabledata/M212161?isTestApi=true')
- df_price_index = df_price_index.rename(columns = {'key':'quarter', 'value': 'index'})
- df_price_index['quarter'] = df_price_index['quarter'].apply(lambda x : x.replace(' ', '-'))
-
- df_selected_year = df_raw_data[df_raw_data['year_sold']>=data_year]
- quarter_list = list(df_price_index['quarter'])
- df_selected_year['quarter'] = df_selected_year['quarter'].apply(lambda x : x if x in quarter_list else quarter_list[-1])
- df_selected_year = pd.merge(df_selected_year, df_price_index, how='left', on='quarter')
-
- # convert to float
- df_selected_year['index'] = pd.to_numeric(df_selected_year['index'])
- df_selected_year['resale_price'] = pd.to_numeric(df_selected_year['resale_price'])
-
- # normalised to latest price index
- df_selected_year['normalised_resale_price'] = round(df_selected_year['resale_price']*float(df_price_index.tail(1)['index'])/df_selected_year['index'],0)
- df_selected_year['price_psf'] = round(df_selected_year['normalised_resale_price']/df_selected_year['floor_area_sqft'])
-
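-    # Collapse the raw storey ranges into three coarse bands: Low, Mid and High Floor.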
- df_selected_year['storey_range'] = df_selected_year['storey_range'] \
- .str.replace('01 TO 03', 'Low Floor') \
- .str.replace('04 TO 06', 'Mid Floor') \
- .str.replace('07 TO 09', 'Mid Floor') \
- .str.replace('10 TO 12', 'High Floor') \
- .str.replace('13 TO 15', 'High Floor') \
- .str.replace('16 TO 18', 'High Floor') \
- .str.replace('19 TO 21', 'High Floor') \
- .str.replace('22 TO 24', 'High Floor') \
- .str.replace('25 TO 27', 'High Floor') \
- .str.replace('28 TO 30', 'High Floor') \
- .str.replace('31 TO 33', 'High Floor') \
- .str.replace('34 TO 36', 'High Floor') \
- .str.replace('37 TO 39', 'High Floor') \
- .str.replace('40 TO 42', 'High Floor') \
- .str.replace('43 TO 45', 'High Floor') \
- .str.replace('46 TO 48', 'High Floor') \
- .str.replace('49 TO 51', 'High Floor')
-
- df_selected_year = df_selected_year.drop(columns=['street_name', 'resale_price', 'remaining_lease', 'lease_commence_date', 'block'])
- HDB_address_list = df_selected_year['address'].unique().tolist()
-
- data_list = []
-
- for i in range(0, len(HDB_address_list)):
- data = get_data_one_map(HDB_address_list[i])
- if data is not None:
- data_list.append(data)
-
- df_HDB = pd.DataFrame.from_dict(data_list)
- #creating primary key for df_HDB for further table join
- df_HDB['LAT_LONG']= (df_HDB['LATITUDE'] +' '+ df_HDB['LONGITUDE']).apply(lambda x: tuple(x.split(' ')))
-
- tmp_df = pd.read_csv('./MRT_LRT_STATION.csv')
- MRT_list = tmp_df['Station'].unique().tolist()
- print('Number of MRT stations:', len(MRT_list))
-
- data_list = []
-
- for i in range(0, len(MRT_list)):
- data = get_data_one_map(MRT_list[i])
- if data is not None:
- data_list.append(data)
-
- df_MRT = pd.DataFrame.from_dict(data_list)
- df_MRT['LAT_LONG'] = (df_MRT['LATITUDE'] +' '+ df_MRT['LONGITUDE']).apply(lambda x: tuple(x.split(' ')))
- df_MRT.to_pickle('df_MRT.pkl')
-
- MRT_coordinates = df_MRT.T.iloc[-1,:].tolist()
- df_HDB_coordinates = pd.DataFrame(df_HDB['LAT_LONG'])
-
- for coordinates in MRT_coordinates:
- df_HDB_coordinates[coordinates]=df_HDB['LAT_LONG'].apply(lambda y: geopy.distance.great_circle(y, coordinates).km)
-
- # get the distance from each address to the nearest station
- df_HDB_coordinates['distance_to_nearest_MRT_station'] = df_HDB_coordinates.iloc[:,1:].apply(lambda x: min(x), axis=1)
- df_HDB_coordinates_with_MRT_distance = df_HDB_coordinates.iloc[:,[0,-1]]
-
- df_HDB = pd.merge(df_HDB, df_HDB_coordinates_with_MRT_distance, on='LAT_LONG', how='left')
- df_HDB = df_HDB.drop(columns=['SEARCHVAL', 'BUILDING', 'ADDRESS' ,'X', 'Y', 'LONGTITUDE'])
-
- # City Hall: 1.29293672227779 103.852585580366
- df_HDB['distance_to_city'] = df_HDB['LAT_LONG'].apply(lambda x: geopy.distance.great_circle((1.29293672227779, 103.852585580366), x).km)
-
- df_HDB['address'] = df_HDB ['BLK_NO'] + ' ' + df_HDB ['ROAD_NAME']
-
- df_HDB['address'] = df_HDB['address'] \
- .str.replace('AVENUE', 'AVE') \
- .str.replace('CRESCENT', 'CRES') \
- .str.replace('ROAD', 'RD') \
- .str.replace('STREET', 'ST') \
- .str.replace('CENTRAL', 'CTRL') \
- .str.replace('HEIGHTS', 'HTS') \
- .str.replace('TERRACE', 'TER') \
- .str.replace('JALAN', 'JLN') \
- .str.replace('DRIVE', 'DR') \
- .str.replace('PLACE', 'PL') \
- .str.replace('CLOSE', 'CL') \
- .str.replace('PARK', 'PK') \
- .str.replace('GARDENS', 'GDNS') \
- .str.replace('NORTH', 'NTH') \
- .str.replace('SOUTH', 'STH') \
- .str.replace('BUKIT', 'BT') \
-        .str.replace('UPPER', 'UPP') \
- .str.replace('COMMONWEALTH', "C'WEALTH")
-
- df_clean_data = pd.merge(df_selected_year, df_HDB, on='address', how='left')
- df_clean_data = df_clean_data.dropna(subset=['LAT_LONG'])
-
- ### FEATURE SELECTION ###
- features = [
- 'flat_type',
- 'storey_range',
- 'floor_area_sqft',
- 'remaining_lease_years',
- 'distance_to_nearest_MRT_station',
- 'distance_to_city',
- 'price_psf']
-
- df = df_clean_data[features]
- df = pd.get_dummies(df)
-
- ### MODEL TRAINING AND TESTING ###
- X = df.drop('price_psf', axis=1)
- y = df['price_psf']
-
-    print('Average price_psf:', y.mean())
-    print('Min price_psf:', y.min())
-    print('Max price_psf:', y.max())
-
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
-
- model_RFR = make_pipeline(StandardScaler(),RandomForestRegressor())
- model_RFR.fit(X_train, y_train) # apply scaling on training data
- model_RFR.score(X_test, y_test)
-
- print('Mean Absolute Error:', mean_absolute_error(y_test, model_RFR.predict(X_test)))
- print('Mean Squared Error:', mean_squared_error(y_test, model_RFR.predict(X_test)))
- print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, model_RFR.predict(X_test))))
-
- f = open("model.log", "r")
- data = f.readline()
- model_score = float(data.split()[-1])
- MSE = np.sqrt(mean_squared_error(y_test, model_RFR.predict(X_test)))
-
- if MSE < model_score:
- pickle.dump(model_RFR, open('model.sav', 'wb'))
- f = open("model.log", "w")
- f.write(f'model_score = {MSE}')
- f.close()
\ No newline at end of file
diff --git a/spaces/jbilcke-hf/AnimateDiff/animatediff/models/attention.py b/spaces/jbilcke-hf/AnimateDiff/animatediff/models/attention.py
deleted file mode 100644
index ad23583c1367227c0eef362778b25a38d5668cf5..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/AnimateDiff/animatediff/models/attention.py
+++ /dev/null
@@ -1,300 +0,0 @@
-# Adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention.py
-
-from dataclasses import dataclass
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from diffusers.configuration_utils import ConfigMixin, register_to_config
-from diffusers.modeling_utils import ModelMixin
-from diffusers.utils import BaseOutput
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.models.attention import CrossAttention, FeedForward, AdaLayerNorm
-
-from einops import rearrange, repeat
-import pdb
-
-@dataclass
-class Transformer3DModelOutput(BaseOutput):
- sample: torch.FloatTensor
-
-
-if is_xformers_available():
- import xformers
- import xformers.ops
-else:
- xformers = None
-
-
-class Transformer3DModel(ModelMixin, ConfigMixin):
- @register_to_config
- def __init__(
- self,
- num_attention_heads: int = 16,
- attention_head_dim: int = 88,
- in_channels: Optional[int] = None,
- num_layers: int = 1,
- dropout: float = 0.0,
- norm_num_groups: int = 32,
- cross_attention_dim: Optional[int] = None,
- attention_bias: bool = False,
- activation_fn: str = "geglu",
- num_embeds_ada_norm: Optional[int] = None,
- use_linear_projection: bool = False,
- only_cross_attention: bool = False,
- upcast_attention: bool = False,
-
- unet_use_cross_frame_attention=None,
- unet_use_temporal_attention=None,
- ):
- super().__init__()
- self.use_linear_projection = use_linear_projection
- self.num_attention_heads = num_attention_heads
- self.attention_head_dim = attention_head_dim
- inner_dim = num_attention_heads * attention_head_dim
-
- # Define input layers
- self.in_channels = in_channels
-
- self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True)
- if use_linear_projection:
- self.proj_in = nn.Linear(in_channels, inner_dim)
- else:
- self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0)
-
- # Define transformers blocks
- self.transformer_blocks = nn.ModuleList(
- [
- BasicTransformerBlock(
- inner_dim,
- num_attention_heads,
- attention_head_dim,
- dropout=dropout,
- cross_attention_dim=cross_attention_dim,
- activation_fn=activation_fn,
- num_embeds_ada_norm=num_embeds_ada_norm,
- attention_bias=attention_bias,
- only_cross_attention=only_cross_attention,
- upcast_attention=upcast_attention,
-
- unet_use_cross_frame_attention=unet_use_cross_frame_attention,
- unet_use_temporal_attention=unet_use_temporal_attention,
- )
- for d in range(num_layers)
- ]
- )
-
-        # Define output layers
- if use_linear_projection:
- self.proj_out = nn.Linear(in_channels, inner_dim)
- else:
- self.proj_out = nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0)
-
- def forward(self, hidden_states, encoder_hidden_states=None, timestep=None, return_dict: bool = True):
- # Input
- assert hidden_states.dim() == 5, f"Expected hidden_states to have ndim=5, but got ndim={hidden_states.dim()}."
- video_length = hidden_states.shape[2]
- hidden_states = rearrange(hidden_states, "b c f h w -> (b f) c h w")
- encoder_hidden_states = repeat(encoder_hidden_states, 'b n c -> (b f) n c', f=video_length)
-
- batch, channel, height, weight = hidden_states.shape
- residual = hidden_states
-
- hidden_states = self.norm(hidden_states)
- if not self.use_linear_projection:
- hidden_states = self.proj_in(hidden_states)
- inner_dim = hidden_states.shape[1]
- hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, inner_dim)
- else:
- inner_dim = hidden_states.shape[1]
- hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * weight, inner_dim)
- hidden_states = self.proj_in(hidden_states)
-
- # Blocks
- for block in self.transformer_blocks:
- hidden_states = block(
- hidden_states,
- encoder_hidden_states=encoder_hidden_states,
- timestep=timestep,
- video_length=video_length
- )
-
- # Output
- if not self.use_linear_projection:
- hidden_states = (
- hidden_states.reshape(batch, height, weight, inner_dim).permute(0, 3, 1, 2).contiguous()
- )
- hidden_states = self.proj_out(hidden_states)
- else:
- hidden_states = self.proj_out(hidden_states)
- hidden_states = (
- hidden_states.reshape(batch, height, weight, inner_dim).permute(0, 3, 1, 2).contiguous()
- )
-
- output = hidden_states + residual
-
- output = rearrange(output, "(b f) c h w -> b c f h w", f=video_length)
- if not return_dict:
- return (output,)
-
- return Transformer3DModelOutput(sample=output)
-
-
-class BasicTransformerBlock(nn.Module):
- def __init__(
- self,
- dim: int,
- num_attention_heads: int,
- attention_head_dim: int,
- dropout=0.0,
- cross_attention_dim: Optional[int] = None,
- activation_fn: str = "geglu",
- num_embeds_ada_norm: Optional[int] = None,
- attention_bias: bool = False,
- only_cross_attention: bool = False,
- upcast_attention: bool = False,
-
- unet_use_cross_frame_attention = None,
- unet_use_temporal_attention = None,
- ):
- super().__init__()
- self.only_cross_attention = only_cross_attention
- self.use_ada_layer_norm = num_embeds_ada_norm is not None
- self.unet_use_cross_frame_attention = unet_use_cross_frame_attention
- self.unet_use_temporal_attention = unet_use_temporal_attention
-
- # SC-Attn
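-        # NOTE: SparseCausalAttention2D is not defined or imported in this file; this branch presumably expects it to be provided elsewhere and is only taken when unet_use_cross_frame_attention is True.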
- assert unet_use_cross_frame_attention is not None
- if unet_use_cross_frame_attention:
- self.attn1 = SparseCausalAttention2D(
- query_dim=dim,
- heads=num_attention_heads,
- dim_head=attention_head_dim,
- dropout=dropout,
- bias=attention_bias,
- cross_attention_dim=cross_attention_dim if only_cross_attention else None,
- upcast_attention=upcast_attention,
- )
- else:
- self.attn1 = CrossAttention(
- query_dim=dim,
- heads=num_attention_heads,
- dim_head=attention_head_dim,
- dropout=dropout,
- bias=attention_bias,
- upcast_attention=upcast_attention,
- )
- self.norm1 = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim)
-
- # Cross-Attn
- if cross_attention_dim is not None:
- self.attn2 = CrossAttention(
- query_dim=dim,
- cross_attention_dim=cross_attention_dim,
- heads=num_attention_heads,
- dim_head=attention_head_dim,
- dropout=dropout,
- bias=attention_bias,
- upcast_attention=upcast_attention,
- )
- else:
- self.attn2 = None
-
- if cross_attention_dim is not None:
- self.norm2 = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim)
- else:
- self.norm2 = None
-
- # Feed-forward
- self.ff = FeedForward(dim, dropout=dropout, activation_fn=activation_fn)
- self.norm3 = nn.LayerNorm(dim)
-
- # Temp-Attn
- assert unet_use_temporal_attention is not None
- if unet_use_temporal_attention:
- self.attn_temp = CrossAttention(
- query_dim=dim,
- heads=num_attention_heads,
- dim_head=attention_head_dim,
- dropout=dropout,
- bias=attention_bias,
- upcast_attention=upcast_attention,
- )
- nn.init.zeros_(self.attn_temp.to_out[0].weight.data)
- self.norm_temp = AdaLayerNorm(dim, num_embeds_ada_norm) if self.use_ada_layer_norm else nn.LayerNorm(dim)
-
- def set_use_memory_efficient_attention_xformers(self, use_memory_efficient_attention_xformers: bool):
- if not is_xformers_available():
- print("Here is how to install it")
- raise ModuleNotFoundError(
- "Refer to https://github.com/facebookresearch/xformers for more information on how to install"
- " xformers",
- name="xformers",
- )
- elif not torch.cuda.is_available():
- raise ValueError(
- "torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only"
- " available for GPU "
- )
- else:
- try:
- # Make sure we can run the memory efficient attention
- _ = xformers.ops.memory_efficient_attention(
- torch.randn((1, 2, 40), device="cuda"),
- torch.randn((1, 2, 40), device="cuda"),
- torch.randn((1, 2, 40), device="cuda"),
- )
- except Exception as e:
- raise e
- self.attn1._use_memory_efficient_attention_xformers = use_memory_efficient_attention_xformers
- if self.attn2 is not None:
- self.attn2._use_memory_efficient_attention_xformers = use_memory_efficient_attention_xformers
- # self.attn_temp._use_memory_efficient_attention_xformers = use_memory_efficient_attention_xformers
-
- def forward(self, hidden_states, encoder_hidden_states=None, timestep=None, attention_mask=None, video_length=None):
- # SparseCausal-Attention
- norm_hidden_states = (
- self.norm1(hidden_states, timestep) if self.use_ada_layer_norm else self.norm1(hidden_states)
- )
-
- # if self.only_cross_attention:
- # hidden_states = (
- # self.attn1(norm_hidden_states, encoder_hidden_states, attention_mask=attention_mask) + hidden_states
- # )
- # else:
- # hidden_states = self.attn1(norm_hidden_states, attention_mask=attention_mask, video_length=video_length) + hidden_states
-
- # pdb.set_trace()
- if self.unet_use_cross_frame_attention:
- hidden_states = self.attn1(norm_hidden_states, attention_mask=attention_mask, video_length=video_length) + hidden_states
- else:
- hidden_states = self.attn1(norm_hidden_states, attention_mask=attention_mask) + hidden_states
-
- if self.attn2 is not None:
- # Cross-Attention
- norm_hidden_states = (
- self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states)
- )
- hidden_states = (
- self.attn2(
- norm_hidden_states, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask
- )
- + hidden_states
- )
-
- # Feed-forward
- hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states
-
- # Temporal-Attention
- if self.unet_use_temporal_attention:
- d = hidden_states.shape[1]
- hidden_states = rearrange(hidden_states, "(b f) d c -> (b d) f c", f=video_length)
- norm_hidden_states = (
- self.norm_temp(hidden_states, timestep) if self.use_ada_layer_norm else self.norm_temp(hidden_states)
- )
- hidden_states = self.attn_temp(norm_hidden_states) + hidden_states
- hidden_states = rearrange(hidden_states, "(b d) f c -> (b f) d c", d=d)
-
- return hidden_states
diff --git a/spaces/jbilcke-hf/media-server/Dockerfile b/spaces/jbilcke-hf/media-server/Dockerfile
deleted file mode 100644
index e06d41ee3db30bca637b7c073525624fef3222a9..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/media-server/Dockerfile
+++ /dev/null
@@ -1,37 +0,0 @@
-FROM node:18
-
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-RUN apt update
-
-RUN apt --yes install wget unzip
-
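-# ffmpeg for media processing, plus shared libraries commonly required by a headless Chromium/Puppeteer setup.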
-RUN apt --yes install ffmpeg libnss3 libatk1.0-0 libatk-bridge2.0-0 libcups2 libgbm1 libasound2 libpangocairo-1.0-0 libxss1 libgtk-3-0
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -o -u 1000 user
-
-# Switch to the "user" user
-USER user
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Install app dependencies
-# A wildcard is used to ensure both package.json AND package-lock.json are copied
-# where available (npm@5+)
-COPY --chown=user package*.json $HOME/app/
-
-RUN npm install
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-EXPOSE 8000
-
-CMD [ "bash", "start.sh" ]
\ No newline at end of file
diff --git a/spaces/jlmarrugom/voice_fixer_app/voicefixer/restorer/model.py b/spaces/jlmarrugom/voice_fixer_app/voicefixer/restorer/model.py
deleted file mode 100644
index 125ed31975113794c37008f05882b386eeec31a7..0000000000000000000000000000000000000000
--- a/spaces/jlmarrugom/voice_fixer_app/voicefixer/restorer/model.py
+++ /dev/null
@@ -1,680 +0,0 @@
-# import pytorch_lightning as pl
-
-import torch.utils
-from voicefixer.tools.mel_scale import MelScale
-import torch.utils.data
-import matplotlib.pyplot as plt
-import librosa.display
-from voicefixer.vocoder.base import Vocoder
-from voicefixer.tools.pytorch_util import *
-from voicefixer.restorer.model_kqq_bn import UNetResComplex_100Mb
-from voicefixer.tools.random_ import *
-from voicefixer.tools.wav import *
-from voicefixer.tools.modules.fDomainHelper import FDomainHelper
-
-from voicefixer.tools.io import load_json, write_json
-from matplotlib import cm
-
-os.environ["KMP_DUPLICATE_LIB_OK"] = "True"
-EPS = 1e-8
-
-
-class BN_GRU(torch.nn.Module):
- def __init__(
- self,
- input_dim,
- hidden_dim,
- layer=1,
- bidirectional=False,
- batchnorm=True,
- dropout=0.0,
- ):
- super(BN_GRU, self).__init__()
- self.batchnorm = batchnorm
- if batchnorm:
- self.bn = nn.BatchNorm2d(1)
- self.gru = torch.nn.GRU(
- input_size=input_dim,
- hidden_size=hidden_dim,
- num_layers=layer,
- bidirectional=bidirectional,
- dropout=dropout,
- batch_first=True,
- )
- self.init_weights()
-
- def init_weights(self):
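-        # Xavier init for input-hidden weights, orthogonal init for hidden-hidden weights, zeros for biases.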
- for m in self.modules():
- if type(m) in [nn.GRU, nn.LSTM, nn.RNN]:
- for name, param in m.named_parameters():
- if "weight_ih" in name:
- torch.nn.init.xavier_uniform_(param.data)
- elif "weight_hh" in name:
- torch.nn.init.orthogonal_(param.data)
- elif "bias" in name:
- param.data.fill_(0)
-
- def forward(self, inputs):
- # (batch, 1, seq, feature)
- if self.batchnorm:
- inputs = self.bn(inputs)
- out, _ = self.gru(inputs.squeeze(1))
- return out.unsqueeze(1)
-
-
-class Generator(nn.Module):
- def __init__(self, n_mel, hidden, channels):
- super(Generator, self).__init__()
-        # TODO: the currently running trial doesn't have dropout
- self.denoiser = nn.Sequential(
- nn.BatchNorm2d(1),
- nn.Linear(n_mel, n_mel * 2),
- nn.ReLU(inplace=True),
- nn.BatchNorm2d(1),
- nn.Linear(n_mel * 2, n_mel * 4),
- nn.Dropout(0.5),
- nn.ReLU(inplace=True),
- BN_GRU(
- input_dim=n_mel * 4,
- hidden_dim=n_mel * 2,
- bidirectional=True,
- layer=2,
- batchnorm=True,
- ),
- BN_GRU(
- input_dim=n_mel * 4,
- hidden_dim=n_mel * 2,
- bidirectional=True,
- layer=2,
- batchnorm=True,
- ),
- nn.BatchNorm2d(1),
- nn.ReLU(inplace=True),
- nn.Linear(n_mel * 4, n_mel * 4),
- nn.Dropout(0.5),
- nn.BatchNorm2d(1),
- nn.ReLU(inplace=True),
- nn.Linear(n_mel * 4, n_mel),
- nn.Sigmoid(),
- )
-
- self.unet = UNetResComplex_100Mb(channels=channels)
-
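-    # Two-stage restoration: the GRU-based denoiser predicts a mask over the noisy mel, and the UNet then refines the result in log-mel space.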
- def forward(self, sp, mel_orig):
- # Denoising
- noisy = mel_orig.clone()
- clean = self.denoiser(noisy) * noisy
- x = to_log(clean.detach())
- unet_in = torch.cat([to_log(mel_orig), x], dim=1)
- # unet_in = lstm_out
- unet_out = self.unet(unet_in)["mel"]
- # masks
- mel = unet_out + x
- # todo mel and addition here are in log scales
- return {
- "mel": mel,
- "lstm_out": unet_out,
- "unet_out": unet_out,
- "noisy": noisy,
- "clean": clean,
- }
-
-
-class VoiceFixer(nn.Module):
- def __init__(
- self,
- channels,
- type_target="vocals",
- nsrc=1,
- loss="l1",
- lr=0.002,
- gamma=0.9,
- batchsize=None,
- frame_length=None,
- sample_rate=None,
- warm_up_steps=1000,
- reduce_lr_steps=15000,
- # datas
- check_val_every_n_epoch=5,
- ):
- super(VoiceFixer, self).__init__()
-
- if sample_rate == 44100:
- window_size = 2048
- hop_size = 441
- n_mel = 128
- elif sample_rate == 24000:
- window_size = 768
- hop_size = 240
- n_mel = 80
- elif sample_rate == 16000:
- window_size = 512
- hop_size = 160
- n_mel = 80
- else:
- raise ValueError(
- "Error: Sample rate " + str(sample_rate) + " not supported"
- )
-
-        center = True
- pad_mode = "reflect"
- window = "hann"
- freeze_parameters = True
-
- # self.save_hyperparameters()
- self.nsrc = nsrc
- self.type_target = type_target
- self.channels = channels
- self.lr = lr
- self.generated = None
- self.gamma = gamma
- self.sample_rate = sample_rate
- self.batchsize = batchsize
- self.frame_length = frame_length
- # self.hparams['channels'] = 2
-
- # self.am = AudioMetrics()
- # self.im = ImgMetrics()
-
- self.vocoder = Vocoder(sample_rate=44100)
-
- self.valid = None
- self.fake = None
-
- self.train_step = 0
- self.val_step = 0
- self.val_result_save_dir = None
- self.val_result_save_dir_step = None
-        self.downsample_ratio = 2**6  # This number equals 2^{#encoder_blocks}
- self.check_val_every_n_epoch = check_val_every_n_epoch
-
- self.f_helper = FDomainHelper(
- window_size=window_size,
- hop_size=hop_size,
- center=center,
- pad_mode=pad_mode,
- window=window,
- freeze_parameters=freeze_parameters,
- )
-
- hidden = window_size // 2 + 1
-
- self.mel = MelScale(n_mels=n_mel, sample_rate=sample_rate, n_stft=hidden)
-
- # masking
- self.generator = Generator(n_mel, hidden, channels)
-
- self.lr_lambda = lambda step: self.get_lr_lambda(
- step,
- gamma=self.gamma,
- warm_up_steps=warm_up_steps,
- reduce_lr_steps=reduce_lr_steps,
- )
-
- self.lr_lambda_2 = lambda step: self.get_lr_lambda(
- step, gamma=self.gamma, warm_up_steps=10, reduce_lr_steps=reduce_lr_steps
- )
-
- self.mel_weight_44k_128 = (
- torch.tensor(
- [
- 19.40951426,
- 19.94047336,
- 20.4859038,
- 21.04629067,
- 21.62194148,
- 22.21335214,
- 22.8210215,
- 23.44529231,
- 24.08660962,
- 24.74541882,
- 25.42234287,
- 26.11770576,
- 26.83212784,
- 27.56615283,
- 28.32007747,
- 29.0947679,
- 29.89060111,
- 30.70832636,
- 31.54828121,
- 32.41121487,
- 33.29780773,
- 34.20865341,
- 35.14437675,
- 36.1056621,
- 37.09332763,
- 38.10795802,
- 39.15039691,
- 40.22119881,
- 41.32154931,
- 42.45172373,
- 43.61293329,
- 44.80609379,
- 46.031602,
- 47.29070223,
- 48.58427549,
- 49.91327905,
- 51.27863232,
- 52.68119708,
- 54.1222372,
- 55.60274206,
- 57.12364703,
- 58.68617876,
- 60.29148652,
- 61.94081306,
- 63.63501986,
- 65.37562658,
- 67.16408954,
- 69.00109084,
- 70.88850318,
- 72.82736101,
- 74.81985537,
- 76.86654792,
- 78.96885475,
- 81.12900906,
- 83.34840929,
- 85.62810662,
- 87.97005418,
- 90.37689804,
- 92.84887686,
- 95.38872881,
- 97.99777002,
- 100.67862715,
- 103.43232942,
- 106.26140638,
- 109.16827015,
- 112.15470471,
- 115.22184756,
- 118.37439245,
- 121.6122689,
- 124.93877158,
- 128.35661454,
- 131.86761321,
- 135.47417938,
- 139.18059494,
- 142.98713744,
- 146.89771854,
- 150.91684347,
- 155.0446638,
- 159.28614648,
- 163.64270198,
- 168.12035831,
- 172.71749158,
- 177.44220154,
- 182.29556933,
- 187.28286676,
- 192.40502126,
- 197.6682721,
- 203.07516896,
- 208.63088733,
- 214.33770931,
- 220.19910108,
- 226.22363072,
- 232.41087124,
- 238.76803591,
- 245.30079083,
- 252.01064464,
- 258.90261676,
- 265.98474,
- 273.26010248,
- 280.73496362,
- 288.41440094,
- 296.30489752,
- 304.41180337,
- 312.7377183,
- 321.28877878,
- 330.07870237,
- 339.10812951,
- 348.38276173,
- 357.91393924,
- 367.70513992,
- 377.76413924,
- 388.09467408,
- 398.70920178,
- 409.61813793,
- 420.81980127,
- 432.33215467,
- 444.16083117,
- 456.30919947,
- 468.78589276,
- 481.61325588,
- 494.78824596,
- 508.31969844,
- 522.2238331,
- 536.51163441,
- 551.18859414,
- 566.26142988,
- 581.75006061,
- 597.66210737,
- ]
- )
- / 19.40951426
- )
- self.mel_weight_44k_128 = self.mel_weight_44k_128[None, None, None, ...]
-
- self.g_loss_weight = 0.01
- self.d_loss_weight = 1
-
- def get_vocoder(self):
- return self.vocoder
-
- def get_f_helper(self):
- return self.f_helper
-
- def get_lr_lambda(self, step, gamma, warm_up_steps, reduce_lr_steps):
- r"""Get lr_lambda for LambdaLR. E.g.,
-
- .. code-block: python
- lr_lambda = lambda step: get_lr_lambda(step, warm_up_steps=1000, reduce_lr_steps=10000)
-
- from torch.optim.lr_scheduler import LambdaLR
- LambdaLR(optimizer, lr_lambda)
- """
- if step <= warm_up_steps:
- return step / warm_up_steps
- else:
- return gamma ** (step // reduce_lr_steps)
-
- def init_weights(self, module: nn.Module):
- for m in module.modules():
- if type(m) in [nn.GRU, nn.LSTM, nn.RNN]:
- for name, param in m.named_parameters():
- if "weight_ih" in name:
- torch.nn.init.xavier_uniform_(param.data)
- elif "weight_hh" in name:
- torch.nn.init.orthogonal_(param.data)
- elif "bias" in name:
- param.data.fill_(0)
-
- def pre(self, input):
- sp, _, _ = self.f_helper.wav_to_spectrogram_phase(input)
- mel_orig = self.mel(sp.permute(0, 1, 3, 2)).permute(0, 1, 3, 2)
- return sp, mel_orig
-
- def forward(self, sp, mel_orig):
- """
- Args:
- input: (batch_size, channels_num, segment_samples)
-
- Outputs:
- output_dict: {
- 'wav': (batch_size, channels_num, segment_samples),
- 'sp': (batch_size, channels_num, time_steps, freq_bins)}
- """
- return self.generator(sp, mel_orig)
-
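-    # NOTE: the methods below (configure_optimizers, training_step, self.log, ...) follow the PyTorch Lightning API; they appear to be kept from the original Lightning training setup (see the commented-out import at the top) and reference attributes such as self.discriminator and self.bce_loss that are presumably attached elsewhere.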
- def configure_optimizers(self):
- optimizer_g = torch.optim.Adam(
- [{"params": self.generator.parameters()}],
- lr=self.lr,
- amsgrad=True,
- betas=(0.5, 0.999),
- )
- optimizer_d = torch.optim.Adam(
- [{"params": self.discriminator.parameters()}],
- lr=self.lr,
- amsgrad=True,
- betas=(0.5, 0.999),
- )
-
- scheduler_g = {
- "scheduler": torch.optim.lr_scheduler.LambdaLR(optimizer_g, self.lr_lambda),
- "interval": "step",
- "frequency": 1,
- }
- scheduler_d = {
- "scheduler": torch.optim.lr_scheduler.LambdaLR(optimizer_d, self.lr_lambda),
- "interval": "step",
- "frequency": 1,
- }
- return [optimizer_g, optimizer_d], [scheduler_g, scheduler_d]
-
- def preprocess(self, batch, train=False, cutoff=None):
- if train:
- vocal = batch[self.type_target] # final target
- noise = batch["noise_LR"] # augmented low resolution audio with noise
- augLR = batch[
- self.type_target + "_aug_LR"
- ] # # augment low resolution audio
- LR = batch[self.type_target + "_LR"]
- # embed()
- vocal, LR, augLR, noise = (
- vocal.float().permute(0, 2, 1),
- LR.float().permute(0, 2, 1),
- augLR.float().permute(0, 2, 1),
- noise.float().permute(0, 2, 1),
- )
- # LR, noise = self.add_random_noise(LR, noise)
- snr, scale = [], []
- for i in range(vocal.size()[0]):
- (
- vocal[i, ...],
- LR[i, ...],
- augLR[i, ...],
- noise[i, ...],
- _snr,
- _scale,
- ) = add_noise_and_scale_with_HQ_with_Aug(
- vocal[i, ...],
- LR[i, ...],
- augLR[i, ...],
- noise[i, ...],
- snr_l=-5,
- snr_h=45,
- scale_lower=0.6,
- scale_upper=1.0,
- )
- snr.append(_snr), scale.append(_scale)
- # vocal, LR = self.amp_to_original_f(vocal, LR)
- # noise = (noise * 0.0) + 1e-8 # todo
- return vocal, augLR, LR, noise + augLR
- else:
- if cutoff is None:
- LR_noisy = batch["noisy"]
- LR = batch["vocals"]
- vocals = batch["vocals"]
- vocals, LR, LR_noisy = (
- vocals.float().permute(0, 2, 1),
- LR.float().permute(0, 2, 1),
- LR_noisy.float().permute(0, 2, 1),
- )
- return vocals, LR, LR_noisy, batch["fname"][0]
- else:
- LR_noisy = batch["noisy" + "LR" + "_" + str(cutoff)]
- LR = batch["vocals" + "LR" + "_" + str(cutoff)]
- vocals = batch["vocals"]
- vocals, LR, LR_noisy = (
- vocals.float().permute(0, 2, 1),
- LR.float().permute(0, 2, 1),
- LR_noisy.float().permute(0, 2, 1),
- )
- return vocals, LR, LR_noisy, batch["fname"][0]
-
- def training_step(self, batch, batch_nb, optimizer_idx):
- # dict_keys(['vocals', 'vocals_aug', 'vocals_augLR', 'noise'])
- config = load_json("temp_path.json")
- if "g_loss_weight" not in config.keys():
- config["g_loss_weight"] = self.g_loss_weight
- config["d_loss_weight"] = self.d_loss_weight
- write_json(config, "temp_path.json")
- elif (
- config["g_loss_weight"] != self.g_loss_weight
- or config["d_loss_weight"] != self.d_loss_weight
- ):
- print(
- "Update d_loss weight, from",
- self.d_loss_weight,
- "to",
- config["d_loss_weight"],
- )
- print(
- "Update g_loss weight, from",
- self.g_loss_weight,
- "to",
- config["g_loss_weight"],
- )
- self.g_loss_weight = config["g_loss_weight"]
- self.d_loss_weight = config["d_loss_weight"]
-
- if optimizer_idx == 0:
- self.vocal, self.augLR, _, self.LR_noisy = self.preprocess(
- batch, train=True
- )
-
- for i in range(self.vocal.size()[0]):
- save_wave(
- tensor2numpy(self.vocal[i, ...]),
- str(i) + "vocal" + ".wav",
- sample_rate=44100,
- )
- save_wave(
- tensor2numpy(self.LR_noisy[i, ...]),
- str(i) + "LR_noisy" + ".wav",
- sample_rate=44100,
- )
-
- # all_mel_e2e in non-log scale
- _, self.mel_target = self.pre(self.vocal)
- self.sp_LR_target, self.mel_LR_target = self.pre(self.augLR)
- self.sp_LR_target_noisy, self.mel_LR_target_noisy = self.pre(self.LR_noisy)
-
- if self.valid is None or self.valid.size()[0] != self.mel_target.size()[0]:
- self.valid = torch.ones(
- self.mel_target.size()[0], 1, self.mel_target.size()[2], 1
- )
- self.valid = self.valid.type_as(self.mel_target)
- if self.fake is None or self.fake.size()[0] != self.mel_target.size()[0]:
- self.fake = torch.zeros(
- self.mel_target.size()[0], 1, self.mel_target.size()[2], 1
- )
- self.fake = self.fake.type_as(self.mel_target)
-
- self.generated = self(self.sp_LR_target_noisy, self.mel_LR_target_noisy)
-
- denoise_loss = self.l1loss(self.generated["clean"], self.mel_LR_target)
- targ_loss = self.l1loss(self.generated["mel"], to_log(self.mel_target))
-
- self.log(
- "targ-l",
- targ_loss,
- on_step=True,
- on_epoch=False,
- logger=True,
- sync_dist=True,
- prog_bar=True,
- )
- self.log(
- "noise-l",
- denoise_loss,
- on_step=True,
- on_epoch=False,
- logger=True,
- sync_dist=True,
- prog_bar=True,
- )
-
- loss = targ_loss + denoise_loss
-
- if self.train_step >= 18000:
- g_loss = self.bce_loss(
- self.discriminator(self.generated["mel"]), self.valid
- )
- self.log(
- "g_l",
- g_loss,
- on_step=True,
- on_epoch=False,
- logger=True,
- sync_dist=True,
- prog_bar=True,
- )
- # print("g_loss", g_loss)
- all_loss = loss + self.g_loss_weight * g_loss
- self.log(
- "all_loss",
- all_loss,
- on_step=True,
- on_epoch=True,
- logger=True,
- sync_dist=True,
- )
- else:
- all_loss = loss
- self.train_step += 0.5
- return {"loss": all_loss}
-
- elif optimizer_idx == 1:
- if self.train_step >= 16000:
- self.generated = self(self.sp_LR_target_noisy, self.mel_LR_target_noisy)
- self.train_step += 0.5
- real_loss = self.bce_loss(
- self.discriminator(to_log(self.mel_target)), self.valid
- )
- self.log(
- "r_l",
- real_loss,
- on_step=True,
- on_epoch=False,
- logger=True,
- sync_dist=True,
- prog_bar=True,
- )
- fake_loss = self.bce_loss(
- self.discriminator(self.generated["mel"].detach()), self.fake
- )
- self.log(
- "d_l",
- fake_loss,
- on_step=True,
- on_epoch=False,
- logger=True,
- sync_dist=True,
- prog_bar=True,
- )
- d_loss = self.d_loss_weight * (real_loss + fake_loss) / 2
- self.log(
- "discriminator_loss",
- d_loss,
- on_step=True,
- on_epoch=True,
- logger=True,
- sync_dist=True,
- )
- return {"loss": d_loss}
-
- def draw_and_save(
- self, mel: torch.Tensor, path, clip_max=None, clip_min=None, needlog=True
- ):
- plt.figure(figsize=(15, 5))
- if clip_min is None:
- clip_max, clip_min = self.clip(mel)
- mel = np.transpose(tensor2numpy(mel)[0, 0, ...], (1, 0))
- # assert np.sum(mel < 0) == 0, str(np.sum(mel < 0)) + str(np.sum(mel < 0))
-
- if needlog:
- assert np.sum(mel < 0) == 0, str(np.sum(mel < 0)) + "-" + path
- mel_log = np.log10(mel + EPS)
- else:
- mel_log = mel
-
- # plt.imshow(mel)
- librosa.display.specshow(
- mel_log,
- sr=44100,
- x_axis="frames",
- y_axis="mel",
- cmap=cm.jet,
- vmax=clip_max,
- vmin=clip_min,
- )
- plt.colorbar()
- plt.savefig(path)
- plt.close()
-
- def clip(self, *args):
- val_max, val_min = [], []
- for each in args:
- val_max.append(torch.max(each))
- val_min.append(torch.min(each))
- return max(val_max), min(val_min)
diff --git a/spaces/jmesikto/whisper-webui/src/hooks/progressListener.py b/spaces/jmesikto/whisper-webui/src/hooks/progressListener.py
deleted file mode 100644
index a7852a24e237ae864bbce5f37674e1f7c817a1b3..0000000000000000000000000000000000000000
--- a/spaces/jmesikto/whisper-webui/src/hooks/progressListener.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from typing import Union
-
-class ProgressListener:
- def on_progress(self, current: Union[int, float], total: Union[int, float]):
- self.total = total
-
- def on_finished(self):
- pass
\ No newline at end of file
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/KMAC128.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/KMAC128.py
deleted file mode 100644
index 05061fc2ea402ae614fb235f9fde0e91b9e83286..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/KMAC128.py
+++ /dev/null
@@ -1,179 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2021, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-from binascii import unhexlify
-
-from Crypto.Util.py3compat import bord, tobytes, is_bytes
-from Crypto.Random import get_random_bytes
-
-from . import cSHAKE128, SHA3_256
-from .cSHAKE128 import _bytepad, _encode_str, _right_encode
-
-
-class KMAC_Hash(object):
- """A KMAC hash object.
- Do not instantiate directly.
- Use the :func:`new` function.
- """
-
- def __init__(self, data, key, mac_len, custom,
- oid_variant, cshake, rate):
-
- # See https://tools.ietf.org/html/rfc8702
- self.oid = "2.16.840.1.101.3.4.2." + oid_variant
- self.digest_size = mac_len
-
- self._mac = None
-
- partial_newX = _bytepad(_encode_str(tobytes(key)), rate)
- self._cshake = cshake._new(partial_newX, custom, b"KMAC")
-
- if data:
- self._cshake.update(data)
-
- def update(self, data):
- """Authenticate the next chunk of message.
-
- Args:
- data (bytes/bytearray/memoryview): The next chunk of the message to
- authenticate.
- """
-
- if self._mac:
- raise TypeError("You can only call 'digest' or 'hexdigest' on this object")
-
- self._cshake.update(data)
- return self
-
- def digest(self):
- """Return the **binary** (non-printable) MAC tag of the message.
-
- :return: The MAC tag. Binary form.
- :rtype: byte string
- """
-
- if not self._mac:
- self._cshake.update(_right_encode(self.digest_size * 8))
- self._mac = self._cshake.read(self.digest_size)
-
- return self._mac
-
- def hexdigest(self):
- """Return the **printable** MAC tag of the message.
-
- :return: The MAC tag. Hexadecimal encoded.
- :rtype: string
- """
-
- return "".join(["%02x" % bord(x) for x in tuple(self.digest())])
-
- def verify(self, mac_tag):
- """Verify that a given **binary** MAC (computed by another party)
- is valid.
-
- Args:
- mac_tag (bytes/bytearray/memoryview): the expected MAC of the message.
-
- Raises:
- ValueError: if the MAC does not match. It means that the message
- has been tampered with or that the MAC key is incorrect.
- """
-
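-        # Compare the tags indirectly via fresh keyed SHA3-256 digests rather than byte-by-byte, so the check does not leak timing information about the expected MAC.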
- secret = get_random_bytes(16)
-
- mac1 = SHA3_256.new(secret + mac_tag)
- mac2 = SHA3_256.new(secret + self.digest())
-
- if mac1.digest() != mac2.digest():
- raise ValueError("MAC check failed")
-
- def hexverify(self, hex_mac_tag):
- """Verify that a given **printable** MAC (computed by another party)
- is valid.
-
- Args:
- hex_mac_tag (string): the expected MAC of the message, as a hexadecimal string.
-
- Raises:
- ValueError: if the MAC does not match. It means that the message
- has been tampered with or that the MAC key is incorrect.
- """
-
- self.verify(unhexlify(tobytes(hex_mac_tag)))
-
- def new(self, **kwargs):
- """Return a new instance of a KMAC hash object.
- See :func:`new`.
- """
-
- if "mac_len" not in kwargs:
- kwargs["mac_len"] = self.digest_size
-
- return new(**kwargs)
-
-
-def new(**kwargs):
- """Create a new KMAC128 object.
-
- Args:
- key (bytes/bytearray/memoryview):
- The key to use to compute the MAC.
- It must be at least 128 bits long (16 bytes).
- data (bytes/bytearray/memoryview):
- Optional. The very first chunk of the message to authenticate.
- It is equivalent to an early call to :meth:`KMAC_Hash.update`.
- mac_len (integer):
- Optional. The size of the authentication tag, in bytes.
- Default is 64. Minimum is 8.
- custom (bytes/bytearray/memoryview):
- Optional. A customization byte string (``S`` in SP 800-185).
-
- Returns:
- A :class:`KMAC_Hash` hash object
- """
-
- key = kwargs.pop("key", None)
- if not is_bytes(key):
- raise TypeError("You must pass a key to KMAC128")
- if len(key) < 16:
- raise ValueError("The key must be at least 128 bits long (16 bytes)")
-
- data = kwargs.pop("data", None)
-
- mac_len = kwargs.pop("mac_len", 64)
- if mac_len < 8:
- raise ValueError("'mac_len' must be 8 bytes or more")
-
- custom = kwargs.pop("custom", b"")
-
- if kwargs:
- raise TypeError("Unknown parameters: " + str(kwargs))
-
- return KMAC_Hash(data, key, mac_len, custom, "19", cSHAKE128, 168)
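-
-# --- Usage sketch (not part of the original module) ---
-# Assumption: this module is the KMAC128 implementation shipped with pycryptodome
-# and is importable as Crypto.Hash.KMAC128; adjust the import to the actual path.
-#
-#     from Crypto.Hash import KMAC128
-#
-#     key = b'0' * 16                                   # at least 128 bits
-#     mac = KMAC128.new(key=key, mac_len=32, custom=b'example')
-#     mac.update(b'first chunk').update(b'second chunk')
-#     tag = mac.digest()
-#
-#     # verifier side: same key and customization string, same data
-#     check = KMAC128.new(key=key, mac_len=32, custom=b'example',
-#                         data=b'first chunksecond chunk')
-#     check.verify(tag)                                 # raises ValueError on mismatch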
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/list/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/list/__init__.py
deleted file mode 100644
index b24c607f33df4130279f81016b5fdc2a22d19fd0..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/list/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-"""List-based data structures."""
-
-from gpt_index.indices.list.base import GPTListIndex
-
-__all__ = [
- "GPTListIndex",
-]
diff --git a/spaces/jordonpeter01/Top-20-Diffusion-g/README.md b/spaces/jordonpeter01/Top-20-Diffusion-g/README.md
deleted file mode 100644
index 696320ce4e6fe30a88701f81fb7eb0607e85858e..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/Top-20-Diffusion-g/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Top 20 Diffusion
-emoji: 👑
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-duplicated_from: Omnibus/Top-20-Diffusion-g
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/kaleidoscope-data/data-cleaning-llm/app/openai_chat_completion.py b/spaces/kaleidoscope-data/data-cleaning-llm/app/openai_chat_completion.py
deleted file mode 100644
index d68c68d8ab5c2af6aa8d00adbc97cca74884c643..0000000000000000000000000000000000000000
--- a/spaces/kaleidoscope-data/data-cleaning-llm/app/openai_chat_completion.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import os
-from io import BytesIO
-import pandas as pd
-from dotenv import load_dotenv
-load_dotenv()
-import openai
-import streamlit as st
-
-# set the OpenAI API key from .streamlit/secrets.toml (Streamlit secrets)
-openai.api_key = st.secrets["OPENAI_API_KEY"]
-
-# # set OPENAI_API_KEY environment variable from .env file
-# openai.api_key = os.getenv("OPENAI_API_KEY")
-
-# # read in llm-data-cleaner/prompts/gpt4-system-message.txt file into variable system_message
-# system_message = open('../prompts/gpt4-system-message.txt', 'r').read()
-
-class OpenAIChatCompletions:
- def __init__(self, model="gpt-4", system_message=None):
- self.model = model
- self.system_message = system_message
-
-
- # function to input args such as model, prompt, etc. and return completion
- def openai_chat_completion(self, prompt, n_shot=None):
- messages = [{"role": "system", "content": self.system_message}] if self.system_message else []
-
- # add n_shot number of samples to messages list ... if n_shot is None, then only system_message and prompt will be added to messages list
- if n_shot is not None:
- messages = self._add_samples(messages, n_samples=n_shot)
-
- messages.append({"role": "user", "content": prompt})
-
- # set up the API request parameters for OpenAI
- chat_request_kwargs = dict(
- model=self.model,
- messages=messages,
- )
-
- # make the API request to OpenAI
- response = openai.ChatCompletion.create(**chat_request_kwargs)
-
-        # return the full ChatCompletion response object; the completion text
-        # itself is at response['choices'][0]['message']['content']
- return response
-
-
- # function to use test data to predict completions
- def predict_jsonl(
- self,
- path_or_buf='../data/cookies_train.jsonl',
- # path_or_buf='~/data/cookies_train.jsonl',
- n_samples=None,
- n_shot=None
- ):
-
- jsonObj = pd.read_json(path_or_buf=path_or_buf, lines=True)
- if n_samples is not None:
- jsonObj = jsonObj.sample(n_samples, random_state=42)
-
- iter_range = range(len(jsonObj))
- prompts = [jsonObj.iloc[i]['prompt'] for i in iter_range]
- completions = [jsonObj.iloc[i]['completion'] for i in iter_range]
- predictions = [self.openai_chat_completion(prompt, n_shot=n_shot) for prompt in prompts]
-
- return prompts, completions, predictions
-
-
- # a method that adds prompt and completion samples to messages
- @staticmethod
- def _add_samples(messages, n_samples=None):
- if n_samples is None:
- return messages
-
- samples = OpenAIChatCompletions._sample_jsonl(n_samples=n_samples)
- for i in range(n_samples):
- messages.append({"role": "user", "content": samples.iloc[i]['prompt']})
- messages.append({"role": "assistant", "content": samples.iloc[i]['completion']})
-
- return messages
-
-
- # a method that samples n rows from a jsonl file, returning a pandas dataframe
- @staticmethod
- def _sample_jsonl(
- path_or_buf='data/cookies_train.jsonl',
- # path_or_buf='~/data/cookies_train.jsonl',
- n_samples=5
- ):
-
- # jsonObj = pd.read_json(path_or_buf=path_or_buf, lines=True)
-
- # if running locally, True
- # else running on HF Spaces, False
- if "Kaleidoscope Data" in os.getcwd():
- # file_path = os.path.join(os.getcwd(), "..", path_or_buf)
- file_path = os.path.join("/".join(os.getcwd().split('/')[:-1]), path_or_buf)
- else:
- file_path = os.path.join(os.getcwd(), path_or_buf)
-
-
- try:
- with open(file_path, "r") as file:
- jsonl_str = file.read()
-
- jsonObj = pd.read_json(BytesIO(jsonl_str.encode()), lines=True, engine="pyarrow")
- except FileNotFoundError:
- # Handle the case where the file is not found
- # Display an error message or take appropriate action
- st.write(f"File not found: {file_path}")
-
- return jsonObj.sample(n_samples, random_state=42)
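-
-# --- Usage sketch (not part of the original module) ---
-# The prompt and system message below are placeholders; with n_shot=None no
-# few-shot samples are loaded, so only the API key configured above is required.
-#
-#     chat = OpenAIChatCompletions(model="gpt-4", system_message="You clean messy product data.")
-#     response = chat.openai_chat_completion("brand: oreo | size: 14oz", n_shot=None)
-#     print(response['choices'][0]['message']['content'])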
diff --git a/spaces/katebor/Taxonomy/README.md b/spaces/katebor/Taxonomy/README.md
deleted file mode 100644
index 2fa428936f0b1b0be7e29f812b547d0a5be32867..0000000000000000000000000000000000000000
--- a/spaces/katebor/Taxonomy/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Taxonomy
-emoji: 🐢
-colorFrom: red
-colorTo: indigo
-sdk: static
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/katielink/compare-bio-llm/app.py b/spaces/katielink/compare-bio-llm/app.py
deleted file mode 100644
index 3db36b1509c42301ef4a379c007a0d34c4b804f0..0000000000000000000000000000000000000000
--- a/spaces/katielink/compare-bio-llm/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-import os
-import gradio as gr
-import torch
-import numpy as np
-from transformers import pipeline
-
-name_list = ['microsoft/biogpt', 'stanford-crfm/BioMedLM', 'facebook/galactica-1.3b']
-
-examples = [['COVID-19 is'],['A 65-year-old female patient with a past medical history of']]
-
-print(f"Is CUDA available: {torch.cuda.is_available()}")
-print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}")
-
-pipe_biogpt = pipeline("text-generation", model="microsoft/BioGPT-Large", device="cuda:0", model_kwargs={"torch_dtype":torch.bfloat16})
-pipe_biomedlm = pipeline("text-generation", model="stanford-crfm/BioMedLM", device="cuda:0", model_kwargs={"torch_dtype":torch.bfloat16})
-pipe_galactica = pipeline("text-generation", model="facebook/galactica-1.3b", device="cuda:0", model_kwargs={"torch_dtype":torch.bfloat16})
-
-title = "Compare generative biomedical LLMs!"
-description = "**Disclaimer:** this demo was made for research purposes only and should not be used for medical purposes."
-
-def inference(text):
- output_biogpt = pipe_biogpt(text, max_length=100)[0]["generated_text"]
- output_biomedlm = pipe_biomedlm(text, max_length=100)[0]["generated_text"]
- output_galactica = pipe_galactica(text, max_length=100)[0]["generated_text"]
- return [
- output_biogpt,
- output_biomedlm,
- output_galactica
- ]
-
-io = gr.Interface(
- inference,
- gr.Textbox(lines=3),
- outputs=[
- gr.Textbox(lines=3, label="BioGPT-Large"),
- gr.Textbox(lines=3, label="BioMedLM (fka PubmedGPT)"),
- gr.Textbox(lines=3, label="Galactica 1.3B"),
- ],
- title=title,
- description=description,
- examples=examples
-)
-io.launch()
diff --git a/spaces/kdrkdrkdr/ZhongliTTS/utils.py b/spaces/kdrkdrkdr/ZhongliTTS/utils.py
deleted file mode 100644
index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000
--- a/spaces/kdrkdrkdr/ZhongliTTS/utils.py
+++ /dev/null
@@ -1,226 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.ERROR)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- learning_rate = checkpoint_dict['learning_rate']
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict['optimizer'])
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
-        except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info("Loaded checkpoint '{}' (iteration {})".format(
- checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger('matplotlib')
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower',
- interpolation='none')
- fig.colorbar(im, ax=ax)
- xlabel = 'Decoder timestep'
- if info is not None:
- xlabel += '\n\n' + info
- plt.xlabel(xlabel)
- plt.ylabel('Encoder timestep')
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default="./configs/base.json",
- help='JSON file for configuration')
- parser.add_argument('-m', '--model', type=str, required=True,
- help='Model name')
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r", encoding="utf-8") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-        logger.warning("{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- ))
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-            logger.warning("git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]))
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
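-
-# --- Usage sketch (not part of the original module) ---
-# The config path and keys below are illustrative; real key names depend on the
-# JSON config shipped with the model.
-#
-#     hps = get_hparams_from_file("configs/config.json")
-#     print(hps.data.sampling_rate)        # nested dicts become nested HParams
-#     print(hps["data"]["sampling_rate"])  # dict-style access works as well
-#     audio, sr = load_wav_to_torch("sample.wav")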
diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/Wewordle.py b/spaces/kepl/gpt/g4f/Provider/Providers/Wewordle.py
deleted file mode 100644
index 090d0bf3ab2e1f3851880393d43662edfbe9d984..0000000000000000000000000000000000000000
--- a/spaces/kepl/gpt/g4f/Provider/Providers/Wewordle.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import os
-import requests
-import json
-import random
-import time
-import string
-from ...typing import sha256, Dict, get_type_hints
-
-url = "https://wewordle.org/gptapi/v1/android/turbo"
-model = ['gpt-3.5-turbo']
-supports_stream = False
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- base = ''
- for message in messages:
- base += '%s: %s\n' % (message['role'], message['content'])
- base += 'assistant:'
- # randomize user id and app id
- _user_id = ''.join(random.choices(
- f'{string.ascii_lowercase}{string.digits}', k=16))
- _app_id = ''.join(random.choices(
- f'{string.ascii_lowercase}{string.digits}', k=31))
- # make current date with format utc
- _request_date = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime())
- headers = {
- 'accept': '*/*',
- 'pragma': 'no-cache',
- 'Content-Type': 'application/json',
- 'Connection': 'keep-alive'
- }
- data = {
- "user": _user_id,
- "messages": [
- {"role": "user", "content": base}
- ],
- "subscriber": {
- "originalPurchaseDate": None,
- "originalApplicationVersion": None,
- "allPurchaseDatesMillis": {},
- "entitlements": {
- "active": {},
- "all": {}
- },
- "allPurchaseDates": {},
- "allExpirationDatesMillis": {},
- "allExpirationDates": {},
- "originalAppUserId": f"$RCAnonymousID:{_app_id}",
- "latestExpirationDate": None,
- "requestDate": _request_date,
- "latestExpirationDateMillis": None,
- "nonSubscriptionTransactions": [],
- "originalPurchaseDateMillis": None,
- "managementURL": None,
- "allPurchasedProductIdentifiers": [],
- "firstSeen": _request_date,
- "activeSubscriptions": []
- }
- }
- response = requests.post(url, headers=headers, data=json.dumps(data))
- if response.status_code == 200:
- _json = response.json()
- if 'message' in _json:
- message_content = _json['message']['content']
- message_content = message_content.replace('**assistant:** ', '')
- yield message_content
- else:
- print(f"Error Occurred::{response.status_code}")
- return None
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/modelzoo.md b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/modelzoo.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/utils/utils_amp.py b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/utils/utils_amp.py
deleted file mode 100644
index 9ac2a03f4212faa129faed447a8f4519c0a00a8b..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/utils/utils_amp.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from typing import Dict, List
-
-import torch
-
-# compare (major, minor) numerically so that e.g. torch 1.10+ is not treated as older than 1.9
-if tuple(int(p) for p in torch.__version__.split('.')[:2]) < (1, 9):
- Iterable = torch._six.container_abcs.Iterable
-else:
- import collections
-
- Iterable = collections.abc.Iterable
-from torch.cuda.amp import GradScaler
-
-
-class _MultiDeviceReplicator(object):
- """
- Lazily serves copies of a tensor to requested devices. Copies are cached per-device.
- """
-
- def __init__(self, master_tensor: torch.Tensor) -> None:
- assert master_tensor.is_cuda
- self.master = master_tensor
- self._per_device_tensors: Dict[torch.device, torch.Tensor] = {}
-
- def get(self, device) -> torch.Tensor:
- retval = self._per_device_tensors.get(device, None)
- if retval is None:
- retval = self.master.to(device=device, non_blocking=True, copy=True)
- self._per_device_tensors[device] = retval
- return retval
-
-
-class MaxClipGradScaler(GradScaler):
- def __init__(self, init_scale, max_scale: float, growth_interval=100):
- GradScaler.__init__(self, init_scale=init_scale, growth_interval=growth_interval)
- self.max_scale = max_scale
-
- def scale_clip(self):
- if self.get_scale() == self.max_scale:
- self.set_growth_factor(1)
- elif self.get_scale() < self.max_scale:
- self.set_growth_factor(2)
- elif self.get_scale() > self.max_scale:
- self._scale.fill_(self.max_scale)
- self.set_growth_factor(1)
-
- def scale(self, outputs):
- """
- Multiplies ('scales') a tensor or list of tensors by the scale factor.
-
- Returns scaled outputs. If this instance of :class:`GradScaler` is not enabled, outputs are returned
- unmodified.
-
- Arguments:
- outputs (Tensor or iterable of Tensors): Outputs to scale.
- """
- if not self._enabled:
- return outputs
- self.scale_clip()
- # Short-circuit for the common case.
- if isinstance(outputs, torch.Tensor):
- assert outputs.is_cuda
- if self._scale is None:
- self._lazy_init_scale_growth_tracker(outputs.device)
- assert self._scale is not None
- return outputs * self._scale.to(device=outputs.device, non_blocking=True)
-
- # Invoke the more complex machinery only if we're treating multiple outputs.
- stash: List[_MultiDeviceReplicator] = [] # holds a reference that can be overwritten by apply_scale
-
- def apply_scale(val):
- if isinstance(val, torch.Tensor):
- assert val.is_cuda
- if len(stash) == 0:
- if self._scale is None:
- self._lazy_init_scale_growth_tracker(val.device)
- assert self._scale is not None
- stash.append(_MultiDeviceReplicator(self._scale))
- return val * stash[0].get(val.device)
- elif isinstance(val, Iterable):
- iterable = map(apply_scale, val)
- if isinstance(val, list) or isinstance(val, tuple):
- return type(val)(iterable)
- else:
- return iterable
- else:
- raise ValueError("outputs must be a Tensor or an iterable of Tensors")
-
- return apply_scale(outputs)
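-
-# --- Usage sketch (not part of the original module) ---
-# MaxClipGradScaler drops into a standard torch.cuda.amp training step in place
-# of GradScaler; model, criterion, optimizer and loader are placeholders.
-#
-#     scaler = MaxClipGradScaler(init_scale=2 ** 10, max_scale=2 ** 14, growth_interval=100)
-#     for images, labels in loader:
-#         optimizer.zero_grad()
-#         with torch.cuda.amp.autocast():
-#             loss = criterion(model(images.cuda()), labels.cuda())
-#         scaler.scale(loss).backward()
-#         scaler.step(optimizer)
-#         scaler.update()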
diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/util/generate_list.py b/spaces/kevinwang676/VoiceChanger/src/face3d/util/generate_list.py
deleted file mode 100644
index 943d906781063c3584a7e5b5c784f8aac0694985..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChanger/src/face3d/util/generate_list.py
+++ /dev/null
@@ -1,34 +0,0 @@
-"""This script is to generate training list files for Deep3DFaceRecon_pytorch
-"""
-
-import os
-
-# save path to training data
-def write_list(lms_list, imgs_list, msks_list, mode='train', save_folder='datalist', save_name=''):
- save_path = os.path.join(save_folder, mode)
- if not os.path.isdir(save_path):
- os.makedirs(save_path)
- with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd:
- fd.writelines([i + '\n' for i in lms_list])
-
- with open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd:
- fd.writelines([i + '\n' for i in imgs_list])
-
- with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd:
- fd.writelines([i + '\n' for i in msks_list])
-
-# check if the path is valid
-def check_list(rlms_list, rimgs_list, rmsks_list):
- lms_list, imgs_list, msks_list = [], [], []
- for i in range(len(rlms_list)):
- flag = 'false'
- lm_path = rlms_list[i]
- im_path = rimgs_list[i]
- msk_path = rmsks_list[i]
- if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path):
- flag = 'true'
- lms_list.append(rlms_list[i])
- imgs_list.append(rimgs_list[i])
- msks_list.append(rmsks_list[i])
- print(i, rlms_list[i], flag)
- return lms_list, imgs_list, msks_list
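-
-# --- Usage sketch (not part of the original module) ---
-# All paths below are placeholders: validate landmark/image/mask triplets first,
-# then write the surviving lists under datalist/train/.
-#
-#     lms = ['lm/0001.txt', 'lm/0002.txt']
-#     imgs = ['img/0001.png', 'img/0002.png']
-#     msks = ['msk/0001.png', 'msk/0002.png']
-#     lms, imgs, msks = check_list(lms, imgs, msks)
-#     write_list(lms, imgs, msks, mode='train', save_folder='datalist', save_name='example_')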
diff --git a/spaces/kira4424/VITS-fast-fine-tuning/monotonic_align/__init__.py b/spaces/kira4424/VITS-fast-fine-tuning/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/kira4424/VITS-fast-fine-tuning/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
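-
-# --- Usage sketch (not part of the original module) ---
-# Shapes follow the docstring ([batch, t_t, t_s]); the compiled Cython core must
-# be importable for maximum_path to work.
-#
-#     b, t_t, t_s = 2, 10, 6
-#     neg_cent = torch.randn(b, t_t, t_s)
-#     mask = torch.ones(b, t_t, t_s)
-#     path = maximum_path(neg_cent, mask)  # 0/1 alignment path, same shape/device/dtype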
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/video/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/video/__init__.py
deleted file mode 100644
index 73199b01dec52820dc6ca0139903536344d5a1eb..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/video/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .io import Cache, VideoReader, frames2video
-from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread,
- flowwrite, quantize_flow, sparse_flow_from_bytes)
-from .processing import concat_video, convert_video, cut_video, resize_video
-
-__all__ = [
- 'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video',
- 'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow',
- 'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes'
-]
diff --git a/spaces/kohrisatou-infinity/KIP_01_beta/vdecoder/hifigan/env.py b/spaces/kohrisatou-infinity/KIP_01_beta/vdecoder/hifigan/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/kohrisatou-infinity/KIP_01_beta/vdecoder/hifigan/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-def build_env(config, config_name, path):
- t_path = os.path.join(path, config_name)
- if config != t_path:
- os.makedirs(path, exist_ok=True)
- shutil.copyfile(config, os.path.join(path, config_name))
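-
-# --- Usage sketch (not part of the original module) ---
-# The config path and the sampling_rate key are placeholders; AttrDict simply
-# exposes the JSON keys as attributes, and build_env copies the config into the
-# checkpoint directory.
-#
-#     import json
-#     with open("config.json") as f:
-#         h = AttrDict(json.load(f))
-#     build_env("config.json", "config.json", "checkpoints/exp1")
-#     print(h.sampling_rate if "sampling_rate" in h else "sampling_rate not set")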
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/UploadText-cb8fda80.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/UploadText-cb8fda80.js
deleted file mode 100644
index d58265c3bdcbd4ba1737b207c890f83916b95686..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/UploadText-cb8fda80.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,i as k,s as q,G as w,I as _,H as y,C as U,g as C,E as l,K as v,F as b,q as S,Q as T}from"./index-7c0e54a6.js";import{X as E}from"./Blocks-61158678.js";function F(t){let e,o=t[1](t[2][t[0]])+"",n,r,s,i,c=t[1]("or")+"",d,m,g,f=t[1]("interface.click_to_upload")+"",u;return{c(){e=w("div"),n=_(o),r=y(),s=w("span"),i=_("- "),d=_(c),m=_(" -"),g=y(),u=_(f),U(s,"class","or svelte-xwlu1w"),U(e,"class","wrap svelte-xwlu1w")},m(a,p){C(a,e,p),l(e,n),l(e,r),l(e,s),l(s,i),l(s,d),l(s,m),l(e,g),l(e,u)},p(a,[p]){p&3&&o!==(o=a[1](a[2][a[0]])+"")&&v(n,o),p&2&&c!==(c=a[1]("or")+"")&&v(d,c),p&2&&f!==(f=a[1]("interface.click_to_upload")+"")&&v(u,f)},i:b,o:b,d(a){a&&S(e)}}}function G(t,e,o){let n;T(t,E,i=>o(1,n=i));let{type:r="file"}=e;const s={image:"interface.drop_image",video:"interface.drop_video",audio:"interface.drop_audio",file:"interface.drop_file",csv:"interface.drop_csv"};return t.$$set=i=>{"type"in i&&o(0,r=i.type)},[r,n,s]}class K extends h{constructor(e){super(),k(this,e,G,F,q,{type:0})}}export{K as U};
-//# sourceMappingURL=UploadText-cb8fda80.js.map
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/optimizer.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/optimizer.py
deleted file mode 100644
index fe1010705e7b29d4fa1900b3a0438ab93d7b582c..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/optimizer.py
+++ /dev/null
@@ -1,47 +0,0 @@
-"""The optimizer tries to constant fold expressions and modify the AST
-in place so that it should be faster to evaluate.
-
-Because the AST does not contain all the scoping information and the
-compiler has to find that out, we cannot do all the optimizations we
-want. For example, loop unrolling doesn't work because unrolled loops
-would have a different scope. The solution would be a second syntax tree
-that stored the scoping rules.
-"""
-import typing as t
-
-from . import nodes
-from .visitor import NodeTransformer
-
-if t.TYPE_CHECKING:
- from .environment import Environment
-
-
-def optimize(node: nodes.Node, environment: "Environment") -> nodes.Node:
- """The context hint can be used to perform an static optimization
- based on the context given."""
- optimizer = Optimizer(environment)
- return t.cast(nodes.Node, optimizer.visit(node))
-
-
-class Optimizer(NodeTransformer):
- def __init__(self, environment: "t.Optional[Environment]") -> None:
- self.environment = environment
-
- def generic_visit(
- self, node: nodes.Node, *args: t.Any, **kwargs: t.Any
- ) -> nodes.Node:
- node = super().generic_visit(node, *args, **kwargs)
-
- # Do constant folding. Some other nodes besides Expr have
- # as_const, but folding them causes errors later on.
- if isinstance(node, nodes.Expr):
- try:
- return nodes.Const.from_untrusted(
- node.as_const(args[0] if args else None),
- lineno=node.lineno,
- environment=self.environment,
- )
- except nodes.Impossible:
- pass
-
- return node
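-
-# --- Usage sketch (not part of the original module) ---
-# From application code (not inside this module), constant-foldable expressions
-# collapse to Const nodes after optimize():
-#
-#     from jinja2 import Environment
-#     from jinja2.optimizer import optimize
-#     from jinja2.nodes import Const
-#
-#     env = Environment()
-#     ast = env.parse("{{ 1 + 2 * 3 }}")
-#     folded = optimize(ast, env)
-#     print(folded.find(Const))   # Const node with value 7 instead of an arithmetic subtree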
diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/multimodal_embedder.py b/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/multimodal_embedder.py
deleted file mode 100644
index 626077cb80987d66af90f390e31aa2f2def76fec..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/extensions/multimodal/multimodal_embedder.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import base64
-import re
-from dataclasses import dataclass
-from io import BytesIO
-from typing import Any, List, Optional
-
-import torch
-from PIL import Image
-
-from extensions.multimodal.pipeline_loader import load_pipeline
-from modules import shared
-from modules.logging_colors import logger
-from modules.text_generation import encode, get_max_prompt_length
-
-
-@dataclass
-class PromptPart:
- text: str
- image: Optional[Image.Image] = None
- is_image: bool = False
- input_ids: Optional[torch.Tensor] = None
- embedding: Optional[torch.Tensor] = None
-
-
-class MultimodalEmbedder:
- def __init__(self, params: dict):
- pipeline, source = load_pipeline(params)
- self.pipeline = pipeline
- logger.info(f'Multimodal: loaded pipeline {self.pipeline.name()} from pipelines/{source} ({self.pipeline.__class__.__name__})')
-
- def _split_prompt(self, prompt: str, load_images: bool = False) -> List[PromptPart]:
- """Splits a prompt into a list of `PromptParts` to separate image data from text.
- It will also append `image_start` and `image_end` before and after the image, and optionally parse and load the images,
- if `load_images` is `True`.
- """
- parts: List[PromptPart] = []
- curr = 0
- while True:
-            # NOTE: the original pattern was stripped by HTML sanitization; reconstructed here as a
-            # base64 <img> tag matcher (an assumption consistent with the b64decode of group(1) below)
-            match = re.search(r'<img src="data:image/jpeg;base64,([A-Za-z0-9+/=]+)">', prompt[curr:])
- if match is None:
- # no more image tokens, append the rest of the prompt
- if curr > 0:
- # add image end token after last image
- parts.append(PromptPart(text=self.pipeline.image_end() + prompt[curr:]))
- else:
- parts.append(PromptPart(text=prompt))
- break
- # found an image, append image start token to the text
- if match.start() > 0:
- parts.append(PromptPart(text=prompt[curr:curr + match.start()] + self.pipeline.image_start()))
- else:
- parts.append(PromptPart(text=self.pipeline.image_start()))
- # append the image
- parts.append(PromptPart(
- text=match.group(0),
- image=Image.open(BytesIO(base64.b64decode(match.group(1)))) if load_images else None,
- is_image=True
- ))
- curr += match.end()
- return parts
-
- def _len_in_tokens_prompt_parts(self, parts: List[PromptPart]) -> int:
- """Total length in tokens of all `parts`"""
- tokens = 0
- for part in parts:
- if part.is_image:
- tokens += self.pipeline.num_image_embeds()
- elif part.input_ids is not None:
- tokens += len(part.input_ids)
- else:
- tokens += len(encode(part.text)[0])
- return tokens
-
- def len_in_tokens(self, prompt: str) -> int:
- """Total length in tokens for a given text `prompt`"""
- parts = self._split_prompt(prompt, False)
- return self._len_in_tokens_prompt_parts(parts)
-
- def _encode_single_text(self, part: PromptPart, add_bos_token: bool) -> PromptPart:
- """Encode a single prompt `part` to `input_ids`. Returns a `PromptPart`"""
- if part.is_image:
- placeholders = torch.ones((self.pipeline.num_image_embeds())) * self.pipeline.placeholder_token_id()
- part.input_ids = placeholders.to(shared.model.device, dtype=torch.int64)
- else:
- part.input_ids = encode(part.text, add_bos_token=add_bos_token)[0].to(shared.model.device, dtype=torch.int64)
- return part
-
- @staticmethod
- def _num_images(parts: List[PromptPart]) -> int:
- count = 0
- for part in parts:
- if part.is_image:
- count += 1
- return count
-
- def _encode_text(self, state, parts: List[PromptPart]) -> List[PromptPart]:
- """Encode text to token_ids, also truncate the prompt, if necessary.
-
- The chat/instruct mode should make prompts that fit in get_max_prompt_length, but if max_new_tokens are set
- such that the context + min_rows don't fit, we can get a prompt which is too long.
- We can't truncate image embeddings, as it leads to broken generation, so remove the images instead and warn the user
- """
- encoded: List[PromptPart] = []
- for i, part in enumerate(parts):
- encoded.append(self._encode_single_text(part, i == 0 and state['add_bos_token']))
-
- # truncation:
- max_len = get_max_prompt_length(state)
- removed_images = 0
-
- # 1. remove entire text/image blocks
- while self._len_in_tokens_prompt_parts(encoded[1:]) > max_len:
- if encoded[0].is_image:
- removed_images += 1
- encoded = encoded[1:]
-
- # 2. check if the last prompt part doesn't need to get truncated
- if self._len_in_tokens_prompt_parts(encoded) > max_len:
- if encoded[0].is_image:
- # don't truncate image embeddings, just remove the image, otherwise generation will be broken
- removed_images += 1
- encoded = encoded[1:]
- elif len(encoded) > 1 and encoded[0].text.endswith(self.pipeline.image_start()):
- # see if we can keep image_start token
- len_image_start = len(encode(self.pipeline.image_start(), add_bos_token=state['add_bos_token'])[0])
- if self._len_in_tokens_prompt_parts(encoded[1:]) + len_image_start > max_len:
- # we can't -> remove this text, and the image
- encoded = encoded[2:]
- removed_images += 1
- else:
- # we can -> just truncate the text
- trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len
- encoded[0].input_ids = encoded[0].input_ids[trunc_len:]
- elif len(encoded) > 0:
- # only one text left, truncate it normally
- trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len
- encoded[0].input_ids = encoded[0].input_ids[trunc_len:]
-
- # notify user if we truncated an image
- if removed_images > 0:
- logger.warning(f"Multimodal: removed {removed_images} image(s) from prompt. Try decreasing max_new_tokens if generation is broken")
-
- return encoded
-
- def _embed(self, parts: List[PromptPart]) -> List[PromptPart]:
- # batch images
- image_indicies = [i for i, part in enumerate(parts) if part.is_image]
- embedded = self.pipeline.embed_images([parts[i].image for i in image_indicies])
- for i, embeds in zip(image_indicies, embedded):
- parts[i].embedding = embeds
- # embed text
- for (i, part) in enumerate(parts):
- if not part.is_image:
- parts[i].embedding = self.pipeline.embed_tokens(part.input_ids)
- return parts
-
- def _remove_old_images(self, parts: List[PromptPart], params: dict) -> List[PromptPart]:
- if params['add_all_images_to_prompt']:
- return parts
- already_added = False
- for i, part in reversed(list(enumerate(parts))):
- if part.is_image:
- if already_added:
- parts[i].embedding = self.pipeline.placeholder_embeddings()
- else:
- already_added = True
- return parts
-
- def forward(self, prompt: str, state: Any, params: dict):
- prompt_parts = self._split_prompt(prompt, True)
- prompt_parts = self._encode_text(state, prompt_parts)
- prompt_parts = self._embed(prompt_parts)
- prompt_parts = self._remove_old_images(prompt_parts, params)
- embeds = tuple(part.embedding for part in prompt_parts)
- ids = tuple(part.input_ids for part in prompt_parts)
- input_embeds = torch.cat(embeds, dim=0)
- input_ids = torch.cat(ids, dim=0)
- return prompt, input_ids, input_embeds, self._num_images(prompt_parts)
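-
-# --- Usage sketch (not part of the original module) ---
-# Only runs inside text-generation-webui with a multimodal pipeline loaded; the
-# pipeline name below is hypothetical. `state` must provide 'add_bos_token' and
-# `params` must provide 'add_all_images_to_prompt' (both used above).
-#
-#     embedder = MultimodalEmbedder(params={'pipeline': 'llava-7b'})   # hypothetical pipeline name
-#     prompt, input_ids, input_embeds, n_images = embedder.forward(prompt_text, state, params)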
diff --git a/spaces/lewisliuX123/wechatglm_demo/bot/baidu/baidu_unit_bot.py b/spaces/lewisliuX123/wechatglm_demo/bot/baidu/baidu_unit_bot.py
deleted file mode 100644
index a84ac57c9b7843a00e689b662807c9ec4710d6af..0000000000000000000000000000000000000000
--- a/spaces/lewisliuX123/wechatglm_demo/bot/baidu/baidu_unit_bot.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# encoding:utf-8
-
-import requests
-from bot.bot import Bot
-
-
-# Baidu UNIT dialogue API (works, but fairly limited in capability)
-class BaiduUnitBot(Bot):
- def reply(self, query, context=None):
- token = self.get_token()
- url = 'https://aip.baidubce.com/rpc/2.0/unit/service/v3/chat?access_token=' + token
- post_data = "{\"version\":\"3.0\",\"service_id\":\"S73177\",\"session_id\":\"\",\"log_id\":\"7758521\",\"skill_ids\":[\"1221886\"],\"request\":{\"terminal_id\":\"88888\",\"query\":\"" + query + "\", \"hyper_params\": {\"chat_custom_bot_profile\": 1}}}"
- print(post_data)
- headers = {'content-type': 'application/x-www-form-urlencoded'}
- response = requests.post(url, data=post_data.encode(), headers=headers)
- if response:
- return response.json()['result']['context']['SYS_PRESUMED_HIST'][1]
-
- def get_token(self):
- access_key = 'YOUR_ACCESS_KEY'
- secret_key = 'YOUR_SECRET_KEY'
- host = 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=' + access_key + '&client_secret=' + secret_key
- response = requests.get(host)
- if response:
- print(response.json())
- return response.json()['access_token']
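-
-# --- Usage sketch (not part of the original module) ---
-# Requires real Baidu AIP credentials in get_token(); the service/skill IDs are
-# hard-coded in reply() above.
-#
-#     bot = BaiduUnitBot()
-#     print(bot.reply("你好"))   # prints the bot's reply text, or None on failure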
diff --git a/spaces/lightli/bingo-newbing/src/lib/hooks/chat-history.ts b/spaces/lightli/bingo-newbing/src/lib/hooks/chat-history.ts
deleted file mode 100644
index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000
--- a/spaces/lightli/bingo-newbing/src/lib/hooks/chat-history.ts
+++ /dev/null
@@ -1,62 +0,0 @@
-import { zip } from 'lodash-es'
-import { ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { Storage } from '../storage'
-
-/**
- * conversations:$botId => Conversation[]
- * conversation:$botId:$cid:messages => ChatMessageModel[]
- */
-
-interface Conversation {
- id: string
- createdAt: number
-}
-
-type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] }
-
-async function loadHistoryConversations(botId: BotId): Promise<Conversation[]> {
- const key = `conversations:${botId}`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-async function deleteHistoryConversation(botId: BotId, cid: string) {
- const conversations = await loadHistoryConversations(botId)
- const newConversations = conversations.filter((c) => c.id !== cid)
- await Storage.set({ [`conversations:${botId}`]: newConversations })
-}
-
-async function loadConversationMessages(botId: BotId, cid: string): Promise<ChatMessageModel[]> {
- const key = `conversation:${botId}:${cid}:messages`
- const { [key]: value } = await Storage.get(key)
- return value || []
-}
-
-export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) {
- const conversations = await loadHistoryConversations(botId)
- if (!conversations.some((c) => c.id === cid)) {
- conversations.unshift({ id: cid, createdAt: Date.now() })
- await Storage.set({ [`conversations:${botId}`]: conversations })
- }
- const key = `conversation:${botId}:${cid}:messages`
- await Storage.set({ [key]: messages })
-}
-
-export async function loadHistoryMessages(botId: BotId): Promise<ConversationWithMessages[]> {
- const conversations = await loadHistoryConversations(botId)
- const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id)))
- return zip(conversations, messagesList).map(([c, messages]) => ({
- id: c!.id,
- createdAt: c!.createdAt,
- messages: messages!,
- }))
-}
-
-export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) {
- const messages = await loadConversationMessages(botId, conversationId)
- const newMessages = messages.filter((m) => m.id !== messageId)
- await setConversationMessages(botId, conversationId, newMessages)
- if (!newMessages.length) {
- await deleteHistoryConversation(botId, conversationId)
- }
-}
diff --git a/spaces/limcheekin/Mistral-7B-Instruct-v0.1-GGUF/Dockerfile b/spaces/limcheekin/Mistral-7B-Instruct-v0.1-GGUF/Dockerfile
deleted file mode 100644
index 09b60df1dc5c50039a507eb3fad3e1d200bacc00..0000000000000000000000000000000000000000
--- a/spaces/limcheekin/Mistral-7B-Instruct-v0.1-GGUF/Dockerfile
+++ /dev/null
@@ -1,35 +0,0 @@
-# Grab a fresh copy of the Python image
-FROM python:3.11-slim
-
-# Install build and runtime dependencies
-RUN apt-get update && \
- apt-get install -y \
- libopenblas-dev \
- ninja-build \
- build-essential \
- pkg-config \
- curl
-
-RUN pip install -U pip setuptools wheel && \
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install --verbose llama-cpp-python[server]
-
-# Download model
-RUN mkdir model && \
- curl -L https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf -o model/gguf-model.bin
-
-COPY ./start_server.sh ./
-COPY ./main.py ./
-COPY ./index.html ./
-
-# Make the server start script executable
-RUN chmod +x ./start_server.sh
-
-# Set environment variable for the host
-ENV HOST=0.0.0.0
-ENV PORT=7860
-
-# Expose a port for the server
-EXPOSE ${PORT}
-
-# Run the server start script
-CMD ["/bin/sh", "./start_server.sh"]
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Adobe Premiere Pro CC 2019 (v13.0.2.38) RePack [MULTILANG] X64 Keygen.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Adobe Premiere Pro CC 2019 (v13.0.2.38) RePack [MULTILANG] X64 Keygen.md
deleted file mode 100644
index 016ccb807e86407212dfbc56780443e1a42cf120..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Adobe Premiere Pro CC 2019 (v13.0.2.38) RePack [MULTILANG] X64 Keygen.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
Adobe Premiere Pro CC 2019 (v13.0.2.38) RePack [MULTILANG] x64 keygen
-
-Adobe Premiere CC 2019 10.1.2 Activation Keys Full Version For Windows x64,
-
-Adobe Premiere Pro CC 2019 10.1.2 Activation Keys Full Version For Windows x64,
-
-Sildenafil Citrate 100mg Buy Online
-
-He spent another season with the Tampa Bay Buccaneers but retired after the 2009 season after announcing his retirement. Prior to the start of the 2014 season, Mayhew was promoted to the general manager position after the firing of General Manager Ted Thompson. This was Mayhew’s first full season as the general manager of the Detroit Lions.
-
-Thanks, I’ve recently been looking for info about this topic for a while and yours is the greatest I’ve came upon so far. However, what concerning the bottom line? Are you certain in regards to the source?
-
-An interesting discussion is price comment. I believe that you ought to write more about this subject matter, it may not be a taboo subject but typically people are not sufficient to speak on such topics. To the next. Cheers
-
-Sweet blog! I found it while searching on Yahoo News. Do you have any tips on how to get listed in Yahoo News? I’ve been trying for a while but I never seem to get there! Anyway, I’ve bookmarked it in my google bookmarks to come back then.
-
-I’m really enjoying the design and layout of your site. It’s a very easy on the eyes which makes it much more pleasant for me to come here and visit more often. Did you hire out a developer to create your theme? Great work!
-
-You are so interesting! I do not suppose I’ve truly read through something like this before. So good to discover somebody with some genuine thoughts on this subject matter. Seriously.. thanks for starting this up. This web site is something that is needed on the internet, someone with some originality!
-
-I’ve been browsing online more than 3 hours today, yet I never found any interesting article like yours. It is lovely worth enough for me. In my opinion, if all website owners and bloggers made good content as you did, the net will be a lot more useful than ever before.
-
-I have to point out my appreciation for your kindness in support of all those that need help on this one subject matter. Your real commitment to passing the solution all through appeared to be incredibly powerful and have constantly enabled regular 4fefd39f24
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/CiberCut 5 6 MAX DONGLE CRACKED SiGNMAKER ZIP.md b/spaces/lincquiQcaudo/Top-20-Diffusion/CiberCut 5 6 MAX DONGLE CRACKED SiGNMAKER ZIP.md
deleted file mode 100644
index d49d9f3d9120b57188bfd35dba83fd8228561000..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/CiberCut 5 6 MAX DONGLE CRACKED SiGNMAKER ZIP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-EaseUS Data Recovery key. FUIERUI-REUIE83UW-ERIOE93-TRIOE93. CKSKQ0-WKSDOWLQ-SDCNX-W02917. Easeus data recovery wizard serial number. D3KPEX-WKSDOWLQ-SDCNX-W02917. Easeus data recovery wizard serial number. NCSF-WKSDOWLQ-SDCNX-W02917. Easeus data recovery wizard serial number. B3J9B4-WKSDOWL.Q-SDCNX-W02917. Easeus data recovery wizard serial number. J2K9W2-WKSDOWLQ-SDCNX-W02917. Easeus data recovery wizard serial number. C9J9B4-WKSDOWLQ-SDCNX-W02917. Easeus data recovery wizard serial number. C3JP9D-4WKSDOWLQ-SDCNX-W02917. Easeus data recovery wizard serial number. 3J9B4-WKSDOWL 8a78ff9644
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/M3 Raw Drive Recovery 52 Licence Key TOP Keygen.md b/spaces/lincquiQcaudo/Top-20-Diffusion/M3 Raw Drive Recovery 52 Licence Key TOP Keygen.md
deleted file mode 100644
index 263263db669bb6ed08a4909cc1284d9b033a7e45..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/M3 Raw Drive Recovery 52 Licence Key TOP Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Based on the 1:250,000 Geological Series Sheet (1976) Sheet SF 52-10, other ... Configuration API is only available if the configuration data is in the ... For those wishing to make tweaks to Regolith, such as changing the meta key binding or ... two hotkeys options and serial port options (Serial port is for debugging PearPC). 4d29de3e1b
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Masino Extensions For Phpmaker 54 UPD.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Masino Extensions For Phpmaker 54 UPD.md
deleted file mode 100644
index 452cd5794665851372d8e447e0551716e01f0245..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Masino Extensions For Phpmaker 54 UPD.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Ibm-adcd-zos-113 is a code name for the latest edition of the Application Development and Control Environment (ADCD) for z/OS, a software package that provides a complete set of tools and products for developing, testing, and debugging applications on IBM Z systems.
-
ADCD z/OS V2R5 May Edition of 2022 was released in May 2022 and contains many new features and enhancements that can help you improve your productivity and performance. Some of the highlights include:
The addition of several new products, such as IBM Z Monitoring Suite V1.2.0, IBM Open Enterprise SDK for GO 1.1.7, IBM Application Discovery Connect for Mainframe v6.0.2, IBM Urbancode Deploy for z/OS v7.2.0, IBM Open Enterprise SDK for Python 3.A.0, IBM Engineering Workflow Management v7.0.2, IBM z Development and Test Environment Enterprise v13.0.0, IBM XML Toolkit for z/OS v1.11.0, IBM Data Virtualization Manager 1.1.0, and IBM MQ for z/OS CD V9.2.2.
-
The update of all z/OS base, z/OS products and middleware to PUT2203 / RSU2203 service levels.
-
The increase of all z/OS base, z/OS products and middleware volume size to mod-9 (10,017 cylinders) to provide more free space for expansion.
-
The availability of Db2 v12 Full Instance workflow provision on the Market Place.
-
The discontinuation of distribution via DVDs and the exclusive availability of download via the internet.
ADCD z/OS V2R5 May Edition of 2022 is not only a powerful tool for application development and testing, but also a great way to explore and experience the latest features and capabilities of z/OS and IBM Z systems. You can use ADCD to learn new skills, experiment with new technologies, or prototype new solutions on a realistic z/OS environment.
-
-
ADCD z/OS V2R5 May Edition of 2022 is compatible with IBM Cloud Pak for Applications, a hybrid cloud solution that enables you to build, deploy, and manage applications across multiple platforms and environments. With ADCD and IBM Cloud Pak for Applications, you can leverage the best of both worlds: the scalability, security, and reliability of IBM Z systems and the flexibility, agility, and innovation of cloud-native development.
-
ADCD z/OS V2R5 May Edition of 2022 is available for download to qualified IBM PartnerWorld members and IBM Z software customers. To get started with ADCD, you need to have a compatible hardware or software emulator that can run z/OS images. You also need to have enough disk space and memory to accommodate the ADCD volumes. For more details on the system requirements and installation instructions, please refer to the ADCD Release Guide z/OS V2R5 May Edition of 2022.
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Moonu Full !!LINK!! Movie Hd 1080p Blu Ray 23l.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Moonu Full !!LINK!! Movie Hd 1080p Blu Ray 23l.md
deleted file mode 100644
index de7a9f84b30896bd56f9b7d2c7ea0a4b4d25e0d2..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Moonu Full !!LINK!! Movie Hd 1080p Blu Ray 23l.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-Here is a possible title and article with HTML formatting for the keyword "Moonu Full Movie Hd 1080p Blu Ray 23l":
-
-
Moonu: A Romantic Tragedy That Will Make You Cry
-
Moonu is a 2012 Tamil romantic drama film directed by Aishwarya R. Dhanush and starring Dhanush and Shruti Haasan in the lead roles. The film follows the love story of Ram (Dhanush) and Janani (Shruti Haasan), who fall in love in their school days and get married after their graduation. However, their marital bliss is shattered when Ram suffers from bipolar disorder and commits suicide, leaving Janani devastated.
-
The film was praised for its realistic portrayal of mental illness, its emotional impact, and its musical score by Anirudh Ravichander, which included the viral hit song "Why This Kolaveri Di". The film was also a commercial success, grossing over â¹60 crore worldwide. The film was dubbed into Telugu as 3 and Hindi as 3: The Unforgettable Love Story.
If you are looking for a high-quality version of this film, you can download or watch it online in HD 1080p Blu Ray format from various websites. However, be careful of the illegal and pirated sources that may harm your device or violate the copyright laws. Here are some of the legitimate and safe websites where you can find Moonu Full Movie Hd 1080p Blu Ray 23l:
-
-
Isaimini: This website offers a wide range of Tamil movies and songs in various formats and resolutions. You can download or stream Moonu Full Movie Hd 1080p Blu Ray 23l from this website for free.
-
Wixsite: This website is a personal blog that provides reviews and links to various Tamil movies. You can find Moonu Full Movie Hd 1080p Blu Ray 23l on this website along with other information about the film.
-
Sway: This website is a Microsoft service that allows users to create and share interactive presentations. You can find Moonu Full Movie Hd 1080p Blu Ray 23l on this website as a part of a presentation about the film.
-
-
Moonu is a film that will touch your heart and make you cry with its tragic story and beautiful music. If you are a fan of romantic dramas, you should not miss this film. You can watch or download Moonu Full Movie Hd 1080p Blu Ray 23l from any of the websites mentioned above and enjoy this masterpiece.
Here are a few more paragraphs with HTML formatting for the keyword "Moonu Full Movie Hd 1080p Blu Ray 23l":
-
-
Moonu: A Critical and Audience Success
-
Moonu received positive reviews from critics and audiences alike, who praised the film's direction, performances, screenplay, and music. The film was also nominated for several awards, including the Filmfare Awards South, the Vijay Awards, the SIIMA Awards, and the Edison Awards. The film won six awards at the Vijay Awards, including Best Director, Best Actor, Best Actress, Best Music Director, Best Lyricist, and Best Song of the Year. The film also won two awards at the SIIMA Awards, including Best Actor and Best Music Director.
-
The film was also a box office hit, earning â¹60 crore worldwide on a budget of â¹12 crore. The film was released in over 450 screens across India and overseas. The film had a strong opening weekend, collecting â¹19 crore in India and â¹6 crore overseas. The film also had a good run in the overseas markets, especially in Malaysia, Singapore, Sri Lanka, and the United States.
-
-
Moonu: A Cult Classic That Inspired Many
-
Moonu is considered to be a cult classic among Tamil cinema lovers, who have embraced the film's emotional story and memorable characters. The film has also inspired many fans to create fan art, fan fiction, memes, videos, and tribute songs based on the film. The film's dialogues and scenes have become popular among the youth, who often quote them or reenact them on social media platforms.
-
The film has also influenced many filmmakers and actors, who have expressed their admiration for the film and its makers. Some of the celebrities who have praised the film include Rajinikanth, Kamal Haasan, A.R. Rahman, Shankar, Mani Ratnam, Karthik Subbaraj, Vijay Sethupathi, Suriya, Trisha Krishnan, Samantha Akkineni, Nivin Pauly, Dulquer Salmaan, Anushka Shetty, and Taapsee Pannu.
- 7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/New Moon The Graphic Novel Pdf.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/New Moon The Graphic Novel Pdf.md
deleted file mode 100644
index b317568f209abcf3d48b64ce7092527d2e9ffcc8..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/New Moon The Graphic Novel Pdf.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
New Moon: The Graphic Novel, Vol. 1 - A Review
-
New Moon: The Graphic Novel, Vol. 1 is the first part of the adaptation of Stephenie Meyer's bestselling novel New Moon, the second book in the Twilight Saga. It follows the story of Bella Swan, a human girl who falls in love with Edward Cullen, a vampire. In this volume, Bella and Edward face new challenges, such as a painful separation, the appearance of werewolves, a vengeful vampire, and a dangerous encounter with the Volturi, the royal family of vampires.
The graphic novel is adapted by Young Kim, who also illustrated the first Twilight graphic novel. Kim's art style is realistic and detailed, capturing the emotions and expressions of the characters. The color palette is mostly dark and muted, reflecting the somber mood of the story. The graphic novel also includes some original scenes and dialogue that are not in the original novel, adding some depth and insight to the characters and their relationships.
-
New Moon: The Graphic Novel, Vol. 1 is a faithful and captivating adaptation of Meyer's novel, appealing to both fans and newcomers of the Twilight Saga. It is a romantic and thrilling story that explores the themes of love, loss, identity, and choice. The graphic novel is available in pdf format online from various sources[^1^] [^2^] [^3^], or can be purchased in print from Yen Press[^2^].
-
-
The graphic novel follows the events of the novel closely, starting with Bella's 18th birthday party at the Cullens' house. There, she accidentally cuts her finger and triggers a bloodlust in one of the vampires, Jasper. Edward decides to leave Bella for her own safety, breaking her heart and leaving her in a state of depression. Months later, she finds some solace in her friendship with Jacob Black, a Quileute boy who has a secret of his own: he is a werewolf, sworn to protect humans from vampires.
-
-
Bella also discovers that she can hear Edward's voice in her head whenever she does something reckless or dangerous. She starts to seek out adrenaline rushes, such as riding a motorcycle or jumping off a cliff, to feel closer to him. However, this also puts her in more danger, as she attracts the attention of Victoria, a vampire who wants to kill her as revenge for Edward killing her mate, James. Meanwhile, Edward believes that Bella is dead after hearing a false report from his sister Alice. He decides to end his own life by provoking the Volturi, the most powerful and feared vampires in the world.
-
Alice returns to Forks and finds Bella alive. She tells her about Edward's plan and they rush to Italy to stop him. There, they face the Volturi, who are intrigued by Bella's ability to resist their mental powers. They let them go on the condition that Bella will be turned into a vampire soon, or else they will kill her. Edward and Bella reunite and return to Forks, where they have to deal with the consequences of their actions and choices.
-
-
New Moon: The Graphic Novel, Vol. 1 is a compelling and emotional read that will keep you hooked until the end. The graphic novel captures the essence of the novel and adds some visual flair and creativity. The characters are well-developed and relatable, and the plot is full of twists and turns. The graphic novel also explores some important themes, such as the nature of love, the consequences of choices, and the value of life.
-
If you are a fan of the Twilight Saga, you will enjoy this graphic novel as a new way to experience the story. If you are new to the series, you will find this graphic novel as a good introduction to the world and the characters. Either way, you will be immersed in a captivating and romantic story that will leave you wanting more.
-
New Moon: The Graphic Novel, Vol. 1 is a must-read for anyone who loves fantasy, romance, and drama. It is a graphic novel that will make you laugh, cry, and swoon. It is a graphic novel that will stay with you long after you finish it.
7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sound Forge Noise Reduction Plugin Keygen.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sound Forge Noise Reduction Plugin Keygen.md
deleted file mode 100644
index 1de96c360aa09fa7a6d5cf092d6faa1b46b46e66..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sound Forge Noise Reduction Plugin Keygen.md
+++ /dev/null
@@ -1,80 +0,0 @@
-## Sound Forge Noise Reduction Plugin Keygen
-
-
-
-
-
-
-
-
-
-**CLICK HERE ☑ [https://maudaracte.blogspot.com/?file=2tyUwR](https://maudaracte.blogspot.com/?file=2tyUwR)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Use Sound Forge Noise Reduction Plugin to Clean Up Your Audio
-
-
-
-If you are looking for a way to remove unwanted noise from your audio recordings, you might want to try Sound Forge Noise Reduction Plugin. This is a powerful tool that can help you reduce hiss, hum, clicks, pops, crackles, and other audio artifacts. In this article, we will show you how to use Sound Forge Noise Reduction Plugin to clean up your audio and improve its quality.
-
-
-
-## What is Sound Forge Noise Reduction Plugin?
-
-
-
-Sound Forge Noise Reduction Plugin is a plugin that works with Sound Forge Pro, a professional audio editing software. It uses advanced algorithms to analyze and remove noise from your audio files. It has four modules: Noise Reduction, Click and Crackle Removal, Clipped Peak Restoration, and Audio Restoration. Each module has its own settings and presets that you can customize according to your needs.
-
-
-
-## How to Use Sound Forge Noise Reduction Plugin?
-
-
-
-To use Sound Forge Noise Reduction Plugin, you need to have Sound Forge Pro installed on your computer. You also need to have a valid license key for the plugin, which you can get from various sources online. Here are the steps to use Sound Forge Noise Reduction Plugin:
-
-
-
-1. Open Sound Forge Pro and load the audio file that you want to edit.
-
-2. Select the part of the audio that contains noise. You can use the zoom and selection tools to make precise selections.
-
-3. Go to Tools > Noise Reduction > Noise Reduction (Process). This will open the Noise Reduction module.
-
-4. Click on Capture Noise Print. This will analyze the selected audio and create a noise profile that will be used to remove noise.
-
-5. Adjust the settings of the Noise Reduction module. You can use the sliders and buttons to change the noise reduction level, sensitivity, smoothing, attack, release, and other parameters. You can also choose from different presets that are suitable for different types of noise.
-
-6. Click on Preview to listen to the result of the noise reduction. You can compare it with the original audio by clicking on Bypass.
-
-7. If you are satisfied with the result, click on OK to apply the noise reduction. If not, you can go back and tweak the settings until you get the desired outcome.
-
-8. Repeat steps 2-7 for other parts of the audio that need noise reduction. You can also use other modules of the plugin to remove clicks, crackles, clipped peaks, and restore audio quality.
-
-9. Save your edited audio file as a new file or overwrite the original one.
-
-
-
-## Conclusion
-
-
-
-Sound Forge Noise Reduction Plugin is a useful tool that can help you enhance your audio recordings by removing unwanted noise. It has four modules that can handle different types of noise and audio problems. It is easy to use and has customizable settings and presets that can suit your needs. However, you need to have Sound Forge Pro and a valid license key for the plugin to use it. You can find more information about Sound Forge Noise Reduction Plugin on its official website or online forums.
-
- 145887f19f
-
-
-
-
-
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Temtem Activation Code [Password] LINK.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Temtem Activation Code [Password] LINK.md
deleted file mode 100644
index a06c1801e9e75febad3b5f99598d317dbcf2764a..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Temtem Activation Code [Password] LINK.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-```html
-
How to Get Temtem Activation Code [Password] for Free
-
Temtem is a massively multiplayer creature-collection adventure game that has been gaining popularity among gamers. If you want to play Temtem on Steam, you need an activation code [password] that certifies that your copy of the game is original. But how can you get one for free?
In this article, I will show you some legit ways to get a free Temtem CD key that you can redeem on Steam and enjoy the game without spending a dime.
-
What is Temtem?
-
Temtem is a game by Crema, a Spanish game developer. It is inspired by the Pokemon franchise, but with some unique features and mechanics. In Temtem, you can explore the beautiful Islands of the Airborne Archipelago, catch and collect hundreds of different creatures called Temtem, battle other tamers, customize your house, join a friend's adventure or explore the dynamic online world.
-
Temtem was released on Steam Early Access on January 21, 2020, and has received positive reviews from critics and players alike. The game is still in development and will receive regular updates and new content until its full release.
-
How to Get Temtem Activation Code [Password] for Free
-
There are several ways to get a free Temtem CD key that you can use to activate the game on Steam. Here are some of them:
-
-
Use a valid Steam keys website. There are some websites that offer free Steam keys for various games, including Temtem. These keys are provided by the developers and are legit and safe to use. However, these websites may have limited availability and may require you to complete some tasks or surveys to get the keys. One example of such a website is ValidSteamKeys.com, where you can find a list of free Temtem CD keys that you can copy and paste on Steam[^6^].
-
Participate in giveaways or contests. Another way to get a free Temtem CD key is to join giveaways or contests that are hosted by various websites, blogs, YouTube channels, Twitch streamers or other platforms. These giveaways or contests may have different rules and requirements, such as subscribing, liking, commenting, sharing or following. You may also need to enter your email address or other personal information to participate. Some examples of giveaways or contests for Temtem CD keys are this one by Gleam.io or this one by MMORPG.com.
-
Use key generator software. A key generator is a program that claims to create random, valid activation codes [passwords] for various games, including Temtem. These programs are usually easy to use and do not require any installation or registration. However, they may also be illegal, unsafe or unreliable. Some key generators contain viruses, malware or spyware that can harm your computer or steal your personal information. Others generate invalid or duplicate codes that will not work on Steam. One example of a key generator for Temtem is this one, which claims to generate a Temtem activation code [password] in PDF format[^1^].
-
-
Conclusion
-
Temtem is a fun and engaging game that you can play on Steam if you have an activation code [password]. There are some ways to get a free Temtem CD key, but you should be careful and cautious when using them. Some of them may be legit and safe, while others may be illegal, unsafe or unreliable.
-
-
The best way to get a Temtem activation code [password] is to buy it from the official store.
e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/README.md b/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/README.md
deleted file mode 100644
index 322f4b96fc8498bd007c2ff03ba97effdf222025..0000000000000000000000000000000000000000
--- a/spaces/nihaldsouza1/clearlydefined_license_summarizer/src/README.md
+++ /dev/null
@@ -1,34 +0,0 @@
-## SRC Directory
-
-### [abstractive_sum.py](/src/abstractive_sum.py)
-- Contains all the necessary functions for the abstractive summarizer.
-- Functions from this script are called when 'abstractive' or 'both' are selected as methods for summarization from the UI.
-
-### [clean.py](/src/clean.py)
-- Contains all the necessary functions for text cleaning and preprocessing.
-- Cleaning steps such as removal of HTML, PHP, JSON, and RTF artifacts, as well as other elements such as URLs and unwanted characters, are defined as functions here.
-- Segmentation functions, such as sentence and paragraph segmentation, are defined here as well.
-
-### [diff.py](/src/diff.py)
-- Responsible for generating the diff when 'Display Cleaned License + Diff' is selected under 'Cleaned License View' option in the UI.
-
-### [doc2vec.py](/src/doc2vec.py)
-- Preprocesses the (cleaned) input text such that it can be converted into a vector. It then compares this vector against 41 other vectors representing a list of known licenses from [choosealicense.com](https://www.choosealicense.com/appendix).
-- The 41 license vectors are pre-trained into a model and stored [here](/models/).
-
-### [evaluate.py](/src/evaluate.py)
-- Contains functions used to calculate performance metrics for the custom TextRank algorithm, such as precision, recall, and F1 score @k.
-
-### [parameters.py](/src/parameters.py)
-- Contains the custom vocabulary and custom scores for the custom TextRank algorithm.
-- Contains the string macros for UI options and colors.
-- Also contains parameters and hyperparameters for the complete application.
-
-### [read_data.py](/src/read_data.py)
-- Contains functions for ingestion of data from files stored in the [data](/data/) folder. This includes information such as license names, properties and labels that are processed into suitable data structures such as dataframes and dictionaries.
-
-### [textrank.py](/src/textrank.py)
-- Contains functions needed to run the custom TextRank algorithm for extractive summarization.
-
-### [tfidf.py](/src/tfidf.py)
-- Used during EDA to calculate the TF-IDF scores to obtain the most important words while developing the custom TextRank algorithm.
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/structures/chart_result.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/structures/chart_result.py
deleted file mode 100644
index 003933d03d153d045c0bf551c465bc7a224d90cb..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/structures/chart_result.py
+++ /dev/null
@@ -1,183 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from dataclasses import dataclass
-from typing import Any, Optional, Tuple
-import torch
-
-
-@dataclass
-class DensePoseChartResult:
- """
- DensePose results for chart-based methods represented by labels and inner
- coordinates (U, V) of individual charts. Each chart is a 2D manifold
- that has an associated label and is parameterized by two coordinates U and V.
- Both U and V take values in [0, 1].
- Thus the results are represented by two tensors:
- - labels (tensor [H, W] of long): contains estimated label for each pixel of
- the detection bounding box of size (H, W)
- - uv (tensor [2, H, W] of float): contains estimated U and V coordinates
- for each pixel of the detection bounding box of size (H, W)
- """
-
- labels: torch.Tensor
- uv: torch.Tensor
-
- def to(self, device: torch.device):
- """
- Transfers all tensors to the given device
- """
- labels = self.labels.to(device)
- uv = self.uv.to(device)
- return DensePoseChartResult(labels=labels, uv=uv)
-
-
-@dataclass
-class DensePoseChartResultWithConfidences:
- """
- We add confidence values to DensePoseChartResult
- Thus the results are represented by two tensors:
- - labels (tensor [H, W] of long): contains estimated label for each pixel of
- the detection bounding box of size (H, W)
- - uv (tensor [2, H, W] of float): contains estimated U and V coordinates
- for each pixel of the detection bounding box of size (H, W)
- Plus one [H, W] tensor of float for each confidence type
- """
-
- labels: torch.Tensor
- uv: torch.Tensor
- sigma_1: Optional[torch.Tensor] = None
- sigma_2: Optional[torch.Tensor] = None
- kappa_u: Optional[torch.Tensor] = None
- kappa_v: Optional[torch.Tensor] = None
- fine_segm_confidence: Optional[torch.Tensor] = None
- coarse_segm_confidence: Optional[torch.Tensor] = None
-
- def to(self, device: torch.device):
- """
- Transfers all tensors to the given device, except if their value is None
- """
-
- def to_device_if_tensor(var: Any):
- if isinstance(var, torch.Tensor):
- return var.to(device)
- return var
-
- return DensePoseChartResultWithConfidences(
- labels=self.labels.to(device),
- uv=self.uv.to(device),
- sigma_1=to_device_if_tensor(self.sigma_1),
- sigma_2=to_device_if_tensor(self.sigma_2),
- kappa_u=to_device_if_tensor(self.kappa_u),
- kappa_v=to_device_if_tensor(self.kappa_v),
- fine_segm_confidence=to_device_if_tensor(self.fine_segm_confidence),
- coarse_segm_confidence=to_device_if_tensor(self.coarse_segm_confidence),
- )
-
-
-@dataclass
-class DensePoseChartResultQuantized:
- """
- DensePose results for chart-based methods represented by labels and quantized
- inner coordinates (U, V) of individual charts. Each chart is a 2D manifold
- that has an associated label and is parameterized by two coordinates U and V.
- Both U and V take values in [0, 1].
- Quantized coordinates Uq and Vq have uint8 values which are obtained as:
- Uq = U * 255 (hence 0 <= Uq <= 255)
- Vq = V * 255 (hence 0 <= Vq <= 255)
- Thus the results are represented by one tensor:
- - labels_uv_uint8 (tensor [3, H, W] of uint8): contains estimated label
- and quantized coordinates Uq and Vq for each pixel of the detection
- bounding box of size (H, W)
- """
-
- labels_uv_uint8: torch.Tensor
-
- def to(self, device: torch.device):
- """
- Transfers all tensors to the given device
- """
- labels_uv_uint8 = self.labels_uv_uint8.to(device)
- return DensePoseChartResultQuantized(labels_uv_uint8=labels_uv_uint8)
-
-
-@dataclass
-class DensePoseChartResultCompressed:
- """
- DensePose results for chart-based methods represented by a PNG-encoded string.
- The tensor of quantized DensePose results of size [3, H, W] is considered
- as an image with 3 color channels. PNG compression is applied and the result
- is stored as a Base64-encoded string. The following attributes are defined:
- - shape_chw (tuple of 3 int): contains shape of the result tensor
- (number of channels, height, width)
- - labels_uv_str (str): contains Base64-encoded results tensor of size
- [3, H, W] compressed with PNG compression methods
- """
-
- shape_chw: Tuple[int, int, int]
- labels_uv_str: str
-
-
-def quantize_densepose_chart_result(result: DensePoseChartResult) -> DensePoseChartResultQuantized:
- """
- Applies quantization to DensePose chart-based result.
-
- Args:
- result (DensePoseChartResult): DensePose chart-based result
- Return:
- Quantized DensePose chart-based result (DensePoseChartResultQuantized)
- """
- h, w = result.labels.shape
- labels_uv_uint8 = torch.zeros([3, h, w], dtype=torch.uint8, device=result.labels.device)
- labels_uv_uint8[0] = result.labels
- labels_uv_uint8[1:] = (result.uv * 255).clamp(0, 255).byte()
- return DensePoseChartResultQuantized(labels_uv_uint8=labels_uv_uint8)
-
-
-def compress_quantized_densepose_chart_result(
- result: DensePoseChartResultQuantized,
-) -> DensePoseChartResultCompressed:
- """
- Compresses quantized DensePose chart-based result
-
- Args:
- result (DensePoseChartResultQuantized): quantized DensePose chart-based result
- Return:
- Compressed DensePose chart-based result (DensePoseChartResultCompressed)
- """
- import base64
- import numpy as np
- from io import BytesIO
- from PIL import Image
-
- labels_uv_uint8_np_chw = result.labels_uv_uint8.cpu().numpy()
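-    # Move channels last (CHW -> HWC) so PIL can treat the array as a 3-channel image.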
- labels_uv_uint8_np_hwc = np.moveaxis(labels_uv_uint8_np_chw, 0, -1)
- im = Image.fromarray(labels_uv_uint8_np_hwc)
- fstream = BytesIO()
- im.save(fstream, format="png", optimize=True)
- labels_uv_str = base64.encodebytes(fstream.getvalue()).decode()
- shape_chw = labels_uv_uint8_np_chw.shape
- return DensePoseChartResultCompressed(labels_uv_str=labels_uv_str, shape_chw=shape_chw)
-
-
-def decompress_compressed_densepose_chart_result(
- result: DensePoseChartResultCompressed,
-) -> DensePoseChartResultQuantized:
- """
- Decompresses DensePose chart-based result encoded into a base64 string
-
- Args:
- result (DensePoseChartResultCompressed): compressed DensePose chart result
- Return:
- Quantized DensePose chart-based result (DensePoseChartResultQuantized)
- """
- import base64
- import numpy as np
- from io import BytesIO
- from PIL import Image
-
- fstream = BytesIO(base64.decodebytes(result.labels_uv_str.encode()))
- im = Image.open(fstream)
- labels_uv_uint8_np_chw = np.moveaxis(np.array(im, dtype=np.uint8), -1, 0)
- return DensePoseChartResultQuantized(
- labels_uv_uint8=torch.from_numpy(labels_uv_uint8_np_chw.reshape(result.shape_chw))
- )
diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/extract_tacotrons_model.py b/spaces/ntt123/vietnam-male-voice-wavegru-tts/extract_tacotrons_model.py
deleted file mode 100644
index c4185f72fda0920e47fdf1417f0fc3650080c1b6..0000000000000000000000000000000000000000
--- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/extract_tacotrons_model.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import pickle
-
-import jax
-
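-# Load the Tacotron checkpoint, drop the optimizer state to shrink the file,
-# move the remaining arrays back to host memory, and overwrite the original pickle.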
-dic = pickle.load(open("./mono_tts_cbhg_small_0700000.ckpt", "rb"))
-del dic["optim_state_dict"]
-dic = jax.device_get(dic)
-pickle.dump(dic, open("./mono_tts_cbhg_small_0700000.ckpt", "wb"))
diff --git a/spaces/nyx-ai/stylegan2-flax-tpu/dataset_utils/images_to_tfrecords.py b/spaces/nyx-ai/stylegan2-flax-tpu/dataset_utils/images_to_tfrecords.py
deleted file mode 100644
index 1620cdadda7649e39b5cbf1fbc8ae30f9251093a..0000000000000000000000000000000000000000
--- a/spaces/nyx-ai/stylegan2-flax-tpu/dataset_utils/images_to_tfrecords.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import tensorflow as tf
-import numpy as np
-from PIL import Image
-from typing import Sequence
-from tqdm import tqdm
-import argparse
-import json
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-def images_to_tfrecords(image_dir, data_dir, has_labels):
- """
- Converts a folder of images to a TFRecord file.
-
- The image directory should have one of the following structures:
-
- If has_labels = False, image_dir should look like this:
-
- path/to/image_dir/
- 0.jpg
- 1.jpg
- 2.jpg
- 4.jpg
- ...
-
-
- If has_labels = True, image_dir should look like this:
-
- path/to/image_dir/
- label0/
- 0.jpg
- 1.jpg
- ...
- label1/
- a.jpg
- b.jpg
- c.jpg
- ...
- ...
-
-
- The labels will be label0 -> 0, label1 -> 1.
-
- Args:
- image_dir (str): Path to images.
- data_dir (str): Path where the TFrecords dataset is stored.
- has_labels (bool): If True, 'image_dir' contains label directories.
-
- Returns:
- (dict): Dataset info.
- """
-
- def _bytes_feature(value):
- return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
-
- def _int64_feature(value):
- return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
-
- os.makedirs(data_dir, exist_ok=True)
- writer = tf.io.TFRecordWriter(os.path.join(data_dir, 'dataset.tfrecords'))
-
- num_examples = 0
- num_classes = 0
-
- if has_labels:
- for label_dir in os.listdir(image_dir):
- if not os.path.isdir(os.path.join(image_dir, label_dir)):
- logger.warning('The image directory should contain one directory for each label.')
- logger.warning('These label directories should contain the image files.')
- if os.path.exists(os.path.join(data_dir, 'dataset.tfrecords')):
- os.remove(os.path.join(data_dir, 'dataset.tfrecords'))
- return
-
- for img_file in tqdm(os.listdir(os.path.join(image_dir, label_dir))):
- file_format = img_file[img_file.rfind('.') + 1:]
- if file_format not in ['png', 'jpg', 'jpeg']:
- continue
-
- #img = Image.open(os.path.join(image_dir, label_dir, img_file)).resize(img_size)
- img = Image.open(os.path.join(image_dir, label_dir, img_file))
- img = np.array(img, dtype=np.uint8)
-
- height = img.shape[0]
- width = img.shape[1]
- channels = img.shape[2]
-
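-                # Store raw uint8 pixel bytes; height/width/channels are saved alongside so readers can reshape.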
- img_encoded = img.tobytes()
-
- example = tf.train.Example(features=tf.train.Features(feature={
- 'height': _int64_feature(height),
- 'width': _int64_feature(width),
- 'channels': _int64_feature(channels),
- 'image': _bytes_feature(img_encoded),
- 'label': _int64_feature(num_classes)}))
-
- writer.write(example.SerializeToString())
- num_examples += 1
-
- num_classes += 1
- else:
- for img_file in tqdm(os.listdir(os.path.join(image_dir))):
- file_format = img_file[img_file.rfind('.') + 1:]
- if file_format not in ['png', 'jpg', 'jpeg']:
- continue
-
- #img = Image.open(os.path.join(image_dir, label_dir, img_file)).resize(img_size)
- img = Image.open(os.path.join(image_dir, img_file))
- img = np.array(img, dtype=np.uint8)
-
- height = img.shape[0]
- width = img.shape[1]
- channels = img.shape[2]
-
- img_encoded = img.tobytes()
-
- example = tf.train.Example(features=tf.train.Features(feature={
- 'height': _int64_feature(height),
- 'width': _int64_feature(width),
- 'channels': _int64_feature(channels),
- 'image': _bytes_feature(img_encoded),
- 'label': _int64_feature(num_classes)})) # dummy label
-
- writer.write(example.SerializeToString())
- num_examples += 1
-
- writer.close()
-
- dataset_info = {'num_examples': num_examples, 'num_classes': num_classes}
- with open(os.path.join(data_dir, 'dataset_info.json'), 'w') as fout:
- json.dump(dataset_info, fout)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--image_dir', type=str, help='Path to the image directory.')
- parser.add_argument('--data_dir', type=str, help='Path where the TFRecords dataset is stored.')
- parser.add_argument('--has_labels', action='store_true', help='If True, image_dir contains label directories.')
-
- args = parser.parse_args()
-
- images_to_tfrecords(args.image_dir, args.data_dir, args.has_labels)
-
diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/RAFT/demo.py b/spaces/oguzakif/video-object-remover/FGT_codes/RAFT/demo.py
deleted file mode 100644
index 096963bdbb36aed3df673f131d6e044d8c6f95ea..0000000000000000000000000000000000000000
--- a/spaces/oguzakif/video-object-remover/FGT_codes/RAFT/demo.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import sys
-import argparse
-import os
-import cv2
-import glob
-import numpy as np
-import torch
-from PIL import Image
-
-from .raft import RAFT
-from .utils import flow_viz
-from .utils.utils import InputPadder
-
-
-
-DEVICE = 'cuda'
-
-def load_image(imfile):
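-    # Read an image file and convert it from HWC uint8 to a CHW float tensor (values kept in [0, 255]).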
- img = np.array(Image.open(imfile)).astype(np.uint8)
- img = torch.from_numpy(img).permute(2, 0, 1).float()
- return img
-
-
-def load_image_list(image_files):
- images = []
- for imfile in sorted(image_files):
- images.append(load_image(imfile))
-
- images = torch.stack(images, dim=0)
- images = images.to(DEVICE)
-
- padder = InputPadder(images.shape)
- return padder.pad(images)[0]
-
-
-def viz(img, flo):
- img = img[0].permute(1,2,0).cpu().numpy()
- flo = flo[0].permute(1,2,0).cpu().numpy()
-
- # map flow to rgb image
- flo = flow_viz.flow_to_image(flo)
- # img_flo = np.concatenate([img, flo], axis=0)
- img_flo = flo
-
- cv2.imwrite('/home/chengao/test/flow.png', img_flo[:, :, [2,1,0]])
- # cv2.imshow('image', img_flo[:, :, [2,1,0]]/255.0)
- # cv2.waitKey()
-
-
-def demo(args):
- model = torch.nn.DataParallel(RAFT(args))
- model.load_state_dict(torch.load(args.model))
-
- model = model.module
- model.to(DEVICE)
- model.eval()
-
- with torch.no_grad():
- images = glob.glob(os.path.join(args.path, '*.png')) + \
- glob.glob(os.path.join(args.path, '*.jpg'))
-
- images = load_image_list(images)
- for i in range(images.shape[0]-1):
- image1 = images[i,None]
- image2 = images[i+1,None]
-
- flow_low, flow_up = model(image1, image2, iters=20, test_mode=True)
- viz(image1, flow_up)
-
-
-def RAFT_infer(args):
- model = torch.nn.DataParallel(RAFT(args))
- model.load_state_dict(torch.load(args.model))
-
- model = model.module
- model.to(DEVICE)
- model.eval()
-
- return model
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_diffusers_to_original_sdxl.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_diffusers_to_original_sdxl.py
deleted file mode 100644
index 1f11ef45706898cf4408fdeecbb3b3249aa45d76..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_diffusers_to_original_sdxl.py
+++ /dev/null
@@ -1,340 +0,0 @@
-# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint.
-# *Only* converts the UNet, VAE, and Text Encoder.
-# Does not convert optimizer state or any other thing.
-
-import argparse
-import os.path as osp
-import re
-
-import torch
-from safetensors.torch import load_file, save_file
-
-
-# =================#
-# UNet Conversion #
-# =================#
-
-unet_conversion_map = [
- # (stable-diffusion, HF Diffusers)
- ("time_embed.0.weight", "time_embedding.linear_1.weight"),
- ("time_embed.0.bias", "time_embedding.linear_1.bias"),
- ("time_embed.2.weight", "time_embedding.linear_2.weight"),
- ("time_embed.2.bias", "time_embedding.linear_2.bias"),
- ("input_blocks.0.0.weight", "conv_in.weight"),
- ("input_blocks.0.0.bias", "conv_in.bias"),
- ("out.0.weight", "conv_norm_out.weight"),
- ("out.0.bias", "conv_norm_out.bias"),
- ("out.2.weight", "conv_out.weight"),
- ("out.2.bias", "conv_out.bias"),
- # the following are for sdxl
- ("label_emb.0.0.weight", "add_embedding.linear_1.weight"),
- ("label_emb.0.0.bias", "add_embedding.linear_1.bias"),
- ("label_emb.0.2.weight", "add_embedding.linear_2.weight"),
- ("label_emb.0.2.bias", "add_embedding.linear_2.bias"),
-]
-
-unet_conversion_map_resnet = [
- # (stable-diffusion, HF Diffusers)
- ("in_layers.0", "norm1"),
- ("in_layers.2", "conv1"),
- ("out_layers.0", "norm2"),
- ("out_layers.3", "conv2"),
- ("emb_layers.1", "time_emb_proj"),
- ("skip_connection", "conv_shortcut"),
-]
-
-unet_conversion_map_layer = []
-# hardcoded number of downblocks and resnets/attentions...
-# would need smarter logic for other networks.
-for i in range(3):
- # loop over downblocks/upblocks
-
- for j in range(2):
- # loop over resnets/attentions for downblocks
- hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
- sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
- unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
-
- if i > 0:
- hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
- sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
- unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
-
- for j in range(4):
- # loop over resnets/attentions for upblocks
- hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
- sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
- unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
-
- if i < 2:
- # no attention layers in up_blocks.0
- hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
- sd_up_atn_prefix = f"output_blocks.{3 * i + j}.1."
- unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
-
- if i < 3:
- # no downsample in down_blocks.3
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
- sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
- unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
-
- # no upsample in up_blocks.3
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}."
- unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
-unet_conversion_map_layer.append(("output_blocks.2.2.conv.", "output_blocks.2.1.conv."))
-
-hf_mid_atn_prefix = "mid_block.attentions.0."
-sd_mid_atn_prefix = "middle_block.1."
-unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
-for j in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{j}."
- sd_mid_res_prefix = f"middle_block.{2*j}."
- unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
-
-def convert_unet_state_dict(unet_state_dict):
- # buyer beware: this is a *brittle* function,
- # and correct output requires that all of these pieces interact in
- # the exact order in which I have arranged them.
- mapping = {k: k for k in unet_state_dict.keys()}
- for sd_name, hf_name in unet_conversion_map:
- mapping[hf_name] = sd_name
- for k, v in mapping.items():
- if "resnets" in k:
- for sd_part, hf_part in unet_conversion_map_resnet:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- for k, v in mapping.items():
- for sd_part, hf_part in unet_conversion_map_layer:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- new_state_dict = {sd_name: unet_state_dict[hf_name] for hf_name, sd_name in mapping.items()}
- return new_state_dict
-
-
-# ================#
-# VAE Conversion #
-# ================#
-
-vae_conversion_map = [
- # (stable-diffusion, HF Diffusers)
- ("nin_shortcut", "conv_shortcut"),
- ("norm_out", "conv_norm_out"),
- ("mid.attn_1.", "mid_block.attentions.0."),
-]
-
-for i in range(4):
- # down_blocks have two resnets
- for j in range(2):
- hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}."
- sd_down_prefix = f"encoder.down.{i}.block.{j}."
- vae_conversion_map.append((sd_down_prefix, hf_down_prefix))
-
- if i < 3:
- hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0."
- sd_downsample_prefix = f"down.{i}.downsample."
- vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix))
-
- hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
- sd_upsample_prefix = f"up.{3-i}.upsample."
- vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix))
-
- # up_blocks have three resnets
- # also, up blocks in hf are numbered in reverse from sd
- for j in range(3):
- hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}."
- sd_up_prefix = f"decoder.up.{3-i}.block.{j}."
- vae_conversion_map.append((sd_up_prefix, hf_up_prefix))
-
-# this part accounts for mid blocks in both the encoder and the decoder
-for i in range(2):
- hf_mid_res_prefix = f"mid_block.resnets.{i}."
- sd_mid_res_prefix = f"mid.block_{i+1}."
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix))
-
-
-vae_conversion_map_attn = [
- # (stable-diffusion, HF Diffusers)
- ("norm.", "group_norm."),
- # the following are for SDXL
- ("q.", "to_q."),
- ("k.", "to_k."),
- ("v.", "to_v."),
- ("proj_out.", "to_out.0."),
-]
-
-
-def reshape_weight_for_sd(w):
- # convert HF linear weights to SD conv2d weights
- return w.reshape(*w.shape, 1, 1)
-
-
-def convert_vae_state_dict(vae_state_dict):
- mapping = {k: k for k in vae_state_dict.keys()}
- for k, v in mapping.items():
- for sd_part, hf_part in vae_conversion_map:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- for k, v in mapping.items():
- if "attentions" in k:
- for sd_part, hf_part in vae_conversion_map_attn:
- v = v.replace(hf_part, sd_part)
- mapping[k] = v
- new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()}
- weights_to_convert = ["q", "k", "v", "proj_out"]
- for k, v in new_state_dict.items():
- for weight_name in weights_to_convert:
- if f"mid.attn_1.{weight_name}.weight" in k:
- print(f"Reshaping {k} for SD format")
- new_state_dict[k] = reshape_weight_for_sd(v)
- return new_state_dict
-
-
-# =========================#
-# Text Encoder Conversion #
-# =========================#
-
-
-textenc_conversion_lst = [
- # (stable-diffusion, HF Diffusers)
- ("transformer.resblocks.", "text_model.encoder.layers."),
- ("ln_1", "layer_norm1"),
- ("ln_2", "layer_norm2"),
- (".c_fc.", ".fc1."),
- (".c_proj.", ".fc2."),
- (".attn", ".self_attn"),
- ("ln_final.", "text_model.final_layer_norm."),
- ("token_embedding.weight", "text_model.embeddings.token_embedding.weight"),
- ("positional_embedding", "text_model.embeddings.position_embedding.weight"),
-]
-protected = {re.escape(x[1]): x[0] for x in textenc_conversion_lst}
-textenc_pattern = re.compile("|".join(protected.keys()))
-
-# Ordering is from https://github.com/pytorch/pytorch/blob/master/test/cpp/api/modules.cpp
-code2idx = {"q": 0, "k": 1, "v": 2}
-
-
-def convert_openclip_text_enc_state_dict(text_enc_dict):
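-    # The open_clip/SD checkpoint format stores attention projections as one fused in_proj tensor,
-    # while HF Diffusers keeps separate q/k/v projections, so the three parts are collected
-    # per layer and concatenated below.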
- new_state_dict = {}
- capture_qkv_weight = {}
- capture_qkv_bias = {}
- for k, v in text_enc_dict.items():
- if (
- k.endswith(".self_attn.q_proj.weight")
- or k.endswith(".self_attn.k_proj.weight")
- or k.endswith(".self_attn.v_proj.weight")
- ):
- k_pre = k[: -len(".q_proj.weight")]
- k_code = k[-len("q_proj.weight")]
- if k_pre not in capture_qkv_weight:
- capture_qkv_weight[k_pre] = [None, None, None]
- capture_qkv_weight[k_pre][code2idx[k_code]] = v
- continue
-
- if (
- k.endswith(".self_attn.q_proj.bias")
- or k.endswith(".self_attn.k_proj.bias")
- or k.endswith(".self_attn.v_proj.bias")
- ):
- k_pre = k[: -len(".q_proj.bias")]
- k_code = k[-len("q_proj.bias")]
- if k_pre not in capture_qkv_bias:
- capture_qkv_bias[k_pre] = [None, None, None]
- capture_qkv_bias[k_pre][code2idx[k_code]] = v
- continue
-
- relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k)
- new_state_dict[relabelled_key] = v
-
- for k_pre, tensors in capture_qkv_weight.items():
- if None in tensors:
- raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing")
- relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k_pre)
- new_state_dict[relabelled_key + ".in_proj_weight"] = torch.cat(tensors)
-
- for k_pre, tensors in capture_qkv_bias.items():
- if None in tensors:
- raise Exception("CORRUPTED MODEL: one of the q-k-v values for the text encoder was missing")
- relabelled_key = textenc_pattern.sub(lambda m: protected[re.escape(m.group(0))], k_pre)
- new_state_dict[relabelled_key + ".in_proj_bias"] = torch.cat(tensors)
-
- return new_state_dict
-
-
-def convert_openai_text_enc_state_dict(text_enc_dict):
- return text_enc_dict
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-
- parser.add_argument("--model_path", default=None, type=str, required=True, help="Path to the model to convert.")
- parser.add_argument("--checkpoint_path", default=None, type=str, required=True, help="Path to the output model.")
- parser.add_argument("--half", action="store_true", help="Save weights in half precision.")
- parser.add_argument(
- "--use_safetensors", action="store_true", help="Save weights use safetensors, default is ckpt."
- )
-
- args = parser.parse_args()
-
- assert args.model_path is not None, "Must provide a model path!"
-
- assert args.checkpoint_path is not None, "Must provide a checkpoint path!"
-
- # Path for safetensors
- unet_path = osp.join(args.model_path, "unet", "diffusion_pytorch_model.safetensors")
- vae_path = osp.join(args.model_path, "vae", "diffusion_pytorch_model.safetensors")
- text_enc_path = osp.join(args.model_path, "text_encoder", "model.safetensors")
- text_enc_2_path = osp.join(args.model_path, "text_encoder_2", "model.safetensors")
-
- # Load models from safetensors if it exists, if it doesn't pytorch
- if osp.exists(unet_path):
- unet_state_dict = load_file(unet_path, device="cpu")
- else:
- unet_path = osp.join(args.model_path, "unet", "diffusion_pytorch_model.bin")
- unet_state_dict = torch.load(unet_path, map_location="cpu")
-
- if osp.exists(vae_path):
- vae_state_dict = load_file(vae_path, device="cpu")
- else:
- vae_path = osp.join(args.model_path, "vae", "diffusion_pytorch_model.bin")
- vae_state_dict = torch.load(vae_path, map_location="cpu")
-
- if osp.exists(text_enc_path):
- text_enc_dict = load_file(text_enc_path, device="cpu")
- else:
- text_enc_path = osp.join(args.model_path, "text_encoder", "pytorch_model.bin")
- text_enc_dict = torch.load(text_enc_path, map_location="cpu")
-
- if osp.exists(text_enc_2_path):
- text_enc_2_dict = load_file(text_enc_2_path, device="cpu")
- else:
- text_enc_2_path = osp.join(args.model_path, "text_encoder_2", "pytorch_model.bin")
- text_enc_2_dict = torch.load(text_enc_2_path, map_location="cpu")
-
- # Convert the UNet model
- unet_state_dict = convert_unet_state_dict(unet_state_dict)
- unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()}
-
- # Convert the VAE model
- vae_state_dict = convert_vae_state_dict(vae_state_dict)
- vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()}
-
- text_enc_dict = convert_openai_text_enc_state_dict(text_enc_dict)
- text_enc_dict = {"conditioner.embedders.0.transformer." + k: v for k, v in text_enc_dict.items()}
-
- text_enc_2_dict = convert_openclip_text_enc_state_dict(text_enc_2_dict)
- text_enc_2_dict = {"conditioner.embedders.1.model." + k: v for k, v in text_enc_2_dict.items()}
-
- # Put together new checkpoint
- state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict, **text_enc_2_dict}
-
- if args.half:
- state_dict = {k: v.half() for k, v in state_dict.items()}
-
- if args.use_safetensors:
- save_file(state_dict, args.checkpoint_path)
- else:
- state_dict = {"state_dict": state_dict}
- torch.save(state_dict, args.checkpoint_path)
diff --git a/spaces/paulbricman/conceptarium/backend/microverses.py b/spaces/paulbricman/conceptarium/backend/microverses.py
deleted file mode 100644
index e499edfa01c9bf48fd8fb3b3d4f59db66ffd3c9f..0000000000000000000000000000000000000000
--- a/spaces/paulbricman/conceptarium/backend/microverses.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import json
-from pathlib import Path
-from util import encode, get_content
-import secrets
-import time
-from PIL import Image
-import io
-import os
-
-
-def create_microverse(modality, query, auth_result, text_encoder, text_image_encoder):
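-    # Embed the query, persist it to the knowledge folder, and register it in
-    # microverses.json together with a freshly generated access token.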
- knowledge_base_path = Path('..') / 'knowledge'
- microverses_path = knowledge_base_path / 'microverses.json'
-
- if auth_result['custodian'] == False:
- return {
- 'message': 'Only the conceptarium\'s custodian can create microverses in it.'
- }
- else:
- if not microverses_path.exists():
- json.dump([], open(microverses_path, 'w'))
-
- query_embedding = encode(
- modality, query, text_encoder, text_image_encoder)
- token = secrets.token_urlsafe(16)
-
- if modality == 'text':
- filename = secrets.token_urlsafe(16) + '.md'
- open(knowledge_base_path / filename, 'w').write(query)
-
- microverses = json.load(open(microverses_path))
- microverses += [{
- "filename": filename,
- "modality": modality,
- "timestamp": time.time(),
- "token": token,
- "embeddings": query_embedding
- }]
- json.dump(microverses, open(microverses_path, 'w'))
- elif modality == 'image':
- filename = secrets.token_urlsafe(16) + '.jpg'
- query = Image.open(io.BytesIO(query)).convert('RGB')
- query.save(knowledge_base_path / filename, quality=50)
-
- microverses = json.load(open(microverses_path))
- microverses += [{
- "filename": filename,
- "modality": modality,
- "timestamp": time.time(),
- "token": token,
- "embeddings": query_embedding
- }]
- json.dump(microverses, open(microverses_path, 'w'))
-
- return {
- "token": token
- }
-
-
-def remove_microverse(auth_result, microverse_token):
- knowledge_base_path = Path('..') / 'knowledge'
- microverses_path = knowledge_base_path / 'microverses.json'
-
- if auth_result['custodian'] == False:
- return {
-            'message': 'Only the conceptarium\'s custodian can remove microverses from it.'
- }
- else:
-        microverses = json.load(open(microverses_path))
-        # Find the entry to delete before filtering it out, otherwise it can never be matched.
-        removal_target = [
-            e for e in microverses if e['token'] == microverse_token]
-        microverses = [
-            e for e in microverses if e['token'] != microverse_token]
-        json.dump(microverses, open(microverses_path, 'w'))
- if len(removal_target) > 0:
- os.remove(knowledge_base_path / removal_target[0]['filename'])
-
-
-def list_microverses(auth_result):
- microverses_path = Path('..') / 'knowledge' / 'microverses.json'
-
- if auth_result['custodian'] == False:
- return {
- 'message': 'Only the conceptarium\'s custodian can list all microverses in it.'
- }
- else:
- if not microverses_path.exists():
- json.dump([], open(microverses_path, 'w'))
-
- microverses = json.load(open(microverses_path))
-
- for e_idx, e in enumerate(microverses):
- microverses[e_idx]['content'] = get_content(
- e, True)
- return microverses
diff --git a/spaces/petervavank/VoiceConvertion/app.py b/spaces/petervavank/VoiceConvertion/app.py
deleted file mode 100644
index f6ee63b26efa3fcf975e5a2116cd777ddafa4c32..0000000000000000000000000000000000000000
--- a/spaces/petervavank/VoiceConvertion/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import argparse
-import gradio as gr
-import numpy as np
-import soundfile as sf
-import torch
-import os
-from data_utils import denormalize, file2mel, load_model, mel2wav, normalize
-
-#torch.load(map_location=torch.device('cpu'))
-
-model_dir = "vctk_model"
-output = "output.wav"
-
-def main(source, target):
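-    # Extract mel spectrograms from both clips, run the voice-conversion model,
-    # then vocode the converted mel back to a waveform and write it to output.wav.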
- model, config, attr, device = load_model(model_dir)
-
- src_mel = file2mel(source, **config["preprocess"])
- tgt_mel = file2mel(target, **config["preprocess"])
-
- src_mel = normalize(src_mel, attr)
- tgt_mel = normalize(tgt_mel, attr)
-
- src_mel = torch.from_numpy(src_mel).T.unsqueeze(0).to(device)
- tgt_mel = torch.from_numpy(tgt_mel).T.unsqueeze(0).to(device)
-
- with torch.no_grad():
- out_mel = model.inference(src_mel, tgt_mel)
- out_mel = out_mel.squeeze(0).T
- out_mel = denormalize(out_mel.data.cpu().numpy(), attr)
- out_wav = mel2wav(out_mel, **config["preprocess"])
-
- sf.write(output, out_wav, config["preprocess"]["sample_rate"])
- #return "D:/新桌面/Nlpclass/contentsecurity/attack-vc/result/保护后.wav"
- return output
-
-inputs = [
-    gr.inputs.Audio(type="filepath", label="Speech providing the content (attacker)"),
-    gr.inputs.Audio(type="filepath", label="Speech providing the voice timbre (victim)"),
-]
-
-examples = [
- ["Self.wav",
- "p333_003.wav"],
- ["Self.wav",
- "output7/p333_003.wav"]
-]
-demo = gr.Interface(fn = main,
- inputs = inputs,
- examples=examples,
-                    outputs=gr.Audio(type="filepath", label="Converted speech (after VC)")
-)
-
-
-if __name__ == "__main__":
- #os.environ["CUDA_VISIBLE_DEVICES"] = "0"
- #main()
- demo.launch()
diff --git a/spaces/pixiou/bingo/src/pages/api/image.ts b/spaces/pixiou/bingo/src/pages/api/image.ts
deleted file mode 100644
index 26fdb31076a9c71e70d1725a630844b27f5a3221..0000000000000000000000000000000000000000
--- a/spaces/pixiou/bingo/src/pages/api/image.ts
+++ /dev/null
@@ -1,38 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { debug } from '@/lib/isomorphic'
-import { createHeaders } from '@/lib/utils'
-import { createImage } from '@/lib/bots/bing/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
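-  // Forward the prompt to Bing's image creator using the caller's cookies and return the raw response body.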
- const { prompt, id } = req.query
- if (!prompt) {
- return res.json({
- result: {
- value: 'Image',
- message: 'No Prompt'
- }
- })
- }
- try {
- const headers = createHeaders(req.cookies, 'image')
-
- debug('headers', headers)
- const response = await createImage(String(prompt), String(id), {
- ...headers,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- })
- res.writeHead(200, {
- 'Content-Type': 'text/plain; charset=UTF-8',
- })
- return res.end(response)
- } catch (e) {
- return res.json({
- result: {
- value: 'Error',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/status_codes.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/status_codes.py
deleted file mode 100644
index 5e29502cddfa9a9887a93399ab4193fb75dfe605..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/status_codes.py
+++ /dev/null
@@ -1,6 +0,0 @@
-SUCCESS = 0
-ERROR = 1
-UNKNOWN_ERROR = 2
-VIRTUALENV_NOT_FOUND = 3
-PREVIOUS_BUILD_DIR_ERROR = 4
-NO_MATCHES_FOUND = 23
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/tomli/_types.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/tomli/_types.py
deleted file mode 100644
index d949412e03b29d70592c7721fe747e5085c2e280..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/tomli/_types.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# SPDX-License-Identifier: MIT
-# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
-# Licensed to PSF under a Contributor Agreement.
-
-from typing import Any, Callable, Tuple
-
-# Type annotations
-ParseFloat = Callable[[str], Any]
-Key = Tuple[str, ...]
-Pos = int
diff --git a/spaces/pragnakalp/Question_Generation_T5/questiongenerator.py b/spaces/pragnakalp/Question_Generation_T5/questiongenerator.py
deleted file mode 100644
index 0e0e2ee84bce101be4bb314f1000f75b1c3f6428..0000000000000000000000000000000000000000
--- a/spaces/pragnakalp/Question_Generation_T5/questiongenerator.py
+++ /dev/null
@@ -1,345 +0,0 @@
-import os
-import sys
-import math
-import numpy as np
-import torch
-import spacy
-import re
-import random
-import json
-import en_core_web_sm
-from string import punctuation
-
-#from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config
-#from transformers import BertTokenizer, BertForSequenceClassification
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoModelForSequenceClassification
-class QuestionGenerator():
-
- def __init__(self, model_dir=None):
-
- QG_PRETRAINED = 'iarfmoose/t5-base-question-generator'
-        # Special tokens used in the input format of iarfmoose/t5-base-question-generator
-        # ("<answer> ... <context> ...").
-        self.ANSWER_TOKEN = '<answer>'
-        self.CONTEXT_TOKEN = '<context>'
- self.SEQ_LENGTH = 512
-
- self.device = torch.device('cpu')
- # self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- self.qg_tokenizer = AutoTokenizer.from_pretrained(QG_PRETRAINED)
- self.qg_model = AutoModelForSeq2SeqLM.from_pretrained(QG_PRETRAINED)
- self.qg_model.to(self.device)
-
- self.qa_evaluator = QAEvaluator(model_dir)
-
- def generate(self, article, use_evaluator=True, num_questions=None, answer_style='all'):
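-        # Build (answer, context) model inputs from the article, generate questions for them,
-        # and return up to num_questions unique questions.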
-
- print("Generating questions...\n")
-
- qg_inputs, qg_answers = self.generate_qg_inputs(article, answer_style)
- print("qg_inputs, qg_answers=>",qg_inputs, qg_answers)
- generated_questions = self.generate_questions_from_inputs(qg_inputs,num_questions)
- print("generated_questions(generate)=>",generated_questions)
- return generated_questions
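-        # NOTE: because of the early return above, the evaluator-based ranking below is never executed.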
- message = "{} questions doesn't match {} answers".format(
- len(generated_questions),
- len(qg_answers))
- assert len(generated_questions) == len(qg_answers), message
-
- if use_evaluator:
-
- print("Evaluating QA pairs...\n")
-
- encoded_qa_pairs = self.qa_evaluator.encode_qa_pairs(generated_questions, qg_answers)
- scores = self.qa_evaluator.get_scores(encoded_qa_pairs)
- if num_questions:
- qa_list = self._get_ranked_qa_pairs(generated_questions, qg_answers, scores, num_questions)
- else:
- qa_list = self._get_ranked_qa_pairs(generated_questions, qg_answers, scores)
-
- else:
- print("Skipping evaluation step.\n")
- qa_list = self._get_all_qa_pairs(generated_questions, qg_answers)
-
- return qa_list
-
- def generate_qg_inputs(self, text, answer_style):
-
- VALID_ANSWER_STYLES = ['all', 'sentences', 'multiple_choice']
-
- if answer_style not in VALID_ANSWER_STYLES:
- raise ValueError(
- "Invalid answer style {}. Please choose from {}".format(
- answer_style,
- VALID_ANSWER_STYLES
- )
- )
-
- inputs = []
- answers = []
-
- if answer_style == 'sentences' or answer_style == 'all':
- segments = self._split_into_segments(text)
- for segment in segments:
- sentences = self._split_text(segment)
- prepped_inputs, prepped_answers = self._prepare_qg_inputs(sentences, segment)
- inputs.extend(prepped_inputs)
- answers.extend(prepped_answers)
-
- if answer_style == 'multiple_choice' or answer_style == 'all':
- sentences = self._split_text(text)
- prepped_inputs, prepped_answers = self._prepare_qg_inputs_MC(sentences)
- inputs.extend(prepped_inputs)
- answers.extend(prepped_answers)
-
- return inputs, answers
-
- def generate_questions_from_inputs(self, qg_inputs,num_questions):
- generated_questions = []
- count = 0
-        print("num que => ", num_questions)
-        # Guard against the default num_questions=None, which int() cannot handle.
-        max_questions = int(num_questions) if num_questions is not None else len(qg_inputs)
-        for qg_input in qg_inputs:
-            if count < max_questions:
- question = self._generate_question(qg_input)
-
- question = question.strip() #remove trailing spaces
- question = question.strip(punctuation) #remove trailing questionmarks
- question += "?" #add one ?
- if question not in generated_questions:
- generated_questions.append(question)
- print("question ===> ",question)
- count += 1
- else:
- return generated_questions
-        return generated_questions
- def _split_text(self, text):
- MAX_SENTENCE_LEN = 128
-
-        sentences = re.findall(r'.*?[.!\?]', text)
-
- cut_sentences = []
- for sentence in sentences:
- if len(sentence) > MAX_SENTENCE_LEN:
- cut_sentences.extend(re.split('[,;:)]', sentence))
- # temporary solution to remove useless post-quote sentence fragments
-        cut_sentences = [s for s in cut_sentences if len(s.split(" ")) > 5]
- sentences = sentences + cut_sentences
-
- return list(set([s.strip(" ") for s in sentences]))
-
- def _split_into_segments(self, text):
- MAX_TOKENS = 490
-
- paragraphs = text.split('\n')
- tokenized_paragraphs = [self.qg_tokenizer(p)['input_ids'] for p in paragraphs if len(p) > 0]
-
- segments = []
- while len(tokenized_paragraphs) > 0:
- segment = []
- while len(segment) < MAX_TOKENS and len(tokenized_paragraphs) > 0:
- paragraph = tokenized_paragraphs.pop(0)
- segment.extend(paragraph)
- segments.append(segment)
- return [self.qg_tokenizer.decode(s) for s in segments]
-
- def _prepare_qg_inputs(self, sentences, text):
- inputs = []
- answers = []
-
- for sentence in sentences:
- qg_input = '{} {} {} {}'.format(
- self.ANSWER_TOKEN,
- sentence,
- self.CONTEXT_TOKEN,
- text
- )
- inputs.append(qg_input)
- answers.append(sentence)
-
- return inputs, answers
-
- def _prepare_qg_inputs_MC(self, sentences):
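-        # Run spaCy NER over the sentences and build one (answer, context) input per named
-        # entity, together with a set of multiple-choice distractors for each.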
-
- spacy_nlp = en_core_web_sm.load()
- docs = list(spacy_nlp.pipe(sentences, disable=['parser']))
- inputs_from_text = []
- answers_from_text = []
-
- for i in range(len(sentences)):
- entities = docs[i].ents
- if entities:
- for entity in entities:
- qg_input = '{} {} {} {}'.format(
- self.ANSWER_TOKEN,
- entity,
- self.CONTEXT_TOKEN,
- sentences[i]
- )
- answers = self._get_MC_answers(entity, docs)
- inputs_from_text.append(qg_input)
- answers_from_text.append(answers)
-
- return inputs_from_text, answers_from_text
-
- def _get_MC_answers(self, correct_answer, docs):
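-        # Build up to four answer choices: the correct entity plus distractors drawn
-        # preferentially from entities with the same NER label, padded with random ones.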
-
- entities = []
- for doc in docs:
- entities.extend([{'text': e.text, 'label_': e.label_} for e in doc.ents])
-
- # remove duplicate elements
- entities_json = [json.dumps(kv) for kv in entities]
- pool = set(entities_json)
- num_choices = min(4, len(pool)) - 1 # -1 because we already have the correct answer
-
- # add the correct answer
- final_choices = []
- correct_label = correct_answer.label_
- final_choices.append({'answer': correct_answer.text, 'correct': True})
- pool.remove(json.dumps({'text': correct_answer.text, 'label_': correct_answer.label_}))
-
- # find answers with the same NER label
- matches = [e for e in pool if correct_label in e]
-
- # if we don't have enough then add some other random answers
- if len(matches) < num_choices:
- choices = matches
-            pool = pool.difference(set(choices))
-            # random.sample requires a sequence in newer Python versions, so convert the set.
-            choices.extend(random.sample(list(pool), num_choices - len(choices)))
- else:
- choices = random.sample(matches, num_choices)
-
- choices = [json.loads(s) for s in choices]
- for choice in choices:
- final_choices.append({'answer': choice['text'], 'correct': False})
- random.shuffle(final_choices)
- return final_choices
-
- def _generate_question(self, qg_input):
- self.qg_model.eval()
- encoded_input = self._encode_qg_input(qg_input)
- with torch.no_grad():
- output = self.qg_model.generate(input_ids=encoded_input['input_ids'])
- return self.qg_tokenizer.decode(output[0])
-
- def _encode_qg_input(self, qg_input):
- return self.qg_tokenizer(
- qg_input,
- pad_to_max_length=True,
- max_length=self.SEQ_LENGTH,
- truncation=True,
- return_tensors="pt"
- ).to(self.device)
-
- def _get_ranked_qa_pairs(self, generated_questions, qg_answers, scores, num_questions=10):
- if num_questions > len(scores):
- num_questions = len(scores)
- print("\nWas only able to generate {} questions. For more questions, please input a longer text.".format(num_questions))
-
- qa_list = []
- for i in range(num_questions):
- index = scores[i]
- qa = self._make_dict(
- generated_questions[index].split('?')[0] + '?',
- qg_answers[index])
- qa_list.append(qa)
- return qa_list
-
- def _get_all_qa_pairs(self, generated_questions, qg_answers):
- qa_list = []
- for i in range(len(generated_questions)):
- qa = self._make_dict(
- generated_questions[i].split('?')[0] + '?',
- qg_answers[i])
- qa_list.append(qa)
- return qa_list
-
- def _make_dict(self, question, answer):
- qa = {}
- qa['question'] = question
- qa['answer'] = answer
- return qa
-
-
-class QAEvaluator():
- def __init__(self, model_dir=None):
-
- QAE_PRETRAINED = 'iarfmoose/bert-base-cased-qa-evaluator'
- self.SEQ_LENGTH = 512
-
- self.device = torch.device('cpu')
- # self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
- self.qae_tokenizer = AutoTokenizer.from_pretrained(QAE_PRETRAINED)
- self.qae_model = AutoModelForSequenceClassification.from_pretrained(QAE_PRETRAINED)
- self.qae_model.to(self.device)
-
-
- def encode_qa_pairs(self, questions, answers):
- encoded_pairs = []
- for i in range(len(questions)):
- encoded_qa = self._encode_qa(questions[i], answers[i])
- encoded_pairs.append(encoded_qa.to(self.device))
- return encoded_pairs
-
- def get_scores(self, encoded_qa_pairs):
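-        # Score each encoded QA pair and return the pair indices sorted from highest to lowest score.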
- scores = {}
- self.qae_model.eval()
- with torch.no_grad():
- for i in range(len(encoded_qa_pairs)):
- scores[i] = self._evaluate_qa(encoded_qa_pairs[i])
-
- return [k for k, v in sorted(scores.items(), key=lambda item: item[1], reverse=True)]
-
- def _encode_qa(self, question, answer):
- if type(answer) is list:
- for a in answer:
- if a['correct']:
- correct_answer = a['answer']
- else:
- correct_answer = answer
- return self.qae_tokenizer(
- text=question,
- text_pair=correct_answer,
- pad_to_max_length=True,
- max_length=self.SEQ_LENGTH,
- truncation=True,
- return_tensors="pt"
- )
-
- def _evaluate_qa(self, encoded_qa_pair):
- output = self.qae_model(**encoded_qa_pair)
- return output[0][0][1]
-
-
-def print_qa(qa_list, show_answers=True):
- for i in range(len(qa_list)):
- space = ' ' * int(np.where(i < 9, 3, 4)) # wider space for 2 digit q nums
-
- print('{}) Q: {}'.format(i + 1, qa_list[i]['question']))
-
- answer = qa_list[i]['answer']
-
- # print a list of multiple choice answers
- if type(answer) is list:
-
- if show_answers:
- print('{}A: 1.'.format(space),
- answer[0]['answer'],
- np.where(answer[0]['correct'], '(correct)', ''))
- for j in range(1, len(answer)):
- print('{}{}.'.format(space + ' ', j + 1),
- answer[j]['answer'],
- np.where(answer[j]['correct'] == True, '(correct)', ''))
-
- else:
- print('{}A: 1.'.format(space),
- answer[0]['answer'])
- for j in range(1, len(answer)):
- print('{}{}.'.format(space + ' ', j + 1),
- answer[j]['answer'])
- print('')
-
- # print full sentence answers
- else:
- if show_answers:
- print('{}A:'.format(space), answer, '\n')
\ No newline at end of file
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py
deleted file mode 100644
index 46cea0bd5a1c44c265f33311b7dce1516f48d77f..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/contourpy/util/bokeh_renderer.py
+++ /dev/null
@@ -1,329 +0,0 @@
-from __future__ import annotations
-
-import io
-from typing import TYPE_CHECKING, Any
-
-from bokeh.io import export_png, export_svg, show
-from bokeh.io.export import get_screenshot_as_png
-from bokeh.layouts import gridplot
-from bokeh.models.annotations.labels import Label
-from bokeh.palettes import Category10
-from bokeh.plotting import figure
-import numpy as np
-
-from contourpy import FillType, LineType
-from contourpy.util.bokeh_util import filled_to_bokeh, lines_to_bokeh
-from contourpy.util.renderer import Renderer
-
-if TYPE_CHECKING:
- from bokeh.models import GridPlot
- from bokeh.palettes import Palette
- from numpy.typing import ArrayLike
- from selenium.webdriver.remote.webdriver import WebDriver
-
- from contourpy._contourpy import FillReturn, LineReturn
-
-
-class BokehRenderer(Renderer):
- _figures: list[figure]
- _layout: GridPlot
- _palette: Palette
- _want_svg: bool
-
- """Utility renderer using Bokeh to render a grid of plots over the same (x, y) range.
-
- Args:
- nrows (int, optional): Number of rows of plots, default ``1``.
- ncols (int, optional): Number of columns of plots, default ``1``.
- figsize (tuple(float, float), optional): Figure size in inches (assuming 100 dpi), default
- ``(9, 9)``.
- show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``.
- want_svg (bool, optional): Whether output is required in SVG format or not, default
- ``False``.
-
- Warning:
- :class:`~contourpy.util.bokeh_renderer.BokehRenderer`, unlike
- :class:`~contourpy.util.mpl_renderer.MplRenderer`, needs to be told in advance if output to
- SVG format will be required later, otherwise it will assume PNG output.
- """
- def __init__(
- self,
- nrows: int = 1,
- ncols: int = 1,
- figsize: tuple[float, float] = (9, 9),
- show_frame: bool = True,
- want_svg: bool = False,
- ) -> None:
- self._want_svg = want_svg
- self._palette = Category10[10]
-
- total_size = 100*np.asarray(figsize, dtype=int) # Assuming 100 dpi.
-
- nfigures = nrows*ncols
- self._figures = []
- backend = "svg" if self._want_svg else "canvas"
- for _ in range(nfigures):
- fig = figure(output_backend=backend)
- fig.xgrid.visible = False
- fig.ygrid.visible = False
- self._figures.append(fig)
- if not show_frame:
- fig.outline_line_color = None # type: ignore[assignment]
- fig.axis.visible = False
-
- self._layout = gridplot(
- self._figures, ncols=ncols, toolbar_location=None, # type: ignore[arg-type]
- width=total_size[0] // ncols, height=total_size[1] // nrows)
-
- def _convert_color(self, color: str) -> str:
- if isinstance(color, str) and color[0] == "C":
- index = int(color[1:])
- color = self._palette[index]
- return color
-
- def _get_figure(self, ax: figure | int) -> figure:
- if isinstance(ax, int):
- ax = self._figures[ax]
- return ax
-
- def filled(
- self,
- filled: FillReturn,
- fill_type: FillType,
- ax: figure | int = 0,
- color: str = "C0",
- alpha: float = 0.7,
- ) -> None:
- """Plot filled contours on a single plot.
-
- Args:
- filled (sequence of arrays): Filled contour data as returned by
- :func:`~contourpy.ContourGenerator.filled`.
- fill_type (FillType): Type of ``filled`` data, as returned by
- :attr:`~contourpy.ContourGenerator.fill_type`.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color to plot with. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``"C0"``.
- alpha (float, optional): Opacity to plot with, default ``0.7``.
- """
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- xs, ys = filled_to_bokeh(filled, fill_type)
- if len(xs) > 0:
- fig.multi_polygons(xs=[xs], ys=[ys], color=color, fill_alpha=alpha, line_width=0)
-
- def grid(
- self,
- x: ArrayLike,
- y: ArrayLike,
- ax: figure | int = 0,
- color: str = "black",
- alpha: float = 0.1,
- point_color: str | None = None,
- quad_as_tri_alpha: float = 0,
- ) -> None:
- """Plot quad grid lines on a single plot.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color to plot grid lines, default ``"black"``.
- alpha (float, optional): Opacity to plot lines with, default ``0.1``.
- point_color (str, optional): Color to plot grid points or ``None`` if grid points
- should not be plotted, default ``None``.
- quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default
- ``0``.
-
- Colors may be a string color or the letter ``"C"`` followed by an integer in the range
- ``"C0"`` to ``"C9"`` to use a color from the ``Category10`` palette.
-
- Warning:
- ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked.
- """
- fig = self._get_figure(ax)
- x, y = self._grid_as_2d(x, y)
- xs = [row for row in x] + [row for row in x.T]
- ys = [row for row in y] + [row for row in y.T]
- kwargs = dict(line_color=color, alpha=alpha)
- fig.multi_line(xs, ys, **kwargs)
- if quad_as_tri_alpha > 0:
- # Assumes no quad mask.
- xmid = (0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])).ravel()
- ymid = (0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])).ravel()
- fig.multi_line(
- [row for row in np.stack((x[:-1, :-1].ravel(), xmid, x[1:, 1:].ravel()), axis=1)],
- [row for row in np.stack((y[:-1, :-1].ravel(), ymid, y[1:, 1:].ravel()), axis=1)],
- **kwargs)
- fig.multi_line(
- [row for row in np.stack((x[:-1, 1:].ravel(), xmid, x[1:, :-1].ravel()), axis=1)],
- [row for row in np.stack((y[:-1, 1:].ravel(), ymid, y[1:, :-1].ravel()), axis=1)],
- **kwargs)
- if point_color is not None:
- fig.circle(
- x=x.ravel(), y=y.ravel(), fill_color=color, line_color=None, alpha=alpha, size=8)
-
- def lines(
- self,
- lines: LineReturn,
- line_type: LineType,
- ax: figure | int = 0,
- color: str = "C0",
- alpha: float = 1.0,
- linewidth: float = 1,
- ) -> None:
- """Plot contour lines on a single plot.
-
- Args:
- lines (sequence of arrays): Contour line data as returned by
- :func:`~contourpy.ContourGenerator.lines`.
- line_type (LineType): Type of ``lines`` data, as returned by
- :attr:`~contourpy.ContourGenerator.line_type`.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color to plot lines. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``"C0"``.
- alpha (float, optional): Opacity to plot lines with, default ``1.0``.
- linewidth (float, optional): Width of lines, default ``1``.
-
- Note:
- Assumes all lines are open line strips not closed line loops.
- """
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- xs, ys = lines_to_bokeh(lines, line_type)
- if len(xs) > 0:
- fig.multi_line(xs, ys, line_color=color, line_alpha=alpha, line_width=linewidth)
-
- def mask(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike | np.ma.MaskedArray[Any, Any],
- ax: figure | int = 0,
- color: str = "black",
- ) -> None:
- """Plot masked out grid points as circles on a single plot.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
-            z (masked array of shape (ny, nx)): z-values.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Circle color, default ``"black"``.
- """
- mask = np.ma.getmask(z) # type: ignore[no-untyped-call]
- if mask is np.ma.nomask:
- return
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- x, y = self._grid_as_2d(x, y)
- fig.circle(x[mask], y[mask], fill_color=color, size=10)
-
- def save(
- self,
- filename: str,
- transparent: bool = False,
- *,
- webdriver: WebDriver | None = None,
- ) -> None:
- """Save plots to SVG or PNG file.
-
- Args:
- filename (str): Filename to save to.
- transparent (bool, optional): Whether background should be transparent, default
- ``False``.
- webdriver (WebDriver, optional): Selenium WebDriver instance to use to create the image.
-
- Warning:
- To output to SVG file, ``want_svg=True`` must have been passed to the constructor.
- """
- if transparent:
- for fig in self._figures:
- fig.background_fill_color = None # type: ignore[assignment]
- fig.border_fill_color = None # type: ignore[assignment]
-
- if self._want_svg:
- export_svg(self._layout, filename=filename, webdriver=webdriver)
- else:
- export_png(self._layout, filename=filename, webdriver=webdriver)
-
- def save_to_buffer(self, *, webdriver: WebDriver | None = None) -> io.BytesIO:
- """Save plots to an ``io.BytesIO`` buffer.
-
- Args:
- webdriver (WebDriver, optional): Selenium WebDriver instance to use to create the image.
-
- Return:
- BytesIO: PNG image buffer.
- """
- image = get_screenshot_as_png(self._layout, driver=webdriver)
- buffer = io.BytesIO()
- image.save(buffer, "png")
- return buffer
-
- def show(self) -> None:
- """Show plots in web browser, in usual Bokeh manner.
- """
- show(self._layout)
-
- def title(self, title: str, ax: figure | int = 0, color: str | None = None) -> None:
- """Set the title of a single plot.
-
- Args:
- title (str): Title text.
- ax (int or Bokeh Figure, optional): Which plot to set the title of, default ``0``.
- color (str, optional): Color to set title. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``None`` which is ``black``.
- """
- fig = self._get_figure(ax)
- fig.title = title # type: ignore[assignment]
- fig.title.align = "center" # type: ignore[attr-defined]
- if color is not None:
- fig.title.text_color = self._convert_color(color) # type: ignore[attr-defined]
-
- def z_values(
- self,
- x: ArrayLike,
- y: ArrayLike,
- z: ArrayLike,
- ax: figure | int = 0,
- color: str = "green",
- fmt: str = ".1f",
- quad_as_tri: bool = False,
- ) -> None:
- """Show ``z`` values on a single plot.
-
- Args:
- x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.
- y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.
-            z (array-like of shape (ny, nx)): z-values.
- ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.
- color (str, optional): Color of added text. May be a string color or the letter ``"C"``
- followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the
- ``Category10`` palette. Default ``"green"``.
- fmt (str, optional): Format to display z-values, default ``".1f"``.
- quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centres
- of quads.
-
- Warning:
- ``quad_as_tri=True`` shows z-values for all quads, even if masked.
- """
- fig = self._get_figure(ax)
- color = self._convert_color(color)
- x, y = self._grid_as_2d(x, y)
- z = np.asarray(z)
- ny, nx = z.shape
- kwargs = dict(text_color=color, text_align="center", text_baseline="middle")
- for j in range(ny):
- for i in range(nx):
- fig.add_layout(Label(x=x[j, i], y=y[j, i], text=f"{z[j, i]:{fmt}}", **kwargs))
- if quad_as_tri:
- for j in range(ny-1):
- for i in range(nx-1):
- xx = np.mean(x[j:j+2, i:i+2])
- yy = np.mean(y[j:j+2, i:i+2])
- zz = np.mean(z[j:j+2, i:i+2])
- fig.add_layout(Label(x=xx, y=yy, text=f"{zz:{fmt}}", **kwargs))
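For context, a minimal sketch of how a renderer like the one above is typically driven from contourpy (assumed usage; the sample grid and contour levels are illustrative, and bokeh must be installed):

    import numpy as np
    from contourpy import contour_generator
    from contourpy.util.bokeh_renderer import BokehRenderer

    # Build a small quad grid and a contour generator for z = x*y.
    x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
    z = x * y
    cont_gen = contour_generator(x, y, z)

    # Two side-by-side figures: filled contours on the left, contour lines on the right.
    renderer = BokehRenderer(nrows=1, ncols=2, figsize=(9, 4.5))
    renderer.filled(cont_gen.filled(0.25, 0.75), cont_gen.fill_type, ax=0, color="C0")
    renderer.lines(cont_gen.lines(0.5), cont_gen.line_type, ax=1, color="C1")
    renderer.title("filled", ax=0)
    renderer.title("lines", ax=1)
    renderer.show()  # opens in a web browser; use save(...) for PNG/SVG output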
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_ticker.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_ticker.py
deleted file mode 100644
index 961daaa1d167ab13a604f43ed963984c3976ac09..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_ticker.py
+++ /dev/null
@@ -1,1791 +0,0 @@
-from contextlib import nullcontext
-import itertools
-import locale
-import logging
-import re
-
-import numpy as np
-from numpy.testing import assert_almost_equal, assert_array_equal
-import pytest
-
-import matplotlib as mpl
-import matplotlib.pyplot as plt
-import matplotlib.ticker as mticker
-
-
-class TestMaxNLocator:
- basic_data = [
- (20, 100, np.array([20., 40., 60., 80., 100.])),
- (0.001, 0.0001, np.array([0., 0.0002, 0.0004, 0.0006, 0.0008, 0.001])),
- (-1e15, 1e15, np.array([-1.0e+15, -5.0e+14, 0e+00, 5e+14, 1.0e+15])),
- (0, 0.85e-50, np.arange(6) * 2e-51),
- (-0.85e-50, 0, np.arange(-5, 1) * 2e-51),
- ]
-
- integer_data = [
- (-0.1, 1.1, None, np.array([-1, 0, 1, 2])),
- (-0.1, 0.95, None, np.array([-0.25, 0, 0.25, 0.5, 0.75, 1.0])),
- (1, 55, [1, 1.5, 5, 6, 10], np.array([0, 15, 30, 45, 60])),
- ]
-
- @pytest.mark.parametrize('vmin, vmax, expected', basic_data)
- def test_basic(self, vmin, vmax, expected):
- loc = mticker.MaxNLocator(nbins=5)
- assert_almost_equal(loc.tick_values(vmin, vmax), expected)
-
- @pytest.mark.parametrize('vmin, vmax, steps, expected', integer_data)
- def test_integer(self, vmin, vmax, steps, expected):
- loc = mticker.MaxNLocator(nbins=5, integer=True, steps=steps)
- assert_almost_equal(loc.tick_values(vmin, vmax), expected)
-
- @pytest.mark.parametrize('kwargs, errortype, match', [
- ({'foo': 0}, TypeError,
- re.escape("set_params() got an unexpected keyword argument 'foo'")),
- ({'steps': [2, 1]}, ValueError, "steps argument must be an increasing"),
- ({'steps': 2}, ValueError, "steps argument must be an increasing"),
- ({'steps': [2, 11]}, ValueError, "steps argument must be an increasing"),
- ])
- def test_errors(self, kwargs, errortype, match):
- with pytest.raises(errortype, match=match):
- mticker.MaxNLocator(**kwargs)
-
- @pytest.mark.parametrize('steps, result', [
- ([1, 2, 10], [1, 2, 10]),
- ([2, 10], [1, 2, 10]),
- ([1, 2], [1, 2, 10]),
- ([2], [1, 2, 10]),
- ])
- def test_padding(self, steps, result):
- loc = mticker.MaxNLocator(steps=steps)
- assert (loc._steps == result).all()
-
-
-class TestLinearLocator:
- def test_basic(self):
- loc = mticker.LinearLocator(numticks=3)
- test_value = np.array([-0.8, -0.3, 0.2])
- assert_almost_equal(loc.tick_values(-0.8, 0.2), test_value)
-
- def test_zero_numticks(self):
- loc = mticker.LinearLocator(numticks=0)
- loc.tick_values(-0.8, 0.2) == []
-
- def test_set_params(self):
- """
- Create linear locator with presets={}, numticks=2 and change it to
-        something else. Verify the change worked without raising an exception.
- """
- loc = mticker.LinearLocator(numticks=2)
- loc.set_params(numticks=8, presets={(0, 1): []})
- assert loc.numticks == 8
- assert loc.presets == {(0, 1): []}
-
- def test_presets(self):
- loc = mticker.LinearLocator(presets={(1, 2): [1, 1.25, 1.75],
- (0, 2): [0.5, 1.5]})
- assert loc.tick_values(1, 2) == [1, 1.25, 1.75]
- assert loc.tick_values(2, 1) == [1, 1.25, 1.75]
- assert loc.tick_values(0, 2) == [0.5, 1.5]
- assert loc.tick_values(0.0, 2.0) == [0.5, 1.5]
- assert (loc.tick_values(0, 1) == np.linspace(0, 1, 11)).all()
-
-
-class TestMultipleLocator:
- def test_basic(self):
- loc = mticker.MultipleLocator(base=3.147)
- test_value = np.array([-9.441, -6.294, -3.147, 0., 3.147, 6.294,
- 9.441, 12.588])
- assert_almost_equal(loc.tick_values(-7, 10), test_value)
-
- def test_basic_with_offset(self):
- loc = mticker.MultipleLocator(base=3.147, offset=1.2)
- test_value = np.array([-8.241, -5.094, -1.947, 1.2, 4.347, 7.494,
- 10.641])
- assert_almost_equal(loc.tick_values(-7, 10), test_value)
-
- def test_view_limits(self):
- """
- Test basic behavior of view limits.
- """
- with mpl.rc_context({'axes.autolimit_mode': 'data'}):
- loc = mticker.MultipleLocator(base=3.147)
- assert_almost_equal(loc.view_limits(-5, 5), (-5, 5))
-
- def test_view_limits_round_numbers(self):
- """
- Test that everything works properly with 'round_numbers' for auto
- limit.
- """
- with mpl.rc_context({'axes.autolimit_mode': 'round_numbers'}):
- loc = mticker.MultipleLocator(base=3.147)
- assert_almost_equal(loc.view_limits(-4, 4), (-6.294, 6.294))
-
- def test_view_limits_round_numbers_with_offset(self):
- """
- Test that everything works properly with 'round_numbers' for auto
- limit.
- """
- with mpl.rc_context({'axes.autolimit_mode': 'round_numbers'}):
- loc = mticker.MultipleLocator(base=3.147, offset=1.3)
- assert_almost_equal(loc.view_limits(-4, 4), (-4.994, 4.447))
-
- def test_set_params(self):
- """
- Create multiple locator with 0.7 base, and change it to something else.
- See if change was successful.
- """
- mult = mticker.MultipleLocator(base=0.7)
- mult.set_params(base=1.7)
- assert mult._edge.step == 1.7
- mult.set_params(offset=3)
- assert mult._offset == 3
-
-
-class TestAutoMinorLocator:
- def test_basic(self):
- fig, ax = plt.subplots()
- ax.set_xlim(0, 1.39)
- ax.minorticks_on()
- test_value = np.array([0.05, 0.1, 0.15, 0.25, 0.3, 0.35, 0.45,
- 0.5, 0.55, 0.65, 0.7, 0.75, 0.85, 0.9,
- 0.95, 1.05, 1.1, 1.15, 1.25, 1.3, 1.35])
- assert_almost_equal(ax.xaxis.get_ticklocs(minor=True), test_value)
-
- # NB: the following values are assuming that *xlim* is [0, 5]
- params = [
- (0, 0), # no major tick => no minor tick either
- (1, 0) # a single major tick => no minor tick
- ]
-
- def test_first_and_last_minorticks(self):
- """
- Test that first and last minor tick appear as expected.
- """
- # This test is related to issue #22331
- fig, ax = plt.subplots()
- ax.set_xlim(-1.9, 1.9)
- ax.xaxis.set_minor_locator(mticker.AutoMinorLocator())
- test_value = np.array([-1.9, -1.8, -1.7, -1.6, -1.4, -1.3, -1.2, -1.1,
- -0.9, -0.8, -0.7, -0.6, -0.4, -0.3, -0.2, -0.1,
- 0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9, 1.1,
- 1.2, 1.3, 1.4, 1.6, 1.7, 1.8, 1.9])
- assert_almost_equal(ax.xaxis.get_ticklocs(minor=True), test_value)
-
- ax.set_xlim(-5, 5)
- test_value = np.array([-5.0, -4.5, -3.5, -3.0, -2.5, -1.5, -1.0, -0.5,
- 0.5, 1.0, 1.5, 2.5, 3.0, 3.5, 4.5, 5.0])
- assert_almost_equal(ax.xaxis.get_ticklocs(minor=True), test_value)
-
- @pytest.mark.parametrize('nb_majorticks, expected_nb_minorticks', params)
- def test_low_number_of_majorticks(
- self, nb_majorticks, expected_nb_minorticks):
- # This test is related to issue #8804
- fig, ax = plt.subplots()
- xlims = (0, 5) # easier to test the different code paths
- ax.set_xlim(*xlims)
- ax.set_xticks(np.linspace(xlims[0], xlims[1], nb_majorticks))
- ax.minorticks_on()
- ax.xaxis.set_minor_locator(mticker.AutoMinorLocator())
- assert len(ax.xaxis.get_minorticklocs()) == expected_nb_minorticks
-
- majorstep_minordivisions = [(1, 5),
- (2, 4),
- (2.5, 5),
- (5, 5),
- (10, 5)]
-
- # This test is meant to verify the parameterization for
- # test_number_of_minor_ticks
- def test_using_all_default_major_steps(self):
- with mpl.rc_context({'_internal.classic_mode': False}):
- majorsteps = [x[0] for x in self.majorstep_minordivisions]
- np.testing.assert_allclose(majorsteps,
- mticker.AutoLocator()._steps)
-
- @pytest.mark.parametrize('major_step, expected_nb_minordivisions',
- majorstep_minordivisions)
- def test_number_of_minor_ticks(
- self, major_step, expected_nb_minordivisions):
- fig, ax = plt.subplots()
- xlims = (0, major_step)
- ax.set_xlim(*xlims)
- ax.set_xticks(xlims)
- ax.minorticks_on()
- ax.xaxis.set_minor_locator(mticker.AutoMinorLocator())
- nb_minor_divisions = len(ax.xaxis.get_minorticklocs()) + 1
- assert nb_minor_divisions == expected_nb_minordivisions
-
- limits = [(0, 1.39), (0, 0.139),
- (0, 0.11e-19), (0, 0.112e-12),
- (-2.0e-07, -3.3e-08), (1.20e-06, 1.42e-06),
- (-1.34e-06, -1.44e-06), (-8.76e-07, -1.51e-06)]
-
- reference = [
- [0.05, 0.1, 0.15, 0.25, 0.3, 0.35, 0.45, 0.5, 0.55, 0.65, 0.7,
- 0.75, 0.85, 0.9, 0.95, 1.05, 1.1, 1.15, 1.25, 1.3, 1.35],
- [0.005, 0.01, 0.015, 0.025, 0.03, 0.035, 0.045, 0.05, 0.055, 0.065,
- 0.07, 0.075, 0.085, 0.09, 0.095, 0.105, 0.11, 0.115, 0.125, 0.13,
- 0.135],
- [5.00e-22, 1.00e-21, 1.50e-21, 2.50e-21, 3.00e-21, 3.50e-21, 4.50e-21,
- 5.00e-21, 5.50e-21, 6.50e-21, 7.00e-21, 7.50e-21, 8.50e-21, 9.00e-21,
- 9.50e-21, 1.05e-20, 1.10e-20],
- [5.00e-15, 1.00e-14, 1.50e-14, 2.50e-14, 3.00e-14, 3.50e-14, 4.50e-14,
- 5.00e-14, 5.50e-14, 6.50e-14, 7.00e-14, 7.50e-14, 8.50e-14, 9.00e-14,
- 9.50e-14, 1.05e-13, 1.10e-13],
- [-1.95e-07, -1.90e-07, -1.85e-07, -1.75e-07, -1.70e-07, -1.65e-07,
- -1.55e-07, -1.50e-07, -1.45e-07, -1.35e-07, -1.30e-07, -1.25e-07,
- -1.15e-07, -1.10e-07, -1.05e-07, -9.50e-08, -9.00e-08, -8.50e-08,
- -7.50e-08, -7.00e-08, -6.50e-08, -5.50e-08, -5.00e-08, -4.50e-08,
- -3.50e-08],
- [1.21e-06, 1.22e-06, 1.23e-06, 1.24e-06, 1.26e-06, 1.27e-06, 1.28e-06,
- 1.29e-06, 1.31e-06, 1.32e-06, 1.33e-06, 1.34e-06, 1.36e-06, 1.37e-06,
- 1.38e-06, 1.39e-06, 1.41e-06, 1.42e-06],
- [-1.435e-06, -1.430e-06, -1.425e-06, -1.415e-06, -1.410e-06,
- -1.405e-06, -1.395e-06, -1.390e-06, -1.385e-06, -1.375e-06,
- -1.370e-06, -1.365e-06, -1.355e-06, -1.350e-06, -1.345e-06],
- [-1.48e-06, -1.46e-06, -1.44e-06, -1.42e-06, -1.38e-06, -1.36e-06,
- -1.34e-06, -1.32e-06, -1.28e-06, -1.26e-06, -1.24e-06, -1.22e-06,
- -1.18e-06, -1.16e-06, -1.14e-06, -1.12e-06, -1.08e-06, -1.06e-06,
- -1.04e-06, -1.02e-06, -9.80e-07, -9.60e-07, -9.40e-07, -9.20e-07,
- -8.80e-07]]
-
- additional_data = list(zip(limits, reference))
-
- @pytest.mark.parametrize('lim, ref', additional_data)
- def test_additional(self, lim, ref):
- fig, ax = plt.subplots()
-
- ax.minorticks_on()
- ax.grid(True, 'minor', 'y', linewidth=1)
- ax.grid(True, 'major', color='k', linewidth=1)
- ax.set_ylim(lim)
-
- assert_almost_equal(ax.yaxis.get_ticklocs(minor=True), ref)
-
- @pytest.mark.parametrize('use_rcparam', [False, True])
- @pytest.mark.parametrize(
- 'lim, ref', [
- ((0, 1.39),
- [0.05, 0.1, 0.15, 0.25, 0.3, 0.35, 0.45, 0.5, 0.55, 0.65, 0.7,
- 0.75, 0.85, 0.9, 0.95, 1.05, 1.1, 1.15, 1.25, 1.3, 1.35]),
- ((0, 0.139),
- [0.005, 0.01, 0.015, 0.025, 0.03, 0.035, 0.045, 0.05, 0.055,
- 0.065, 0.07, 0.075, 0.085, 0.09, 0.095, 0.105, 0.11, 0.115,
- 0.125, 0.13, 0.135]),
- ])
- def test_number_of_minor_ticks_auto(self, lim, ref, use_rcparam):
- if use_rcparam:
- context = {'xtick.minor.ndivs': 'auto', 'ytick.minor.ndivs': 'auto'}
- kwargs = {}
- else:
- context = {}
- kwargs = {'n': 'auto'}
-
- with mpl.rc_context(context):
- fig, ax = plt.subplots()
- ax.set_xlim(*lim)
- ax.set_ylim(*lim)
- ax.xaxis.set_minor_locator(mticker.AutoMinorLocator(**kwargs))
- ax.yaxis.set_minor_locator(mticker.AutoMinorLocator(**kwargs))
- assert_almost_equal(ax.xaxis.get_ticklocs(minor=True), ref)
- assert_almost_equal(ax.yaxis.get_ticklocs(minor=True), ref)
-
- @pytest.mark.parametrize('use_rcparam', [False, True])
- @pytest.mark.parametrize(
- 'n, lim, ref', [
- (2, (0, 4), [0.5, 1.5, 2.5, 3.5]),
- (4, (0, 2), [0.25, 0.5, 0.75, 1.25, 1.5, 1.75]),
- (10, (0, 1), [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]),
- ])
- def test_number_of_minor_ticks_int(self, n, lim, ref, use_rcparam):
- if use_rcparam:
- context = {'xtick.minor.ndivs': n, 'ytick.minor.ndivs': n}
- kwargs = {}
- else:
- context = {}
- kwargs = {'n': n}
-
- with mpl.rc_context(context):
- fig, ax = plt.subplots()
- ax.set_xlim(*lim)
- ax.set_ylim(*lim)
- ax.xaxis.set_major_locator(mticker.MultipleLocator(1))
- ax.xaxis.set_minor_locator(mticker.AutoMinorLocator(**kwargs))
- ax.yaxis.set_major_locator(mticker.MultipleLocator(1))
- ax.yaxis.set_minor_locator(mticker.AutoMinorLocator(**kwargs))
- assert_almost_equal(ax.xaxis.get_ticklocs(minor=True), ref)
- assert_almost_equal(ax.yaxis.get_ticklocs(minor=True), ref)
-
-
-class TestLogLocator:
- def test_basic(self):
- loc = mticker.LogLocator(numticks=5)
- with pytest.raises(ValueError):
- loc.tick_values(0, 1000)
-
- test_value = np.array([1.00000000e-05, 1.00000000e-03, 1.00000000e-01,
- 1.00000000e+01, 1.00000000e+03, 1.00000000e+05,
- 1.00000000e+07, 1.000000000e+09])
- assert_almost_equal(loc.tick_values(0.001, 1.1e5), test_value)
-
- loc = mticker.LogLocator(base=2)
- test_value = np.array([0.5, 1., 2., 4., 8., 16., 32., 64., 128., 256.])
- assert_almost_equal(loc.tick_values(1, 100), test_value)
-
- def test_polar_axes(self):
- """
- Polar axes have a different ticking logic.
- """
- fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
- ax.set_yscale('log')
- ax.set_ylim(1, 100)
- assert_array_equal(ax.get_yticks(), [10, 100, 1000])
-
- def test_switch_to_autolocator(self):
- loc = mticker.LogLocator(subs="all")
- assert_array_equal(loc.tick_values(0.45, 0.55),
- [0.44, 0.46, 0.48, 0.5, 0.52, 0.54, 0.56])
- # check that we *skip* 1.0, and 10, because this is a minor locator
- loc = mticker.LogLocator(subs=np.arange(2, 10))
- assert 1.0 not in loc.tick_values(0.9, 20.)
- assert 10.0 not in loc.tick_values(0.9, 20.)
-
- def test_set_params(self):
- """
-        Create log locator with default values base=10.0, subs=[1.0],
-        numdecs=4, numticks=15, and change it to something else.
-        See if the change was successful. Should not raise an exception.
- """
- loc = mticker.LogLocator()
- with pytest.warns(mpl.MatplotlibDeprecationWarning, match="numdecs"):
- loc.set_params(numticks=7, numdecs=8, subs=[2.0], base=4)
- assert loc.numticks == 7
- with pytest.warns(mpl.MatplotlibDeprecationWarning, match="numdecs"):
- assert loc.numdecs == 8
- assert loc._base == 4
- assert list(loc._subs) == [2.0]
-
- def test_tick_values_correct(self):
- ll = mticker.LogLocator(subs=(1, 2, 5))
- test_value = np.array([1.e-01, 2.e-01, 5.e-01, 1.e+00, 2.e+00, 5.e+00,
- 1.e+01, 2.e+01, 5.e+01, 1.e+02, 2.e+02, 5.e+02,
- 1.e+03, 2.e+03, 5.e+03, 1.e+04, 2.e+04, 5.e+04,
- 1.e+05, 2.e+05, 5.e+05, 1.e+06, 2.e+06, 5.e+06,
- 1.e+07, 2.e+07, 5.e+07, 1.e+08, 2.e+08, 5.e+08])
- assert_almost_equal(ll.tick_values(1, 1e7), test_value)
-
- def test_tick_values_not_empty(self):
- mpl.rcParams['_internal.classic_mode'] = False
- ll = mticker.LogLocator(subs=(1, 2, 5))
- test_value = np.array([1.e-01, 2.e-01, 5.e-01, 1.e+00, 2.e+00, 5.e+00,
- 1.e+01, 2.e+01, 5.e+01, 1.e+02, 2.e+02, 5.e+02,
- 1.e+03, 2.e+03, 5.e+03, 1.e+04, 2.e+04, 5.e+04,
- 1.e+05, 2.e+05, 5.e+05, 1.e+06, 2.e+06, 5.e+06,
- 1.e+07, 2.e+07, 5.e+07, 1.e+08, 2.e+08, 5.e+08,
- 1.e+09, 2.e+09, 5.e+09])
- assert_almost_equal(ll.tick_values(1, 1e8), test_value)
-
- def test_multiple_shared_axes(self):
- rng = np.random.default_rng(19680801)
- dummy_data = [rng.normal(size=100), [], []]
- fig, axes = plt.subplots(len(dummy_data), sharex=True, sharey=True)
-
- for ax, data in zip(axes.flatten(), dummy_data):
- ax.hist(data, bins=10)
- ax.set_yscale('log', nonpositive='clip')
-
- for ax in axes.flatten():
- assert all(ax.get_yticks() == axes[0].get_yticks())
- assert ax.get_ylim() == axes[0].get_ylim()
-
-
-class TestNullLocator:
- def test_set_params(self):
- """
- Create null locator, and attempt to call set_params() on it.
-        Should not raise an exception, and should emit a warning.
- """
- loc = mticker.NullLocator()
- with pytest.warns(UserWarning):
- loc.set_params()
-
-
-class _LogitHelper:
- @staticmethod
- def isclose(x, y):
- return (np.isclose(-np.log(1/x-1), -np.log(1/y-1))
- if 0 < x < 1 and 0 < y < 1 else False)
-
- @staticmethod
- def assert_almost_equal(x, y):
- ax = np.array(x)
- ay = np.array(y)
- assert np.all(ax > 0) and np.all(ax < 1)
- assert np.all(ay > 0) and np.all(ay < 1)
- lx = -np.log(1/ax-1)
- ly = -np.log(1/ay-1)
- assert_almost_equal(lx, ly)
-
-
-class TestLogitLocator:
- ref_basic_limits = [
- (5e-2, 1 - 5e-2),
- (5e-3, 1 - 5e-3),
- (5e-4, 1 - 5e-4),
- (5e-5, 1 - 5e-5),
- (5e-6, 1 - 5e-6),
- (5e-7, 1 - 5e-7),
- (5e-8, 1 - 5e-8),
- (5e-9, 1 - 5e-9),
- ]
-
- ref_basic_major_ticks = [
- 1 / (10 ** np.arange(1, 3)),
- 1 / (10 ** np.arange(1, 4)),
- 1 / (10 ** np.arange(1, 5)),
- 1 / (10 ** np.arange(1, 6)),
- 1 / (10 ** np.arange(1, 7)),
- 1 / (10 ** np.arange(1, 8)),
- 1 / (10 ** np.arange(1, 9)),
- 1 / (10 ** np.arange(1, 10)),
- ]
-
- ref_maxn_limits = [(0.4, 0.6), (5e-2, 2e-1), (1 - 2e-1, 1 - 5e-2)]
-
- @pytest.mark.parametrize(
- "lims, expected_low_ticks",
- zip(ref_basic_limits, ref_basic_major_ticks),
- )
- def test_basic_major(self, lims, expected_low_ticks):
- """
-        Create a logit locator with a huge number of major ticks, and test the ticks.
- """
- expected_ticks = sorted(
- [*expected_low_ticks, 0.5, *(1 - expected_low_ticks)]
- )
- loc = mticker.LogitLocator(nbins=100)
- _LogitHelper.assert_almost_equal(
- loc.tick_values(*lims),
- expected_ticks
- )
-
- @pytest.mark.parametrize("lims", ref_maxn_limits)
- def test_maxn_major(self, lims):
- """
- When the axis is zoomed, the locator must have the same behavior as
- MaxNLocator.
- """
- loc = mticker.LogitLocator(nbins=100)
- maxn_loc = mticker.MaxNLocator(nbins=100, steps=[1, 2, 5, 10])
- for nbins in (4, 8, 16):
- loc.set_params(nbins=nbins)
- maxn_loc.set_params(nbins=nbins)
- ticks = loc.tick_values(*lims)
- maxn_ticks = maxn_loc.tick_values(*lims)
- assert ticks.shape == maxn_ticks.shape
- assert (ticks == maxn_ticks).all()
-
- @pytest.mark.parametrize("lims", ref_basic_limits + ref_maxn_limits)
- def test_nbins_major(self, lims):
- """
-        Assert that the logit locator respects the nbins parameter.
- """
-
- basic_needed = int(-np.floor(np.log10(lims[0]))) * 2 + 1
- loc = mticker.LogitLocator(nbins=100)
- for nbins in range(basic_needed, 2, -1):
- loc.set_params(nbins=nbins)
- assert len(loc.tick_values(*lims)) <= nbins + 2
-
- @pytest.mark.parametrize(
- "lims, expected_low_ticks",
- zip(ref_basic_limits, ref_basic_major_ticks),
- )
- def test_minor(self, lims, expected_low_ticks):
- """
-        At large scale, test the presence of minor ticks, and assert that
-        there are no minor ticks when the major ticks are subsampled.
- """
-
- expected_ticks = sorted(
- [*expected_low_ticks, 0.5, *(1 - expected_low_ticks)]
- )
- basic_needed = len(expected_ticks)
- loc = mticker.LogitLocator(nbins=100)
- minor_loc = mticker.LogitLocator(nbins=100, minor=True)
- for nbins in range(basic_needed, 2, -1):
- loc.set_params(nbins=nbins)
- minor_loc.set_params(nbins=nbins)
- major_ticks = loc.tick_values(*lims)
- minor_ticks = minor_loc.tick_values(*lims)
- if len(major_ticks) >= len(expected_ticks):
- # no subsample, we must have a lot of minors ticks
- assert (len(major_ticks) - 1) * 5 < len(minor_ticks)
- else:
- # subsample
- _LogitHelper.assert_almost_equal(
- sorted([*major_ticks, *minor_ticks]), expected_ticks)
-
- def test_minor_attr(self):
- loc = mticker.LogitLocator(nbins=100)
- assert not loc.minor
- loc.minor = True
- assert loc.minor
- loc.set_params(minor=False)
- assert not loc.minor
-
- acceptable_vmin_vmax = [
- *(2.5 ** np.arange(-3, 0)),
- *(1 - 2.5 ** np.arange(-3, 0)),
- ]
-
- @pytest.mark.parametrize(
- "lims",
- [
- (a, b)
- for (a, b) in itertools.product(acceptable_vmin_vmax, repeat=2)
- if a != b
- ],
- )
- def test_nonsingular_ok(self, lims):
- """
-        Create a logit locator, and test the nonsingular method for acceptable
-        values.
- """
- loc = mticker.LogitLocator()
- lims2 = loc.nonsingular(*lims)
- assert sorted(lims) == sorted(lims2)
-
- @pytest.mark.parametrize("okval", acceptable_vmin_vmax)
- def test_nonsingular_nok(self, okval):
- """
-        Create a logit locator, and test the nonsingular method for
-        unacceptable values.
- """
- loc = mticker.LogitLocator()
- vmin, vmax = (-1, okval)
- vmin2, vmax2 = loc.nonsingular(vmin, vmax)
- assert vmax2 == vmax
- assert 0 < vmin2 < vmax2
- vmin, vmax = (okval, 2)
- vmin2, vmax2 = loc.nonsingular(vmin, vmax)
- assert vmin2 == vmin
- assert vmin2 < vmax2 < 1
-
-
-class TestFixedLocator:
- def test_set_params(self):
- """
- Create fixed locator with 5 nbins, and change it to something else.
- See if change was successful.
-        Should not raise an exception.
- """
- fixed = mticker.FixedLocator(range(0, 24), nbins=5)
- fixed.set_params(nbins=7)
- assert fixed.nbins == 7
-
-
-class TestIndexLocator:
- def test_set_params(self):
- """
-        Create index locator with base 3 and offset 4, and change it to
-        something else. See if the change was successful.
-        Should not raise an exception.
- """
- index = mticker.IndexLocator(base=3, offset=4)
- index.set_params(base=7, offset=7)
- assert index._base == 7
- assert index.offset == 7
-
-
-class TestSymmetricalLogLocator:
- def test_set_params(self):
- """
-        Create symmetrical log locator with default subs=[1.0] and
-        numticks=15, and change it to something else.
-        See if the change was successful.
-        Should not raise an exception.
- """
- sym = mticker.SymmetricalLogLocator(base=10, linthresh=1)
- sym.set_params(subs=[2.0], numticks=8)
- assert sym._subs == [2.0]
- assert sym.numticks == 8
-
- @pytest.mark.parametrize(
- 'vmin, vmax, expected',
- [
- (0, 1, [0, 1]),
- (-1, 1, [-1, 0, 1]),
- ],
- )
- def test_values(self, vmin, vmax, expected):
- # https://github.com/matplotlib/matplotlib/issues/25945
- sym = mticker.SymmetricalLogLocator(base=10, linthresh=1)
- ticks = sym.tick_values(vmin=vmin, vmax=vmax)
- assert_array_equal(ticks, expected)
-
- def test_subs(self):
- sym = mticker.SymmetricalLogLocator(base=10, linthresh=1, subs=[2.0, 4.0])
- sym.create_dummy_axis()
- sym.axis.set_view_interval(-10, 10)
- assert (sym() == [-20., -40., -2., -4., 0., 2., 4., 20., 40.]).all()
-
- def test_extending(self):
- sym = mticker.SymmetricalLogLocator(base=10, linthresh=1)
- sym.create_dummy_axis()
- sym.axis.set_view_interval(8, 9)
- assert (sym() == [1.0]).all()
- sym.axis.set_view_interval(8, 12)
- assert (sym() == [1.0, 10.0]).all()
- assert sym.view_limits(10, 10) == (1, 100)
- assert sym.view_limits(-10, -10) == (-100, -1)
- assert sym.view_limits(0, 0) == (-0.001, 0.001)
-
-
-class TestAsinhLocator:
- def test_init(self):
- lctr = mticker.AsinhLocator(linear_width=2.718, numticks=19)
- assert lctr.linear_width == 2.718
- assert lctr.numticks == 19
- assert lctr.base == 10
-
- def test_set_params(self):
- lctr = mticker.AsinhLocator(linear_width=5,
- numticks=17, symthresh=0.125,
- base=4, subs=(2.5, 3.25))
- assert lctr.numticks == 17
- assert lctr.symthresh == 0.125
- assert lctr.base == 4
- assert lctr.subs == (2.5, 3.25)
-
- lctr.set_params(numticks=23)
- assert lctr.numticks == 23
- lctr.set_params(None)
- assert lctr.numticks == 23
-
- lctr.set_params(symthresh=0.5)
- assert lctr.symthresh == 0.5
- lctr.set_params(symthresh=None)
- assert lctr.symthresh == 0.5
-
- lctr.set_params(base=7)
- assert lctr.base == 7
- lctr.set_params(base=None)
- assert lctr.base == 7
-
- lctr.set_params(subs=(2, 4.125))
- assert lctr.subs == (2, 4.125)
- lctr.set_params(subs=None)
- assert lctr.subs == (2, 4.125)
- lctr.set_params(subs=[])
- assert lctr.subs is None
-
- def test_linear_values(self):
- lctr = mticker.AsinhLocator(linear_width=100, numticks=11, base=0)
-
- assert_almost_equal(lctr.tick_values(-1, 1),
- np.arange(-1, 1.01, 0.2))
- assert_almost_equal(lctr.tick_values(-0.1, 0.1),
- np.arange(-0.1, 0.101, 0.02))
- assert_almost_equal(lctr.tick_values(-0.01, 0.01),
- np.arange(-0.01, 0.0101, 0.002))
-
- def test_wide_values(self):
- lctr = mticker.AsinhLocator(linear_width=0.1, numticks=11, base=0)
-
- assert_almost_equal(lctr.tick_values(-100, 100),
- [-100, -20, -5, -1, -0.2,
- 0, 0.2, 1, 5, 20, 100])
- assert_almost_equal(lctr.tick_values(-1000, 1000),
- [-1000, -100, -20, -3, -0.4,
- 0, 0.4, 3, 20, 100, 1000])
-
- def test_near_zero(self):
- """Check that manually injected zero will supersede nearby tick"""
- lctr = mticker.AsinhLocator(linear_width=100, numticks=3, base=0)
-
- assert_almost_equal(lctr.tick_values(-1.1, 0.9), [-1.0, 0.0, 0.9])
-
- def test_fallback(self):
- lctr = mticker.AsinhLocator(1.0, numticks=11)
-
- assert_almost_equal(lctr.tick_values(101, 102),
- np.arange(101, 102.01, 0.1))
-
- def test_symmetrizing(self):
- lctr = mticker.AsinhLocator(linear_width=1, numticks=3,
- symthresh=0.25, base=0)
- lctr.create_dummy_axis()
-
- lctr.axis.set_view_interval(-1, 2)
- assert_almost_equal(lctr(), [-1, 0, 2])
-
- lctr.axis.set_view_interval(-1, 0.9)
- assert_almost_equal(lctr(), [-1, 0, 1])
-
- lctr.axis.set_view_interval(-0.85, 1.05)
- assert_almost_equal(lctr(), [-1, 0, 1])
-
- lctr.axis.set_view_interval(1, 1.1)
- assert_almost_equal(lctr(), [1, 1.05, 1.1])
-
- def test_base_rounding(self):
- lctr10 = mticker.AsinhLocator(linear_width=1, numticks=8,
- base=10, subs=(1, 3, 5))
- assert_almost_equal(lctr10.tick_values(-110, 110),
- [-500, -300, -100, -50, -30, -10, -5, -3, -1,
- -0.5, -0.3, -0.1, 0, 0.1, 0.3, 0.5,
- 1, 3, 5, 10, 30, 50, 100, 300, 500])
-
- lctr5 = mticker.AsinhLocator(linear_width=1, numticks=20, base=5)
- assert_almost_equal(lctr5.tick_values(-1050, 1050),
- [-625, -125, -25, -5, -1, -0.2, 0,
- 0.2, 1, 5, 25, 125, 625])
-
-
-class TestScalarFormatter:
- offset_data = [
- (123, 189, 0),
- (-189, -123, 0),
- (12341, 12349, 12340),
- (-12349, -12341, -12340),
- (99999.5, 100010.5, 100000),
- (-100010.5, -99999.5, -100000),
- (99990.5, 100000.5, 100000),
- (-100000.5, -99990.5, -100000),
- (1233999, 1234001, 1234000),
- (-1234001, -1233999, -1234000),
- (1, 1, 1),
- (123, 123, 0),
- # Test cases courtesy of @WeatherGod
- (.4538, .4578, .45),
- (3789.12, 3783.1, 3780),
- (45124.3, 45831.75, 45000),
- (0.000721, 0.0007243, 0.00072),
- (12592.82, 12591.43, 12590),
- (9., 12., 0),
- (900., 1200., 0),
- (1900., 1200., 0),
- (0.99, 1.01, 1),
- (9.99, 10.01, 10),
- (99.99, 100.01, 100),
- (5.99, 6.01, 6),
- (15.99, 16.01, 16),
- (-0.452, 0.492, 0),
- (-0.492, 0.492, 0),
- (12331.4, 12350.5, 12300),
- (-12335.3, 12335.3, 0),
- ]
-
- use_offset_data = [True, False]
-
- useMathText_data = [True, False]
-
- # (sci_type, scilimits, lim, orderOfMag, fewticks)
- scilimits_data = [
- (False, (0, 0), (10.0, 20.0), 0, False),
- (True, (-2, 2), (-10, 20), 0, False),
- (True, (-2, 2), (-20, 10), 0, False),
- (True, (-2, 2), (-110, 120), 2, False),
- (True, (-2, 2), (-120, 110), 2, False),
- (True, (-2, 2), (-.001, 0.002), -3, False),
- (True, (-7, 7), (0.18e10, 0.83e10), 9, True),
- (True, (0, 0), (-1e5, 1e5), 5, False),
- (True, (6, 6), (-1e5, 1e5), 6, False),
- ]
-
- cursor_data = [
- [0., "0.000"],
- [0.0123, "0.012"],
- [0.123, "0.123"],
- [1.23, "1.230"],
- [12.3, "12.300"],
- ]
-
- format_data = [
- (.1, "1e-1"),
- (.11, "1.1e-1"),
- (1e8, "1e8"),
- (1.1e8, "1.1e8"),
- ]
-
- @pytest.mark.parametrize('unicode_minus, result',
- [(True, "\N{MINUS SIGN}1"), (False, "-1")])
- def test_unicode_minus(self, unicode_minus, result):
- mpl.rcParams['axes.unicode_minus'] = unicode_minus
- assert (
- plt.gca().xaxis.get_major_formatter().format_data_short(-1).strip()
- == result)
-
- @pytest.mark.parametrize('left, right, offset', offset_data)
- def test_offset_value(self, left, right, offset):
- fig, ax = plt.subplots()
- formatter = ax.xaxis.get_major_formatter()
-
- with (pytest.warns(UserWarning, match='Attempting to set identical')
- if left == right else nullcontext()):
- ax.set_xlim(left, right)
- ax.xaxis._update_ticks()
- assert formatter.offset == offset
-
- with (pytest.warns(UserWarning, match='Attempting to set identical')
- if left == right else nullcontext()):
- ax.set_xlim(right, left)
- ax.xaxis._update_ticks()
- assert formatter.offset == offset
-
- @pytest.mark.parametrize('use_offset', use_offset_data)
- def test_use_offset(self, use_offset):
- with mpl.rc_context({'axes.formatter.useoffset': use_offset}):
- tmp_form = mticker.ScalarFormatter()
- assert use_offset == tmp_form.get_useOffset()
- assert tmp_form.offset == 0
-
- @pytest.mark.parametrize('use_math_text', useMathText_data)
- def test_useMathText(self, use_math_text):
- with mpl.rc_context({'axes.formatter.use_mathtext': use_math_text}):
- tmp_form = mticker.ScalarFormatter()
- assert use_math_text == tmp_form.get_useMathText()
-
- def test_set_use_offset_float(self):
- tmp_form = mticker.ScalarFormatter()
- tmp_form.set_useOffset(0.5)
- assert not tmp_form.get_useOffset()
- assert tmp_form.offset == 0.5
-
- def test_use_locale(self):
- conv = locale.localeconv()
- sep = conv['thousands_sep']
- if not sep or conv['grouping'][-1:] in ([], [locale.CHAR_MAX]):
- pytest.skip('Locale does not apply grouping') # pragma: no cover
-
- with mpl.rc_context({'axes.formatter.use_locale': True}):
- tmp_form = mticker.ScalarFormatter()
- assert tmp_form.get_useLocale()
-
- tmp_form.create_dummy_axis()
- tmp_form.axis.set_data_interval(0, 10)
- tmp_form.set_locs([1, 2, 3])
- assert sep in tmp_form(1e9)
-
- @pytest.mark.parametrize(
- 'sci_type, scilimits, lim, orderOfMag, fewticks', scilimits_data)
- def test_scilimits(self, sci_type, scilimits, lim, orderOfMag, fewticks):
- tmp_form = mticker.ScalarFormatter()
- tmp_form.set_scientific(sci_type)
- tmp_form.set_powerlimits(scilimits)
- fig, ax = plt.subplots()
- ax.yaxis.set_major_formatter(tmp_form)
- ax.set_ylim(*lim)
- if fewticks:
- ax.yaxis.set_major_locator(mticker.MaxNLocator(4))
-
- tmp_form.set_locs(ax.yaxis.get_majorticklocs())
- assert orderOfMag == tmp_form.orderOfMagnitude
-
- @pytest.mark.parametrize('value, expected', format_data)
- def test_format_data(self, value, expected):
- mpl.rcParams['axes.unicode_minus'] = False
- sf = mticker.ScalarFormatter()
- assert sf.format_data(value) == expected
-
- @pytest.mark.parametrize('data, expected', cursor_data)
- def test_cursor_precision(self, data, expected):
- fig, ax = plt.subplots()
- ax.set_xlim(-1, 1) # Pointing precision of 0.001.
- fmt = ax.xaxis.get_major_formatter().format_data_short
- assert fmt(data) == expected
-
- @pytest.mark.parametrize('data, expected', cursor_data)
- def test_cursor_dummy_axis(self, data, expected):
- # Issue #17624
- sf = mticker.ScalarFormatter()
- sf.create_dummy_axis()
- sf.axis.set_view_interval(0, 10)
- fmt = sf.format_data_short
- assert fmt(data) == expected
- assert sf.axis.get_tick_space() == 9
- assert sf.axis.get_minpos() == 0
-
- def test_mathtext_ticks(self):
- mpl.rcParams.update({
- 'font.family': 'serif',
- 'font.serif': 'cmr10',
- 'axes.formatter.use_mathtext': False
- })
-
- with pytest.warns(UserWarning, match='cmr10 font should ideally'):
- fig, ax = plt.subplots()
- ax.set_xticks([-1, 0, 1])
- fig.canvas.draw()
-
- def test_cmr10_substitutions(self, caplog):
- mpl.rcParams.update({
- 'font.family': 'cmr10',
- 'mathtext.fontset': 'cm',
- 'axes.formatter.use_mathtext': True,
- })
-
- # Test that it does not log a warning about missing glyphs.
- with caplog.at_level(logging.WARNING, logger='matplotlib.mathtext'):
- fig, ax = plt.subplots()
- ax.plot([-0.03, 0.05], [40, 0.05])
- ax.set_yscale('log')
- yticks = [0.02, 0.3, 4, 50]
- formatter = mticker.LogFormatterSciNotation()
- ax.set_yticks(yticks, map(formatter, yticks))
- fig.canvas.draw()
- assert not caplog.text
-
- def test_empty_locs(self):
- sf = mticker.ScalarFormatter()
- sf.set_locs([])
- assert sf(0.5) == ''
-
-
-class TestLogFormatterExponent:
- param_data = [
- (True, 4, np.arange(-3, 4.0), np.arange(-3, 4.0),
- ['-3', '-2', '-1', '0', '1', '2', '3']),
- # With labelOnlyBase=False, non-integer powers should be nicely
- # formatted.
- (False, 10, np.array([0.1, 0.00001, np.pi, 0.2, -0.2, -0.00001]),
- range(6), ['0.1', '1e-05', '3.14', '0.2', '-0.2', '-1e-05']),
- (False, 50, np.array([3, 5, 12, 42], dtype=float), range(6),
- ['3', '5', '12', '42']),
- ]
-
- base_data = [2.0, 5.0, 10.0, np.pi, np.e]
-
- @pytest.mark.parametrize(
- 'labelOnlyBase, exponent, locs, positions, expected', param_data)
- @pytest.mark.parametrize('base', base_data)
- def test_basic(self, labelOnlyBase, base, exponent, locs, positions,
- expected):
- formatter = mticker.LogFormatterExponent(base=base,
- labelOnlyBase=labelOnlyBase)
- formatter.create_dummy_axis()
- formatter.axis.set_view_interval(1, base**exponent)
- vals = base**locs
- labels = [formatter(x, pos) for (x, pos) in zip(vals, positions)]
- expected = [label.replace('-', '\N{Minus Sign}') for label in expected]
- assert labels == expected
-
- def test_blank(self):
- # Should be a blank string for non-integer powers if labelOnlyBase=True
- formatter = mticker.LogFormatterExponent(base=10, labelOnlyBase=True)
- formatter.create_dummy_axis()
- formatter.axis.set_view_interval(1, 10)
- assert formatter(10**0.1) == ''
-
-
-class TestLogFormatterMathtext:
- fmt = mticker.LogFormatterMathtext()
- test_data = [
- (0, 1, '$\\mathdefault{10^{0}}$'),
- (0, 1e-2, '$\\mathdefault{10^{-2}}$'),
- (0, 1e2, '$\\mathdefault{10^{2}}$'),
- (3, 1, '$\\mathdefault{1}$'),
- (3, 1e-2, '$\\mathdefault{0.01}$'),
- (3, 1e2, '$\\mathdefault{100}$'),
- (3, 1e-3, '$\\mathdefault{10^{-3}}$'),
- (3, 1e3, '$\\mathdefault{10^{3}}$'),
- ]
-
- @pytest.mark.parametrize('min_exponent, value, expected', test_data)
- def test_min_exponent(self, min_exponent, value, expected):
- with mpl.rc_context({'axes.formatter.min_exponent': min_exponent}):
- assert self.fmt(value) == expected
-
-
-class TestLogFormatterSciNotation:
- test_data = [
- (2, 0.03125, '$\\mathdefault{2^{-5}}$'),
- (2, 1, '$\\mathdefault{2^{0}}$'),
- (2, 32, '$\\mathdefault{2^{5}}$'),
- (2, 0.0375, '$\\mathdefault{1.2\\times2^{-5}}$'),
- (2, 1.2, '$\\mathdefault{1.2\\times2^{0}}$'),
- (2, 38.4, '$\\mathdefault{1.2\\times2^{5}}$'),
- (10, -1, '$\\mathdefault{-10^{0}}$'),
- (10, 1e-05, '$\\mathdefault{10^{-5}}$'),
- (10, 1, '$\\mathdefault{10^{0}}$'),
- (10, 100000, '$\\mathdefault{10^{5}}$'),
- (10, 2e-05, '$\\mathdefault{2\\times10^{-5}}$'),
- (10, 2, '$\\mathdefault{2\\times10^{0}}$'),
- (10, 200000, '$\\mathdefault{2\\times10^{5}}$'),
- (10, 5e-05, '$\\mathdefault{5\\times10^{-5}}$'),
- (10, 5, '$\\mathdefault{5\\times10^{0}}$'),
- (10, 500000, '$\\mathdefault{5\\times10^{5}}$'),
- ]
-
- @mpl.style.context('default')
- @pytest.mark.parametrize('base, value, expected', test_data)
- def test_basic(self, base, value, expected):
- formatter = mticker.LogFormatterSciNotation(base=base)
- with mpl.rc_context({'text.usetex': False}):
- assert formatter(value) == expected
-
-
-class TestLogFormatter:
- pprint_data = [
- (3.141592654e-05, 0.001, '3.142e-5'),
- (0.0003141592654, 0.001, '3.142e-4'),
- (0.003141592654, 0.001, '3.142e-3'),
- (0.03141592654, 0.001, '3.142e-2'),
- (0.3141592654, 0.001, '3.142e-1'),
- (3.141592654, 0.001, '3.142'),
- (31.41592654, 0.001, '3.142e1'),
- (314.1592654, 0.001, '3.142e2'),
- (3141.592654, 0.001, '3.142e3'),
- (31415.92654, 0.001, '3.142e4'),
- (314159.2654, 0.001, '3.142e5'),
- (1e-05, 0.001, '1e-5'),
- (0.0001, 0.001, '1e-4'),
- (0.001, 0.001, '1e-3'),
- (0.01, 0.001, '1e-2'),
- (0.1, 0.001, '1e-1'),
- (1, 0.001, '1'),
- (10, 0.001, '10'),
- (100, 0.001, '100'),
- (1000, 0.001, '1000'),
- (10000, 0.001, '1e4'),
- (100000, 0.001, '1e5'),
- (3.141592654e-05, 0.015, '0'),
- (0.0003141592654, 0.015, '0'),
- (0.003141592654, 0.015, '0.003'),
- (0.03141592654, 0.015, '0.031'),
- (0.3141592654, 0.015, '0.314'),
- (3.141592654, 0.015, '3.142'),
- (31.41592654, 0.015, '31.416'),
- (314.1592654, 0.015, '314.159'),
- (3141.592654, 0.015, '3141.593'),
- (31415.92654, 0.015, '31415.927'),
- (314159.2654, 0.015, '314159.265'),
- (1e-05, 0.015, '0'),
- (0.0001, 0.015, '0'),
- (0.001, 0.015, '0.001'),
- (0.01, 0.015, '0.01'),
- (0.1, 0.015, '0.1'),
- (1, 0.015, '1'),
- (10, 0.015, '10'),
- (100, 0.015, '100'),
- (1000, 0.015, '1000'),
- (10000, 0.015, '10000'),
- (100000, 0.015, '100000'),
- (3.141592654e-05, 0.5, '0'),
- (0.0003141592654, 0.5, '0'),
- (0.003141592654, 0.5, '0.003'),
- (0.03141592654, 0.5, '0.031'),
- (0.3141592654, 0.5, '0.314'),
- (3.141592654, 0.5, '3.142'),
- (31.41592654, 0.5, '31.416'),
- (314.1592654, 0.5, '314.159'),
- (3141.592654, 0.5, '3141.593'),
- (31415.92654, 0.5, '31415.927'),
- (314159.2654, 0.5, '314159.265'),
- (1e-05, 0.5, '0'),
- (0.0001, 0.5, '0'),
- (0.001, 0.5, '0.001'),
- (0.01, 0.5, '0.01'),
- (0.1, 0.5, '0.1'),
- (1, 0.5, '1'),
- (10, 0.5, '10'),
- (100, 0.5, '100'),
- (1000, 0.5, '1000'),
- (10000, 0.5, '10000'),
- (100000, 0.5, '100000'),
- (3.141592654e-05, 5, '0'),
- (0.0003141592654, 5, '0'),
- (0.003141592654, 5, '0'),
- (0.03141592654, 5, '0.03'),
- (0.3141592654, 5, '0.31'),
- (3.141592654, 5, '3.14'),
- (31.41592654, 5, '31.42'),
- (314.1592654, 5, '314.16'),
- (3141.592654, 5, '3141.59'),
- (31415.92654, 5, '31415.93'),
- (314159.2654, 5, '314159.27'),
- (1e-05, 5, '0'),
- (0.0001, 5, '0'),
- (0.001, 5, '0'),
- (0.01, 5, '0.01'),
- (0.1, 5, '0.1'),
- (1, 5, '1'),
- (10, 5, '10'),
- (100, 5, '100'),
- (1000, 5, '1000'),
- (10000, 5, '10000'),
- (100000, 5, '100000'),
- (3.141592654e-05, 100, '0'),
- (0.0003141592654, 100, '0'),
- (0.003141592654, 100, '0'),
- (0.03141592654, 100, '0'),
- (0.3141592654, 100, '0.3'),
- (3.141592654, 100, '3.1'),
- (31.41592654, 100, '31.4'),
- (314.1592654, 100, '314.2'),
- (3141.592654, 100, '3141.6'),
- (31415.92654, 100, '31415.9'),
- (314159.2654, 100, '314159.3'),
- (1e-05, 100, '0'),
- (0.0001, 100, '0'),
- (0.001, 100, '0'),
- (0.01, 100, '0'),
- (0.1, 100, '0.1'),
- (1, 100, '1'),
- (10, 100, '10'),
- (100, 100, '100'),
- (1000, 100, '1000'),
- (10000, 100, '10000'),
- (100000, 100, '100000'),
- (3.141592654e-05, 1000000.0, '3.1e-5'),
- (0.0003141592654, 1000000.0, '3.1e-4'),
- (0.003141592654, 1000000.0, '3.1e-3'),
- (0.03141592654, 1000000.0, '3.1e-2'),
- (0.3141592654, 1000000.0, '3.1e-1'),
- (3.141592654, 1000000.0, '3.1'),
- (31.41592654, 1000000.0, '3.1e1'),
- (314.1592654, 1000000.0, '3.1e2'),
- (3141.592654, 1000000.0, '3.1e3'),
- (31415.92654, 1000000.0, '3.1e4'),
- (314159.2654, 1000000.0, '3.1e5'),
- (1e-05, 1000000.0, '1e-5'),
- (0.0001, 1000000.0, '1e-4'),
- (0.001, 1000000.0, '1e-3'),
- (0.01, 1000000.0, '1e-2'),
- (0.1, 1000000.0, '1e-1'),
- (1, 1000000.0, '1'),
- (10, 1000000.0, '10'),
- (100, 1000000.0, '100'),
- (1000, 1000000.0, '1000'),
- (10000, 1000000.0, '1e4'),
- (100000, 1000000.0, '1e5'),
- ]
-
- @pytest.mark.parametrize('value, domain, expected', pprint_data)
- def test_pprint(self, value, domain, expected):
- fmt = mticker.LogFormatter()
- label = fmt._pprint_val(value, domain)
- assert label == expected
-
- @pytest.mark.parametrize('value, long, short', [
- (0.0, "0", "0 "),
- (0, "0", "0 "),
- (-1.0, "-10^0", "-1 "),
- (2e-10, "2x10^-10", "2e-10 "),
- (1e10, "10^10", "1e+10 "),
- ])
- def test_format_data(self, value, long, short):
- fig, ax = plt.subplots()
- ax.set_xscale('log')
- fmt = ax.xaxis.get_major_formatter()
- assert fmt.format_data(value) == long
- assert fmt.format_data_short(value) == short
-
- def _sub_labels(self, axis, subs=()):
- """Test whether locator marks subs to be labeled."""
- fmt = axis.get_minor_formatter()
- minor_tlocs = axis.get_minorticklocs()
- fmt.set_locs(minor_tlocs)
- coefs = minor_tlocs / 10**(np.floor(np.log10(minor_tlocs)))
- label_expected = [round(c) in subs for c in coefs]
- label_test = [fmt(x) != '' for x in minor_tlocs]
- assert label_test == label_expected
-
- @mpl.style.context('default')
- def test_sublabel(self):
- # test label locator
- fig, ax = plt.subplots()
- ax.set_xscale('log')
- ax.xaxis.set_major_locator(mticker.LogLocator(base=10, subs=[]))
- ax.xaxis.set_minor_locator(mticker.LogLocator(base=10,
- subs=np.arange(2, 10)))
- ax.xaxis.set_major_formatter(mticker.LogFormatter(labelOnlyBase=True))
- ax.xaxis.set_minor_formatter(mticker.LogFormatter(labelOnlyBase=False))
- # axis range above 3 decades, only bases are labeled
- ax.set_xlim(1, 1e4)
- fmt = ax.xaxis.get_major_formatter()
- fmt.set_locs(ax.xaxis.get_majorticklocs())
- show_major_labels = [fmt(x) != ''
- for x in ax.xaxis.get_majorticklocs()]
- assert np.all(show_major_labels)
- self._sub_labels(ax.xaxis, subs=[])
-
- # For the next two, if the numdec threshold in LogFormatter.set_locs
- # were 3, then the label sub would be 3 for 2-3 decades and (2, 5)
- # for 1-2 decades. With a threshold of 1, subs are not labeled.
- # axis range at 2 to 3 decades
- ax.set_xlim(1, 800)
- self._sub_labels(ax.xaxis, subs=[])
-
- # axis range at 1 to 2 decades
- ax.set_xlim(1, 80)
- self._sub_labels(ax.xaxis, subs=[])
-
- # axis range at 0.4 to 1 decades, label subs 2, 3, 4, 6
- ax.set_xlim(1, 8)
- self._sub_labels(ax.xaxis, subs=[2, 3, 4, 6])
-
- # axis range at 0 to 0.4 decades, label all
- ax.set_xlim(0.5, 0.9)
- self._sub_labels(ax.xaxis, subs=np.arange(2, 10, dtype=int))
-
- @pytest.mark.parametrize('val', [1, 10, 100, 1000])
- def test_LogFormatter_call(self, val):
- # test _num_to_string method used in __call__
- temp_lf = mticker.LogFormatter()
- temp_lf.create_dummy_axis()
- temp_lf.axis.set_view_interval(1, 10)
- assert temp_lf(val) == str(val)
-
- @pytest.mark.parametrize('val', [1e-323, 2e-323, 10e-323, 11e-323])
- def test_LogFormatter_call_tiny(self, val):
- # test coeff computation in __call__
- temp_lf = mticker.LogFormatter()
- temp_lf.create_dummy_axis()
- temp_lf.axis.set_view_interval(1, 10)
- temp_lf(val)
-
-
-class TestLogitFormatter:
- @staticmethod
- def logit_deformatter(string):
- r"""
-        Parser to convert a string such as r'$\mathdefault{1.41\cdot10^{-4}}$'
-        into the float 1.41e-4, or '0.5' and r'$\mathdefault{\frac{1}{2}}$'
-        into the float 0.5.
- """
- match = re.match(
- r"[^\d]*"
- r"(?P1-)?"
- r"(?P\d*\.?\d*)?"
- r"(?:\\cdot)?"
- r"(?:10\^\{(?P-?\d*)})?"
- r"[^\d]*$",
- string,
- )
- if match:
- comp = match["comp"] is not None
- mantissa = float(match["mant"]) if match["mant"] else 1
- expo = int(match["expo"]) if match["expo"] is not None else 0
- value = mantissa * 10 ** expo
- if match["mant"] or match["expo"] is not None:
- if comp:
- return 1 - value
- return value
- match = re.match(
- r"[^\d]*\\frac\{(?P\d+)\}\{(?P\d+)\}[^\d]*$", string
- )
- if match:
- num, deno = float(match["num"]), float(match["deno"])
- return num / deno
- raise ValueError("Not formatted by LogitFormatter")
-
- @pytest.mark.parametrize(
- "fx, x",
- [
- (r"STUFF0.41OTHERSTUFF", 0.41),
- (r"STUFF1.41\cdot10^{-2}OTHERSTUFF", 1.41e-2),
- (r"STUFF1-0.41OTHERSTUFF", 1 - 0.41),
- (r"STUFF1-1.41\cdot10^{-2}OTHERSTUFF", 1 - 1.41e-2),
- (r"STUFF", None),
- (r"STUFF12.4e-3OTHERSTUFF", None),
- ],
- )
- def test_logit_deformater(self, fx, x):
- if x is None:
- with pytest.raises(ValueError):
- TestLogitFormatter.logit_deformatter(fx)
- else:
- y = TestLogitFormatter.logit_deformatter(fx)
- assert _LogitHelper.isclose(x, y)
-
- decade_test = sorted(
- [10 ** (-i) for i in range(1, 10)]
- + [1 - 10 ** (-i) for i in range(1, 10)]
- + [1 / 2]
- )
-
- @pytest.mark.parametrize("x", decade_test)
- def test_basic(self, x):
- """
-        Test that the formatted value corresponds to the value for ideal
-        ticks in logit space.
- """
- formatter = mticker.LogitFormatter(use_overline=False)
- formatter.set_locs(self.decade_test)
- s = formatter(x)
- x2 = TestLogitFormatter.logit_deformatter(s)
- assert _LogitHelper.isclose(x, x2)
-
- @pytest.mark.parametrize("x", (-1, -0.5, -0.1, 1.1, 1.5, 2))
- def test_invalid(self, x):
- """
-        Test that invalid values are formatted as an empty string without
-        raising an exception.
- """
- formatter = mticker.LogitFormatter(use_overline=False)
- formatter.set_locs(self.decade_test)
- s = formatter(x)
- assert s == ""
-
- @pytest.mark.parametrize("x", 1 / (1 + np.exp(-np.linspace(-7, 7, 10))))
- def test_variablelength(self, x):
- """
- The format length should change depending on the neighbor labels.
- """
- formatter = mticker.LogitFormatter(use_overline=False)
- for N in (10, 20, 50, 100, 200, 1000, 2000, 5000, 10000):
- if x + 1 / N < 1:
- formatter.set_locs([x - 1 / N, x, x + 1 / N])
- sx = formatter(x)
- sx1 = formatter(x + 1 / N)
- d = (
- TestLogitFormatter.logit_deformatter(sx1)
- - TestLogitFormatter.logit_deformatter(sx)
- )
- assert 0 < d < 2 / N
-
- lims_minor_major = [
- (True, (5e-8, 1 - 5e-8), ((25, False), (75, False))),
- (True, (5e-5, 1 - 5e-5), ((25, False), (75, True))),
- (True, (5e-2, 1 - 5e-2), ((25, True), (75, True))),
- (False, (0.75, 0.76, 0.77), ((7, True), (25, True), (75, True))),
- ]
-
- @pytest.mark.parametrize("method, lims, cases", lims_minor_major)
- def test_minor_vs_major(self, method, lims, cases):
- """
- Test minor/major displays.
- """
-
- if method:
- min_loc = mticker.LogitLocator(minor=True)
- ticks = min_loc.tick_values(*lims)
- else:
- ticks = np.array(lims)
- min_form = mticker.LogitFormatter(minor=True)
- for threshold, has_minor in cases:
- min_form.set_minor_threshold(threshold)
- formatted = min_form.format_ticks(ticks)
- labelled = [f for f in formatted if len(f) > 0]
- if has_minor:
- assert len(labelled) > 0, (threshold, has_minor)
- else:
- assert len(labelled) == 0, (threshold, has_minor)
-
- def test_minor_number(self):
- """
- Test the parameter minor_number
- """
- min_loc = mticker.LogitLocator(minor=True)
- min_form = mticker.LogitFormatter(minor=True)
- ticks = min_loc.tick_values(5e-2, 1 - 5e-2)
- for minor_number in (2, 4, 8, 16):
- min_form.set_minor_number(minor_number)
- formatted = min_form.format_ticks(ticks)
- labelled = [f for f in formatted if len(f) > 0]
- assert len(labelled) == minor_number
-
- def test_use_overline(self):
- """
- Test the parameter use_overline
- """
- x = 1 - 1e-2
- fx1 = r"$\mathdefault{1-10^{-2}}$"
- fx2 = r"$\mathdefault{\overline{10^{-2}}}$"
- form = mticker.LogitFormatter(use_overline=False)
- assert form(x) == fx1
- form.use_overline(True)
- assert form(x) == fx2
- form.use_overline(False)
- assert form(x) == fx1
-
- def test_one_half(self):
- """
- Test the parameter one_half
- """
- form = mticker.LogitFormatter()
- assert r"\frac{1}{2}" in form(1/2)
- form.set_one_half("1/2")
- assert "1/2" in form(1/2)
- form.set_one_half("one half")
- assert "one half" in form(1/2)
-
- @pytest.mark.parametrize("N", (100, 253, 754))
- def test_format_data_short(self, N):
- locs = np.linspace(0, 1, N)[1:-1]
- form = mticker.LogitFormatter()
- for x in locs:
- fx = form.format_data_short(x)
- if fx.startswith("1-"):
- x2 = 1 - float(fx[2:])
- else:
- x2 = float(fx)
- assert abs(x - x2) < 1 / N
-
-
-class TestFormatStrFormatter:
- def test_basic(self):
- # test % style formatter
- tmp_form = mticker.FormatStrFormatter('%05d')
- assert '00002' == tmp_form(2)
-
-
-class TestStrMethodFormatter:
- test_data = [
- ('{x:05d}', (2,), '00002'),
- ('{x:03d}-{pos:02d}', (2, 1), '002-01'),
- ]
-
- @pytest.mark.parametrize('format, input, expected', test_data)
- def test_basic(self, format, input, expected):
- fmt = mticker.StrMethodFormatter(format)
- assert fmt(*input) == expected
-
-
-class TestEngFormatter:
- # (unicode_minus, input, expected) where ''expected'' corresponds to the
- # outputs respectively returned when (places=None, places=0, places=2)
- # unicode_minus is a boolean value for the rcParam['axes.unicode_minus']
- raw_format_data = [
- (False, -1234.56789, ('-1.23457 k', '-1 k', '-1.23 k')),
- (True, -1234.56789, ('\N{MINUS SIGN}1.23457 k', '\N{MINUS SIGN}1 k',
- '\N{MINUS SIGN}1.23 k')),
- (False, -1.23456789, ('-1.23457', '-1', '-1.23')),
- (True, -1.23456789, ('\N{MINUS SIGN}1.23457', '\N{MINUS SIGN}1',
- '\N{MINUS SIGN}1.23')),
- (False, -0.123456789, ('-123.457 m', '-123 m', '-123.46 m')),
- (True, -0.123456789, ('\N{MINUS SIGN}123.457 m', '\N{MINUS SIGN}123 m',
- '\N{MINUS SIGN}123.46 m')),
- (False, -0.00123456789, ('-1.23457 m', '-1 m', '-1.23 m')),
- (True, -0.00123456789, ('\N{MINUS SIGN}1.23457 m', '\N{MINUS SIGN}1 m',
- '\N{MINUS SIGN}1.23 m')),
- (True, -0.0, ('0', '0', '0.00')),
- (True, -0, ('0', '0', '0.00')),
- (True, 0, ('0', '0', '0.00')),
- (True, 1.23456789e-6, ('1.23457 µ', '1 µ', '1.23 µ')),
- (True, 0.123456789, ('123.457 m', '123 m', '123.46 m')),
- (True, 0.1, ('100 m', '100 m', '100.00 m')),
- (True, 1, ('1', '1', '1.00')),
- (True, 1.23456789, ('1.23457', '1', '1.23')),
- # places=0: corner-case rounding
- (True, 999.9, ('999.9', '1 k', '999.90')),
- # corner-case rounding for all
- (True, 999.9999, ('1 k', '1 k', '1.00 k')),
- # negative corner-case
- (False, -999.9999, ('-1 k', '-1 k', '-1.00 k')),
- (True, -999.9999, ('\N{MINUS SIGN}1 k', '\N{MINUS SIGN}1 k',
- '\N{MINUS SIGN}1.00 k')),
- (True, 1000, ('1 k', '1 k', '1.00 k')),
- (True, 1001, ('1.001 k', '1 k', '1.00 k')),
- (True, 100001, ('100.001 k', '100 k', '100.00 k')),
- (True, 987654.321, ('987.654 k', '988 k', '987.65 k')),
- # OoR value (> 1000 Q)
- (True, 1.23e33, ('1230 Q', '1230 Q', '1230.00 Q'))
- ]
-
- @pytest.mark.parametrize('unicode_minus, input, expected', raw_format_data)
- def test_params(self, unicode_minus, input, expected):
- """
- Test the formatting of EngFormatter for various values of the 'places'
- argument, in several cases:
-
- 0. without a unit symbol but with a (default) space separator;
- 1. with both a unit symbol and a (default) space separator;
- 2. with both a unit symbol and some non default separators;
- 3. without a unit symbol but with some non default separators.
-
- Note that cases 2. and 3. are looped over several separator strings.
- """
-
- plt.rcParams['axes.unicode_minus'] = unicode_minus
- UNIT = 's' # seconds
- DIGITS = '0123456789' # %timeit showed 10-20% faster search than set
-
- # Case 0: unit='' (default) and sep=' ' (default).
- # 'expected' already corresponds to this reference case.
- exp_outputs = expected
- formatters = (
- mticker.EngFormatter(), # places=None (default)
- mticker.EngFormatter(places=0),
- mticker.EngFormatter(places=2)
- )
- for _formatter, _exp_output in zip(formatters, exp_outputs):
- assert _formatter(input) == _exp_output
-
- # Case 1: unit=UNIT and sep=' ' (default).
- # Append a unit symbol to the reference case.
- # Beware of the values in [1, 1000), where there is no prefix!
- exp_outputs = (_s + " " + UNIT if _s[-1] in DIGITS # case w/o prefix
- else _s + UNIT for _s in expected)
- formatters = (
- mticker.EngFormatter(unit=UNIT), # places=None (default)
- mticker.EngFormatter(unit=UNIT, places=0),
- mticker.EngFormatter(unit=UNIT, places=2)
- )
- for _formatter, _exp_output in zip(formatters, exp_outputs):
- assert _formatter(input) == _exp_output
-
- # Test several non default separators: no separator, a narrow
- # no-break space (Unicode character) and an extravagant string.
- for _sep in ("", "\N{NARROW NO-BREAK SPACE}", "@_@"):
- # Case 2: unit=UNIT and sep=_sep.
- # Replace the default space separator from the reference case
- # with the tested one `_sep` and append a unit symbol to it.
- exp_outputs = (_s + _sep + UNIT if _s[-1] in DIGITS # no prefix
- else _s.replace(" ", _sep) + UNIT
- for _s in expected)
- formatters = (
- mticker.EngFormatter(unit=UNIT, sep=_sep), # places=None
- mticker.EngFormatter(unit=UNIT, places=0, sep=_sep),
- mticker.EngFormatter(unit=UNIT, places=2, sep=_sep)
- )
- for _formatter, _exp_output in zip(formatters, exp_outputs):
- assert _formatter(input) == _exp_output
-
- # Case 3: unit='' (default) and sep=_sep.
- # Replace the default space separator from the reference case
- # with the tested one `_sep`. Reference case is already unitless.
- exp_outputs = (_s.replace(" ", _sep) for _s in expected)
- formatters = (
- mticker.EngFormatter(sep=_sep), # places=None (default)
- mticker.EngFormatter(places=0, sep=_sep),
- mticker.EngFormatter(places=2, sep=_sep)
- )
- for _formatter, _exp_output in zip(formatters, exp_outputs):
- assert _formatter(input) == _exp_output
-
-
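The docstring of `test_params` above walks through the `unit`, `sep` and `places` combinations the test sweeps. As a standalone illustration of the same arguments, here is a minimal sketch (expected outputs taken from the table above; only matplotlib is assumed):

```python
import matplotlib.ticker as mticker

# places controls the number of decimals; unit is appended after the SI prefix.
fmt = mticker.EngFormatter(unit="s", places=2)
print(fmt(0.00123456789))  # '1.23 ms'
print(fmt(1234.56789))     # '1.23 ks'

# sep replaces the default space between the number and the prefix/unit.
fmt_narrow = mticker.EngFormatter(unit="s", places=0, sep="\N{NARROW NO-BREAK SPACE}")
print(fmt_narrow(1234.56789))  # '1' + narrow no-break space + 'ks'
```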
-def test_engformatter_usetex_useMathText():
- fig, ax = plt.subplots()
- ax.plot([0, 500, 1000], [0, 500, 1000])
- ax.set_xticks([0, 500, 1000])
- for formatter in (mticker.EngFormatter(usetex=True),
- mticker.EngFormatter(useMathText=True)):
- ax.xaxis.set_major_formatter(formatter)
- fig.canvas.draw()
- x_tick_label_text = [labl.get_text() for labl in ax.get_xticklabels()]
- # Checking if the dollar `$` signs have been inserted around numbers
- # in tick labels.
- assert x_tick_label_text == ['$0$', '$500$', '$1$ k']
-
-
-class TestPercentFormatter:
- percent_data = [
- # Check explicitly set decimals over different intervals and values
- (100, 0, '%', 120, 100, '120%'),
- (100, 0, '%', 100, 90, '100%'),
- (100, 0, '%', 90, 50, '90%'),
- (100, 0, '%', -1.7, 40, '-2%'),
- (100, 1, '%', 90.0, 100, '90.0%'),
- (100, 1, '%', 80.1, 90, '80.1%'),
- (100, 1, '%', 70.23, 50, '70.2%'),
- # 60.554 instead of 60.55: see https://bugs.python.org/issue5118
- (100, 1, '%', -60.554, 40, '-60.6%'),
- # Check auto decimals over different intervals and values
- (100, None, '%', 95, 1, '95.00%'),
- (1.0, None, '%', 3, 6, '300%'),
- (17.0, None, '%', 1, 8.5, '6%'),
- (17.0, None, '%', 1, 8.4, '5.9%'),
- (5, None, '%', -100, 0.000001, '-2000.00000%'),
- # Check percent symbol
- (1.0, 2, None, 1.2, 100, '120.00'),
- (75, 3, '', 50, 100, '66.667'),
- (42, None, '^^Foobar$$', 21, 12, '50.0^^Foobar$$'),
- ]
-
- percent_ids = [
- # Check explicitly set decimals over different intervals and values
- 'decimals=0, x>100%',
- 'decimals=0, x=100%',
- 'decimals=0, x<100%',
- 'decimals=0, x<0%',
- 'decimals=1, x>100%',
- 'decimals=1, x=100%',
- 'decimals=1, x<100%',
- 'decimals=1, x<0%',
- # Check auto decimals over different intervals and values
- 'autodecimal, x<100%, display_range=1',
- 'autodecimal, x>100%, display_range=6 (custom xmax test)',
- 'autodecimal, x<100%, display_range=8.5 (autodecimal test 1)',
- 'autodecimal, x<100%, display_range=8.4 (autodecimal test 2)',
- 'autodecimal, x<-100%, display_range=1e-6 (tiny display range)',
- # Check percent symbol
- 'None as percent symbol',
- 'Empty percent symbol',
- 'Custom percent symbol',
- ]
-
- latex_data = [
- (False, False, r'50\{t}%'),
- (False, True, r'50\\\{t\}\%'),
- (True, False, r'50\{t}%'),
- (True, True, r'50\{t}%'),
- ]
-
- @pytest.mark.parametrize(
- 'xmax, decimals, symbol, x, display_range, expected',
- percent_data, ids=percent_ids)
- def test_basic(self, xmax, decimals, symbol,
- x, display_range, expected):
- formatter = mticker.PercentFormatter(xmax, decimals, symbol)
- with mpl.rc_context(rc={'text.usetex': False}):
- assert formatter.format_pct(x, display_range) == expected
-
- @pytest.mark.parametrize('is_latex, usetex, expected', latex_data)
- def test_latex(self, is_latex, usetex, expected):
- fmt = mticker.PercentFormatter(symbol='\\{t}%', is_latex=is_latex)
- with mpl.rc_context(rc={'text.usetex': usetex}):
- assert fmt.format_pct(50, 100) == expected
-
-
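The parameter table above exercises `PercentFormatter(xmax, decimals, symbol)` through `format_pct` directly. For context, a small sketch of how the same arguments are usually attached to an axis (plain matplotlib API; nothing beyond the imports this file already uses):

```python
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0.0, 0.25, 0.5, 1.0])

# xmax=1.0 maps the data value 1.0 to "100%"; decimals=0 drops fractional digits.
ax.yaxis.set_major_formatter(mticker.PercentFormatter(xmax=1.0, decimals=0))
fig.canvas.draw()
```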
-def _impl_locale_comma():
- try:
- locale.setlocale(locale.LC_ALL, 'de_DE.UTF-8')
- except locale.Error:
- print('SKIP: Locale de_DE.UTF-8 is not supported on this machine')
- return
- ticks = mticker.ScalarFormatter(useMathText=True, useLocale=True)
- fmt = '$\\mathdefault{%1.1f}$'
- x = ticks._format_maybe_minus_and_locale(fmt, 0.5)
- assert x == '$\\mathdefault{0{,}5}$'
- # Do not change , in the format string
- fmt = ',$\\mathdefault{,%1.1f},$'
- x = ticks._format_maybe_minus_and_locale(fmt, 0.5)
- assert x == ',$\\mathdefault{,0{,}5},$'
- # Make sure no brackets are added if not using math text
- ticks = mticker.ScalarFormatter(useMathText=False, useLocale=True)
- fmt = '%1.1f'
- x = ticks._format_maybe_minus_and_locale(fmt, 0.5)
- assert x == '0,5'
-
-
-def test_locale_comma():
- # On some systems/pytest versions, `pytest.skip` in an exception handler
- # does not skip, but is treated as an exception, so directly running this
- # test can incorrectly fail instead of skip.
- # Instead, run this test in a subprocess, which avoids the problem, and the
- # need to fix the locale after.
- proc = mpl.testing.subprocess_run_helper(_impl_locale_comma, timeout=60,
- extra_env={'MPLBACKEND': 'Agg'})
- skip_msg = next((line[len('SKIP:'):].strip()
- for line in proc.stdout.splitlines()
- if line.startswith('SKIP:')),
- '')
- if skip_msg:
- pytest.skip(skip_msg)
-
-
-def test_majformatter_type():
- fig, ax = plt.subplots()
- with pytest.raises(TypeError):
- ax.xaxis.set_major_formatter(mticker.LogLocator())
-
-
-def test_minformatter_type():
- fig, ax = plt.subplots()
- with pytest.raises(TypeError):
- ax.xaxis.set_minor_formatter(mticker.LogLocator())
-
-
-def test_majlocator_type():
- fig, ax = plt.subplots()
- with pytest.raises(TypeError):
- ax.xaxis.set_major_locator(mticker.LogFormatter())
-
-
-def test_minlocator_type():
- fig, ax = plt.subplots()
- with pytest.raises(TypeError):
- ax.xaxis.set_minor_locator(mticker.LogFormatter())
-
-
-def test_minorticks_rc():
- fig = plt.figure()
-
- def minorticksubplot(xminor, yminor, i):
- rc = {'xtick.minor.visible': xminor,
- 'ytick.minor.visible': yminor}
- with plt.rc_context(rc=rc):
- ax = fig.add_subplot(2, 2, i)
-
- assert (len(ax.xaxis.get_minor_ticks()) > 0) == xminor
- assert (len(ax.yaxis.get_minor_ticks()) > 0) == yminor
-
- minorticksubplot(False, False, 1)
- minorticksubplot(True, False, 2)
- minorticksubplot(False, True, 3)
- minorticksubplot(True, True, 4)
-
-
-@pytest.mark.parametrize('remove_overlapping_locs, expected_num',
- ((True, 6),
- (None, 6), # this tests the default
- (False, 9)))
-def test_remove_overlap(remove_overlapping_locs, expected_num):
- t = np.arange("2018-11-03", "2018-11-06", dtype="datetime64")
- x = np.ones(len(t))
-
- fig, ax = plt.subplots()
- ax.plot(t, x)
-
- ax.xaxis.set_major_locator(mpl.dates.DayLocator())
- ax.xaxis.set_major_formatter(mpl.dates.DateFormatter('\n%a'))
-
- ax.xaxis.set_minor_locator(mpl.dates.HourLocator((0, 6, 12, 18)))
- ax.xaxis.set_minor_formatter(mpl.dates.DateFormatter('%H:%M'))
- # force there to be extra ticks
- ax.xaxis.get_minor_ticks(15)
- if remove_overlapping_locs is not None:
- ax.xaxis.remove_overlapping_locs = remove_overlapping_locs
-
- # check that getter/setter exists
- current = ax.xaxis.remove_overlapping_locs
- assert (current == ax.xaxis.get_remove_overlapping_locs())
- plt.setp(ax.xaxis, remove_overlapping_locs=current)
- new = ax.xaxis.remove_overlapping_locs
- assert (new == ax.xaxis.remove_overlapping_locs)
-
- # check that the accessors filter correctly
- # this is the method that does the actual filtering
- assert len(ax.xaxis.get_minorticklocs()) == expected_num
- # these three are derivative
- assert len(ax.xaxis.get_minor_ticks()) == expected_num
- assert len(ax.xaxis.get_minorticklabels()) == expected_num
- assert len(ax.xaxis.get_minorticklines()) == expected_num*2
-
-
-@pytest.mark.parametrize('sub', [
- ['hi', 'aardvark'],
- np.zeros((2, 2))])
-def test_bad_locator_subs(sub):
- ll = mticker.LogLocator()
- with pytest.raises(ValueError):
- ll.set_params(subs=sub)
-
-
-@pytest.mark.parametrize('numticks', [1, 2, 3, 9])
-@mpl.style.context('default')
-def test_small_range_loglocator(numticks):
- ll = mticker.LogLocator()
- ll.set_params(numticks=numticks)
- for top in [5, 7, 9, 11, 15, 50, 100, 1000]:
- ticks = ll.tick_values(.5, top)
- assert (np.diff(np.log10(ll.tick_values(6, 150))) == 1).all()
-
-
-def test_NullFormatter():
- formatter = mticker.NullFormatter()
- assert formatter(1.0) == ''
- assert formatter.format_data(1.0) == ''
- assert formatter.format_data_short(1.0) == ''
-
-
-@pytest.mark.parametrize('formatter', (
- mticker.FuncFormatter(lambda a: f'val: {a}'),
- mticker.FixedFormatter(('foo', 'bar'))))
-def test_set_offset_string(formatter):
- assert formatter.get_offset() == ''
- formatter.set_offset_string('mpl')
- assert formatter.get_offset() == 'mpl'
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_size.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_size.py
deleted file mode 100644
index bd2c349df585bd316a9e2547a4a3e50b16364d09..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/test_size.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import os
-import pytest
-import numpy as np
-
-from . import util
-
-
-class TestSizeSumExample(util.F2PyTest):
- sources = [util.getpath("tests", "src", "size", "foo.f90")]
-
- @pytest.mark.slow
- def test_all(self):
- r = self.module.foo([[]])
- assert r == [0]
-
- r = self.module.foo([[1, 2]])
- assert r == [3]
-
- r = self.module.foo([[1, 2], [3, 4]])
- assert np.allclose(r, [3, 7])
-
- r = self.module.foo([[1, 2], [3, 4], [5, 6]])
- assert np.allclose(r, [3, 7, 11])
-
- @pytest.mark.slow
- def test_transpose(self):
- r = self.module.trans([[]])
- assert np.allclose(r.T, np.array([[]]))
-
- r = self.module.trans([[1, 2]])
- assert np.allclose(r, [[1.], [2.]])
-
- r = self.module.trans([[1, 2, 3], [4, 5, 6]])
- assert np.allclose(r, [[1, 4], [2, 5], [3, 6]])
-
- @pytest.mark.slow
- def test_flatten(self):
- r = self.module.flatten([[]])
- assert np.allclose(r, [])
-
- r = self.module.flatten([[1, 2]])
- assert np.allclose(r, [1, 2])
-
- r = self.module.flatten([[1, 2, 3], [4, 5, 6]])
- assert np.allclose(r, [1, 2, 3, 4, 5, 6])
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/__init__.py
deleted file mode 100644
index c4e7baf2c683e27fca27f81e72c348fe8d225089..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/__init__.py
+++ /dev/null
@@ -1,185 +0,0 @@
-"""
-A sub-package for efficiently dealing with polynomials.
-
-Within the documentation for this sub-package, a "finite power series,"
-i.e., a polynomial (also referred to simply as a "series") is represented
-by a 1-D numpy array of the polynomial's coefficients, ordered from lowest
-order term to highest. For example, array([1,2,3]) represents
-``P_0 + 2*P_1 + 3*P_2``, where P_n is the n-th order basis polynomial
-applicable to the specific module in question, e.g., `polynomial` (which
-"wraps" the "standard" basis) or `chebyshev`. For optimal performance,
-all operations on polynomials, including evaluation at an argument, are
-implemented as operations on the coefficients. Additional (module-specific)
-information can be found in the docstring for the module of interest.
-
-This package provides *convenience classes* for each of six different kinds
-of polynomials:
-
- ======================== ================
- **Name** **Provides**
- ======================== ================
- `~polynomial.Polynomial` Power series
- `~chebyshev.Chebyshev` Chebyshev series
- `~legendre.Legendre` Legendre series
- `~laguerre.Laguerre` Laguerre series
- `~hermite.Hermite` Hermite series
- `~hermite_e.HermiteE` HermiteE series
- ======================== ================
-
-These *convenience classes* provide a consistent interface for creating,
-manipulating, and fitting data with polynomials of different bases.
-The convenience classes are the preferred interface for the `~numpy.polynomial`
-package, and are available from the ``numpy.polynomial`` namespace.
-This eliminates the need to navigate to the corresponding submodules, e.g.
-``np.polynomial.Polynomial`` or ``np.polynomial.Chebyshev`` instead of
-``np.polynomial.polynomial.Polynomial`` or
-``np.polynomial.chebyshev.Chebyshev``, respectively.
-The classes provide a more consistent and concise interface than the
-type-specific functions defined in the submodules for each type of polynomial.
-For example, to fit a Chebyshev polynomial with degree ``1`` to data given
-by arrays ``xdata`` and ``ydata``, the
-`~chebyshev.Chebyshev.fit` class method::
-
- >>> from numpy.polynomial import Chebyshev
- >>> c = Chebyshev.fit(xdata, ydata, deg=1)
-
-is preferred over the `chebyshev.chebfit` function from the
-``np.polynomial.chebyshev`` module::
-
- >>> from numpy.polynomial.chebyshev import chebfit
- >>> c = chebfit(xdata, ydata, deg=1)
-
-See :doc:`routines.polynomials.classes` for more details.
-
-Convenience Classes
-===================
-
-The following lists the various constants and methods common to all of
-the classes representing the various kinds of polynomials. In the following,
-the term ``Poly`` represents any one of the convenience classes (e.g.
-`~polynomial.Polynomial`, `~chebyshev.Chebyshev`, `~hermite.Hermite`, etc.)
-while the lowercase ``p`` represents an **instance** of a polynomial class.
-
-Constants
----------
-
-- ``Poly.domain`` -- Default domain
-- ``Poly.window`` -- Default window
-- ``Poly.basis_name`` -- String used to represent the basis
-- ``Poly.maxpower`` -- Maximum value ``n`` such that ``p**n`` is allowed
-- ``Poly.nickname`` -- String used in printing
-
-Creation
---------
-
-Methods for creating polynomial instances.
-
-- ``Poly.basis(degree)`` -- Basis polynomial of given degree
-- ``Poly.identity()`` -- ``p`` where ``p(x) = x`` for all ``x``
-- ``Poly.fit(x, y, deg)`` -- ``p`` of degree ``deg`` with coefficients
- determined by the least-squares fit to the data ``x``, ``y``
-- ``Poly.fromroots(roots)`` -- ``p`` with specified roots
-- ``p.copy()`` -- Create a copy of ``p``
-
-Conversion
-----------
-
-Methods for converting a polynomial instance of one kind to another.
-
-- ``p.cast(Poly)`` -- Convert ``p`` to instance of kind ``Poly``
-- ``p.convert(Poly)`` -- Convert ``p`` to instance of kind ``Poly`` or map
- between ``domain`` and ``window``
-
-Calculus
---------
-- ``p.deriv()`` -- Take the derivative of ``p``
-- ``p.integ()`` -- Integrate ``p``
-
-Validation
-----------
-- ``Poly.has_samecoef(p1, p2)`` -- Check if coefficients match
-- ``Poly.has_samedomain(p1, p2)`` -- Check if domains match
-- ``Poly.has_sametype(p1, p2)`` -- Check if types match
-- ``Poly.has_samewindow(p1, p2)`` -- Check if windows match
-
-Misc
-----
-- ``p.linspace()`` -- Return ``x, p(x)`` at equally-spaced points in ``domain``
-- ``p.mapparms()`` -- Return the parameters for the linear mapping between
- ``domain`` and ``window``.
-- ``p.roots()`` -- Return the roots of `p`.
-- ``p.trim()`` -- Remove trailing coefficients.
-- ``p.cutdeg(degree)`` -- Truncate p to given degree
-- ``p.truncate(size)`` -- Truncate p to given size
-
-"""
-from .polynomial import Polynomial
-from .chebyshev import Chebyshev
-from .legendre import Legendre
-from .hermite import Hermite
-from .hermite_e import HermiteE
-from .laguerre import Laguerre
-
-__all__ = [
- "set_default_printstyle",
- "polynomial", "Polynomial",
- "chebyshev", "Chebyshev",
- "legendre", "Legendre",
- "hermite", "Hermite",
- "hermite_e", "HermiteE",
- "laguerre", "Laguerre",
-]
-
-
-def set_default_printstyle(style):
- """
- Set the default format for the string representation of polynomials.
-
- Values for ``style`` must be valid inputs to ``__format__``, i.e. 'ascii'
- or 'unicode'.
-
- Parameters
- ----------
- style : str
- Format string for default printing style. Must be either 'ascii' or
- 'unicode'.
-
- Notes
- -----
- The default format depends on the platform: 'unicode' is used on
- Unix-based systems and 'ascii' on Windows. This determination is based on
- default font support for the unicode superscript and subscript ranges.
-
- Examples
- --------
- >>> p = np.polynomial.Polynomial([1, 2, 3])
- >>> c = np.polynomial.Chebyshev([1, 2, 3])
- >>> np.polynomial.set_default_printstyle('unicode')
- >>> print(p)
- 1.0 + 2.0·x + 3.0·x²
- >>> print(c)
- 1.0 + 2.0·T₁(x) + 3.0·T₂(x)
- >>> np.polynomial.set_default_printstyle('ascii')
- >>> print(p)
- 1.0 + 2.0 x + 3.0 x**2
- >>> print(c)
- 1.0 + 2.0 T_1(x) + 3.0 T_2(x)
- >>> # Formatting supersedes all class/package-level defaults
- >>> print(f"{p:unicode}")
- 1.0 + 2.0·x + 3.0·x²
- """
- if style not in ('unicode', 'ascii'):
- raise ValueError(
- f"Unsupported format string '{style}'. Valid options are 'ascii' "
- f"and 'unicode'"
- )
- _use_unicode = True
- if style == 'ascii':
- _use_unicode = False
- from ._polybase import ABCPolyBase
- ABCPolyBase._use_unicode = _use_unicode
-
-
-from numpy._pytesttester import PytestTester
-test = PytestTester(__name__)
-del PytestTester
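The module docstring above recommends the convenience classes over the per-module functions. A compact sketch that ties together the creation, conversion and calculus methods it lists (only numpy is assumed; the data is synthetic):

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

# Least-squares fit of a degree-2 power series, as in the docstring's `fit` example.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x**2 + 0.01 * rng.standard_normal(50)
p = Polynomial.fit(x, y, deg=2)

c = p.convert(kind=Chebyshev)   # same polynomial expressed in the Chebyshev basis
dp = p.deriv()                  # derivative, still a convenience-class instance
print(p.domain, p.window)       # the domain/window pair described under "Constants"
print(np.allclose(c(0.5), p(0.5)))  # True: a basis change does not alter values
```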
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/numerictypes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/numerictypes.py
deleted file mode 100644
index 63b6ad0e22e221e22ad9eba9b48a526a6c741b5d..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/numerictypes.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import numpy as np
-
-np.maximum_sctype("S8")
-np.maximum_sctype(object)
-
-np.issctype(object)
-np.issctype("S8")
-
-np.obj2sctype(list)
-np.obj2sctype(list, default=None)
-np.obj2sctype(list, default=np.bytes_)
-
-np.issubclass_(np.int32, int)
-np.issubclass_(np.float64, float)
-np.issubclass_(np.float64, (int, float))
-
-np.issubsctype("int64", int)
-np.issubsctype(np.array([1]), np.array([1]))
-
-np.issubdtype("S1", np.bytes_)
-np.issubdtype(np.float64, np.float32)
-
-np.sctype2char("S1")
-np.sctype2char(list)
-
-np.cast[int]
-np.cast["i8"]
-np.cast[np.int64]
-
-np.nbytes[int]
-np.nbytes["i8"]
-np.nbytes[np.int64]
-
-np.ScalarType
-np.ScalarType[0]
-np.ScalarType[3]
-np.ScalarType[8]
-np.ScalarType[10]
-
-np.typecodes["Character"]
-np.typecodes["Complex"]
-np.typecodes["All"]
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/check.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/check.py
deleted file mode 100644
index 3221b158241f54e9f1b15c8b4fb9a0514500413b..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/check.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from __future__ import annotations
-
-from pandas.compat._optional import import_optional_dependency
-
-ne = import_optional_dependency("numexpr", errors="warn")
-NUMEXPR_INSTALLED = ne is not None
-if NUMEXPR_INSTALLED:
- NUMEXPR_VERSION = ne.__version__
-else:
- NUMEXPR_VERSION = None
-
-__all__ = ["NUMEXPR_INSTALLED", "NUMEXPR_VERSION"]
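This module is pandas' optional-dependency pattern in miniature: import with a warning, then expose a flag and a version string. A hedged sketch of how a caller can branch on these names (the `numexpr.evaluate` call is the library's standard entry point; the fallback branch is illustrative, not pandas' actual code path):

```python
import numpy as np
from pandas.core.computation.check import NUMEXPR_INSTALLED, NUMEXPR_VERSION

a = np.random.default_rng(0).random(1_000_000)
b = np.random.default_rng(1).random(1_000_000)

if NUMEXPR_INSTALLED:
    import numexpr as ne
    result = ne.evaluate("a * b + 1")  # evaluated in numexpr's virtual machine
    print("using numexpr", NUMEXPR_VERSION)
else:
    result = a * b + 1  # plain NumPy fallback when numexpr is absent
```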
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/reshape/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_conversion.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_conversion.py
deleted file mode 100644
index 3c2ca045d6f990837fae4d2b3d7bcbbf40e175e9..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/multi/test_conversion.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import numpy as np
-import pytest
-
-import pandas as pd
-from pandas import (
- DataFrame,
- MultiIndex,
-)
-import pandas._testing as tm
-
-
-def test_to_numpy(idx):
- result = idx.to_numpy()
- exp = idx.values
- tm.assert_numpy_array_equal(result, exp)
-
-
-def test_to_frame():
- tuples = [(1, "one"), (1, "two"), (2, "one"), (2, "two")]
-
- index = MultiIndex.from_tuples(tuples)
- result = index.to_frame(index=False)
- expected = DataFrame(tuples)
- tm.assert_frame_equal(result, expected)
-
- result = index.to_frame()
- expected.index = index
- tm.assert_frame_equal(result, expected)
-
- tuples = [(1, "one"), (1, "two"), (2, "one"), (2, "two")]
- index = MultiIndex.from_tuples(tuples, names=["first", "second"])
- result = index.to_frame(index=False)
- expected = DataFrame(tuples)
- expected.columns = ["first", "second"]
- tm.assert_frame_equal(result, expected)
-
- result = index.to_frame()
- expected.index = index
- tm.assert_frame_equal(result, expected)
-
- # See GH-22580
- index = MultiIndex.from_tuples(tuples)
- result = index.to_frame(index=False, name=["first", "second"])
- expected = DataFrame(tuples)
- expected.columns = ["first", "second"]
- tm.assert_frame_equal(result, expected)
-
- result = index.to_frame(name=["first", "second"])
- expected.index = index
- expected.columns = ["first", "second"]
- tm.assert_frame_equal(result, expected)
-
- msg = "'name' must be a list / sequence of column names."
- with pytest.raises(TypeError, match=msg):
- index.to_frame(name="first")
-
- msg = "'name' should have same length as number of levels on index."
- with pytest.raises(ValueError, match=msg):
- index.to_frame(name=["first"])
-
- # Tests for datetime index
- index = MultiIndex.from_product([range(5), pd.date_range("20130101", periods=3)])
- result = index.to_frame(index=False)
- expected = DataFrame(
- {
- 0: np.repeat(np.arange(5, dtype="int64"), 3),
- 1: np.tile(pd.date_range("20130101", periods=3), 5),
- }
- )
- tm.assert_frame_equal(result, expected)
-
- result = index.to_frame()
- expected.index = index
- tm.assert_frame_equal(result, expected)
-
- # See GH-22580
- result = index.to_frame(index=False, name=["first", "second"])
- expected = DataFrame(
- {
- "first": np.repeat(np.arange(5, dtype="int64"), 3),
- "second": np.tile(pd.date_range("20130101", periods=3), 5),
- }
- )
- tm.assert_frame_equal(result, expected)
-
- result = index.to_frame(name=["first", "second"])
- expected.index = index
- tm.assert_frame_equal(result, expected)
-
-
-def test_to_frame_dtype_fidelity():
- # GH 22420
- mi = MultiIndex.from_arrays(
- [
- pd.date_range("19910905", periods=6, tz="US/Eastern"),
- [1, 1, 1, 2, 2, 2],
- pd.Categorical(["a", "a", "b", "b", "c", "c"], ordered=True),
- ["x", "x", "y", "z", "x", "y"],
- ],
- names=["dates", "a", "b", "c"],
- )
- original_dtypes = {name: mi.levels[i].dtype for i, name in enumerate(mi.names)}
-
- expected_df = DataFrame(
- {
- "dates": pd.date_range("19910905", periods=6, tz="US/Eastern"),
- "a": [1, 1, 1, 2, 2, 2],
- "b": pd.Categorical(["a", "a", "b", "b", "c", "c"], ordered=True),
- "c": ["x", "x", "y", "z", "x", "y"],
- }
- )
- df = mi.to_frame(index=False)
- df_dtypes = df.dtypes.to_dict()
-
- tm.assert_frame_equal(df, expected_df)
- assert original_dtypes == df_dtypes
-
-
-def test_to_frame_resulting_column_order():
- # GH 22420
- expected = ["z", 0, "a"]
- mi = MultiIndex.from_arrays(
- [["a", "b", "c"], ["x", "y", "z"], ["q", "w", "e"]], names=expected
- )
- result = mi.to_frame().columns.tolist()
- assert result == expected
-
-
-def test_to_frame_duplicate_labels():
- # GH 45245
- data = [(1, 2), (3, 4)]
- names = ["a", "a"]
- index = MultiIndex.from_tuples(data, names=names)
- with pytest.raises(ValueError, match="Cannot create duplicate column labels"):
- index.to_frame()
-
- result = index.to_frame(allow_duplicates=True)
- expected = DataFrame(data, index=index, columns=names)
- tm.assert_frame_equal(result, expected)
-
- names = [None, 0]
- index = MultiIndex.from_tuples(data, names=names)
- with pytest.raises(ValueError, match="Cannot create duplicate column labels"):
- index.to_frame()
-
- result = index.to_frame(allow_duplicates=True)
- expected = DataFrame(data, index=index, columns=[0, 0])
- tm.assert_frame_equal(result, expected)
-
-
-def test_to_flat_index(idx):
- expected = pd.Index(
- (
- ("foo", "one"),
- ("foo", "two"),
- ("bar", "one"),
- ("baz", "two"),
- ("qux", "one"),
- ("qux", "two"),
- ),
- tupleize_cols=False,
- )
- result = idx.to_flat_index()
- tm.assert_index_equal(result, expected)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/models/candidate.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/models/candidate.py
deleted file mode 100644
index a4963aec6388c27c3beb064f0a730af200380aee..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/models/candidate.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from pip._vendor.packaging.version import parse as parse_version
-
-from pip._internal.models.link import Link
-from pip._internal.utils.models import KeyBasedCompareMixin
-
-
-class InstallationCandidate(KeyBasedCompareMixin):
- """Represents a potential "candidate" for installation."""
-
- __slots__ = ["name", "version", "link"]
-
- def __init__(self, name: str, version: str, link: Link) -> None:
- self.name = name
- self.version = parse_version(version)
- self.link = link
-
- super().__init__(
- key=(self.name, self.version, self.link),
- defining_class=InstallationCandidate,
- )
-
- def __repr__(self) -> str:
- return "".format(
- self.name,
- self.version,
- self.link,
- )
-
- def __str__(self) -> str:
- return "{!r} candidate (version {} at {})".format(
- self.name,
- self.version,
- self.link,
- )
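A short usage sketch for the class above, assuming pip's `Link` accepts a plain URL string (the URLs are made-up placeholders). Because the comparison key is `(name, version, link)`, sorting candidates of a single project orders them by parsed version:

```python
from pip._internal.models.candidate import InstallationCandidate
from pip._internal.models.link import Link

c1 = InstallationCandidate(
    "example-pkg", "1.0.0",
    Link("https://example.invalid/example_pkg-1.0.0-py3-none-any.whl"))
c2 = InstallationCandidate(
    "example-pkg", "1.2.0",
    Link("https://example.invalid/example_pkg-1.2.0-py3-none-any.whl"))

best = sorted([c2, c1])[-1]  # ordering comes from KeyBasedCompareMixin
print(best.version)          # 1.2.0 (a parsed Version, not the raw string)
print(str(c1))               # "'example-pkg' candidate (version 1.0.0 at ...)"
```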
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/testing.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/testing.py
deleted file mode 100644
index dec3a15d03341101dd2550c3f7fbe735fb5bc717..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/testing.py
+++ /dev/null
@@ -1,210 +0,0 @@
-"""
- pygments.lexers.testing
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Lexers for testing languages.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pygments.lexer import RegexLexer, include, bygroups
-from pygments.token import Comment, Keyword, Name, String, Number, Generic, Text
-
-__all__ = ['GherkinLexer', 'TAPLexer']
-
-
-class GherkinLexer(RegexLexer):
- """
- For Gherkin syntax.
-
- .. versionadded:: 1.2
- """
- name = 'Gherkin'
- aliases = ['gherkin', 'cucumber']
- filenames = ['*.feature']
- mimetypes = ['text/x-gherkin']
-
- feature_keywords = '^(기능|機能|功能|フィーチャ|خاصية|תכונה|Функціонал|Функционалност|Функционал|Фича|Особина|Могућност|Özellik|Właściwość|Tính năng|Trajto|Savybė|Požiadavka|Požadavek|Osobina|Ominaisuus|Omadus|OH HAI|Mogućnost|Mogucnost|Jellemző|Fīča|Funzionalità|Funktionalität|Funkcionalnost|Funkcionalitāte|Funcționalitate|Functionaliteit|Functionalitate|Funcionalitat|Funcionalidade|Fonctionnalité|Fitur|Feature|Egenskap|Egenskab|Crikey|Característica|Arwedd)(:)(.*)$'
- feature_element_keywords = '^(\\s*)(시나리오 개요|시나리오|배경|背景|場景大綱|場景|场景大纲|场景|劇本大綱|劇本|剧本大纲|剧本|テンプレ|シナリオテンプレート|シナリオテンプレ|シナリオアウトライン|シナリオ|سيناريو مخطط|سيناريو|الخلفية|תרחיש|תבנית תרחיש|רקע|Тарих|Сценарій|Сценарио|Сценарий структураси|Сценарий|Структура сценарію|Структура сценарија|Структура сценария|Скица|Рамка на сценарий|Пример|Предыстория|Предистория|Позадина|Передумова|Основа|Концепт|Контекст|Założenia|Wharrimean is|Tình huống|The thing of it is|Tausta|Taust|Tapausaihio|Tapaus|Szenariogrundriss|Szenario|Szablon scenariusza|Stsenaarium|Struktura scenarija|Skica|Skenario konsep|Skenario|Situācija|Senaryo taslağı|Senaryo|Scénář|Scénario|Schema dello scenario|Scenārijs pēc parauga|Scenārijs|Scenár|Scenaro|Scenariusz|Scenariul de şablon|Scenariul de sablon|Scenariu|Scenario Outline|Scenario Amlinellol|Scenario|Scenarijus|Scenarijaus šablonas|Scenarij|Scenarie|Rerefons|Raamstsenaarium|Primer|Pozadí|Pozadina|Pozadie|Plan du scénario|Plan du Scénario|Osnova scénáře|Osnova|Náčrt Scénáře|Náčrt Scenáru|Mate|MISHUN SRSLY|MISHUN|Kịch bản|Konturo de la scenaro|Kontext|Konteksts|Kontekstas|Kontekst|Koncept|Khung tình huống|Khung kịch bản|Háttér|Grundlage|Geçmiş|Forgatókönyv vázlat|Forgatókönyv|Fono|Esquema do Cenário|Esquema do Cenario|Esquema del escenario|Esquema de l\'escenari|Escenario|Escenari|Dis is what went down|Dasar|Contexto|Contexte|Contesto|Condiţii|Conditii|Cenário|Cenario|Cefndir|Bối cảnh|Blokes|Bakgrunn|Bakgrund|Baggrund|Background|B4|Antecedents|Antecedentes|All y\'all|Achtergrond|Abstrakt Scenario|Abstract Scenario)(:)(.*)$'
- examples_keywords = '^(\\s*)(예|例子|例|サンプル|امثلة|דוגמאות|Сценарији|Примери|Приклади|Мисоллар|Значения|Örnekler|Voorbeelden|Variantai|Tapaukset|Scenarios|Scenariji|Scenarijai|Příklady|Példák|Príklady|Przykłady|Primjeri|Primeri|Piemēri|Pavyzdžiai|Paraugs|Juhtumid|Exemplos|Exemples|Exemplele|Exempel|Examples|Esempi|Enghreifftiau|Ekzemploj|Eksempler|Ejemplos|EXAMPLZ|Dữ liệu|Contoh|Cobber|Beispiele)(:)(.*)$'
- step_keywords = '^(\\s*)(하지만|조건|먼저|만일|만약|단|그리고|그러면|那麼|那么|而且|當|当|前提|假設|假设|假如|假定|但是|但し|並且|并且|同時|同时|もし|ならば|ただし|しかし|かつ|و |متى |لكن |عندما |ثم |بفرض |اذاً |כאשר |וגם |בהינתן |אזי |אז |אבל |Якщо |Унда |То |Припустимо, що |Припустимо |Онда |Но |Нехай |Лекин |Когато |Када |Кад |К тому же |И |Задато |Задати |Задате |Если |Допустим |Дадено |Ва |Бирок |Аммо |Али |Але |Агар |А |І |Și |És |Zatati |Zakładając |Zadato |Zadate |Zadano |Zadani |Zadan |Youse know when youse got |Youse know like when |Yna |Ya know how |Ya gotta |Y |Wun |Wtedy |When y\'all |When |Wenn |WEN |Và |Ve |Und |Un |Thì |Then y\'all |Then |Tapi |Tak |Tada |Tad |Så |Stel |Soit |Siis |Si |Sed |Se |Quando |Quand |Quan |Pryd |Pokud |Pokiaľ |Però |Pero |Pak |Oraz |Onda |Ond |Oletetaan |Og |Och |O zaman |Når |När |Niin |Nhưng |N |Mutta |Men |Mas |Maka |Majd |Mais |Maar |Ma |Lorsque |Lorsqu\'|Kun |Kuid |Kui |Khi |Keď |Ketika |Když |Kaj |Kai |Kada |Kad |Jeżeli |Ja |Ir |I CAN HAZ |I |Ha |Givun |Givet |Given y\'all |Given |Gitt |Gegeven |Gegeben sei |Fakat |Eğer ki |Etant donné |Et |Então |Entonces |Entao |En |Eeldades |E |Duota |Dun |Donitaĵo |Donat |Donada |Do |Diyelim ki |Dengan |Den youse gotta |De |Dato |Dar |Dann |Dan |Dado |Dacă |Daca |DEN |Când |Cuando |Cho |Cept |Cand |Cal |But y\'all |But |Buh |Biết |Bet |BUT |Atès |Atunci |Atesa |Anrhegedig a |Angenommen |And y\'all |And |An |Ama |Als |Alors |Allora |Ali |Aleshores |Ale |Akkor |Aber |AN |A také |A |\\* )'
-
- tokens = {
- 'comments': [
- (r'^\s*#.*$', Comment),
- ],
- 'feature_elements': [
- (step_keywords, Keyword, "step_content_stack"),
- include('comments'),
- (r"(\s|.)", Name.Function),
- ],
- 'feature_elements_on_stack': [
- (step_keywords, Keyword, "#pop:2"),
- include('comments'),
- (r"(\s|.)", Name.Function),
- ],
- 'examples_table': [
- (r"\s+\|", Keyword, 'examples_table_header'),
- include('comments'),
- (r"(\s|.)", Name.Function),
- ],
- 'examples_table_header': [
- (r"\s+\|\s*$", Keyword, "#pop:2"),
- include('comments'),
- (r"\\\|", Name.Variable),
- (r"\s*\|", Keyword),
- (r"[^|]", Name.Variable),
- ],
- 'scenario_sections_on_stack': [
- (feature_element_keywords,
- bygroups(Name.Function, Keyword, Keyword, Name.Function),
- "feature_elements_on_stack"),
- ],
- 'narrative': [
- include('scenario_sections_on_stack'),
- include('comments'),
- (r"(\s|.)", Name.Function),
- ],
- 'table_vars': [
- (r'(<[^>]+>)', Name.Variable),
- ],
- 'numbers': [
- (r'(\d+\.?\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', String),
- ],
- 'string': [
- include('table_vars'),
- (r'(\s|.)', String),
- ],
- 'py_string': [
- (r'"""', Keyword, "#pop"),
- include('string'),
- ],
- 'step_content_root': [
- (r"$", Keyword, "#pop"),
- include('step_content'),
- ],
- 'step_content_stack': [
- (r"$", Keyword, "#pop:2"),
- include('step_content'),
- ],
- 'step_content': [
- (r'"', Name.Function, "double_string"),
- include('table_vars'),
- include('numbers'),
- include('comments'),
- (r'(\s|.)', Name.Function),
- ],
- 'table_content': [
- (r"\s+\|\s*$", Keyword, "#pop"),
- include('comments'),
- (r"\\\|", String),
- (r"\s*\|", Keyword),
- include('string'),
- ],
- 'double_string': [
- (r'"', Name.Function, "#pop"),
- include('string'),
- ],
- 'root': [
- (r'\n', Name.Function),
- include('comments'),
- (r'"""', Keyword, "py_string"),
- (r'\s+\|', Keyword, 'table_content'),
- (r'"', Name.Function, "double_string"),
- include('table_vars'),
- include('numbers'),
- (r'(\s*)(@[^@\r\n\t ]+)', bygroups(Name.Function, Name.Tag)),
- (step_keywords, bygroups(Name.Function, Keyword),
- 'step_content_root'),
- (feature_keywords, bygroups(Keyword, Keyword, Name.Function),
- 'narrative'),
- (feature_element_keywords,
- bygroups(Name.Function, Keyword, Keyword, Name.Function),
- 'feature_elements'),
- (examples_keywords,
- bygroups(Name.Function, Keyword, Keyword, Name.Function),
- 'examples_table'),
- (r'(\s|.)', Name.Function),
- ]
- }
-
- def analyse_text(self, text):
- return
-
-
-class TAPLexer(RegexLexer):
- """
- For Test Anything Protocol (TAP) output.
-
- .. versionadded:: 2.1
- """
- name = 'TAP'
- url = 'https://testanything.org/'
- aliases = ['tap']
- filenames = ['*.tap']
-
- tokens = {
- 'root': [
- # A TAP version may be specified.
- (r'^TAP version \d+\n', Name.Namespace),
-
- # Specify a plan with a plan line.
- (r'^1\.\.\d+', Keyword.Declaration, 'plan'),
-
- # A test failure
- (r'^(not ok)([^\S\n]*)(\d*)',
- bygroups(Generic.Error, Text, Number.Integer), 'test'),
-
- # A test success
- (r'^(ok)([^\S\n]*)(\d*)',
- bygroups(Keyword.Reserved, Text, Number.Integer), 'test'),
-
- # Diagnostics start with a hash.
- (r'^#.*\n', Comment),
-
- # TAP's version of an abort statement.
- (r'^Bail out!.*\n', Generic.Error),
-
- # TAP ignores any unrecognized lines.
- (r'^.*\n', Text),
- ],
- 'plan': [
- # Consume whitespace (but not newline).
- (r'[^\S\n]+', Text),
-
- # A plan may have a directive with it.
- (r'#', Comment, 'directive'),
-
- # Or it could just end.
- (r'\n', Comment, '#pop'),
-
- # Anything else is wrong.
- (r'.*\n', Generic.Error, '#pop'),
- ],
- 'test': [
- # Consume whitespace (but not newline).
- (r'[^\S\n]+', Text),
-
- # A test may have a directive with it.
- (r'#', Comment, 'directive'),
-
- (r'\S+', Text),
-
- (r'\n', Text, '#pop'),
- ],
- 'directive': [
- # Consume whitespace (but not newline).
- (r'[^\S\n]+', Comment),
-
- # Extract todo items.
- (r'(?i)\bTODO\b', Comment.Preproc),
-
- # Extract skip items.
- (r'(?i)\bSKIP\S*', Comment.Preproc),
-
- (r'\S+', Comment),
-
- (r'\n', Comment, '#pop:2'),
- ],
- }
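To see the states above in action, a minimal sketch that highlights a hand-written TAP report with this lexer (standard pygments API; the sample report is made up):

```python
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers.testing import TAPLexer

tap_report = """TAP version 13
1..3
ok 1 - parser accepts empty input
not ok 2 - parser rejects bad escapes # TODO tighten the grammar
ok 3 - roundtrip # SKIP no fixture available
# plain diagnostic line
Bail out! fixture server unreachable
"""

# 'ok'/'not ok' lines enter the 'test' state, '#' switches to 'directive',
# and TODO/SKIP become Comment.Preproc tokens, exactly as defined above.
print(highlight(tap_report, TAPLexer(), TerminalFormatter()))
```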
diff --git a/spaces/pyodide-demo/self-hosted/pyodide-interrupts.js b/spaces/pyodide-demo/self-hosted/pyodide-interrupts.js
deleted file mode 100644
index b398fa7ff6c9ed3636094d5e836aa0349c42d759..0000000000000000000000000000000000000000
--- a/spaces/pyodide-demo/self-hosted/pyodide-interrupts.js
+++ /dev/null
@@ -1 +0,0 @@
-var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="pyodide-interrupts.data";var REMOTE_PACKAGE_BASE="pyodide-interrupts.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","pyodide_interrupts",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","pyodide_interrupts-0.1.1-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:5695,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1592,2960,4358],sizes:[1592,1368,1398,1337],successes:[1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_pyodide-interrupts.data")}Module["addRunDependency"]("datafile_pyodide-interrupts.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/pyodide_interrupts/__init__.py",start:0,end:662,audio:0},{filename:"/lib/python3.9/site-packages/pyodide_interrupts/_pyodide_interrupts.so",start:662,end:3002,audio:0},{filename:"/lib/python3.9/site-packages/pyodide_interrupts-0.1.1-py3.9.egg-info/SOURCES.txt",start:3002,end:3269,audio:0},{filename:"/lib/python3.9/site-packages/pyodide_interrupts-0.1.1-py3.9.egg-info/top_level.txt",start:3269,end:3288,audio:0},{filename:"/lib/python3.9/site-packages/pyodide_interrupts-0.1.1-py3.9.egg-info/dependency_links.txt",start:3288,end:3289,audio:0},{filename:"/lib/python3.9/site-packages/pyodide_interrupts-0.1.1-py3.9.egg-info/PKG-INFO",start:3289,end:7869,audio:0}],remote_package_size:9791,package_uuid:"cb395170-b461-4883-9eb0-c20bb8744a20"})})();
\ No newline at end of file
diff --git a/spaces/pythainlp/pythainlp/pages/pos_tag.py b/spaces/pythainlp/pythainlp/pages/pos_tag.py
deleted file mode 100644
index d2c888b6a90e6f3ed4f133c74b5cca5d6e935992..0000000000000000000000000000000000000000
--- a/spaces/pythainlp/pythainlp/pages/pos_tag.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import streamlit as st
-import time
-from pythainlp.tag import pos_tag
-from pythainlp.tokenize import word_tokenize
-st.markdown("""
-# Part of speech tagging 🎉
-
-PyThaiNLP supports part-of-speech tagging for analyzing text. We have
-- perceptron - perceptron tagger (default)
-- unigram - unigram tagger
-- tltk - TLTK: Thai Language Toolkit (supports the TNC corpus only; if you choose another corpus, it is changed to the TNC corpus.)
-and trained with one of these corpora:
-- lst20 - LST20 corpus by National Electronics and Computer Technology Center, Thailand
-- orchid - ORCHID corpus, text from Thai academic articles
-- pud - Parallel Universal Dependencies (PUD) treebanks, natively use Universal POS tags
-- lst20_ud - LST20 text, with tags mapped to Universal POS tag from Universal Dependencies
-- orchid_ud - ORCHID text, with tags mapped to Universal POS tags
-
-for this demo page.
-""")
-
-with st.form("my_form"):
- st.write("Input text")
- text = st.text_area("text", "แมวกินปลา")
-    word_engine = st.selectbox('Select word tokenize', ['newmm', 'mm', 'longest', 'tltk'], key=1, index=0)
-    pos_corpus = st.selectbox('Select POS corpus', ['lst20', 'orchid', 'pud', 'lst20_ud', 'orchid_ud'], key=2, index=0)
-    pos_engine = st.selectbox('Select Postag engine', ['perceptron', 'unigram', 'tltk'], key=3, index=0)
-
- # Every form must have a submit button.
- submitted = st.form_submit_button("Submit")
- if submitted:
- st.subheader("Pos: ")
- start = time.time()
- _list_words = word_tokenize(str(text), engine=str(word_engine))
- _pos = pos_tag(_list_words, corpus=str(pos_corpus), engine=str(pos_engine))
- _text = ""
- for i,j in _pos:
- _text += str(i)+"|"+str(j)+" "
- end = time.time()
- st.write(_text)
- st.write()
- st.write("Running times: "+str(end - start))
-
-st.write("See the documentation at [pos_tag | PyThaiNLP](https://pythainlp.github.io/docs/3.0/api/tag.html#pythainlp.tag.pos_tag).")
diff --git a/spaces/qq37017934/QSign/README.md b/spaces/qq37017934/QSign/README.md
deleted file mode 100644
index 28490c6310267a528bfd4f84c47e0cc061e96b18..0000000000000000000000000000000000000000
--- a/spaces/qq37017934/QSign/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: QSign
-emoji: 💻
-colorFrom: gray
-colorTo: gray
-sdk: docker
-pinned: false
-duplicated_from: hanxuan/QSign
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Media Encoder CC 2015 Serial Number Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Media Encoder CC 2015 Serial Number Download.md
deleted file mode 100644
index a843772fdec1bca586f02cfc43a20a98e23f766a..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Media Encoder CC 2015 Serial Number Download.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
Adobe Media Encoder CC 2015 Serial Number Download
-
-
If you are looking for a powerful and versatile video encoding software, you might want to consider Adobe Media Encoder CC 2015. This software allows you to create high-quality videos for web, broadcast and cinema. You can also use it to convert your videos to various formats, optimize them for different devices and platforms, and add effects and metadata.
-
-
However, to use Adobe Media Encoder CC 2015, you need a valid serial number. A serial number is a 24-digit code that activates, reinstalls or upgrades your product. Without a serial number, you cannot use the full features of the software.
-
There are different ways to find your serial number depending on how and where you purchased Adobe Media Encoder CC 2015. Here are some of the most common methods:
-
-
-
If you purchased from Adobe.com or registered your product online, you can sign in at https://account.adobe.com/products and look for your serial number in Registered products.
-
If you purchased a Student or Teacher Edition, you may have received a serial number or a redemption code. A redemption code is a 24-digit alphanumeric code that you can use to obtain your serial number online. For detailed instructions, see Serial numbers, redemption codes, and product codes | Student & Teacher editions.
-
If you purchased from a store or online retailer, you may have received a redemption code or a serial number in your email or on the product box. If you received a redemption code, you can use it to obtain your serial number online. For detailed instructions, see Redemption code help.
-
If you purchased a prepaid card, you can find your redemption code beneath the scratch-off foil on the back of the card. You can use it to obtain your serial number online. For detailed instructions, see Redemption code help.
-
If you have a volume license, you can find your serial number on the Adobe Licensing Website.
-
-
-
If you have an invalid or revoked serial number, you may not be able to use Adobe Media Encoder CC 2015. This can happen if you did not purchase the software from an authorized source or if the serial number was fraudulent. To resolve this issue, contact Adobe Customer Care.
-
-
How to download and install Adobe Media Encoder CC 2015
-
-
Once you have your serial number, you can download and install Adobe Media Encoder CC 2015 on your computer. Here are the steps to follow:
-
-
-
Go to https://helpx.adobe.com/download-install/kb/creative-cloud-apps-download.html and download the Creative Cloud desktop app.
-
Run the Creative Cloud desktop app and sign in with your Adobe ID and password.
-
Click on Apps and look for Adobe Media Encoder CC 2015 in the list of available apps.
-
Click on Install and follow the on-screen instructions.
-
When prompted, enter your serial number and complete the installation process.
-
-
-
You can now launch Adobe Media Encoder CC 2015 from the Creative Cloud desktop app or from your Start menu (Windows) or Applications folder (Mac).
-
-
How to use Adobe Media Encoder CC 2015
-
-
Adobe Media Encoder CC 2015 has a user-friendly interface that lets you easily encode your videos. You can also integrate it with other Adobe products such as Premiere Pro CC and After Effects CC for a seamless workflow. Here are some of the basic features and functions of Adobe Media Encoder CC 2015:
-
-
-
You can import your videos from various sources such as files, folders, cameras or Premiere Pro sequences.
-
You can choose from a wide range of presets for different formats and platforms such as H.264, MPEG-4, QuickTime, WebM, YouTube, Facebook and more.
-
You can customize the output settings such as resolution, frame rate, bitrate, aspect ratio, audio channels and more.
-
You can add effects such as overlays, watermarks, timecodes and captions to your videos.
-
You can add metadata such as title, description, keywords and ratings to your videos.
-
You can queue multiple encoding jobs and run them in the background while you continue working on other tasks.
-
You can monitor the progress and status of your encoding jobs in the Encoding panel.
-
You can preview your encoded videos in the Preview panel before exporting them.
-
-
-
For more information and tutorials on how to use Adobe Media Encoder CC 2015, visit https://helpx.adobe.com/media-encoder.html.
-
-
How to update Adobe Media Encoder CC 2015
-
-
Adobe Media Encoder CC 2015 is constantly updated with new features, bug fixes and performance improvements. To get the latest version of the software, you need to update it through the Creative Cloud desktop app. Here are the steps to follow:
-
-
-
Open the Creative Cloud desktop app and sign in with your Adobe ID and password.
-
Click on Updates in the left panel and look for Adobe Media Encoder CC 2015 in the list of available updates.
-
Click on Update and wait for the download and installation process to complete.
-
Restart Adobe Media Encoder CC 2015 if it was running during the update.
-
-
-
You can also check for updates manually by clicking on Help > Updates in Adobe Media Encoder CC 2015. For more information and troubleshooting tips on updating Adobe Media Encoder CC 2015, visit https://helpx.adobe.com/media-encoder/kb/update-media-encoder.html.
-
-
How to troubleshoot Adobe Media Encoder CC 2015
-
-
Sometimes, you may encounter some issues or errors while using Adobe Media Encoder CC 2015. These can be caused by various factors such as incompatible hardware, corrupted files, network problems or software conflicts. To resolve these issues, you can try some of the following solutions:
-
-
-
Make sure your computer meets the minimum system requirements for Adobe Media Encoder CC 2015. You can check them at https://helpx.adobe.com/media-encoder/system-requirements.html.
-
Make sure your software is up to date with the latest version. You can update it through the Creative Cloud desktop app or by clicking on Help > Updates in Adobe Media Encoder CC 2015.
-
Make sure your drivers are up to date for your graphics card, sound card and other devices. You can check them at the manufacturer's website or by using a driver updater tool.
-
Make sure your firewall, antivirus or other security software is not blocking or interfering with Adobe Media Encoder CC 2015. You can temporarily disable them or add an exception for Adobe Media Encoder CC 2015.
-
Make sure your internet connection is stable and fast enough for encoding and uploading your videos. You can test your speed at https://www.speedtest.net/.
-
Make sure your source files are not corrupted, damaged or missing. You can verify them by playing them in a media player or by importing them into another software.
-
Make sure your output settings are correct and compatible with your destination platform or device. You can check them by previewing your encoded videos in the Preview panel or by exporting them to a test folder.
-
Make sure you have enough disk space and memory available for encoding and exporting your videos. You can check them by opening Task Manager (Windows) or Activity Monitor (Mac) and looking at the performance tabs.
-
-
-
If none of these solutions work, you can contact Adobe Customer Care for further assistance. You can also visit https://helpx.adobe.com/media-encoder/kb/troubleshoot-media-encoder.html for more troubleshooting tips and resources.
-
How to uninstall Adobe Media Encoder CC 2015
-
-
If you no longer need Adobe Media Encoder CC 2015 or want to free up some disk space, you can uninstall it from your computer. You can also reinstall it later if you change your mind. Here are the steps to uninstall Adobe Media Encoder CC 2015:
-
-
-
Open the Creative Cloud desktop app and sign in with your Adobe ID and password.
-
Click on Apps and look for Adobe Media Encoder CC 2015 in the list of installed apps.
-
Click on More actions (three dots icon) and select Uninstall.
-
Follow the on-screen instructions to complete the uninstallation process.
-
Restart your computer if prompted.
-
-
-
You can also uninstall Adobe Media Encoder CC 2015 using the Control Panel (Windows) or the Finder (Mac). For detailed instructions, see Uninstall Creative Cloud apps.
-
-
How to get help and support for Adobe Media Encoder CC 2015
-
-
If you have any questions or issues regarding Adobe Media Encoder CC 2015, you can get help and support from various sources. Here are some of the options available:
-
-
-
You can visit https://helpx.adobe.com/media-encoder.html for user guides, tutorials, tips and tricks, FAQs and more.
-
You can visit https://community.adobe.com/t5/media-encoder/bd-p/media-encoder?page=1&sort=latest_replies&filter=all for forums, discussions, feedback and more.
-
You can visit https://helpx.adobe.com/support/media-encoder.html for troubleshooting, error messages, updates and more.
-
You can contact Adobe Customer Care at https://helpx.adobe.com/contact.html for technical support, billing issues, account management and more.
-
-
-
You can also use the Help menu in Adobe Media Encoder CC 2015 to access online help, updates, forums and more.
-
Conclusion
-
-
Adobe Media Encoder CC 2015 is a powerful and versatile video encoding software that can help you create high-quality videos for web, broadcast and cinema. To use it, you need a valid serial number that you can find in different ways depending on how and where you purchased the software. You can also download and install Adobe Media Encoder CC 2015 from the Creative Cloud desktop app and integrate it with other Adobe products such as Premiere Pro CC and After Effects CC. You can also customize the output settings, add effects and metadata, queue multiple encoding jobs and preview your encoded videos with ease.
-
-
If you are looking for Adobe Media Encoder CC 2015 Serial Number Download, this article has provided you with all the information you need. We hope you found it helpful and informative.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Bitdefender Antivirus Plus 2020 Activation Code Free Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Bitdefender Antivirus Plus 2020 Activation Code Free Download.md
deleted file mode 100644
index f22f32958d7280e2c46bbe0ca44bbf6f3e367ffc..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Bitdefender Antivirus Plus 2020 Activation Code Free Download.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
Bitdefender Antivirus Plus 2020 Activation Code Free Download
-
-February 3, 2022 - The developer offers three professional antivirus solutions - Antivirus Plus, Internet Security and Total Security - while other PC antivirus vendors such as McAfee, AVG or Avira provide their own programs that can be installed separately.
-If you want to replace the PC antivirus software you are already using without making changes to your existing installation, you can use the free versions of these applications.
-This application is free to use and can be installed easily on your PC. 8a78ff9644
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Iso 2768 Free Download [EXCLUSIVE] 14l.md b/spaces/quidiaMuxgu/Expedit-SAM/Iso 2768 Free Download [EXCLUSIVE] 14l.md
deleted file mode 100644
index af01e94b6d79179f9215174be838d77021dc7ebb..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Iso 2768 Free Download [EXCLUSIVE] 14l.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Datasheet excerpt (B84143A*R106 3-line filters): general tolerances according to ISO 2768-cL; dimensions in mm; the part must be mounted stress-free and secured, using tools or special fixtures if necessary; unless otherwise specified, all dimensions are inside dimensions; connection types A.01-A.05 (F.01-F.05) are used for mounting the fastener. 8a78ff9644
-
-
-
diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/mtcnn/mtcnn_pytorch/__init__.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/mtcnn/mtcnn_pytorch/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/rahul2001/student_performance/src/__init__.py b/spaces/rahul2001/student_performance/src/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Angels Online Bot Macro Pc Game.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Angels Online Bot Macro Pc Game.md
deleted file mode 100644
index dd73de480a7929ed99a0b4c884a192da5ba2c7b1..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Angels Online Bot Macro Pc Game.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-Angels Online Bot macro PC game. Angela Marie Kingsley, known professionally as Angela Marie, is an English artist, vocalist, multi-instrumentalist and songwriter. She was born in High Wycombe, Buckinghamshire, England, and is of Cornish, English, and German descent. The present invention relates to a method for forming a MOS transistor and, more particularly, to a method for fabricating a semiconductor device including a so-called twin well structure.
-
-A related-art method for fabricating a MOS transistor will be explained with reference to the drawings. FIGS. 13-16 are cross sectional views showing a fabrication process for a MOS transistor of a conventional semiconductor device.
-
-Referring to FIG. 13, a LOCOS oxide film 2 is formed on a semiconductor substrate 1.
-
-Referring to FIG. 14, an ion implantation of impurities is performed on the semiconductor substrate 1. For example, N-type impurities, e.g., phosphorus, arsenic or the like, are ion-implanted on the semiconductor substrate 1 in a prescribed concentration to form a well region 3 in the semiconductor substrate 1.
-
-Referring to FIG. 15, a polysilicon film 4 is formed on the LOCOS oxide film 2, and then, a CVD (Chemical Vapor Deposition) oxide film 5 and a BPSG (Boro-Phospho Silicate Glass) film 6 are formed on the polysilicon film 4 by performing a film formation process.
-
-Referring to FIG. 16, an opening 7 is formed in the BPSG film 6.
-
-Then, the opening 7 is filled with a conductive material 8 such as a polysilicon film. Then, a CMP (Chemical Mechanical Polishing) process is performed to leave a MOS transistor on the semiconductor substrate 1.
-
-A conventional method for fabricating a MOS transistor, however, has the following problems.
-
-When a miniaturized MOS transistor is formed on a semiconductor substrate, a so-called twin well structure is used to isolate the N- and P-type transistors in order to reduce the short channel effect. In this case, a P-type impurity, e.g., boron, is ion-implanted for P-type well region. If the impurity ion 4fefd39f24
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cakewalk SONAR Platinum V23.6.0.24 Keygen [CracksNow] Keygen [WORK].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cakewalk SONAR Platinum V23.6.0.24 Keygen [CracksNow] Keygen [WORK].md
deleted file mode 100644
index 51f456317ba68ab188f8d0afede8ef190033d59e..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cakewalk SONAR Platinum V23.6.0.24 Keygen [CracksNow] Keygen [WORK].md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]: A Complete Guide
-
-
If you are looking for a powerful and versatile digital audio workstation, you might want to check out Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]. This software is more than just a DAW - it's the most advanced music production environment available today. It offers a creative experience that combines advanced technology, effortless workflow, and an inviting interface that amplifies inspiration.
In this article, we will show you what Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] can do for you, how to download and install it, and how to activate it with the keygen provided by CracksNow. We will also give you some tips and tricks to get the most out of this software.
-
-
What is Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]?
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] is the latest version of Cakewalk's flagship DAW software, which has been around for over 30 years. It is designed for professional musicians, producers, composers, and engineers who want to create high-quality tracks on a PC.
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] has many features that make it stand out from other DAWs, such as:
-
-
-
The award-winning, touch-enabled Skylight User Interface, which allows you to customize your workspace and access all the tools you need with ease.
-
The ProChannel Strip, which gives you analog-style mixing and mastering effects with 9 modules, including EQ, compression, reverb, tape emulation, and more.
-
The VocalSync tool, which automatically aligns vocal tracks with the lead vocal or guide track.
-
The Drum Replacer tool, which lets you replace or reinforce any drum sound with a new sample from your library or from the included Addictive Drums 2 Producer Bundle.
-
The Melodyne Essential tool, which lets you edit pitch and timing of any audio with the ARA integration.
-
The Mix Recall tool, which lets you save and switch between different mix scenes without losing any settings.
-
The Gobbler integration, which lets you save your projects to the cloud and collaborate with other musicians online.
-
The YouTube export feature, which lets you upload your songs directly to YouTube with one click.
-
The DSD support feature, which lets you record and export audio at better-than-CD quality.
-
The 21 virtual instruments included in the package, such as Rapture, Dimension Pro, Z3TA+2, and more.
-
The 57 professional mixing and mastering effects included in the package, such as BlueTubes Bundle, LP-64 Linear Phase EQ and Multiband Compressor, TH2 Guitar Amp Simulator, and more.
-
-
-
How to Download and Install Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]?
-
-
To download and install Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow], you need to follow these steps:
-
-
-
Download the software from one of the links provided by CracksNow. You will get a ZIP file that contains the setup file and the keygen file.
-
Extract the ZIP file to a folder on your PC.
-
Run the setup file and follow the instructions to install the software on your PC.
-
Do not run the software yet after installation.
-
Run the keygen file as administrator and click on Generate button to generate a serial number for the software.
-
Copy the serial number from the keygen and paste it into the activation window of the software.
-
Click on Activate button to activate the software.
-
Enjoy your full version of Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow].
-
-
-
How to Use Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]?
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] is very user-friendly software that lets you create music in any genre and style you want. Here are some tips and tricks to help you get started:
-
-
-
-
To create a new project, go to File > New > Project and choose a template that suits your needs.
-
To record audio or MIDI tracks, go to Track > Insert Audio Track or Insert MIDI Track and select your input source and output destination.
-
To edit audio or MIDI tracks, use the tools in the Smart Tool palette or in the Track View toolbar.
-
To add effects or instruments to your tracks, go to Insert > Soft Synth or Insert > Audio FX and choose from the available options.
-
To mix your tracks, use the Console View or the ProChannel Strip to adjust levels, pan, EQ, dynamics, send effects, and more.
-
To master your tracks, use the Master Bus or insert mastering effects such as LP-64 Linear Phase EQ and Multiband Compressor.
-
To export your tracks as audio files or upload them to YouTube or Gobbler cloud service, go to File > Export > Audio or File > Export > YouTube/Gobbler.
-
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] is not only a powerful DAW software, but also a very affordable one. Unlike other DAWs that require you to pay a monthly or yearly subscription fee, Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] lets you own the software for life with a one-time payment. You can also download it for free from CracksNow and activate it with the keygen provided by them.
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] also offers you a lot of value for your money. You get a complete music production package that includes 21 virtual instruments, 57 professional effects, and a huge library of loops and samples. You also get access to exclusive features that are not available in other DAWs, such as VocalSync, Drum Replacer, Melodyne Essential, and more.
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] is also compatible with most hardware and software devices. You can use it with any ASIO-compatible audio interface, MIDI controller, or plug-in format. You can also import and export audio files in various formats, such as WAV, MP3, OGG, FLAC, and DSD.
-
-
How to Get Support and Updates for Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]?
-
-
If you have any questions or issues with Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow], you can get support and updates from various sources.
-
-
-
You can visit the official website of Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow], where you can find tutorials, manuals, FAQs, forums, and blogs.
-
You can contact the customer service of Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow], where you can get technical support, product registration, and warranty information.
-
You can join the community of Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow], where you can interact with other users, share your music, get feedback, and learn tips and tricks.
-
You can also check out the website of CracksNow, where you can find more cracked software, games, nulled scripts, free premium WordPress themes and plugins.
-
-
-
-
How to Upgrade to Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]?
-
-
If you already have a previous version of Cakewalk SONAR Platinum, you might want to upgrade to Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] to enjoy the latest features and improvements. You can do this by following these steps:
-
-
-
Download the update file from one of the links provided by CracksNow. You will get a ZIP file that contains the update file and the keygen file.
-
Extract the ZIP file to a folder on your PC.
-
Run the update file and follow the instructions to install the update on your PC.
-
Do not run the software yet after installation.
-
Run the keygen file as administrator and click on Generate button to generate a new serial number for the software.
-
Copy the new serial number from the keygen and paste it into the activation window of the software.
-
Click on Activate button to activate the software.
-
Enjoy your upgraded version of Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow].
-
-
-
How to Uninstall Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow]?
-
-
If you want to uninstall Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] from your PC, you can do this by following these steps:
-
-
-
Go to Control Panel > Programs and Features and find Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] in the list of installed programs.
-
Click on Uninstall button and follow the instructions to uninstall the software from your PC.
-
Delete any leftover files or folders related to Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] from your PC.
-
Restart your PC to complete the uninstallation process.
-
-
-
Conclusion
-
-
Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow] is a great software for anyone who wants to create professional-quality music on a PC. It has everything you need to compose, record, edit, mix, master, and share your music with ease and creativity. You can download it for free from CracksNow and activate it with the keygen provided by them.
-
-
If you have any questions or feedback about Cakewalk SONAR Platinum v23.6.0.24 Keygen [CracksNow], feel free to leave a comment below or contact us through our website.
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/riccorl/relik-entity-linking/models/relik-reader-aida-deberta-small/modeling_relik.py b/spaces/riccorl/relik-entity-linking/models/relik-reader-aida-deberta-small/modeling_relik.py
deleted file mode 100644
index 503cb6f00be60f263c2ac14b8ec22e2743bd1351..0000000000000000000000000000000000000000
--- a/spaces/riccorl/relik-entity-linking/models/relik-reader-aida-deberta-small/modeling_relik.py
+++ /dev/null
@@ -1,983 +0,0 @@
-from typing import Optional, Dict, Any
-
-import torch
-from transformers import AutoModel, PreTrainedModel
-from transformers.activations import GELUActivation, ClippedGELUActivation
-from transformers.configuration_utils import PretrainedConfig
-from transformers.modeling_utils import PoolerEndLogits
-
-from .configuration_relik import RelikReaderConfig
-
-
-class RelikReaderSample:
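-    """Lightweight attribute container backed by an internal dict; unknown attributes resolve to None."""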
- def __init__(self, **kwargs):
- super().__setattr__("_d", {})
- self._d = kwargs
-
- def __getattribute__(self, item):
- return super(RelikReaderSample, self).__getattribute__(item)
-
- def __getattr__(self, item):
- if item.startswith("__") and item.endswith("__"):
- # this is likely some python library-specific variable (such as __deepcopy__ for copy)
- # better follow standard behavior here
- raise AttributeError(item)
- elif item in self._d:
- return self._d[item]
- else:
- return None
-
- def __setattr__(self, key, value):
- if key in self._d:
- self._d[key] = value
- else:
- super().__setattr__(key, value)
-
-
-activation2functions = {
- "relu": torch.nn.ReLU(),
- "gelu": GELUActivation(),
- "gelu_10": ClippedGELUActivation(-10, 10),
-}
-
-
-class PoolerEndLogitsBi(PoolerEndLogits):
- def __init__(self, config: PretrainedConfig):
- super().__init__(config)
- self.dense_1 = torch.nn.Linear(config.hidden_size, 2)
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- start_states: Optional[torch.FloatTensor] = None,
- start_positions: Optional[torch.LongTensor] = None,
- p_mask: Optional[torch.FloatTensor] = None,
- ) -> torch.FloatTensor:
- if p_mask is not None:
- p_mask = p_mask.unsqueeze(-1)
- logits = super().forward(
- hidden_states,
- start_states,
- start_positions,
- p_mask,
- )
- return logits
-
-
-class RelikReaderSpanModel(PreTrainedModel):
- config_class = RelikReaderConfig
-
- def __init__(self, config: RelikReaderConfig, *args, **kwargs):
- super().__init__(config)
- # Transformer model declaration
- self.config = config
- self.transformer_model = (
- AutoModel.from_pretrained(self.config.transformer_model)
- if self.config.num_layers is None
- else AutoModel.from_pretrained(
- self.config.transformer_model, num_hidden_layers=self.config.num_layers
- )
- )
- self.transformer_model.resize_token_embeddings(
- self.transformer_model.config.vocab_size
- + self.config.additional_special_symbols
- )
-
- self.activation = self.config.activation
- self.linears_hidden_size = self.config.linears_hidden_size
- self.use_last_k_layers = self.config.use_last_k_layers
-
- # named entity detection layers
- self.ned_start_classifier = self._get_projection_layer(
- self.activation, last_hidden=2, layer_norm=False
- )
- self.ned_end_classifier = PoolerEndLogits(self.transformer_model.config)
-
- # END entity disambiguation layer
- self.ed_start_projector = self._get_projection_layer(self.activation)
- self.ed_end_projector = self._get_projection_layer(self.activation)
-
- self.training = self.config.training
-
- # criterion
- self.criterion = torch.nn.CrossEntropyLoss()
-
- def _get_projection_layer(
- self,
- activation: str,
- last_hidden: Optional[int] = None,
- input_hidden=None,
- layer_norm: bool = True,
- ) -> torch.nn.Sequential:
- head_components = [
- torch.nn.Dropout(0.1),
- torch.nn.Linear(
- self.transformer_model.config.hidden_size * self.use_last_k_layers
- if input_hidden is None
- else input_hidden,
- self.linears_hidden_size,
- ),
- activation2functions[activation],
- torch.nn.Dropout(0.1),
- torch.nn.Linear(
- self.linears_hidden_size,
- self.linears_hidden_size if last_hidden is None else last_hidden,
- ),
- ]
-
- if layer_norm:
- head_components.append(
- torch.nn.LayerNorm(
- self.linears_hidden_size if last_hidden is None else last_hidden,
- self.transformer_model.config.layer_norm_eps,
- )
- )
-
- return torch.nn.Sequential(*head_components)
-
- def _mask_logits(self, logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
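-        # Push masked positions to a large negative value (dtype-aware for fp16) so softmax ignores them.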
- mask = mask.unsqueeze(-1)
- if next(self.parameters()).dtype == torch.float16:
- logits = logits * (1 - mask) - 65500 * mask
- else:
- logits = logits * (1 - mask) - 1e30 * mask
- return logits
-
- def _get_model_features(
- self,
- input_ids: torch.Tensor,
- attention_mask: torch.Tensor,
- token_type_ids: Optional[torch.Tensor],
- ):
- model_input = {
- "input_ids": input_ids,
- "attention_mask": attention_mask,
- "output_hidden_states": self.use_last_k_layers > 1,
- }
-
- if token_type_ids is not None:
- model_input["token_type_ids"] = token_type_ids
-
- model_output = self.transformer_model(**model_input)
-
- if self.use_last_k_layers > 1:
- model_features = torch.cat(
- model_output[1][-self.use_last_k_layers :], dim=-1
- )
- else:
- model_features = model_output[0]
-
- return model_features
-
- def compute_ned_end_logits(
- self,
- start_predictions,
- start_labels,
- model_features,
- prediction_mask,
- batch_size,
- ) -> Optional[torch.Tensor]:
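-        # For each predicted (or gold) start token, replicate that sequence's features and
-        # prediction mask so that end-boundary logits are computed once per start position.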
- # todo: maybe when constraining on the spans,
- # we should not use a prediction_mask for the end tokens.
- # at least we should not during training imo
- start_positions = start_labels if self.training else start_predictions
- start_positions_indices = (
- torch.arange(start_positions.size(1), device=start_positions.device)
- .unsqueeze(0)
- .expand(batch_size, -1)[start_positions > 0]
- ).to(start_positions.device)
-
- if len(start_positions_indices) > 0:
- expanded_features = torch.cat(
- [
- model_features[i].unsqueeze(0).expand(x, -1, -1)
- for i, x in enumerate(torch.sum(start_positions > 0, dim=-1))
- if x > 0
- ],
- dim=0,
- ).to(start_positions_indices.device)
-
- expanded_prediction_mask = torch.cat(
- [
- prediction_mask[i].unsqueeze(0).expand(x, -1)
- for i, x in enumerate(torch.sum(start_positions > 0, dim=-1))
- if x > 0
- ],
- dim=0,
- ).to(expanded_features.device)
-
- end_logits = self.ned_end_classifier(
- hidden_states=expanded_features,
- start_positions=start_positions_indices,
- p_mask=expanded_prediction_mask,
- )
-
- return end_logits
-
- return None
-
- def compute_classification_logits(
- self,
- model_features,
- special_symbols_mask,
- prediction_mask,
- batch_size,
- start_positions=None,
- end_positions=None,
- ) -> torch.Tensor:
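-        # Project token features and score each token against every special-symbol
-        # (candidate entity) representation with a batched matrix product.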
- if start_positions is None or end_positions is None:
- start_positions = torch.zeros_like(prediction_mask)
- end_positions = torch.zeros_like(prediction_mask)
-
- model_start_features = self.ed_start_projector(model_features)
- model_end_features = self.ed_end_projector(model_features)
- model_end_features[start_positions > 0] = model_end_features[end_positions > 0]
-
- model_ed_features = torch.cat(
- [model_start_features, model_end_features], dim=-1
- )
-
- # computing ed features
- classes_representations = torch.sum(special_symbols_mask, dim=1)[0].item()
- special_symbols_representation = model_ed_features[special_symbols_mask].view(
- batch_size, classes_representations, -1
- )
-
- logits = torch.bmm(
- model_ed_features,
- torch.permute(special_symbols_representation, (0, 2, 1)),
- )
-
- logits = self._mask_logits(logits, prediction_mask)
-
- return logits
-
- def forward(
- self,
- input_ids: torch.Tensor,
- attention_mask: torch.Tensor,
- token_type_ids: Optional[torch.Tensor] = None,
- prediction_mask: Optional[torch.Tensor] = None,
- special_symbols_mask: Optional[torch.Tensor] = None,
- start_labels: Optional[torch.Tensor] = None,
- end_labels: Optional[torch.Tensor] = None,
- use_predefined_spans: bool = False,
- *args,
- **kwargs,
- ) -> Dict[str, Any]:
-
- batch_size, seq_len = input_ids.shape
-
- model_features = self._get_model_features(
- input_ids, attention_mask, token_type_ids
- )
-
- ned_start_labels = None
-
- # named entity detection if required
- if use_predefined_spans: # no need to compute spans
- ned_start_logits, ned_start_probabilities, ned_start_predictions = (
- None,
- None,
- torch.clone(start_labels)
- if start_labels is not None
- else torch.zeros_like(input_ids),
- )
- ned_end_logits, ned_end_probabilities, ned_end_predictions = (
- None,
- None,
- torch.clone(end_labels)
- if end_labels is not None
- else torch.zeros_like(input_ids),
- )
-
- ned_start_predictions[ned_start_predictions > 0] = 1
- ned_end_predictions[ned_end_predictions > 0] = 1
-
- else: # compute spans
- # start boundary prediction
- ned_start_logits = self.ned_start_classifier(model_features)
- ned_start_logits = self._mask_logits(ned_start_logits, prediction_mask)
- ned_start_probabilities = torch.softmax(ned_start_logits, dim=-1)
- ned_start_predictions = ned_start_probabilities.argmax(dim=-1)
-
- # end boundary prediction
- ned_start_labels = (
- torch.zeros_like(start_labels) if start_labels is not None else None
- )
-
- if ned_start_labels is not None:
- ned_start_labels[start_labels == -100] = -100
- ned_start_labels[start_labels > 0] = 1
-
- ned_end_logits = self.compute_ned_end_logits(
- ned_start_predictions,
- ned_start_labels,
- model_features,
- prediction_mask,
- batch_size,
- )
-
- if ned_end_logits is not None:
- ned_end_probabilities = torch.softmax(ned_end_logits, dim=-1)
- ned_end_predictions = torch.argmax(ned_end_probabilities, dim=-1)
- else:
- ned_end_logits, ned_end_probabilities = None, None
- ned_end_predictions = ned_start_predictions.new_zeros(batch_size)
-
- # flattening end predictions
- # (flattening can happen only if the
- # end boundaries were not predicted using the gold labels)
- if not self.training:
- flattened_end_predictions = torch.clone(ned_start_predictions)
- flattened_end_predictions[flattened_end_predictions > 0] = 0
-
- batch_start_predictions = list()
- for elem_idx in range(batch_size):
- batch_start_predictions.append(
- torch.where(ned_start_predictions[elem_idx] > 0)[0].tolist()
- )
-
- # check that the total number of start predictions
- # is equal to the end predictions
- total_start_predictions = sum(map(len, batch_start_predictions))
- total_end_predictions = len(ned_end_predictions)
- assert (
- total_start_predictions == 0
- or total_start_predictions == total_end_predictions
- ), (
- f"Total number of start predictions = {total_start_predictions}. "
- f"Total number of end predictions = {total_end_predictions}"
- )
-
- curr_end_pred_num = 0
- for elem_idx, bsp in enumerate(batch_start_predictions):
- for sp in bsp:
- ep = ned_end_predictions[curr_end_pred_num].item()
- if ep < sp:
- ep = sp
-
- # if we already set this span throw it (no overlap)
- if flattened_end_predictions[elem_idx, ep] == 1:
- ned_start_predictions[elem_idx, sp] = 0
- else:
- flattened_end_predictions[elem_idx, ep] = 1
-
- curr_end_pred_num += 1
-
- ned_end_predictions = flattened_end_predictions
-
- start_position, end_position = (
- (start_labels, end_labels)
- if self.training
- else (ned_start_predictions, ned_end_predictions)
- )
-
- # Entity disambiguation
- ed_logits = self.compute_classification_logits(
- model_features,
- special_symbols_mask,
- prediction_mask,
- batch_size,
- start_position,
- end_position,
- )
- ed_probabilities = torch.softmax(ed_logits, dim=-1)
- ed_predictions = torch.argmax(ed_probabilities, dim=-1)
-
- # output build
- output_dict = dict(
- batch_size=batch_size,
- ned_start_logits=ned_start_logits,
- ned_start_probabilities=ned_start_probabilities,
- ned_start_predictions=ned_start_predictions,
- ned_end_logits=ned_end_logits,
- ned_end_probabilities=ned_end_probabilities,
- ned_end_predictions=ned_end_predictions,
- ed_logits=ed_logits,
- ed_probabilities=ed_probabilities,
- ed_predictions=ed_predictions,
- )
-
- # compute loss if labels
- if start_labels is not None and end_labels is not None and self.training:
- # named entity detection loss
-
- # start
- if ned_start_logits is not None:
- ned_start_loss = self.criterion(
- ned_start_logits.view(-1, ned_start_logits.shape[-1]),
- ned_start_labels.view(-1),
- )
- else:
- ned_start_loss = 0
-
- # end
- if ned_end_logits is not None:
- ned_end_labels = torch.zeros_like(end_labels)
- ned_end_labels[end_labels == -100] = -100
- ned_end_labels[end_labels > 0] = 1
-
- ned_end_loss = self.criterion(
- ned_end_logits,
- (
- torch.arange(
- ned_end_labels.size(1), device=ned_end_labels.device
- )
- .unsqueeze(0)
- .expand(batch_size, -1)[ned_end_labels > 0]
- ).to(ned_end_labels.device),
- )
-
- else:
- ned_end_loss = 0
-
- # entity disambiguation loss
- start_labels[ned_start_labels != 1] = -100
- ed_labels = torch.clone(start_labels)
- ed_labels[end_labels > 0] = end_labels[end_labels > 0]
- ed_loss = self.criterion(
- ed_logits.view(-1, ed_logits.shape[-1]),
- ed_labels.view(-1),
- )
-
- output_dict["ned_start_loss"] = ned_start_loss
- output_dict["ned_end_loss"] = ned_end_loss
- output_dict["ed_loss"] = ed_loss
-
- output_dict["loss"] = ned_start_loss + ned_end_loss + ed_loss
-
- return output_dict
-
-
-class RelikReaderREModel(PreTrainedModel):
- config_class = RelikReaderConfig
-
- def __init__(self, config, *args, **kwargs):
- super().__init__(config)
- # Transformer model declaration
- # self.transformer_model_name = transformer_model
- self.config = config
- self.transformer_model = (
- AutoModel.from_pretrained(config.transformer_model)
- if config.num_layers is None
- else AutoModel.from_pretrained(
- config.transformer_model, num_hidden_layers=config.num_layers
- )
- )
- self.transformer_model.resize_token_embeddings(
- self.transformer_model.config.vocab_size + config.additional_special_symbols
- )
-
- # named entity detection layers
- self.ned_start_classifier = self._get_projection_layer(
- config.activation, last_hidden=2, layer_norm=False
- )
-
- self.ned_end_classifier = PoolerEndLogitsBi(self.transformer_model.config)
-
- self.entity_type_loss = (
- config.entity_type_loss if hasattr(config, "entity_type_loss") else False
- )
- self.relation_disambiguation_loss = (
- config.relation_disambiguation_loss
- if hasattr(config, "relation_disambiguation_loss")
- else False
- )
-
- input_hidden_ents = 2 * self.transformer_model.config.hidden_size
-
- self.re_subject_projector = self._get_projection_layer(
- config.activation, input_hidden=input_hidden_ents
- )
- self.re_object_projector = self._get_projection_layer(
- config.activation, input_hidden=input_hidden_ents
- )
- self.re_relation_projector = self._get_projection_layer(config.activation)
-
- if self.entity_type_loss or self.relation_disambiguation_loss:
- self.re_entities_projector = self._get_projection_layer(
- config.activation,
- input_hidden=2 * self.transformer_model.config.hidden_size,
- )
- self.re_definition_projector = self._get_projection_layer(
- config.activation,
- )
-
- self.re_classifier = self._get_projection_layer(
- config.activation,
- input_hidden=config.linears_hidden_size,
- last_hidden=2,
- layer_norm=False,
- )
-
- if self.entity_type_loss or self.relation_disambiguation_loss:
- self.re_ed_classifier = self._get_projection_layer(
- config.activation,
- input_hidden=config.linears_hidden_size,
- last_hidden=2,
- layer_norm=False,
- )
-
- self.training = config.training
-
- # criterion
- self.criterion = torch.nn.CrossEntropyLoss()
-
- def _get_projection_layer(
- self,
- activation: str,
- last_hidden: Optional[int] = None,
- input_hidden=None,
- layer_norm: bool = True,
- ) -> torch.nn.Sequential:
- head_components = [
- torch.nn.Dropout(0.1),
- torch.nn.Linear(
- self.transformer_model.config.hidden_size
- * self.config.use_last_k_layers
- if input_hidden is None
- else input_hidden,
- self.config.linears_hidden_size,
- ),
- activation2functions[activation],
- torch.nn.Dropout(0.1),
- torch.nn.Linear(
- self.config.linears_hidden_size,
- self.config.linears_hidden_size if last_hidden is None else last_hidden,
- ),
- ]
-
- if layer_norm:
- head_components.append(
- torch.nn.LayerNorm(
- self.config.linears_hidden_size
- if last_hidden is None
- else last_hidden,
- self.transformer_model.config.layer_norm_eps,
- )
- )
-
- return torch.nn.Sequential(*head_components)
-
- def _mask_logits(self, logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
- mask = mask.unsqueeze(-1)
- if next(self.parameters()).dtype == torch.float16:
- logits = logits * (1 - mask) - 65500 * mask
- else:
- logits = logits * (1 - mask) - 1e30 * mask
- return logits
-
- def _get_model_features(
- self,
- input_ids: torch.Tensor,
- attention_mask: torch.Tensor,
- token_type_ids: Optional[torch.Tensor],
- ):
- model_input = {
- "input_ids": input_ids,
- "attention_mask": attention_mask,
- "output_hidden_states": self.config.use_last_k_layers > 1,
- }
-
- if token_type_ids is not None:
- model_input["token_type_ids"] = token_type_ids
-
- model_output = self.transformer_model(**model_input)
-
- if self.config.use_last_k_layers > 1:
- model_features = torch.cat(
- model_output[1][-self.config.use_last_k_layers :], dim=-1
- )
- else:
- model_features = model_output[0]
-
- return model_features
-
- def compute_ned_end_logits(
- self,
- start_predictions,
- start_labels,
- model_features,
- prediction_mask,
- batch_size,
- ) -> Optional[torch.Tensor]:
- # todo: maybe when constraining on the spans,
- # we should not use a prediction_mask for the end tokens.
- # at least we should not during training imo
- start_positions = start_labels if self.training else start_predictions
- start_positions_indices = (
- torch.arange(start_positions.size(1), device=start_positions.device)
- .unsqueeze(0)
- .expand(batch_size, -1)[start_positions > 0]
- ).to(start_positions.device)
-
- if len(start_positions_indices) > 0:
- expanded_features = torch.cat(
- [
- model_features[i].unsqueeze(0).expand(x, -1, -1)
- for i, x in enumerate(torch.sum(start_positions > 0, dim=-1))
- if x > 0
- ],
- dim=0,
- ).to(start_positions_indices.device)
-
- expanded_prediction_mask = torch.cat(
- [
- prediction_mask[i].unsqueeze(0).expand(x, -1)
- for i, x in enumerate(torch.sum(start_positions > 0, dim=-1))
- if x > 0
- ],
- dim=0,
- ).to(expanded_features.device)
-
- # mask all tokens before start_positions_indices ie, mask all tokens with
- # indices < start_positions_indices with 1, ie. [range(x) for x in start_positions_indices]
- expanded_prediction_mask = torch.stack(
- [
- torch.cat(
- [
- torch.ones(x, device=expanded_features.device),
- expanded_prediction_mask[i, x:],
- ]
- )
- for i, x in enumerate(start_positions_indices)
- if x > 0
- ],
- dim=0,
- ).to(expanded_features.device)
-
- end_logits = self.ned_end_classifier(
- hidden_states=expanded_features,
- start_positions=start_positions_indices,
- p_mask=expanded_prediction_mask,
- )
-
- return end_logits
-
- return None
-
- def compute_relation_logits(
- self,
- model_entity_features,
- special_symbols_features,
- ) -> torch.Tensor:
- model_subject_features = self.re_subject_projector(model_entity_features)
- model_object_features = self.re_object_projector(model_entity_features)
- special_symbols_start_representation = self.re_relation_projector(
- special_symbols_features
- )
- re_logits = torch.einsum(
- "bse,bde,bfe->bsdfe",
- model_subject_features,
- model_object_features,
- special_symbols_start_representation,
- )
- re_logits = self.re_classifier(re_logits)
-
- return re_logits
-
- def compute_entity_logits(
- self,
- model_entity_features,
- special_symbols_features,
- ) -> torch.Tensor:
- model_ed_features = self.re_entities_projector(model_entity_features)
- special_symbols_ed_representation = self.re_definition_projector(
- special_symbols_features
- )
- logits = torch.einsum(
- "bce,bde->bcde",
- model_ed_features,
- special_symbols_ed_representation,
- )
- logits = self.re_ed_classifier(logits)
- start_logits = self._mask_logits(
- logits,
- (model_entity_features == -100)
- .all(2)
- .long()
- .unsqueeze(2)
- .repeat(1, 1, torch.sum(model_entity_features, dim=1)[0].item()),
- )
-
- return logits
-
- def compute_loss(self, logits, labels, mask=None):
- logits = logits.view(-1, logits.shape[-1])
- labels = labels.view(-1).long()
- if mask is not None:
- return self.criterion(logits[mask], labels[mask])
- return self.criterion(logits, labels)
-
- def compute_ned_end_loss(self, ned_end_logits, end_labels):
- if ned_end_logits is None:
- return 0
- ned_end_labels = torch.zeros_like(end_labels)
- ned_end_labels[end_labels == -100] = -100
- ned_end_labels[end_labels > 0] = 1
- return self.compute_loss(ned_end_logits, ned_end_labels)
-
- def compute_ned_type_loss(
- self,
- disambiguation_labels,
- re_ned_entities_logits,
- ned_type_logits,
- re_entities_logits,
- entity_types,
- ):
- if self.entity_type_loss and self.relation_disambiguation_loss:
- return self.compute_loss(disambiguation_labels, re_ned_entities_logits)
- if self.entity_type_loss:
- return self.compute_loss(
- disambiguation_labels[:, :, :entity_types], ned_type_logits
- )
- if self.relation_disambiguation_loss:
- return self.compute_loss(disambiguation_labels, re_entities_logits)
- return 0
-
- def compute_relation_loss(self, relation_labels, re_logits):
- return self.compute_loss(
- re_logits, relation_labels, relation_labels.view(-1) != -100
- )
-
- def forward(
- self,
- input_ids: torch.Tensor,
- attention_mask: torch.Tensor,
- token_type_ids: torch.Tensor,
- prediction_mask: Optional[torch.Tensor] = None,
- special_symbols_mask: Optional[torch.Tensor] = None,
- special_symbols_mask_entities: Optional[torch.Tensor] = None,
- start_labels: Optional[torch.Tensor] = None,
- end_labels: Optional[torch.Tensor] = None,
- disambiguation_labels: Optional[torch.Tensor] = None,
- relation_labels: Optional[torch.Tensor] = None,
- is_validation: bool = False,
- is_prediction: bool = False,
- *args,
- **kwargs,
- ) -> Dict[str, Any]:
-
- batch_size = input_ids.shape[0]
-
- model_features = self._get_model_features(
- input_ids, attention_mask, token_type_ids
- )
-
- # named entity detection
- if is_prediction and start_labels is not None:
- ned_start_logits, ned_start_probabilities, ned_start_predictions = (
- None,
- None,
- torch.zeros_like(start_labels),
- )
- ned_end_logits, ned_end_probabilities, ned_end_predictions = (
- None,
- None,
- torch.zeros_like(end_labels),
- )
-
- ned_start_predictions[start_labels > 0] = 1
- ned_end_predictions[end_labels > 0] = 1
- ned_end_predictions = ned_end_predictions[~(end_labels == -100).all(2)]
- else:
- # start boundary prediction
- ned_start_logits = self.ned_start_classifier(model_features)
- ned_start_logits = self._mask_logits(
- ned_start_logits, prediction_mask
- ) # why?
- ned_start_probabilities = torch.softmax(ned_start_logits, dim=-1)
- ned_start_predictions = ned_start_probabilities.argmax(dim=-1)
-
- # end boundary prediction
- ned_start_labels = (
- torch.zeros_like(start_labels) if start_labels is not None else None
- )
-
- # start_labels contain entity id at their position, we just need 1 for start of entity
- if ned_start_labels is not None:
- ned_start_labels[start_labels > 0] = 1
-
- # compute end logits only if there are any start predictions.
- # For each start prediction, n end predictions are made
- ned_end_logits = self.compute_ned_end_logits(
- ned_start_predictions,
- ned_start_labels,
- model_features,
- prediction_mask,
- batch_size,
- )
- # For each start prediction, n end predictions are made based on
- # binary classification ie. argmax at each position.
- ned_end_probabilities = torch.softmax(ned_end_logits, dim=-1)
- ned_end_predictions = ned_end_probabilities.argmax(dim=-1)
- if is_prediction or is_validation:
- end_preds_count = ned_end_predictions.sum(1)
- # If there are no end predictions for a start prediction, remove the start prediction
- ned_start_predictions[ned_start_predictions == 1] = (
- end_preds_count != 0
- ).long()
- ned_end_predictions = ned_end_predictions[end_preds_count != 0]
-
- if end_labels is not None:
- end_labels = end_labels[~(end_labels == -100).all(2)]
-
- start_position, end_position = (
- (start_labels, end_labels)
- if (not is_prediction and not is_validation)
- else (ned_start_predictions, ned_end_predictions)
- )
-
- start_counts = (start_position > 0).sum(1)
- ned_end_predictions = ned_end_predictions.split(start_counts.tolist())
-
- # We can only predict relations if we have start and end predictions
- if (end_position > 0).sum() > 0:
- ends_count = (end_position > 0).sum(1)
- model_subject_features = torch.cat(
- [
- torch.repeat_interleave(
- model_features[start_position > 0], ends_count, dim=0
- ), # start position features
- torch.repeat_interleave(model_features, start_counts, dim=0)[
- end_position > 0
- ], # end position features
- ],
- dim=-1,
- )
- ents_count = torch.nn.utils.rnn.pad_sequence(
- torch.split(ends_count, start_counts.tolist()),
- batch_first=True,
- padding_value=0,
- ).sum(1)
- model_subject_features = torch.nn.utils.rnn.pad_sequence(
- torch.split(model_subject_features, ents_count.tolist()),
- batch_first=True,
- padding_value=-100,
- )
-
- if is_validation or is_prediction:
- model_subject_features = model_subject_features[:, :30, :]
-
- # entity disambiguation. Here relation_disambiguation_loss would only be useful to
- # reduce the number of candidate relations for the next step, but currently unused.
- if self.entity_type_loss or self.relation_disambiguation_loss:
- (re_ned_entities_logits) = self.compute_entity_logits(
- model_subject_features,
- model_features[
- special_symbols_mask | special_symbols_mask_entities
- ].view(batch_size, -1, model_features.shape[-1]),
- )
- entity_types = torch.sum(special_symbols_mask_entities, dim=1)[0].item()
- ned_type_logits = re_ned_entities_logits[:, :, :entity_types]
- re_entities_logits = re_ned_entities_logits[:, :, entity_types:]
-
- if self.entity_type_loss:
- ned_type_probabilities = torch.softmax(ned_type_logits, dim=-1)
- ned_type_predictions = ned_type_probabilities.argmax(dim=-1)
- ned_type_predictions = ned_type_predictions.argmax(dim=-1)
-
- re_entities_probabilities = torch.softmax(re_entities_logits, dim=-1)
- re_entities_predictions = re_entities_probabilities.argmax(dim=-1)
- else:
- (
- ned_type_logits,
- ned_type_probabilities,
- re_entities_logits,
- re_entities_probabilities,
- ) = (None, None, None, None)
- ned_type_predictions, re_entities_predictions = (
- torch.zeros([batch_size, 1], dtype=torch.long).to(input_ids.device),
- torch.zeros([batch_size, 1], dtype=torch.long).to(input_ids.device),
- )
-
- # Compute relation logits
- re_logits = self.compute_relation_logits(
- model_subject_features,
- model_features[special_symbols_mask].view(
- batch_size, -1, model_features.shape[-1]
- ),
- )
-
- re_probabilities = torch.softmax(re_logits, dim=-1)
- # we set a thresshold instead of argmax in cause it needs to be tweaked
- re_predictions = re_probabilities[:, :, :, :, 1] > 0.5
- # re_predictions = re_probabilities.argmax(dim=-1)
- re_probabilities = re_probabilities[:, :, :, :, 1]
-
- else:
- (
- ned_type_logits,
- ned_type_probabilities,
- re_entities_logits,
- re_entities_probabilities,
- ) = (None, None, None, None)
- ned_type_predictions, re_entities_predictions = (
- torch.zeros([batch_size, 1], dtype=torch.long).to(input_ids.device),
- torch.zeros([batch_size, 1], dtype=torch.long).to(input_ids.device),
- )
- re_logits, re_probabilities, re_predictions = (
- torch.zeros(
- [batch_size, 1, 1, special_symbols_mask.sum(1)[0]], dtype=torch.long
- ).to(input_ids.device),
- torch.zeros(
- [batch_size, 1, 1, special_symbols_mask.sum(1)[0]], dtype=torch.long
- ).to(input_ids.device),
- torch.zeros(
- [batch_size, 1, 1, special_symbols_mask.sum(1)[0]], dtype=torch.long
- ).to(input_ids.device),
- )
-
- # output build
- output_dict = dict(
- batch_size=batch_size,
- ned_start_logits=ned_start_logits,
- ned_start_probabilities=ned_start_probabilities,
- ned_start_predictions=ned_start_predictions,
- ned_end_logits=ned_end_logits,
- ned_end_probabilities=ned_end_probabilities,
- ned_end_predictions=ned_end_predictions,
- ned_type_logits=ned_type_logits,
- ned_type_probabilities=ned_type_probabilities,
- ned_type_predictions=ned_type_predictions,
- re_entities_logits=re_entities_logits,
- re_entities_probabilities=re_entities_probabilities,
- re_entities_predictions=re_entities_predictions,
- re_logits=re_logits,
- re_probabilities=re_probabilities,
- re_predictions=re_predictions,
- )
-
- if (
- start_labels is not None
- and end_labels is not None
- and relation_labels is not None
- ):
- ned_start_loss = self.compute_loss(ned_start_logits, ned_start_labels)
- ned_end_loss = self.compute_ned_end_loss(ned_end_logits, end_labels)
- if self.entity_type_loss or self.relation_disambiguation_loss:
- ned_type_loss = self.compute_ned_type_loss(
- disambiguation_labels,
- re_ned_entities_logits,
- ned_type_logits,
- re_entities_logits,
- entity_types,
- )
- relation_loss = self.compute_relation_loss(relation_labels, re_logits)
- # compute loss. We can skip the relation loss if we are in the first epochs (optional)
- if self.entity_type_loss or self.relation_disambiguation_loss:
- output_dict["loss"] = (
- ned_start_loss + ned_end_loss + relation_loss + ned_type_loss
- ) / 4
- output_dict["ned_type_loss"] = ned_type_loss
- else:
- output_dict["loss"] = (
- ned_start_loss + ned_end_loss + relation_loss
- ) / 3
-
- output_dict["ned_start_loss"] = ned_start_loss
- output_dict["ned_end_loss"] = ned_end_loss
- output_dict["re_loss"] = relation_loss
-
- return output_dict
diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/data/window/__init__.py b/spaces/riccorl/relik-entity-linking/relik/inference/data/window/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/richardzhangy26/yandian_flow_classification/gradio_app.py b/spaces/richardzhangy26/yandian_flow_classification/gradio_app.py
deleted file mode 100644
index 9e80b5c4343df6ac24a5b6111ba07184be63f2ff..0000000000000000000000000000000000000000
--- a/spaces/richardzhangy26/yandian_flow_classification/gradio_app.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import gradio as gr
-import os
-from video_flow_inference import inference_video
-from test import infer
-from test import extract_frames
-import torch
-import torch.nn as nn
-import argparse
-from torch.utils.data import DataLoader
-from create_dataset import UCF101Dataset
-from lrcn_model import ConvLstm
-from utils_action_recognition import save_setting_info, plot_label_distribution, \
- plot_images_with_predicted_labels, create_folder_dir_if_needed, load_all_dataset_to_RAM, split_data, \
- test_model
-import os
-import cv2
-
-frames_dir = r'./data/test'
-parser = argparse.ArgumentParser(description='UCF101 Action Recognition, LRCN architecture')
-parser.add_argument('--epochs', default=1, type=int, help='number of total epochs')
-parser.add_argument('--batch-size', default=1, type=int, help='mini-batch size (default: 1)')
-parser.add_argument('--lr', default=1e-4, type=float, help='initial learning rate (default: 1e-4)')
-parser.add_argument('--num_workers', default=4, type=int,
- help='initial num_workers, the number of processes that generate batches in parallel (default:4)')
-# Load the dataset directly into RAM for faster computation; usually used when the number of classes is small (default: False)
-parser.add_argument('--load_all_data_to_RAM', default=False, type=bool,
-                    help='load dataset directly to the RAM, for faster computation. usually used when the num of classes '
-                         'is small (default: False)')
-# Dimensionality of the Conv FC output (default: 512)
-parser.add_argument('--latent_dim', default=512, type=int, help='The dim of the Conv FC output (default:512)')
-# Number of features in the LSTM hidden state (default: 256)
-parser.add_argument('--hidden_size', default=256, type=int,
- help="The number of features in the LSTM hidden state (default:256)")
-# Number of recurrent LSTM layers (default: 2)
-parser.add_argument('--lstm_layers', default=2, type=int, help='Number of recurrent layers (default:2)')
-# Make the LSTM bidirectional (default here: False)
-parser.add_argument('--bidirectional', default=False, type=bool, help='set the LSTM to be bidirectional (default: False)')
-# Open a new folder to save the run info; if false, the info is saved in the project dir; if debug, it is saved in a debug folder (default: True)
-parser.add_argument('--open_new_folder', default='True', type=str,
- help='open a new folder for saving the run info, if false the info would be saved in the project '
- 'dir, if debug the info would be saved in debug folder(default:True)')
-
-# Load a checkpoint and continue training from it
-parser.add_argument('--load_checkpoint', default=True, type=bool,
- help='Loading a checkpoint and continue training with it')
-# Path to the checkpoint
-parser.add_argument('--checkpoint_path',
- default=r'./checkpoint/best_epoch_198.pth.tar',
- type=str, help='Optional path to checkpoint model')
-# Interval between saving model checkpoints
-parser.add_argument('--checkpoint_interval', default=5, type=int, help='Interval between saving model checkpoints')
-# Interval between validation runs (default: 5)
-parser.add_argument('--val_check_interval', default=5, type=int, help='Interval between running validation test')
-# Where to save the results; os.getcwd() returns the current working directory
-parser.add_argument('--local_dir', default=os.getcwd(), help='The local directory of the project, setting where to '
- 'save the results of the run')
-
-parser.add_argument('--ucf_list_dir', default='./data',
- type=str, help='path to find the UCF101 list, splitting the data to train and test')
-# Number of classes
-parser.add_argument('--number_of_classes', default=6, type=int, help='The number of classes we would train on')
-
-
-
-# from label.test import infer
-def video_identity(video):
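-    # Run optical-flow inference on the uploaded video, extract its frames to the test
-    # directory, then run the LRCN classifier and return the flow video plus class scores.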
- out_video = inference_video(video)
- video_name1 = out_video.split('/')[-1]
- video_name2 = os.path.splitext(video_name1)[0]
- video_frames_dir = os.path.join(frames_dir, video_name2)
- extract_frames(out_video, video_frames_dir)
- result = infer(parser)
- return out_video,{'0012':result[0], '0221':result[1], '1012':result[2], '1102':result[3],'1122':result[4],'1221':result[5]}
-
-demo = gr.Interface(video_identity,
- gr.Video(),
- ["playable_video","label"],
- examples=[
- os.path.join(os.path.abspath(''),
- "video/example/0012_1438.mp4"), os.path.join(os.path.abspath(''),
- "video/example/0012_1600.mp4"), os.path.join(os.path.abspath(''),
- "video/example/0012_2944.mp4")],
- cache_examples=False,
- theme="freddyaboulton/dracula_revamped",
- description='''
-                        0012 horizontal left, vertical up, counterclockwise, no obvious change in intensity
-
-                        0221 horizontal left, no vertical nystagmus, no axial nystagmus, weakening from strong to weak
-
-                        1012 horizontal right, vertical up, counterclockwise, no obvious change in intensity
-
-                        1102 horizontal right, vertical down, clockwise, no obvious change in intensity
-
-                        1122 horizontal right, vertical down, no axial nystagmus, no obvious change in intensity
-
-                        1221 horizontal right, no vertical nystagmus, no axial nystagmus, weakening from strong to weak
- ''')
-
-if __name__ == "__main__":
- demo.launch(share=True)
diff --git a/spaces/rizam/literature-research-tool/inference_hf/_inference.py b/spaces/rizam/literature-research-tool/inference_hf/_inference.py
deleted file mode 100644
index 96df0f5d71d69fe1db3853e328c5c575257ad1e9..0000000000000000000000000000000000000000
--- a/spaces/rizam/literature-research-tool/inference_hf/_inference.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import json
-import requests
-from typing import Union,List
-import aiohttp
-from asyncio import run
-
-class InferenceHF:
- headers = {"Authorization": f"Bearer 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"}
- API_URL = "https://api-inference.huggingface.co/models/"
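-    # Thin wrapper around the Hugging Face Inference API; `wait_for_model=True` makes the
-    # request block until the model is loaded instead of returning an error. The bearer
-    # token is hard-coded here; it would normally come from an environment variable.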
-
- @classmethod
- def inference(cls, inputs: Union[List[str], str], model_name:str) ->dict:
- payload = dict(
- inputs = inputs,
- options = dict(
- wait_for_model=True
- )
- )
-
- data = json.dumps(payload)
- response = requests.request("POST", cls.API_URL+model_name, headers=cls.headers, data=data)
- return json.loads(response.content.decode("utf-8"))
-
- @classmethod
- async def async_inference(cls, inputs: Union[List[str], str], model_name: str) -> dict:
- payload = dict(
- inputs=inputs,
- options=dict(
- wait_for_model=True
- )
- )
-
- data = json.dumps(payload)
-
- async with aiohttp.ClientSession() as session:
- async with session.post(cls.API_URL + model_name, data=data, headers=cls.headers) as response:
- return await response.json()
-
-
-if __name__ == '__main__':
- print(InferenceHF.inference(
- inputs='hi how are you?',
- model_name= 't5-small'
- ))
-
- print(
- run(InferenceHF.async_inference(
- inputs='hi how are you?',
- model_name='t5-small'
- ))
- )
-
-
diff --git a/spaces/rkoushikroy2/portrait_photo_generator/README.md b/spaces/rkoushikroy2/portrait_photo_generator/README.md
deleted file mode 100644
index 73d3a91f37ba1570039190fa14d96f1b019856b8..0000000000000000000000000000000000000000
--- a/spaces/rkoushikroy2/portrait_photo_generator/README.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title: Portrait Photo Generator
-emoji: 🐯🦊🐼
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.1.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-# Portrait Photo Generator
-
-In this repo we are going to use an image segmentation model (facebook/detr-resnet-50-panoptic) to generate a portrait photo of any object present in a photo. Then we are going to make a deployable app using Gradio and, finally, host it on Hugging Face Spaces.
-
-## User Interface
-Here is a screenshot of the interface. The app has been hosted on Hugging Face Spaces. [Click Here]() to explore the deployed version.
-
-> You can drag and drop images into the top-left image box. After you drop an image, the image segmentation model will process it and provide a list of objects in the dropdown menu on the right. You can then choose any object from the list, and that object will be focused in the Output image box. You can adjust the slider to change the strength of the background blur. The whole interface is reactive, and anything you change will trigger an auto refresh. You may find the example images in the lower right useful.
-
-## Portrait Image Generation Steps:
-- Load segmentation model, pass your image, generate the masks and labels for each object present in the image.
-- Make a list of objects from the "label" keys in the prediction dictionaries.
-- Add a number identifier so that each object name can be unique. This is important for selecting a object.
-- Image segmentation and background blurring method for the selected object in the image:
-  - Take the mask provided by the segmentation model in the earlier stage.
-  - Divide by 255 to bring the range to 0 to 1.
-  - At this point the mask has one channel. Make it three-channel by copying the single channel three times.
-  - Element-wise multiply the input image and the three-channel mask.
-  This gives an image where only the segmented part is present. In other words, only the selected object's pixels stay intact and every other pixel is black (0).
-  - Now take the original image and blur it using any kind of blurring kernel. Here I used Gaussian blur.
-  - After that, create an inverse of the three-channel mask from the previous steps.
-  - Element-wise multiply the inverse mask and the blurred image.
-  This gives an image where the blurred background around the selected object is present and
-  the pixels associated with the object of interest are blank/black (value = 0).
-  - Then add the segmented image and the inverse-segmented (blurred background) portion.
-  This gives the desired portrait-style output.
-  - Smoothen the image (a minimal code sketch of these steps follows this list).
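-
-A minimal sketch of the mask-and-blur compositing described above, assuming a PIL input image and a single-channel 0-255 mask from the segmentation model (function and variable names are illustrative, not taken from `app.py`):
-
-```python
-import numpy as np
-from PIL import Image, ImageFilter
-
-def blur_background(image: Image.Image, mask: Image.Image, radius: int = 10) -> Image.Image:
-    """Keep the masked object sharp and blur everything else."""
-    img = np.asarray(image.convert("RGB"), dtype=np.float32)
-    m = np.asarray(mask.convert("L"), dtype=np.float32) / 255.0   # scale mask to 0..1
-    m3 = np.stack([m, m, m], axis=-1)                             # make the mask three-channel
-    blurred = np.asarray(image.convert("RGB").filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
-    # object pixels come from the original image, the rest from the blurred copy
-    out = img * m3 + blurred * (1.0 - m3)
-    return Image.fromarray(out.astype(np.uint8))
-```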
-
-## Gradio GUI Steps:
-- Used the Gradio Blocks element for more control over the interface.
-- Used the image.change, slider.change and dropdown.change event handlers to regenerate the output each time any of these components changes (see the sketch below).
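-
-A hedged sketch of how this wiring might look (component and function names are illustrative and not taken from `app.py`):
-
-```python
-import gradio as gr
-
-def generate(image, selected_object, blur_strength):
-    # segmentation + background blurring for the chosen object would happen here
-    return image
-
-with gr.Blocks() as demo:
-    with gr.Row():
-        inp = gr.Image(type="pil", label="Input image")
-        out = gr.Image(label="Output image")
-    choice = gr.Dropdown(choices=[], label="Object")
-    blur = gr.Slider(1, 30, value=10, label="Blur strength")
-    # any change to the image, dropdown or slider regenerates the output
-    for component in (inp, choice, blur):
-        component.change(generate, inputs=[inp, choice, blur], outputs=out)
-
-demo.launch()
-```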
-
-## Requirements
-```
-gradio
-numpy
-Pillow
-torch
-timm
-transformers
-```
-```
-pip install -r requirements.txt
-```
-
-## Run App
-```
-python app.py
-```
-
-## Files
-| File | Contents |
-|---|--- |
-| [Segmentation Notebook](segmentation.ipynb) | Here the segmentation model to background blur part has been shown step by step.|
-| [Gradio Notebook](gradio.ipynb) | You may experiment with different components of a gradio app.|
-| [Gradio App](app.py) | It is the main file of the deployed version of the portrait photo generator app.|
-
-
-
diff --git a/spaces/robin0307/MMOCR/configs/_base_/det_models/textsnake_r50_fpn_unet.py b/spaces/robin0307/MMOCR/configs/_base_/det_models/textsnake_r50_fpn_unet.py
deleted file mode 100644
index 7d74f376b8c635451a3036e780ffc88e7640bf2c..0000000000000000000000000000000000000000
--- a/spaces/robin0307/MMOCR/configs/_base_/det_models/textsnake_r50_fpn_unet.py
+++ /dev/null
@@ -1,22 +0,0 @@
-model = dict(
- type='TextSnake',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='BN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPN_UNet', in_channels=[256, 512, 1024, 2048], out_channels=32),
- bbox_head=dict(
- type='TextSnakeHead',
- in_channels=32,
- loss=dict(type='TextSnakeLoss'),
- postprocessor=dict(
- type='TextSnakePostprocessor', text_repr_type='poly')),
- train_cfg=None,
- test_cfg=None)
diff --git a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_modality_transform_toy_dataset.py b/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_modality_transform_toy_dataset.py
deleted file mode 100644
index 1bb350fc3f49418f2841df2d65f183c34e08db0e..0000000000000000000000000000000000000000
--- a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_modality_transform_toy_dataset.py
+++ /dev/null
@@ -1,31 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/recog_models/nrtr_modality_transform.py',
- '../../_base_/schedules/schedule_adam_step_6e.py',
- '../../_base_/recog_datasets/toy_data.py',
- '../../_base_/recog_pipelines/nrtr_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
- samples_per_gpu=16,
- workers_per_gpu=2,
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/robjm16/domain_specific_ChatGPT/DOMAIN_SPECIFIC_CHATGPT.md b/spaces/robjm16/domain_specific_ChatGPT/DOMAIN_SPECIFIC_CHATGPT.md
deleted file mode 100644
index b004214cdaafea4a5a646ae4912db46be6b8f77b..0000000000000000000000000000000000000000
--- a/spaces/robjm16/domain_specific_ChatGPT/DOMAIN_SPECIFIC_CHATGPT.md
+++ /dev/null
@@ -1,150 +0,0 @@
-# Leveraging ChatGPT for Business and Organizational Purposes
-
-Since its launch in November 2022, ChatGPT has captivated the world by answering questions on virtually any subject in human-like fashion. Not to mention its ability to compose poems in seconds. Or write computer code based on plain language instructions.
-
-ChatGPT will no doubt have a huge impact on the public -- as well as on companies and other institutions.
-
-For organizations, the key will be to leverage ChatGPT's extraordinary powers across specific domain areas, such as industries or corporate functions.
-
-An insurance company, for example, might customize ChatGPT to answer questions on all its policy documents. The modified version could be used internally to train and inform service reps and, eventually, with customers directly via its app or website. The frictional costs of getting the right information in the right format at the right time would trend to zero.
-
-Industries and functions that are knowledge-intensive (e.g., healthcare, professional services, education) and service-intensive (IT, legal, marketing and sales) will likely benefit most from ChatGPT’s powers.
-
-But almost all organizations and functions have significant knowledge management needs, and ChatGPT could help improve how they operate and serve customers -- perhaps dramatically.
-
-
-## Building Domain-Specific Capabilities
-Developed by OpenAI, an artificial intelligence research company, ChatGPT was "trained" on a massive trove of text data on the internet up through 2021 -- some 300 billion words from web pages, books and other documents. (The "GPT" in ChatGPT stands for “Generative Pretrained Transformer,” a technical reference to the AI model.)
-
-Due to the training cut off, ChatGPT knows little or nothing about events that occurred in 2022 and later. Nor does it know anything about organizational documents that were not available to it in its training.
-
-But through an API (Application Programming Interface, which lets computer programs talk with each other), organizations can incorporate new information into ChatGPT. This feature enables it to stay up-to-date with the latest developments or specific knowledge in an industry or field.
-
-To demonstrate how this might work, I built a simple domain-specific [chatbot](https://huggingface.co/spaces/robjm16/domain_specific_ChatGPT). In my example, I took the 2023 investment outlook summaries posted to the web by Morgan Stanley [(here)](https://www.morganstanley.com/ideas/global-investment-strategy-outlook-2023), JPMorgan [(here)](https://www.jpmorgan.com/insights/research/market-outlook) and Goldman Sachs [(here)](https://www.goldmansachs.com/insights/pages/gs-research/macro-outlook-2023-this-cycle-is-different/report.pdf) and combined them into one 4,000 word document.
-
-Through a process described more fully below, the investment outlook information was fed into ChatGPT and became the basis for responses to questions such as: "What does Goldman see happening with inflation in 2023?" and "What is the outlook for the bond market?"
-
-Most of [my code](https://github.com/robjm16/domain_specific_ChatGPT) was adapted from OpenAI's [cookbook](https://github.com/openai/openai-cookbook) of code examples for working with ChatGPT.
-
-Below is an overview of what I discovered through the development process and related research.
-
-## ChatGPT’s Many Uses
-ChatGPT’s capabilities go well beyond what traditional chatbots offer:
-- It can draft copy for marketing materials, blog posts and product descriptions.
-- It can edit, summarize or translate any text, and write in almost any voice (e.g., ad copy tone).
-- It can be used for text classification – for example, whether tweets about your organization were positive or negative last week.
-- It can quickly organize unstructured information, such as a doctor's diagnostic notes.
-
-On the computer coding side:
-- It can convert written instructions into computer code.
-- It can explain and document your code.
-- It can convert between coding languages (e.g., Java to Python).
-- It can write test cases and help fix bugs.
-
-## Two Key Mechanisms: Prompt and Completion
-When interacting with ChatGPT, either through a web interface or through computer code via the API, the prompt and completion mechanisms are key.
-
-The prompt is an input mechanism into which you place your question or request, as well as any context, including domain-specific content and other instructions (e.g., respond in a certain format).
-
-The completion mechanism is ChatGPT’s response to your prompt. It answers your question or request. Importantly, it contains a parameter called “temperature,” which controls how creative ChatGPT should be in responding to your prompt. A lower temperature means ChatGPT will be conservative, sticking to the most factual information and not trying to guess if unsure.
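-
-For illustration, here is a minimal sketch of a prompt/completion call with a low temperature, using the pre-1.0 `openai` Python client that was current when this was written (the model name and parameter values are assumptions, not the exact settings used in my chatbot):
-
-```python
-import openai
-
-openai.api_key = "YOUR_API_KEY"  # placeholder
-
-response = openai.Completion.create(
-    model="text-davinci-003",    # assumed completion model
-    prompt="Context: <domain-specific text goes here>\n\n"
-           "Q: What is the outlook for the bond market?\nA:",
-    temperature=0.0,             # low temperature -> conservative, factual answers
-    max_tokens=300,
-)
-print(response["choices"][0]["text"].strip())
-```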
-
-## Domain-Specific Uses: Technical Approaches
-There are three ways to interact with ChatGPT for domain-specific purposes:
-1. Use as is: The first approach is to use ChatGPT as is. For example, ChatGPT has well-honed classification capabilities, and it may not benefit much from domain-specific examples. If you want to use ChatGPT to classify or summarize online review sentiment about your business, its inherent capabilities should work fine.
-
-2. Inject content into prompts: The second approach is to inject domain-specific context into your prompt. In this scenario, ChatGPT still fully uses its natural language capabilities, but it looks to your specific content when formulating an answer.
-
-3. Fine-tune a model: Currently, only the previous and less powerful version of ChatGPT’s neural network model (GPT-2) is available to download and use in your own environment. With GPT-2 and other relatively small pre-trained models, you can adapt the model in a process called transfer learning and train it on your domain-specific content.
-
- The newest model (GPT-3) can only be accessed via the OpenAI API. You can “fine tune” it on your content and save a new version of it for future use. But you cannot fundamentally modify the model and retrain it in the traditional machine learning sense.
-
- One reason why is the sheer size of the pre-trained model. The time (weeks or months) and computing costs of fully retraining it would be prohibitive to all but the largest organizations. Further, any significant retraining would run the risk of inadvertently losing some of ChatGPT's powerful capabilities.
-
-   Instead, with GPT-3, you start with the base model and feed it your domain-specific content in the form of questions and answers. Making matters easier, the model itself can create the questions and answers based off of your content. The model then runs in the background, seeking to maximize its ability to answer correctly by updating some of the model’s parameters (see discussion of neural networks below). When complete, it creates a proprietary version of the model for future use (a brief sketch of the training-data format appears below).
-
-The second and third approaches are not mutually exclusive. The key difference is that the third approach tailors the model to your information and produces a reusable customized model (more on this later). With approach two, the base model is used unchanged and the model retains no "memory" of the injected content, outside of the current session.
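-
-As a brief sketch, training data for OpenAI's legacy fine-tunes endpoint is supplied as JSON-lines prompt/completion pairs; the example below only illustrates the format (the questions echo this article, the completions are placeholders, and the filename is made up):
-
-```python
-import json
-
-# One JSON object per line, each with "prompt" and "completion" keys.
-examples = [
-    {"prompt": "What does Goldman see happening with inflation in 2023? ->",
-     "completion": " <summary of the relevant outlook paragraph>\n"},
-    {"prompt": "What is the outlook for the bond market in 2023? ->",
-     "completion": " <summary of the relevant outlook paragraph>\n"},
-]
-with open("outlook_finetune.jsonl", "w") as f:
-    for example in examples:
-        f.write(json.dumps(example) + "\n")
-```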
-
-## Contextual Embeddings: 4,000 Shades of Meaning
-When ChatGPT receives a question and other content as input, it first maps each word or word fragment to a unique numerical identifier called a token. With ChatGPT, each token represents approximately 0.75 words. (The math is important due to usage limits on ChatGPT.)
-
-Each token in the input receives a numerical representation of the word or word fragment called an "embedding." For example, the word "queen" can be represented by a series of numerical sequences capturing how close the word is semantically to words such as "king," "female” and “leader." The embedding also captures syntax and context.
-
-ChatGPT then combines all the tokens and embeddings in the input text (let's assume it's a paragraph) and generates a fixed-length numerical representation of the paragraph. In ChatGPT's case, each input paragraph has 4,096 data points or dimensions associated with it. This is known as "contextual embedding." The actual embedding might look like this: [0.016102489084005356, -0.011134202592074871, …, 0.01891878806054592].
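-
-A minimal sketch of requesting an embedding for one paragraph with the pre-1.0 `openai` client (the embedding model name is an assumption; the vector length depends on the model chosen):
-
-```python
-import openai
-
-paragraph = "Goldman Sachs expects ..."  # one of the ~30 outlook paragraphs
-result = openai.Embedding.create(
-    model="text-embedding-ada-002",      # assumed embedding model
-    input=paragraph,
-)
-embedding = result["data"][0]["embedding"]   # a list of floats, e.g. [0.0161, -0.0111, ...]
-```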
-
-## GPT-3: One of World’s Largest Neural Networks
-Neural networks are often described as brain-like, with “neurons” and connecting “synapses.” In the simple example below, the far left layer takes in input (e.g., a paragraph of text) and the far right layer is the output (the answer or response). In between, the input goes through many layers and nodes, depending on the complexity of the model. This part is “hidden” in that what each node represents is not easily discernable.
-
-The lines between the model's nodes (similar to synapses connecting neurons in the brain), receive a mathematical weighting that maximizes the chances that the output will be correct (and errors minimized). These weightings are called parameters.
-
-
-
-
-The ChatGPT model (GPT-3) has 175 billion potential line weightings or parameters, but not all of them “fire” depending on the prompt. By contrast, GPT-2 has "just" 1.5 billion parameters.
-
-In addition, the ChatGPT model has an “attention” mechanism that allows it to differentially weight the importance of parts of the input text, leading to a more coherent and fluent response.
-
-The ChatGPT model was also partially trained on how actual people rated its answers, helping to make responses not just factually correct but more human-like.
-
-## ChatGPT in Action: My Investment Outlook Example
-The first step in leveraging ChatGPT on domain-specific content is to gather the content and pre-process it as needed (e.g., chunking it into sentences or paragraphs).
-
-The ChatGPT API has limits on the amount of work it will do for free. Accordingly, I limited my example to about 4,000 words containing the investment outlooks from the three banks. I further arranged the content into about 30 paragraphs.
-
-There is a limit of 2,048 tokens – or about 1,500 words – for both the prompt and completion. While my document is 4,000 words, only the most relevant sections are fed into the prompt, thus keeping under the token limit.
-
-The document’s 30 paragraphs are first sent out to the ChatGPT API to get contextual embeddings. When a question is asked, that question also gets its respective embeddings via the API.
-
-Next, computer code in my environment (not via the API) compares the question to the content in the 30 paragraphs. It then picks the best-fit paragraphs based on how close the question is semantically to each paragraph (by doing a lot of math around their respective contextual embeddings).
-
-The best-fit paragraphs are then attached to the question as "context" and fed back to ChatGPT via the API for an answer. My program also instructs ChatGPT to say, "Sorry, I don't know," if it is asked a question where it does not have good information, thus reining in ChatGPT's tendency to answer even when unsure.
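-
-A minimal sketch of this ranking and prompt-construction step, assuming the paragraph embeddings have already been computed (the helper names are hypothetical, not the ones in my code):
-
-```python
-import numpy as np
-
-def cosine_similarity(a, b):
-    a, b = np.asarray(a), np.asarray(b)
-    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
-
-def build_prompt(question, question_emb, paragraphs, paragraph_embs, top_k=3):
-    # rank stored paragraphs by semantic similarity to the question
-    scores = [cosine_similarity(question_emb, emb) for emb in paragraph_embs]
-    best = [paragraphs[i] for i in np.argsort(scores)[::-1][:top_k]]
-    context = "\n\n".join(best)
-    return (
-        "Answer the question using only the context below. "
-        'If the answer is not in the context, say "Sorry, I don\'t know."\n\n'
-        f"Context:\n{context}\n\nQ: {question}\nA:"
-    )
-```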
-
-Lastly, ChatGPT combines the question, the added domain content and the model's inherent natural language processing skills to produce a response.
-
-Below is an example of a question and response within the interface:
-
-
-
-After fine tuning and creating a proprietary version of the base model, I found that my new version of ChatGPT could answer questions based off of the newly ingested domain-specific content, even without added context in the prompt. However, the answers were much less specific than when context was attached, as in approach two.
-
-Next, I combined the two approaches -- fine tuning a custom model while adding best-fit context in the prompt.
-
-This seemed to work at least as well as approach two (context added but no fine tuning). But further experimentation and testing would be needed to determine if, in my case, fine tuning added extra value.
-
-It is important to note that fine tuning does not create a new store of information in the model. In fine tuning, you typically feed the model hundreds of examples, whereas the base model has been trained on hundreds of millions of documents. At best, it appears that fine tuning can adjust the model to some degree to your domain area's terminology and specific task instructions. Fine tuning can also work well on very specific classification exercises. However, to align responses closely with domain content in a question-and-answer model, you should continue to inject the domain content into the prompt.
-
-## The ChatGPT Ecosystem
-OpenAI was founded in 2015 by a group that includes Elon Musk, with Microsoft as an investor and key partner.
-
-Microsoft plans to incorporate ChatGPT into its product portfolio. For example, ChatGPT could be used in Microsoft Word and PowerPoint, to automate writing, editing and summarization. It could also be used to augment Microsoft’s Bing search engine, providing human-like answers to questions as well as a more semantic-based search.
-
-ChatGPT’s coding assistance capabilities could be integrated with Microsoft’s Visual Studio code editor. In fact, some coders are already using GPT-3 in tandem with Microsoft's Github Copilot, a code auto-completion tool, and reporting significant gains in personal productivity. And Microsoft has moved quickly to integrate GPT-3 into its Azure cloud computing services.
-
-Other large cloud providers – notably Google Cloud Platform and Amazon Web Services (AWS) – also offer fast evolving AI tools.
-
-Google, in fact, developed several of the most powerful “large language models” similar to GPT-3 (they go by the names BERT, T5 and XLNet). Google’s CEO called a “code red” following the release of ChatGPT, challenging Google engineers to quickly incorporate its ChatGPT-like models into its dominant search platform.
-
-AWS’s suite of AI services is called SageMaker. Like the other cloud-based AI toolkits, SageMaker includes pre-built algorithms that enable companies to quickly build, train and deploy machine learning models.
-
-Meta/Facebook and Salesforce have also developed large language models (RoBERTa and CTRL, respectively).
-
-Another player is Hugging Face, which hosts a popular community website for sharing open-source models and quickly prototyping and deploying models. You can download and use GPT-2 through Hugging Face (or access GPT-3 through the OpenAI API.)
-
-## Data Security
-Each organization needs to make its own security judgments around using ChatGPT, including server, VPN, firewall, encryption and data security issues.
-
-OpenAI says that information shared as input in the public chatbot (as well as the responses) could be used to improve the future performance of the GPT model. It notes, however, that it has guidelines to guard against confidential or personal information being used in training. Further, given the massive size of GPT's training data, OpenAI believes it is unlikely that a small amount of organizational data could affect the base model's training. That said, caution is warranted.
-
-The fine-tuned model is different. OpenAI says that it does not have access to the prompts you might use in fine tuning the model, and it could not use them to train the publicly available base model.
-
-For added control and security, organizations can purchase GPT-3 licenses for on-premises deployment or a "fully managed" enterprise solution hosted on the Microsoft Azure cloud.
-
-## Bottom line
-ChatGPT is a disruptive technology, a potential game changer.
-
-Companies should start by considering its potential strategic impact and by identifying possible use cases. How might these models disrupt your industry? Where can they be leveraged to dramatically improve knowledge management, customer service or functional processes?
-
-Organizations can experiment with ChatGPT by developing low-risk prototypes in sandboxed environments. The process will give rise to many questions about restricting or sanitizing inputs, curating domain-specific content, dealing with limitations of the models (e.g., serving up incorrect information), fine tuning, hosting and security.
-
-It's important to note that ChatGPT-like models are far from being black boxes that operate as autonomous machines. In fact, as a recent McKinsey [report](https://www.mckinsey.com/capabilities/quantumblack/our-insights/generative-ai-is-here-how-tools-like-chatgpt-could-change-your-business) notes, "In many cases, they are most powerful in combination with humans, augmenting their capabilities and enabling them to get work done faster and better."
-
-Some liken their use to having an army of interns at your side. The interns are super fast, incredibly productive and very smart -- but also overconfident and inexperienced. They need supervision, someone who can catch errors. With that caveat, they could make your marketers, lawyers, software engineers, service reps and other subject matter experts -- and your organization overall -- far more effective and efficient.
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Conan Exiles Most Powerful Weapon A Guide to the Legendary Riptide Spear.md b/spaces/rorallitri/biomedical-language-models/logs/Conan Exiles Most Powerful Weapon A Guide to the Legendary Riptide Spear.md
deleted file mode 100644
index 37333bc94a4c2d25ef8b0dea2cf7df80a2a696ee..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Conan Exiles Most Powerful Weapon A Guide to the Legendary Riptide Spear.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
Whoever the Grey Ones are in Conan Exiles, players ought to thank them for giving inspiration to one of the most competitive swords in the game. They're apparently, the discreet aliens of the game's lore and thus, the weapon derived from their materials is weird.
The Titan's Fingernail stays true to its name when it comes to stats as the weapon is powerful enough to deal with most enemies that wear heavy armor. Despite the less-than-classy name though, the Titan's Fingernail is actually a pretty weapon and doesn't look like a behemoth's overgrown talons.
-
The difference is that it has a whopping 73 damage but the same second-highest armor penetration as the abovementioned swords. That already makes it the most appealing weapon requiring Eldarium to craft.
-
These all are the most powerful weapons that you can find in Conan exiles. Some of them are easy to craft whereas others can be obtained by defeating bosses. But they are worth all of your efforts. Once you get them only then you can experience the power they possess.
-
This guide has covered the best weapons in Conan Exiles. These weapons will help you get through the toughest battles. We have discussed almost all sorts of weapons including one-handed, two-handed, and ranged weapons.
-
-
Two-handed swords follow their smaller cousins as speedy weapons. Their larger build has the added benefit of a longer reach. They also apply cripple often, making it difficult for your opponents to escape. They do mediocre damage, mostly due to their low armor penetration.
-
Two-handed spears in Conan Exiles have decent damage with great reach. While a pike generally applies bleed and cripple on their 4th attack, their real benefit is how far away you can be while still dealing out relatively high damage. Two-handed spears are some of the most difficult weapons to master, and then you must master them on horseback.
-
Not every weapon is created or maintained equally. Not only can you create more powerful weapons as you progress through Conan Exiles, but you can also modify other aspects of them to suit your needs.
-
Conan Exiles is a survival game where players must have powerful weapons and an even more powerful wit to survive and thrive. Still, several axes or swords can only keep performers safe with a sturdy set of armor to rescue their literal skin in the game. Sword of Crom will leave your foe astonished when it gives the harm of 91 points. It is a legendary weapon; hence it cannot be crafted.
-
Two-handed spears, daggers, and one-handed axes, are currently the best types in the game. Though spears are distant and away best, a spear of the Gray Ones has the most elevated impairment rating of 62 points. Sword of Crom: Here comes the most prominent two-handed sword of the game. Sword of Crom will depart your opponent stunned when it gives the harm of 91 points. It is a legendary weapon; hence it cannot be drafted.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Iec Tr 61641.pdf.md b/spaces/rorallitri/biomedical-language-models/logs/Iec Tr 61641.pdf.md
deleted file mode 100644
index aac1749bcb5a037170eb8869f604931dd012f525..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Iec Tr 61641.pdf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-If you have one of the Interarms Mauser Lugers made from ... However, from the others serial numbers available in the Mauser ledgers it is ... 1fdad05405
-
-
-
diff --git a/spaces/sahshd/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/sahshd/ChuanhuChatGPT/chatgpt - windows.bat
deleted file mode 100644
index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000
--- a/spaces/sahshd/ChuanhuChatGPT/chatgpt - windows.bat
+++ /dev/null
@@ -1,14 +0,0 @@
-@echo off
-echo Opening ChuanhuChatGPT...
-
-REM Open powershell via bat
-start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
-
-REM Wait a few seconds before opening the web page at http://127.0.0.1:7860/
-ping -n 5 127.0.0.1>nul
-
-REM Access ChuanhuChatGPT via your default browser
-start "" "http://127.0.0.1:7860/"
-
-
-echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/).
\ No newline at end of file
diff --git a/spaces/sayakpaul/sots-outdoor-dehazing-maxim/maxim/maxim.py b/spaces/sayakpaul/sots-outdoor-dehazing-maxim/maxim/maxim.py
deleted file mode 100644
index ae195b3b531ca85eb1ae23e7be46647ec421e3d0..0000000000000000000000000000000000000000
--- a/spaces/sayakpaul/sots-outdoor-dehazing-maxim/maxim/maxim.py
+++ /dev/null
@@ -1,320 +0,0 @@
-import functools
-
-import tensorflow as tf
-from tensorflow.keras import backend as K
-from tensorflow.keras import layers
-
-from .blocks.attentions import SAM
-from .blocks.bottleneck import BottleneckBlock
-from .blocks.misc_gating import CrossGatingBlock
-from .blocks.others import UpSampleRatio
-from .blocks.unet import UNetDecoderBlock, UNetEncoderBlock
-from .layers import Resizing
-
-Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same")
-Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same")
-ConvT_up = functools.partial(
- layers.Conv2DTranspose, kernel_size=(2, 2), strides=(2, 2), padding="same"
-)
-Conv_down = functools.partial(
- layers.Conv2D, kernel_size=(4, 4), strides=(2, 2), padding="same"
-)
-
-
-def MAXIM(
- features: int = 64,
- depth: int = 3,
- num_stages: int = 2,
- num_groups: int = 1,
- use_bias: bool = True,
- num_supervision_scales: int = 1,
- lrelu_slope: float = 0.2,
- use_global_mlp: bool = True,
- use_cross_gating: bool = True,
- high_res_stages: int = 2,
- block_size_hr=(16, 16),
- block_size_lr=(8, 8),
- grid_size_hr=(16, 16),
- grid_size_lr=(8, 8),
- num_bottleneck_blocks: int = 1,
- block_gmlp_factor: int = 2,
- grid_gmlp_factor: int = 2,
- input_proj_factor: int = 2,
- channels_reduction: int = 4,
- num_outputs: int = 3,
- dropout_rate: float = 0.0,
-):
- """The MAXIM model function with multi-stage and multi-scale supervision.
-
- For more model details, please check the CVPR paper:
- MAXIM: MUlti-Axis MLP for Image Processing (https://arxiv.org/abs/2201.02973)
-
- Attributes:
- features: initial hidden dimension for the input resolution.
- depth: the number of downsampling depth for the model.
-      num_stages: how many stages to use. It also affects the output list.
- num_groups: how many blocks each stage contains.
- use_bias: whether to use bias in all the conv/mlp layers.
- num_supervision_scales: the number of desired supervision scales.
- lrelu_slope: the negative slope parameter in leaky_relu layers.
- use_global_mlp: whether to use the multi-axis gated MLP block (MAB) in each
- layer.
- use_cross_gating: whether to use the cross-gating MLP block (CGB) in the
- skip connections and multi-stage feature fusion layers.
-      high_res_stages: how many stages are specified as high-res stages. The
- rest (depth - high_res_stages) are called low_res_stages.
- block_size_hr: the block_size parameter for high-res stages.
- block_size_lr: the block_size parameter for low-res stages.
- grid_size_hr: the grid_size parameter for high-res stages.
- grid_size_lr: the grid_size parameter for low-res stages.
- num_bottleneck_blocks: how many bottleneck blocks.
- block_gmlp_factor: the input projection factor for block_gMLP layers.
- grid_gmlp_factor: the input projection factor for grid_gMLP layers.
- input_proj_factor: the input projection factor for the MAB block.
- channels_reduction: the channel reduction factor for SE layer.
- num_outputs: the output channels.
- dropout_rate: Dropout rate.
-
- Returns:
- The output contains a list of arrays consisting of multi-stage multi-scale
- outputs. For example, if num_stages = num_supervision_scales = 3 (the
- model used in the paper), the output specs are: outputs =
- [[output_stage1_scale1, output_stage1_scale2, output_stage1_scale3],
- [output_stage2_scale1, output_stage2_scale2, output_stage2_scale3],
- [output_stage3_scale1, output_stage3_scale2, output_stage3_scale3],]
- The final output can be retrieved by outputs[-1][-1].
- """
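-    # Hypothetical usage sketch (not part of the original file): the spatial
-    # dimensions must be concrete because `apply` reads them via K.int_shape, e.g.
-    #   inputs = tf.keras.Input((256, 256, 3))
-    #   outputs = MAXIM(features=32, num_supervision_scales=3)(inputs)
-    #   model = tf.keras.Model(inputs, outputs)
-    # The final prediction can then be read from outputs[-1][-1].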
-
- def apply(x):
- n, h, w, c = (
- K.int_shape(x)[0],
- K.int_shape(x)[1],
- K.int_shape(x)[2],
- K.int_shape(x)[3],
- ) # input image shape
-
- shortcuts = []
- shortcuts.append(x)
-
- # Get multi-scale input images
- for i in range(1, num_supervision_scales):
- resizing_layer = Resizing(
- height=h // (2 ** i),
- width=w // (2 ** i),
- method="nearest",
- antialias=True, # Following `jax.image.resize()`.
- name=f"initial_resizing_{K.get_uid('Resizing')}",
- )
- shortcuts.append(resizing_layer(x))
-
- # store outputs from all stages and all scales
- # Eg, [[(64, 64, 3), (128, 128, 3), (256, 256, 3)], # Stage-1 outputs
- # [(64, 64, 3), (128, 128, 3), (256, 256, 3)],] # Stage-2 outputs
- outputs_all = []
- sam_features, encs_prev, decs_prev = [], [], []
-
- for idx_stage in range(num_stages):
- # Input convolution, get multi-scale input features
- x_scales = []
- for i in range(num_supervision_scales):
- x_scale = Conv3x3(
- filters=(2 ** i) * features,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_input_conv_{i}",
- )(shortcuts[i])
-
- # If later stages, fuse input features with SAM features from prev stage
- if idx_stage > 0:
- # use larger blocksize at high-res stages
- if use_cross_gating:
- block_size = (
- block_size_hr if i < high_res_stages else block_size_lr
- )
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
- x_scale, _ = CrossGatingBlock(
- features=(2 ** i) * features,
- block_size=block_size,
- grid_size=grid_size,
- dropout_rate=dropout_rate,
- input_proj_factor=input_proj_factor,
- upsample_y=False,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_input_fuse_sam_{i}",
- )(x_scale, sam_features.pop())
- else:
- x_scale = Conv1x1(
- filters=(2 ** i) * features,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_input_catconv_{i}",
- )(tf.concat([x_scale, sam_features.pop()], axis=-1))
-
- x_scales.append(x_scale)
-
- # start encoder blocks
- encs = []
- x = x_scales[0] # First full-scale input feature
-
- for i in range(depth): # 0, 1, 2
- # use larger blocksize at high-res stages, vice versa.
- block_size = block_size_hr if i < high_res_stages else block_size_lr
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
- use_cross_gating_layer = True if idx_stage > 0 else False
-
- # Multi-scale input if multi-scale supervision
- x_scale = x_scales[i] if i < num_supervision_scales else None
-
- # UNet Encoder block
- enc_prev = encs_prev.pop() if idx_stage > 0 else None
- dec_prev = decs_prev.pop() if idx_stage > 0 else None
-
- x, bridge = UNetEncoderBlock(
- num_channels=(2 ** i) * features,
- num_groups=num_groups,
- downsample=True,
- lrelu_slope=lrelu_slope,
- block_size=block_size,
- grid_size=grid_size,
- block_gmlp_factor=block_gmlp_factor,
- grid_gmlp_factor=grid_gmlp_factor,
- input_proj_factor=input_proj_factor,
- channels_reduction=channels_reduction,
- use_global_mlp=use_global_mlp,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- use_cross_gating=use_cross_gating_layer,
- name=f"stage_{idx_stage}_encoder_block_{i}",
- )(x, skip=x_scale, enc=enc_prev, dec=dec_prev)
-
- # Cache skip signals
- encs.append(bridge)
-
- # Global MLP bottleneck blocks
- for i in range(num_bottleneck_blocks):
- x = BottleneckBlock(
- block_size=block_size_lr,
- grid_size=block_size_lr,
- features=(2 ** (depth - 1)) * features,
- num_groups=num_groups,
- block_gmlp_factor=block_gmlp_factor,
- grid_gmlp_factor=grid_gmlp_factor,
- input_proj_factor=input_proj_factor,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- channels_reduction=channels_reduction,
- name=f"stage_{idx_stage}_global_block_{i}",
- )(x)
- # cache global feature for cross-gating
- global_feature = x
-
- # start cross gating. Use multi-scale feature fusion
- skip_features = []
- for i in reversed(range(depth)): # 2, 1, 0
- # use larger blocksize at high-res stages
- block_size = block_size_hr if i < high_res_stages else block_size_lr
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
-
- # get additional multi-scale signals
- signal = tf.concat(
- [
- UpSampleRatio(
- num_channels=(2 ** i) * features,
- ratio=2 ** (j - i),
- use_bias=use_bias,
- name=f"UpSampleRatio_{K.get_uid('UpSampleRatio')}",
- )(enc)
- for j, enc in enumerate(encs)
- ],
- axis=-1,
- )
-
- # Use cross-gating to cross modulate features
- if use_cross_gating:
- skips, global_feature = CrossGatingBlock(
- features=(2 ** i) * features,
- block_size=block_size,
- grid_size=grid_size,
- input_proj_factor=input_proj_factor,
- dropout_rate=dropout_rate,
- upsample_y=True,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_cross_gating_block_{i}",
- )(signal, global_feature)
- else:
- skips = Conv1x1(
- filters=(2 ** i) * features, use_bias=use_bias, name="Conv_0"
- )(signal)
- skips = Conv3x3(
- filters=(2 ** i) * features, use_bias=use_bias, name="Conv_1"
- )(skips)
-
- skip_features.append(skips)
-
- # start decoder. Multi-scale feature fusion of cross-gated features
- outputs, decs, sam_features = [], [], []
- for i in reversed(range(depth)):
- # use larger blocksize at high-res stages
- block_size = block_size_hr if i < high_res_stages else block_size_lr
- grid_size = grid_size_hr if i < high_res_stages else block_size_lr
-
- # get multi-scale skip signals from cross-gating block
- signal = tf.concat(
- [
- UpSampleRatio(
- num_channels=(2 ** i) * features,
- ratio=2 ** (depth - j - 1 - i),
- use_bias=use_bias,
- name=f"UpSampleRatio_{K.get_uid('UpSampleRatio')}",
- )(skip)
- for j, skip in enumerate(skip_features)
- ],
- axis=-1,
- )
-
- # Decoder block
- x = UNetDecoderBlock(
- num_channels=(2 ** i) * features,
- num_groups=num_groups,
- lrelu_slope=lrelu_slope,
- block_size=block_size,
- grid_size=grid_size,
- block_gmlp_factor=block_gmlp_factor,
- grid_gmlp_factor=grid_gmlp_factor,
- input_proj_factor=input_proj_factor,
- channels_reduction=channels_reduction,
- use_global_mlp=use_global_mlp,
- dropout_rate=dropout_rate,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_decoder_block_{i}",
- )(x, bridge=signal)
-
- # Cache decoder features for later-stage's usage
- decs.append(x)
-
- # output conv, if not final stage, use supervised-attention-block.
- if i < num_supervision_scales:
- if idx_stage < num_stages - 1: # not last stage, apply SAM
- sam, output = SAM(
- num_channels=(2 ** i) * features,
- output_channels=num_outputs,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_supervised_attention_module_{i}",
- )(x, shortcuts[i])
- outputs.append(output)
- sam_features.append(sam)
- else: # Last stage, apply output convolutions
- output = Conv3x3(
- num_outputs,
- use_bias=use_bias,
- name=f"stage_{idx_stage}_output_conv_{i}",
- )(x)
- output = output + shortcuts[i]
- outputs.append(output)
- # Cache encoder and decoder features for later-stage's usage
- encs_prev = encs[::-1]
- decs_prev = decs
-
- # Store outputs
- outputs_all.append(outputs)
- return outputs_all
-
- return apply
diff --git a/spaces/scedlatioru/img-to-music/example/Hangover 2 Tamil Dubbed Movie Free Download _BEST_.md b/spaces/scedlatioru/img-to-music/example/Hangover 2 Tamil Dubbed Movie Free Download _BEST_.md
deleted file mode 100644
index fbe3c522e5a9f7d22a8685d5b66657badf5475cd..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Hangover 2 Tamil Dubbed Movie Free Download _BEST_.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
Mathil Mel Poonai 3gp Movie Free Download 2 Sealdah-Singrauli Express Train Online Ticket Booking Liste de numeros de telefono gratis desde https reivindicator.exe millenium fonts free download windows 7 Privet Tomcat 3 Step By Step Spada Dedicata Doshu Penegini No Orkestar Kaikai Ezy Invoice Pro 10.6.3.11 Final Activated
Cobra 2009 torrent free hd Ratnage purnachandi 2001 telugu 720p quality hd Glowpoint dutch subtitles Lola Amor full movie hd free download Microsoft Office 2010 Product Key brendan jonas romance music video hd actors:-
-
les vacances du petit nicolas french blu ray 1080p 2014 corvette sygic india 12.1 ipa cracked Rog movie in hd download visual group theory pdf free Aero Glass For Win 8 Cracked Tooth
-
wave xtractor 3 2 keygen download Fantastic Four (English) telugu movie full 1080p free Ezy Invoice Pro 10.6.3.11 Final Activated Dameware.NT.Utilities.v6.9.0.0.Incl.Keymaker-ZWT.rar
-
-
Madurai mandapam comedy full movie free filmbam.com sahara linux mint 18.3 De 9 ArqSans Pro xubuntu iso844 peliculas graciosas de repente 2012 no download ĺå¢äºÅäð¹äÊäÇããÇÃ Ä ÇäÊÇã windows 10 slim size free download baby italia game download windows 7
-
Pitu Movie hd 1080p free download datacanyon.sg xls to pdf converter download Doom 3 brown skin nude models download tabbakaa buttab kot ein hakem v1.0.iso Fallout 3 mods xbox 360
-
vidio view 4 full version free download Download Cafez 3.5.0 activation keygen full vnstat downloader 2.3.2.1 patch Seichill Moomba lv25 speaker info *Generated 7/20/2015 22:48:24 from Serial*
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/scedlatioru/img-to-music/example/Radiohead Amnesiac Collectors Edition 2cd Torrent.md b/spaces/scedlatioru/img-to-music/example/Radiohead Amnesiac Collectors Edition 2cd Torrent.md
deleted file mode 100644
index 3501ba1f0dcceb6b3a4def0e0a178433f41c321b..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Radiohead Amnesiac Collectors Edition 2cd Torrent.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
-Amnesiac (2 CD Collector's Edition): Radiohead: Amazon.ca: Music. . Amnesiac (2 CD Collector's Edition). Radiohead (Artist) Format: Audio CD. Price.
-Amnesiac (CD).
-radiohead.
-0. Amazon.ca: music.
-Amnesiac: Original Motion Picture Soundtrack – Amnesiac: Original Motion Picture Soundtrack.
-Amnesiac (CD) | Amazon.com.
-Amnesiac (CD) Amazon.com.
-Amazon.com: Amnesiac (CD) - Radiohead: Music.
-Radiohead (Artist) Format: Audio CD.
-Price.
-Amnesiac (CD) - B1ZiB.
-Amazon.de: Amnesiac (CD) - Radiohead - Amazon.de.
-The Essential Radiohead – Amnesiac (CD) ...
-The Essential Radiohead Amnesiac (CD) The Essential ...
-1 day ago. 8a78ff9644
-
-
-
diff --git a/spaces/sczhou/ProPainter/RAFT/utils/__init__.py b/spaces/sczhou/ProPainter/RAFT/utils/__init__.py
deleted file mode 100644
index 0437149bfee42718973728158641020ccc1906ad..0000000000000000000000000000000000000000
--- a/spaces/sczhou/ProPainter/RAFT/utils/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from .flow_viz import flow_to_image
-from .frame_utils import writeFlow
diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py
deleted file mode 100644
index 1c66194deb5dd370e797e57e2712f44303e568cc..0000000000000000000000000000000000000000
--- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py
+++ /dev/null
@@ -1,802 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# --------------------------------------------------------
-# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py
-# --------------------------------------------------------
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from groundingdino.util.misc import NestedTensor
-
-
-class Mlp(nn.Module):
- """Multilayer perceptron."""
-
- def __init__(
- self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0
- ):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-
-class WindowAttention(nn.Module):
- """Window based multi-head self attention (W-MSA) module with relative position bias.
-    It supports both shifted and non-shifted windows.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(
- self,
- dim,
- window_size,
- num_heads,
- qkv_bias=True,
- qk_scale=None,
- attn_drop=0.0,
- proj_drop=0.0,
- ):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim**-0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
- ) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=0.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """Forward function.
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = (
- self.qkv(x)
- .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
- .permute(2, 0, 3, 1, 4)
- )
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = q @ k.transpose(-2, -1)
-
- relative_position_bias = self.relative_position_bias_table[
- self.relative_position_index.view(-1)
- ].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1
- ) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(
- 2, 0, 1
- ).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class SwinTransformerBlock(nn.Module):
- """Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(
- self,
- dim,
- num_heads,
- window_size=7,
- shift_size=0,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop=0.0,
- attn_drop=0.0,
- drop_path=0.0,
- act_layer=nn.GELU,
- norm_layer=nn.LayerNorm,
- ):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim,
- window_size=to_2tuple(self.window_size),
- num_heads=num_heads,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- attn_drop=attn_drop,
- proj_drop=drop,
- )
-
- self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(
- in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop
- )
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(
- shifted_x, self.window_size
- ) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(
- -1, self.window_size * self.window_size, C
- ) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """Patch Merging Layer
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
-
-class BasicLayer(nn.Module):
- """A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of feature channels
- depth (int): Depths of this stage.
- num_heads (int): Number of attention head.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(
- self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop=0.0,
- attn_drop=0.0,
- drop_path=0.0,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False,
- ):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList(
- [
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer,
- )
- for i in range(depth)
- ]
- )
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (
- slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None),
- )
- w_slices = (
- slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None),
- )
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(
- img_mask, self.window_size
- ) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
- attn_mask == 0, float(0.0)
- )
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """Image to Patch Embedding
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
-
-class SwinTransformer(nn.Module):
- """Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
-            used in absolute position embedding. Default: 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
-        num_heads (tuple[int]): Number of attention heads in each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
-        dilation (bool): If True, the output is 16x downsampled; otherwise 32x downsampled.
- """
-
- def __init__(
- self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- attn_drop_rate=0.0,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- dilation=False,
- use_checkpoint=False,
- ):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
- self.dilation = dilation
-
- # if use_checkpoint:
- # print("use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!")
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size,
- in_chans=in_chans,
- embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None,
- )
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [
- pretrain_img_size[0] // patch_size[0],
- pretrain_img_size[1] // patch_size[1],
- ]
-
- self.absolute_pos_embed = nn.Parameter(
- torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
- )
- trunc_normal_(self.absolute_pos_embed, std=0.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [
- x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
- ] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- # prepare downsample list
- downsamplelist = [PatchMerging for i in range(self.num_layers)]
- downsamplelist[-1] = None
- num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)]
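-        # with dilation, the PatchMerging before the last stage is skipped, so the deepest
-        # features stay at 16x output stride (instead of 32x) and keep the previous stage's width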
- if self.dilation:
- downsamplelist[-2] = None
- num_features[-1] = int(embed_dim * 2 ** (self.num_layers - 1)) // 2
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- # dim=int(embed_dim * 2 ** i_layer),
- dim=num_features[i_layer],
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
- norm_layer=norm_layer,
- # downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- downsample=downsamplelist[i_layer],
- use_checkpoint=use_checkpoint,
- )
- self.layers.append(layer)
-
- # num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f"norm{i_layer}"
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- # def init_weights(self, pretrained=None):
- # """Initialize the weights in backbone.
- # Args:
- # pretrained (str, optional): Path to pre-trained weights.
- # Defaults to None.
- # """
-
- # def _init_weights(m):
- # if isinstance(m, nn.Linear):
- # trunc_normal_(m.weight, std=.02)
- # if isinstance(m, nn.Linear) and m.bias is not None:
- # nn.init.constant_(m.bias, 0)
- # elif isinstance(m, nn.LayerNorm):
- # nn.init.constant_(m.bias, 0)
- # nn.init.constant_(m.weight, 1.0)
-
- # if isinstance(pretrained, str):
- # self.apply(_init_weights)
- # logger = get_root_logger()
- # load_checkpoint(self, pretrained, strict=False, logger=logger)
- # elif pretrained is None:
- # self.apply(_init_weights)
- # else:
- # raise TypeError('pretrained must be a str or None')
-
- def forward_raw(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(
- self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
- )
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = []
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
- # import ipdb; ipdb.set_trace()
-
- if i in self.out_indices:
- norm_layer = getattr(self, f"norm{i}")
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs.append(out)
- # in:
- # torch.Size([2, 3, 1024, 1024])
- # outs:
- # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
- # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
- return tuple(outs)
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
-
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(
- self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
- )
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = []
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f"norm{i}")
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs.append(out)
- # in:
- # torch.Size([2, 3, 1024, 1024])
- # out:
- # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
- # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
-
- # collect for nesttensors
- outs_dict = {}
- for idx, out_i in enumerate(outs):
- m = tensor_list.mask
- assert m is not None
- mask = F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0]
- outs_dict[idx] = NestedTensor(out_i, mask)
-
- return outs_dict
-
- def train(self, mode=True):
-        """Convert the model into training mode while keeping the frozen stages frozen."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
-
-
-def build_swin_transformer(modelname, pretrain_img_size, **kw):
- assert modelname in [
- "swin_T_224_1k",
- "swin_B_224_22k",
- "swin_B_384_22k",
- "swin_L_224_22k",
- "swin_L_384_22k",
- ]
-
- model_para_dict = {
- "swin_T_224_1k": dict(
- embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], window_size=7
- ),
- "swin_B_224_22k": dict(
- embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=7
- ),
- "swin_B_384_22k": dict(
- embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=12
- ),
- "swin_L_224_22k": dict(
- embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7
- ),
- "swin_L_384_22k": dict(
- embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=12
- ),
- }
-    kw_cfg = model_para_dict[modelname]
-    kw_cfg.update(kw)
-    model = SwinTransformer(pretrain_img_size=pretrain_img_size, **kw_cfg)
- return model
-
-
-if __name__ == "__main__":
- model = build_swin_transformer("swin_L_384_22k", 384, dilation=True)
- x = torch.rand(2, 3, 1024, 1024)
- y = model.forward_raw(x)
- import ipdb
-
- ipdb.set_trace()
- x = torch.rand(2, 3, 384, 384)
- y = model.forward_raw(x)
diff --git a/spaces/shi-labs/OneFormer/demo/colormap.py b/spaces/shi-labs/OneFormer/demo/colormap.py
deleted file mode 100644
index 3eff9a46d37a1926c48ef0ad6e3308128438140f..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/OneFormer/demo/colormap.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-"""
-An awesome colormap for really neat visualizations.
-Copied from Detectron, and removed gray colors.
-"""
-
-import numpy as np
-import random
-random.seed(0)
-
-__all__ = ["colormap", "random_color", "random_colors"]
-
-# fmt: off
-# RGB:
-# _COLORS = np.array(
-# [
-# 0.000, 0.447, 0.741,
-# 0.850, 0.325, 0.098,
-# 0.929, 0.694, 0.125,
-# 0.494, 0.184, 0.556,
-# 0.466, 0.674, 0.188,
-# 0.301, 0.745, 0.933,
-# 0.635, 0.078, 0.184,
-# 0.300, 0.300, 0.300,
-# 0.600, 0.600, 0.600,
-# 1.000, 0.000, 0.000,
-# 1.000, 0.500, 0.000,
-# 0.749, 0.749, 0.000,
-# 0.000, 1.000, 0.000,
-# 0.000, 0.000, 1.000,
-# 0.667, 0.000, 1.000,
-# 0.333, 0.333, 0.000,
-# 0.333, 0.667, 0.000,
-# 0.333, 1.000, 0.000,
-# 0.667, 0.333, 0.000,
-# 0.667, 0.667, 0.000,
-# 0.667, 1.000, 0.000,
-# 1.000, 0.333, 0.000,
-# 1.000, 0.667, 0.000,
-# 1.000, 1.000, 0.000,
-# 0.000, 0.333, 0.500,
-# 0.000, 0.667, 0.500,
-# 0.000, 1.000, 0.500,
-# 0.333, 0.000, 0.500,
-# 0.333, 0.333, 0.500,
-# 0.333, 0.667, 0.500,
-# 0.333, 1.000, 0.500,
-# 0.667, 0.000, 0.500,
-# 0.667, 0.333, 0.500,
-# 0.667, 0.667, 0.500,
-# 0.667, 1.000, 0.500,
-# 1.000, 0.000, 0.500,
-# 1.000, 0.333, 0.500,
-# 1.000, 0.667, 0.500,
-# 1.000, 1.000, 0.500,
-# 0.000, 0.333, 1.000,
-# 0.000, 0.667, 1.000,
-# 0.000, 1.000, 1.000,
-# 0.333, 0.000, 1.000,
-# 0.333, 0.333, 1.000,
-# 0.333, 0.667, 1.000,
-# 0.333, 1.000, 1.000,
-# 0.667, 0.000, 1.000,
-# 0.667, 0.333, 1.000,
-# 0.667, 0.667, 1.000,
-# 0.667, 1.000, 1.000,
-# 1.000, 0.000, 1.000,
-# 1.000, 0.333, 1.000,
-# 1.000, 0.667, 1.000,
-# 0.333, 0.000, 0.000,
-# 0.500, 0.000, 0.000,
-# 0.667, 0.000, 0.000,
-# 0.833, 0.000, 0.000,
-# 1.000, 0.000, 0.000,
-# 0.000, 0.167, 0.000,
-# 0.000, 0.333, 0.000,
-# 0.000, 0.500, 0.000,
-# 0.000, 0.667, 0.000,
-# 0.000, 0.833, 0.000,
-# 0.000, 1.000, 0.000,
-# 0.000, 0.000, 0.167,
-# 0.000, 0.000, 0.333,
-# 0.000, 0.000, 0.500,
-# 0.000, 0.000, 0.667,
-# 0.000, 0.000, 0.833,
-# 0.000, 0.000, 1.000,
-# 0.000, 0.000, 0.000,
-# 0.143, 0.143, 0.143,
-# 0.857, 0.857, 0.857,
-# 1.000, 1.000, 1.000
-# ]
-# ).astype(np.float32).reshape(-1, 3)
-# fmt: on
-
-_COLORS = []
-
-
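-# randomly build a palette of 300 distinct RGB triples in [0, 1], skipping pure black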
-def gen_color():
- color = tuple(np.round(np.random.choice(range(256), size=3)/255, 3))
- if color not in _COLORS and np.mean(color) != 0.0:
- _COLORS.append(color)
- else:
- gen_color()
-
-
-for _ in range(300):
-    gen_color()
-
-# colormap()/random_color() below scale and slice _COLORS; convert the list of tuples to an
-# array so that `* maximum` scales the values instead of repeating the sequence
-_COLORS = np.array(_COLORS, dtype=np.float32)
-
-
-def colormap(rgb=False, maximum=255):
- """
- Args:
- rgb (bool): whether to return RGB colors or BGR colors.
- maximum (int): either 255 or 1
- Returns:
- ndarray: a float32 array of Nx3 colors, in range [0, 255] or [0, 1]
- """
- assert maximum in [255, 1], maximum
- c = _COLORS * maximum
- if not rgb:
- c = c[:, ::-1]
- return c
-
-
-def random_color(rgb=False, maximum=255):
- """
- Args:
- rgb (bool): whether to return RGB colors or BGR colors.
- maximum (int): either 255 or 1
- Returns:
- ndarray: a vector of 3 numbers
- """
- idx = np.random.randint(0, len(_COLORS))
- ret = _COLORS[idx] * maximum
- if not rgb:
- ret = ret[::-1]
- return ret
-
-
-def random_colors(N, rgb=False, maximum=255):
- """
- Args:
- N (int): number of unique colors needed
- rgb (bool): whether to return RGB colors or BGR colors.
- maximum (int): either 255 or 1
- Returns:
- ndarray: a list of random_color
- """
- indices = random.sample(range(len(_COLORS)), N)
- ret = [_COLORS[i] * maximum for i in indices]
- if not rgb:
- ret = [x[::-1] for x in ret]
- return ret
-
-
-if __name__ == "__main__":
- import cv2
-
- size = 100
- H, W = 10, 10
- canvas = np.random.rand(H * size, W * size, 3).astype("float32")
- for h in range(H):
- for w in range(W):
- idx = h * W + w
- if idx >= len(_COLORS):
- break
- canvas[h * size : (h + 1) * size, w * size : (w + 1) * size] = _COLORS[idx]
- cv2.imshow("a", canvas)
- cv2.waitKey(0)
\ No newline at end of file
diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seet_tdecoder.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seet_tdecoder.py
deleted file mode 100644
index c395bd144e2cd9d15e10c08f9f0dec076f724836..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/seet_tdecoder.py
+++ /dev/null
@@ -1,699 +0,0 @@
-import fvcore.nn.weight_init as weight_init
-from typing import Optional
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .msdeformattn import PositionEmbeddingSine, _get_clones, _get_activation_fn
-from lib.model_zoo.common.get_model import get_model, register
-
-##########
-# helper #
-##########
-
-def with_pos_embed(x, pos):
- return x if pos is None else x + pos
-
-##############
-# One Former #
-##############
-
-class Transformer(nn.Module):
- def __init__(self,
- d_model=512,
- nhead=8,
- num_encoder_layers=6,
- num_decoder_layers=6,
- dim_feedforward=2048,
- dropout=0.1,
- activation="relu",
- normalize_before=False,
- return_intermediate_dec=False,):
-
- super().__init__()
- encoder_layer = TransformerEncoderLayer(
- d_model, nhead, dim_feedforward, dropout, activation, normalize_before)
- encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
- self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)
-
- decoder_layer = TransformerDecoderLayer(
- d_model, nhead, dim_feedforward, dropout, activation, normalize_before)
- decoder_norm = nn.LayerNorm(d_model)
- self.decoder = TransformerDecoder(
- decoder_layer,
- num_decoder_layers,
- decoder_norm,
- return_intermediate=return_intermediate_dec,)
-
- self._reset_parameters()
-
- self.d_model = d_model
- self.nhead = nhead
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward(self, src, mask, query_embed, pos_embed, task_token=None):
- # flatten NxCxHxW to HWxNxC
- bs, c, h, w = src.shape
- src = src.flatten(2).permute(2, 0, 1)
- pos_embed = pos_embed.flatten(2).permute(2, 0, 1)
- query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1)
- if mask is not None:
- mask = mask.flatten(1)
-
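-        # queries (tgt) start from zeros unless a task token is given, in which case every
-        # query is initialized with a copy of that token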
- if task_token is None:
- tgt = torch.zeros_like(query_embed)
- else:
- tgt = task_token.repeat(query_embed.shape[0], 1, 1)
-
- memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) # src = memory
- hs = self.decoder(
- tgt, memory, memory_key_padding_mask=mask, pos=pos_embed, query_pos=query_embed
- )
- return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w)
-
-class TransformerEncoder(nn.Module):
- def __init__(self, encoder_layer, num_layers, norm=None):
- super().__init__()
- self.layers = _get_clones(encoder_layer, num_layers)
- self.num_layers = num_layers
- self.norm = norm
-
- def forward(self, src, mask=None, src_key_padding_mask=None, pos=None,):
- output = src
- for layer in self.layers:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos
- )
- if self.norm is not None:
- output = self.norm(output)
- return output
-
-class TransformerDecoder(nn.Module):
- def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
- super().__init__()
- self.layers = _get_clones(decoder_layer, num_layers)
- self.num_layers = num_layers
- self.norm = norm
- self.return_intermediate = return_intermediate
-
- def forward(
- self,
- tgt,
- memory,
- tgt_mask=None,
- memory_mask=None,
- tgt_key_padding_mask=None,
- memory_key_padding_mask=None,
- pos=None,
- query_pos=None,):
-
- output = tgt
- intermediate = []
- for layer in self.layers:
- output = layer(
- output,
- memory,
- tgt_mask=tgt_mask,
- memory_mask=memory_mask,
- tgt_key_padding_mask=tgt_key_padding_mask,
- memory_key_padding_mask=memory_key_padding_mask,
- pos=pos,
- query_pos=query_pos,
- )
- if self.return_intermediate:
- intermediate.append(self.norm(output))
-
- if self.norm is not None:
- output = self.norm(output)
- if self.return_intermediate:
- intermediate.pop()
- intermediate.append(output)
-
- if self.return_intermediate:
- return torch.stack(intermediate)
-
- return output.unsqueeze(0)
-
-class TransformerEncoderLayer(nn.Module):
- def __init__(
- self,
- d_model,
- nhead,
- dim_feedforward=2048,
- dropout=0.1,
- activation="relu",
- normalize_before=False, ):
-
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm1 = nn.LayerNorm(d_model)
- self.norm2 = nn.LayerNorm(d_model)
- self.dropout1 = nn.Dropout(dropout)
- self.dropout2 = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- def with_pos_embed(self, x, pos):
- return x if pos is None else x + pos
-
- def forward_post(
- self,
- src,
- src_mask = None,
- src_key_padding_mask = None,
- pos = None,):
-
- q = k = self.with_pos_embed(src, pos)
- src2 = self.self_attn(
- q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask
- )[0]
- src = src + self.dropout1(src2)
- src = self.norm1(src)
- src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
- src = src + self.dropout2(src2)
- src = self.norm2(src)
- return src
-
- def forward_pre(
- self,
- src,
- src_mask = None,
- src_key_padding_mask = None,
- pos = None,):
-
- src2 = self.norm1(src)
- q = k = self.with_pos_embed(src2, pos)
- src2 = self.self_attn(
- q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask
- )[0]
- src = src + self.dropout1(src2)
- src2 = self.norm2(src)
- src2 = self.linear2(self.dropout(self.activation(self.linear1(src2))))
- src = src + self.dropout2(src2)
- return src
-
- def forward(
- self,
- src,
- src_mask = None,
- src_key_padding_mask = None,
- pos = None,):
- if self.normalize_before:
- return self.forward_pre(src, src_mask, src_key_padding_mask, pos)
- return self.forward_post(src, src_mask, src_key_padding_mask, pos)
-
-class TransformerDecoderLayer(nn.Module):
- def __init__(
- self,
- d_model,
- nhead,
- dim_feedforward=2048,
- dropout=0.1,
- activation="relu",
- normalize_before=False,):
-
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
- self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm1 = nn.LayerNorm(d_model)
- self.norm2 = nn.LayerNorm(d_model)
- self.norm3 = nn.LayerNorm(d_model)
- self.dropout1 = nn.Dropout(dropout)
- self.dropout2 = nn.Dropout(dropout)
- self.dropout3 = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- def with_pos_embed(self, x, pos):
- return x if pos is None else x + pos
-
- def forward_post(
- self,
- tgt,
- memory,
- tgt_mask = None,
- memory_mask = None,
- tgt_key_padding_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None,):
-
- q = k = self.with_pos_embed(tgt, query_pos)
- tgt2 = self.self_attn(
- q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout1(tgt2)
- tgt = self.norm1(tgt)
- tgt2 = self.multihead_attn(
- query=self.with_pos_embed(tgt, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory,
- attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask,)[0]
- tgt = tgt + self.dropout2(tgt2)
- tgt = self.norm2(tgt)
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout3(tgt2)
- tgt = self.norm3(tgt)
- return tgt
-
- def forward_pre(
- self,
- tgt,
- memory,
- tgt_mask = None,
- memory_mask = None,
- tgt_key_padding_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None,):
-
- tgt2 = self.norm1(tgt)
- q = k = self.with_pos_embed(tgt2, query_pos)
- tgt2 = self.self_attn(
- q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask
- )[0]
- tgt = tgt + self.dropout1(tgt2)
- tgt2 = self.norm2(tgt)
- tgt2 = self.multihead_attn(
- query=self.with_pos_embed(tgt2, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory,
- attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask,
- )[0]
- tgt = tgt + self.dropout2(tgt2)
- tgt2 = self.norm3(tgt)
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
- tgt = tgt + self.dropout3(tgt2)
- return tgt
-
- def forward(
- self,
- tgt,
- memory,
- tgt_mask = None,
- memory_mask = None,
- tgt_key_padding_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None, ):
-
- if self.normalize_before:
- return self.forward_pre(
- tgt,
- memory,
- tgt_mask,
- memory_mask,
- tgt_key_padding_mask,
- memory_key_padding_mask,
- pos,
- query_pos,)
- return self.forward_post(
- tgt,
- memory,
- tgt_mask,
- memory_mask,
- tgt_key_padding_mask,
- memory_key_padding_mask,
- pos,
- query_pos,)
-
-class SelfAttentionLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(d_model)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt,
- tgt_mask = None,
- tgt_key_padding_mask = None,
- query_pos = None):
-        q = k = self.with_pos_embed(tgt, query_pos).transpose(0, 1)
-        tgt2 = self.self_attn(q, k, value=tgt.transpose(0, 1), attn_mask=tgt_mask,
-                              key_padding_mask=tgt_key_padding_mask)[0]
-        tgt = tgt + self.dropout(tgt2.transpose(0, 1))
- tgt = self.norm(tgt)
-
- return tgt
-
- def forward_pre(self, tgt,
- tgt_mask = None,
- tgt_key_padding_mask = None,
- query_pos = None):
- tgt2 = self.norm(tgt)
- q = k = self.with_pos_embed(tgt2, query_pos)
- tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
-
- return tgt
-
- def forward(self, tgt,
- tgt_mask = None,
- tgt_key_padding_mask = None,
- query_pos = None):
- if self.normalize_before:
- return self.forward_pre(tgt, tgt_mask,
- tgt_key_padding_mask, query_pos)
- return self.forward_post(tgt, tgt_mask,
- tgt_key_padding_mask, query_pos)
-
-class CrossAttentionLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(d_model)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt, memory,
- memory_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None):
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos).transpose(0, 1),
- key=self.with_pos_embed(memory, pos).transpose(0, 1),
- value=memory.transpose(0, 1), attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2.transpose(0, 1))
- tgt = self.norm(tgt)
-
- return tgt
-
- def forward_pre(self, tgt, memory,
- memory_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None):
- tgt2 = self.norm(tgt)
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
-
- return tgt
-
- def forward(self, tgt, memory,
- memory_mask = None,
- memory_key_padding_mask = None,
- pos = None,
- query_pos = None):
- if self.normalize_before:
- return self.forward_pre(tgt, memory, memory_mask,
- memory_key_padding_mask, pos, query_pos)
- return self.forward_post(tgt, memory, memory_mask,
- memory_key_padding_mask, pos, query_pos)
-
-class FFNLayer(nn.Module):
-
- def __init__(self, d_model, dim_feedforward=2048, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm = nn.LayerNorm(d_model)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt):
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
- return tgt
-
- def forward_pre(self, tgt):
- tgt2 = self.norm(tgt)
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
- tgt = tgt + self.dropout(tgt2)
- return tgt
-
- def forward(self, tgt):
- if self.normalize_before:
- return self.forward_pre(tgt)
- return self.forward_post(tgt)
-
-class MLP(nn.Module):
- """ Very simple multi-layer perceptron (also called FFN)"""
- def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x
-
-@register('seet_oneformer_tdecoder')
-class Seet_OneFormer_TDecoder(nn.Module):
- def __init__(
- self,
- in_channels,
- mask_classification,
- num_classes,
- hidden_dim,
- num_queries,
- nheads,
- dropout,
- dim_feedforward,
- enc_layers,
- is_train,
- dec_layers,
- class_dec_layers,
- pre_norm,
- mask_dim,
- enforce_input_project,
- use_task_norm,):
-
- super().__init__()
-
- assert mask_classification, "Only support mask classification model"
- self.mask_classification = mask_classification
- self.is_train = is_train
- self.use_task_norm = use_task_norm
-
- # positional encoding
- N_steps = hidden_dim // 2
- self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
-
- self.class_transformer = Transformer(
- d_model=hidden_dim,
- dropout=dropout,
- nhead=nheads,
- dim_feedforward=dim_feedforward,
- num_encoder_layers=enc_layers,
- num_decoder_layers=class_dec_layers,
- normalize_before=pre_norm,
- return_intermediate_dec=False,
- )
-
- # define Transformer decoder here
- self.num_heads = nheads
- self.num_layers = dec_layers
- self.transformer_self_attention_layers = nn.ModuleList()
- self.transformer_cross_attention_layers = nn.ModuleList()
- self.transformer_ffn_layers = nn.ModuleList()
-
- for _ in range(self.num_layers):
- self.transformer_self_attention_layers.append(
- SelfAttentionLayer(
- d_model=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.transformer_cross_attention_layers.append(
- CrossAttentionLayer(
- d_model=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.transformer_ffn_layers.append(
- FFNLayer(
- d_model=hidden_dim,
- dim_feedforward=dim_feedforward,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.decoder_norm = nn.LayerNorm(hidden_dim)
-
- self.num_queries = num_queries
- # learnable query p.e.
- self.query_embed = nn.Embedding(num_queries, hidden_dim)
-
- # level embedding (we always use 3 scales)
- self.num_feature_levels = 3
- self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim)
- self.input_proj = nn.ModuleList()
- for _ in range(self.num_feature_levels):
- if in_channels != hidden_dim or enforce_input_project:
- self.input_proj.append(nn.Conv2d(in_channels, hidden_dim, kernel_size=1))
- weight_init.c2_xavier_fill(self.input_proj[-1])
- else:
- self.input_proj.append(nn.Sequential())
-
- self.class_input_proj = nn.Conv2d(in_channels, hidden_dim, kernel_size=1)
- weight_init.c2_xavier_fill(self.class_input_proj)
-
- # output FFNs
- if self.mask_classification:
- self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
- self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
-
- def forward(self, x, mask_features, tasks):
- # x is a list of multi-scale feature
- assert len(x) == self.num_feature_levels
- src = []
- pos = []
- size_list = []
-
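-        # per feature level: record its spatial size, flatten the positional encoding and the
-        # projected features to (B, C, HW), add a learned level embedding, then transpose
-        # both to (B, HW, C)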
- for i in range(self.num_feature_levels):
- size_list.append(x[i].shape[-2:])
- pos.append(self.pe_layer(x[i], None).flatten(2))
- src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None])
- pos[-1] = pos[-1].transpose(1, 2)
- src[-1] = src[-1].transpose(1, 2)
-
- bs, _, _ = src[0].shape
-
- query_embed = self.query_embed.weight.unsqueeze(0).repeat(bs, 1, 1)
-
- tasks = tasks.unsqueeze(0)
- if self.use_task_norm:
- tasks = self.decoder_norm(tasks)
-
- feats = self.pe_layer(mask_features, None)
-
- out_t, _ = self.class_transformer(
- feats, None,
- self.query_embed.weight[:-1],
- self.class_input_proj(mask_features),
- tasks if self.use_task_norm else None)
- out_t = out_t[0]
-
- out = torch.cat([out_t, tasks], dim=1)
-
- output = out.clone()
-
- predictions_class = []
- predictions_mask = []
-
- # prediction heads on learnable query features
- outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(
- output, mask_features, attn_mask_target_size=size_list[0])
- predictions_class.append(outputs_class)
- predictions_mask.append(outputs_mask)
-
- for i in range(self.num_layers):
- level_index = i % self.num_feature_levels
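-            # if a query's mask is all True (everything masked), reset that row so the
-            # cross-attention below still has at least one position it can attend to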
- attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False
-
- output = self.transformer_cross_attention_layers[i](
- output, src[level_index],
- memory_mask=attn_mask,
- memory_key_padding_mask=None,
- pos=pos[level_index], query_pos=query_embed, )
-
- output = self.transformer_self_attention_layers[i](
- output, tgt_mask=None,
- tgt_key_padding_mask=None,
- query_pos=query_embed, )
-
- # FFN
- output = self.transformer_ffn_layers[i](output)
-
- outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(
- output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels])
- predictions_class.append(outputs_class)
- predictions_mask.append(outputs_mask)
-
- assert len(predictions_class) == self.num_layers + 1
-
- out = {
- 'pred_logits': predictions_class[-1],
- 'pred_masks': predictions_mask[-1],}
-
- return out
-
- def forward_prediction_heads(self, output, mask_features, attn_mask_target_size):
- decoder_output = self.decoder_norm(output)
- outputs_class = self.class_embed(decoder_output)
- mask_embed = self.mask_embed(decoder_output)
- outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
-
- attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False)
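-        # binarize the resized masks at 0.5: True marks positions the next cross-attention
-        # layer should ignore; the mask is repeated once per attention head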
- attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool()
- attn_mask = attn_mask.detach()
-
- return outputs_class, outputs_mask, attn_mask
diff --git a/spaces/shikunl/prismer/prismer/experts/normal/generate_dataset.py b/spaces/shikunl/prismer/prismer/experts/normal/generate_dataset.py
deleted file mode 100644
index 8521943bdacd7802ff72de842513d8ec7419c70a..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/normal/generate_dataset.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) 2023, NVIDIA Corporation & Affiliates. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://github.com/NVlabs/prismer/blob/main/LICENSE
-
-import glob
-
-from torch.utils.data import Dataset
-from PIL import Image
-from PIL import ImageFile
-
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-
-
-class CustomDataset(Dataset):
- def __init__(self, config, transform):
- self.data_path = config['data_path']
- self.transform = transform
- self.data_list = [f'helpers/images/{config["im_name"]}.jpg']
-
- def __len__(self):
- return len(self.data_list)
-
- def __getitem__(self, index):
- image_path = self.data_list[index]
- image = Image.open(image_path).convert('RGB')
- img_size = [image.size[0], image.size[1]]
- image = self.transform(image)
- return image.half(), image_path, img_size
-
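-# rough usage sketch (the config values and transform are placeholders, not part of this file):
-#   dataset = CustomDataset({'data_path': '.', 'im_name': 'sample'}, transform=some_transform)
-#   image, image_path, (width, height) = dataset[0]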
-
diff --git a/spaces/shikunl/prismer/prismer/experts/normal/models/baseline.py b/spaces/shikunl/prismer/prismer/experts/normal/models/baseline.py
deleted file mode 100644
index d44e1ebfc1b68a721c6220e9a1a6ccc64f1006f7..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/normal/models/baseline.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from experts.normal.models.submodules.submodules import UpSampleBN, norm_normalize
-
-
-# This is the baseline encoder-decoder we used in the ablation study
-class NNET(nn.Module):
- def __init__(self, args=None):
- super(NNET, self).__init__()
- self.encoder = Encoder()
- self.decoder = Decoder(num_classes=4)
-
- def forward(self, x, **kwargs):
- out = self.decoder(self.encoder(x), **kwargs)
-
- # Bilinearly upsample the output to match the input resolution
- up_out = F.interpolate(out, size=[x.size(2), x.size(3)], mode='bilinear', align_corners=False)
-
- # L2-normalize the first three channels / ensure positive value for concentration parameters (kappa)
- up_out = norm_normalize(up_out)
- return up_out
-
- def get_1x_lr_params(self): # lr/10 learning rate
- return self.encoder.parameters()
-
- def get_10x_lr_params(self): # lr learning rate
- modules = [self.decoder]
- for m in modules:
- yield from m.parameters()
-
-
-# Encoder
-class Encoder(nn.Module):
- def __init__(self):
- super(Encoder, self).__init__()
-
- basemodel_name = 'tf_efficientnet_b5_ap'
- basemodel = torch.hub.load('rwightman/gen-efficientnet-pytorch', basemodel_name, pretrained=True)
-
- # Remove last layer
- basemodel.global_pool = nn.Identity()
- basemodel.classifier = nn.Identity()
-
- self.original_model = basemodel
-
- def forward(self, x):
- features = [x]
- for k, v in self.original_model._modules.items():
- if (k == 'blocks'):
- for ki, vi in v._modules.items():
- features.append(vi(features[-1]))
- else:
- features.append(v(features[-1]))
- return features
-
-
-# Decoder (no pixel-wise MLP, no uncertainty-guided sampling)
-class Decoder(nn.Module):
- def __init__(self, num_classes=4):
- super(Decoder, self).__init__()
- self.conv2 = nn.Conv2d(2048, 2048, kernel_size=1, stride=1, padding=0)
- self.up1 = UpSampleBN(skip_input=2048 + 176, output_features=1024)
- self.up2 = UpSampleBN(skip_input=1024 + 64, output_features=512)
- self.up3 = UpSampleBN(skip_input=512 + 40, output_features=256)
- self.up4 = UpSampleBN(skip_input=256 + 24, output_features=128)
- self.conv3 = nn.Conv2d(128, num_classes, kernel_size=3, stride=1, padding=1)
-
- def forward(self, features):
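-        # features[4], [5], [6], [8] and [11] are skip connections tapped at progressively
-        # deeper blocks of the EfficientNet trunk collected by the Encoder above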
- x_block0, x_block1, x_block2, x_block3, x_block4 = features[4], features[5], features[6], features[8], features[11]
- x_d0 = self.conv2(x_block4)
- x_d1 = self.up1(x_d0, x_block3)
- x_d2 = self.up2(x_d1, x_block2)
- x_d3 = self.up3(x_d2, x_block1)
- x_d4 = self.up4(x_d3, x_block0)
- out = self.conv3(x_d4)
- return out
-
-
-if __name__ == '__main__':
-    model = NNET()
- x = torch.rand(2, 3, 480, 640)
- out = model(x)
- print(out.shape)
diff --git a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/datasets/wilddash.py b/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/datasets/wilddash.py
deleted file mode 100644
index 08c096efe41ba0d082e5aeaacda0b7349b5b3a32..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/data/datasets/wilddash.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from detectron2.data.datasets.register_coco import register_coco_instances
-import os
-
-categories = [
- {'id': 1, 'name': 'ego vehicle'},
- {'id': 24, 'name': 'person'},
- {'id': 25, 'name': 'rider'},
- {'id': 26, 'name': 'car'},
- {'id': 27, 'name': 'truck'},
- {'id': 28, 'name': 'bus'},
- {'id': 29, 'name': 'caravan'},
- {'id': 30, 'name': 'trailer'},
- {'id': 31, 'name': 'train'},
- {'id': 32, 'name': 'motorcycle'},
- {'id': 33, 'name': 'bicycle'},
- {'id': 34, 'name': 'pickup'},
- {'id': 35, 'name': 'van'},
-]
-
-def _get_builtin_metadata():
- thing_dataset_id_to_contiguous_id = {
- x['id']: i for i, x in enumerate(sorted(categories, key=lambda x: x['id']))}
- thing_classes = [x['name'] for x in sorted(categories, key=lambda x: x['id'])]
- return {
- "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,
- "thing_classes": thing_classes}
-
-_PREDEFINED_SPLITS_ = {
- "wilddash_public": ("wilddash/wd_public_02/images/", "wilddash/wd_public_02/wilddash_public.json"),
- "wilddash_both": ("wilddash/wd_both_02/images/", "wilddash/wd_both_02/wilddash_both_image_info.json"),
-}
-
-for key, (image_root, json_file) in _PREDEFINED_SPLITS_.items():
- register_coco_instances(
- key,
- _get_builtin_metadata(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
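-
-# once registered, a split can be loaded through detectron2's dataset catalog, e.g.:
-#   from detectron2.data import DatasetCatalog
-#   dataset_dicts = DatasetCatalog.get("wilddash_public")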
diff --git a/spaces/shubhamjaiswar/RakshakReet-SpamDetection/README.md b/spaces/shubhamjaiswar/RakshakReet-SpamDetection/README.md
deleted file mode 100644
index bbf2c9f6083f30fa2b5976b5b55fbb2d452c189f..0000000000000000000000000000000000000000
--- a/spaces/shubhamjaiswar/RakshakReet-SpamDetection/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: RakshakReet SpamDetection
-emoji: 🌖
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.50.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/silvanoalbuquerque/YOLO-V8_ANIMALS_CLASSIFICATION/app.py b/spaces/silvanoalbuquerque/YOLO-V8_ANIMALS_CLASSIFICATION/app.py
deleted file mode 100644
index f8909743d9ced01b8401c3580c4e8a4475cabe3f..0000000000000000000000000000000000000000
--- a/spaces/silvanoalbuquerque/YOLO-V8_ANIMALS_CLASSIFICATION/app.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import gradio as gr
-from ultralytics import YOLO
-
-# categories
-categories = ['Elefante', 'Hipopótamos', 'Rinocerontes']
-
-# returning the classifier's output
-
-def image_classifier(inp):
- model = YOLO("best.pt")
-
- result = model.predict(source=inp)
- probs = result[0].probs.data
-
- # Combine the two lists and sort based on values in descending order
- sorted_pairs = sorted(zip(categories, probs), key=lambda x: x[1], reverse=True)
-
- resultado = []
- for name, value in sorted_pairs:
-        resultado.append(f'{name}: {100 * value:.2f}%')
-
- return ', '.join(resultado)
-
-# gradio code block for input and output
-with gr.Blocks() as app:
- gr.Markdown("## Classificação de animais de grande porte (Elefantes, Hipopótamos e Rinocerontes) com Yolo-v8")
- with gr.Row():
- inp_img = gr.Image()
- out_txt = gr.Textbox()
- btn = gr.Button(value="Submeter")
- btn.click(image_classifier, inputs=inp_img, outputs=out_txt)
-
- gr.Markdown("## Exemplos")
- gr.Examples(
- examples=['image_0.jpg', 'image_1.jpg', 'image_2.jpg'],
- inputs=inp_img,
- outputs=out_txt,
- fn=image_classifier,
- cache_examples=True,
- )
-
-app.launch(share=True)
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Game Booster Pop Up WA and Enjoy Faster Smoother PC Games.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Game Booster Pop Up WA and Enjoy Faster Smoother PC Games.md
deleted file mode 100644
index 1ed9067aad1357f0ad8de121c5394d00f98d1ce9..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Game Booster Pop Up WA and Enjoy Faster Smoother PC Games.md
+++ /dev/null
@@ -1,145 +0,0 @@
-
-
How to Download Game Booster Pop Up WA and Why You Need It
-
If you are a gamer who loves playing online games on your PC or smartphone, you know how frustrating it can be when your game lags, freezes, or crashes. You also know how annoying it can be when you miss an important message or notification from your friends or family while gaming. That's why you need Game Booster Pop Up WA, a powerful app that can optimize your gaming performance and experience. In this article, we will show you how to download, install, and use Game Booster Pop Up WA, as well as some tips on how to optimize your PC for gaming performance and how to remove pop up ads from your browser.
-
What is Game Booster Pop Up WA?
-
A brief introduction to the app and its features
-
Game Booster Pop Up WA is an app that can enhance your gaming experience by boosting your device's speed, performance, and stability. It can also allow you to access WhatsApp messages and other apps in a pop-up window while gaming, so you don't have to switch between apps or miss any important notifications. Here are some of the features of Game Booster Pop Up WA:
It can improve your gaming performance by allocating bandwidth, enhancing sound quality, clearing cache, and closing background processes.
-
It can customize your gaming mode by adjusting brightness, volume, screen orientation, resolution, and more.
-
It can enable a pop-up panel that lets you access WhatsApp messages and other apps in a floating window while gaming.
-
It can block unwanted calls, messages, notifications, and ads while gaming.
-
It can support all devices and games, including online multiplayer games.
-
-
How it can improve your gaming performance and experience
-
Game Booster Pop Up WA can improve your gaming performance and experience by making your device run faster, smoother, and cooler. It can also make your gaming more convenient, fun, and social by letting you chat with your friends on WhatsApp or use other apps without leaving your game. Here are some of the benefits of using Game Booster Pop Up WA:
-
-
You can enjoy faster loading times, higher frame rates, better graphics quality, and longer battery life.
-
You can avoid lagging, freezing, crashing, overheating, or draining issues that may ruin your game.
-
You can stay connected with your friends or family on WhatsApp or other apps while gaming.
-
You can access useful tools like screen recorder, screenshot, flashlight, calculator, etc. while gaming.
-
You can avoid distractions like calls, messages, notifications, or ads that may interrupt your game.
-
-
How to Download and Install Game Booster Pop Up WA
-
The steps to download the app for different devices
-
Game Booster Pop Up WA is available for both PC and smartphone devices. You can download the app from the official website or from the Google Play Store or App Store. Here are the steps to download the app for different devices:
-
-
For PC: Go to the official website of Game Booster Pop Up WA and click on the download button. Choose the version that suits your operating system (Windows or Mac) and save the file to your computer. You can also scan the QR code on the website with your smartphone to download the app.
-
For Android: Go to the Google Play Store and search for Game Booster Pop Up WA. Tap on the install button and wait for the app to download and install on your device. You can also go to the official website of Game Booster Pop Up WA and click on the download button for Android.
-
For iOS: Go to the App Store and search for Game Booster Pop Up WA. Tap on the get button and wait for the app to download and install on your device. You can also go to the official website of Game Booster Pop Up WA and click on the download button for iOS.
-
-
The steps to install and activate the app
-
After downloading the app, you need to install and activate it on your device. Here are the steps to install and activate the app:
-
-
For PC: Open the downloaded file and follow the instructions to install the app on your computer. After installation, launch the app and sign in with your email or Facebook account. You can also create a new account if you don't have one.
-
For Android: Open the downloaded file or go to your app drawer and tap on the Game Booster Pop Up WA icon. Grant the necessary permissions to the app and sign in with your email or Facebook account. You can also create a new account if you don't have one.
-
For iOS: Open the downloaded file or go to your home screen and tap on the Game Booster Pop Up WA icon. Grant the necessary permissions to the app and sign in with your email or Facebook account. You can also create a new account if you don't have one.
-
-
How to Use Game Booster Pop Up WA
-
How to access the pop-up panel and customize the settings
-
Game Booster Pop Up WA has a pop-up panel that lets you access WhatsApp messages and other apps in a floating window while gaming. You can also customize the settings of the app according to your preferences. Here are how to access the pop-up panel and customize the settings:
-
-
To access the pop-up panel, swipe from left to right on your screen while gaming. You will see a list of apps that you can open in a pop-up window, such as WhatsApp, Facebook, Instagram, YouTube, etc. Tap on any app that you want to use while gaming.
-
To customize the settings, swipe from right to left on your screen while gaming. You will see a list of options that you can adjust, such as brightness, volume, resolution, orientation, sound quality, etc. Tap on any option that you want to change while gaming.
-
-
How to enable and disable the app while gaming
-
Game Booster Pop Up WA can be enabled or disabled while gaming, depending on your needs and preferences. You can also choose which games or apps you want to boost or not. Here are how to enable and disable the app while gaming:
-
-
To enable the app, launch the app and tap on the start button. You will see a list of games or apps that you have installed on your device. Tap on any game or app that you want to boost and start playing.
-
To disable the app, swipe from right to left on your screen while gaming and tap on the stop button. You will see a confirmation message asking if you want to stop the app. Tap on yes and exit the game or app.
-
To choose which games or apps you want to boost or not, launch the app and tap on the settings icon. You will see a list of games or apps that you have installed on your device. Tap on the toggle switch next to any game or app that you want to boost or not.
-
-
How to Optimize Your PC for Gaming Performance
-
Some tips and tricks to boost your PC speed and fps
-
If you are using Game Booster Pop Up WA on your PC, you can also optimize your PC for gaming performance by following some tips and tricks. These tips and tricks can help you boost your PC speed and fps, as well as prevent lagging, freezing, or crashing issues. Here are some tips and tricks to optimize your PC for gaming performance:
-
How to download game booster pop up wa for android
-Best game booster pop up wa app for free fire
-Game booster pop up wa pro no password apk download
-Razer cortex game booster with pop up wa feature
-Game booster pop up wa terbaik 2022 untuk hp android
-Download game booster pop up wa all device support
-Game booster pop up wa pro mod apk latest version
-Game booster pop up wa vs game turbo which is better
-Cara download game booster pop up wa di xiaomi
-Game booster pop up wa rog vip premium apk
-Game booster pop up wa review and tutorial
-Game booster pop up wa for pubg mobile lite
-Download game booster pop up wa from google play store
-Game booster pop up wa green shark edition apk
-Game booster pop up wa black shark no root
-Game booster pop up wa assist with toolbox feature
-Download game booster pop up wa for low end devices
-Game booster pop up wa settings and tips
-Game booster pop up wa pro no ads unlocked apk
-Game booster pop up wa with floating windows feature
-Download game booster pop up wa for cod mobile
-Game booster pop up wa terbaru 2022 update link
-Game booster pop up wa with bandwidth allocation feature
-Game booster pop up wa without password and root
-Game booster pop up wa with sound enhancer feature
-Download game booster pop up wa for mlbb
-Game booster pop up wa raka anggara gaming apk
-Game booster pop up wa with do not disturb mode
-Game booster pop up wa with game mode feature
-Download game booster pop up wa for garena free fire
-Game booster pop up wa pro full version free download
-Game booster pop up wa with fps meter and temperature monitor
-Game booster pop up wa for samsung galaxy devices
-Download game booster pop up wa for pc windows 10
-Game booster pop up wa with ram and cpu optimizer feature
-Game booster pop up wa for fortnite mobile android
-Download game booster pop up wa from official website
-Game booster pop up wa with custom resolution and graphics feature
-Game booster pop up wa for realme devices no ban risk
-Download game booster pop up wa for ios iphone ipad
-
-
Update your drivers: Make sure that your drivers are up to date, especially your graphics card driver. Updating your drivers can improve your graphics quality, compatibility, and stability.
-
Clean your disk: Make sure that your disk has enough free space and is not fragmented. Cleaning your disk can improve your loading times, performance, and reliability.
-
Disable unnecessary programs: Make sure that you close or disable any unnecessary programs that are running in the background or startup. Disabling unnecessary programs can free up your memory, CPU, and bandwidth.
-
Adjust your power settings: Make sure that you set your power plan to high performance or balanced mode. Adjusting your power settings can increase your CPU and GPU performance and prevent throttling.
-
Tweak your game settings: Make sure that you adjust your game settings according to your PC specifications and preferences. Tweaking your game settings can optimize your graphics quality, frame rate, and resolution.
-
-
Some Windows 11 features that can enhance your gaming experience
-
If you are using Windows 11 on your PC, you can also take advantage of some features that can enhance your gaming experience. These features can help you enjoy faster, smoother, and more immersive gaming on your PC. Here are some Windows 11 features that can enhance your gaming experience:
-
-
DirectStorage: This feature can reduce the loading times of games by allowing them to access data directly from the SSD without going through the CPU.
-
Auto HDR: This feature can improve the color and contrast of games by automatically adding high dynamic range (HDR) effects to them.
-
Xbox Game Bar: This feature can let you access useful tools like screen recorder, screenshot, performance monitor, audio mixer, etc. while gaming.
-
Xbox Game Pass: This feature can let you access hundreds of games for a monthly subscription fee.
-
Xbox Cloud Gaming: This feature can let you stream games from the cloud to your PC without downloading them.
-
-
How to Remove Pop Up Ads from Your Browser
-
Why pop up ads are annoying and harmful
-
Pop up ads are advertisements that appear in a new window or tab on your browser while you are browsing the web. Pop up ads are annoying and harmful for several reasons:
-
-
They can interrupt your browsing experience and distract you from what you are doing.
-
They can slow down your browser performance and consume your bandwidth.
-
They can expose you to malware, viruses, phishing, scams, or other malicious content.
-
They can collect your personal information, browsing history, or online behavior without your consent.
-
-
How to block or allow pop ups in Chrome, Edge, and other browsers
-
The best way to remove pop up ads from your browser is to block them or allow them only from trusted sites. Different browsers have different ways of blocking or allowing pop ups, but the general steps are similar. Here are how to block or allow pop ups in Chrome, Edge, and other browsers:
-
-
For Chrome: Open Chrome and click on the three dots icon at the top right corner. Select Settings and then Privacy and security. Click on Site settings and then Pop-ups and redirects. Toggle the switch to block or allow pop ups. You can also add exceptions for specific sites that you want to block or allow pop ups from.
-
For Edge: Open Edge and click on the three dots icon at the top right corner. Select Settings and then Cookies and site permissions. Click on Pop-ups and redirects. Toggle the switch to block or allow pop ups. You can also add exceptions for specific sites that you want to block or allow pop ups from.
-
For other browsers: Open your browser and go to the settings or options menu. Look for the pop-up blocker or pop-up blocker settings option. Toggle the switch to block or allow pop ups. You can also add exceptions for specific sites that you want to block or allow pop ups from.
-
-
Conclusion
-
A summary of the main points and benefits of Game Booster Pop Up WA
-
Game Booster Pop Up WA is an app that can optimize your gaming performance and experience by boosting your device's speed, performance, and stability. It can also allow you to access WhatsApp messages and other apps in a pop-up window while gaming, so you don't have to switch between apps or miss any important notifications. Game Booster Pop Up WA can help you enjoy faster, smoother, and more immersive gaming on your PC or smartphone.
-
A call to action to download the app and try it out
-
If you are a gamer who loves playing online games on your PC or smartphone, you should download Game Booster Pop Up WA and try it out. You will be amazed by how much it can improve your gaming performance and experience. You will also be able to chat with your friends on WhatsApp or use other apps without leaving your game. Game Booster Pop Up WA is easy to download, install, and use, and it supports all devices and games. Download Game Booster Pop Up WA today and enjoy the ultimate gaming experience!
-
FAQs
-
What is the difference between Game Booster Pop Up WA and other game boosters?
-
Game Booster Pop Up WA is different from other game boosters because it not only boosts your device's speed, performance, and stability, but also allows you to access WhatsApp messages and other apps in a pop-up window while gaming. This way, you can stay connected with your friends or family while gaming, without having to switch between apps or miss any important notifications.
-
Is Game Booster Pop Up WA safe and secure?
-
Yes, Game Booster Pop Up WA is safe and secure. It does not contain any malware, viruses, phishing, scams, or other malicious content. It also does not collect your personal information, browsing history, or online behavior without your consent. It only requires the necessary permissions to optimize your gaming performance and experience.
-
Does Game Booster Pop Up WA work with all games and apps?
-
Yes, Game Booster Pop Up WA works with all games and apps, including online multiplayer games. It can boost any game or app that you have installed on your device, regardless of the genre, platform, or mode.
-
How much does Game Booster Pop Up WA cost?
-
Game Booster Pop Up WA is free to download and use. However, it does offer some in-app purchases that can unlock some premium features, such as advanced settings, more app options, ad removal, etc. You can choose whether to buy these features or not according to your needs and preferences.
-
How can I contact the developer of Game Booster Pop Up WA?
-
If you have any questions, feedback, suggestions, or issues regarding Game Booster Pop Up WA, you can contact the developer of the app by emailing them at gameboosterpopupwa@gmail.com. You can also visit their website at https://gameboosterpopupwa.com/ for more information.
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pixel Car Racer Mod APK for Free and Race Your Way to Glory in 2023.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pixel Car Racer Mod APK for Free and Race Your Way to Glory in 2023.md
deleted file mode 100644
index 1696bb8cf83c11bbce8a0efeb1b8148541144459..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pixel Car Racer Mod APK for Free and Race Your Way to Glory in 2023.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Pixel Car Racer Mod Apk 2023 Latest Version: Everything You Need to Know
-
Do you love racing games? Do you enjoy customizing your own cars and competing with other players? If yes, then you might want to check out Pixel Car Racer, a retro-style racing game with pixel graphics and RPG elements. And if you want to take your gaming experience to the next level, you might want to try out Pixel Car Racer Mod Apk, a modified version of the original game that gives you unlimited money and diamonds. In this article, we will tell you everything you need to know about Pixel Car Racer Mod Apk 2023 latest version, including its features, benefits, and how to download and install it on your Android device.
-
What is Pixel Car Racer?
-
Pixel Car Racer is a racing game developed by Studio Furukawa, a small indie game studio based in Canada. It was released in 2016 and has since gained a loyal fan base of over 10 million downloads on Google Play Store. Pixel Car Racer is inspired by the classic arcade racing games of the 80s and 90s, but with modern features and gameplay.
A retro-style racing game with pixel graphics and RPG elements
-
Pixel Car Racer has a simple yet addictive gameplay that will keep you hooked for hours. You start by choosing your car from a variety of models, ranging from muscle cars to supercars. You can then customize your car with different parts, such as engines, tires, spoilers, exhausts, decals, and more. You can also tune your car's performance by adjusting its power, torque, weight, grip, suspension, and other parameters.
-
Once you are ready, you can hit the road and race against other players or AI opponents in two game modes: drag and street. Drag mode is a straight-line race where you have to shift gears at the right time to reach the finish line first. Street mode is a more challenging race where you have to navigate through traffic and avoid obstacles while racing against other cars. Both modes have different levels of difficulty and rewards.
-
Pixel Car Racer also has RPG elements that add more depth and fun to the game. You can level up your driver by completing races and earning experience points. You can also unlock new cars and parts by opening crates or buying them from the shop. You can also join or create a team with other players and compete in team events.
-
Features of Pixel Car Racer
-
Pixel Car Racer has many features that make it stand out from other racing games. Some of these features are:
-
Over 1000 cars to customize and race
-
Pixel Car Racer has a huge collection of cars that you can choose from, each with its own unique design and stats. You can find cars from different brands, such as Ford, Chevrolet, Nissan, Toyota, Honda, BMW, Ferrari, Lamborghini, and more. You can also find cars from different categories, such as classic, sports, exotic, muscle, tuner, and more. You can even find cars from popular movies and games, such as Fast and Furious, Need for Speed, Mad Max, and more. You can customize your car with over 1000 parts and accessories that you can buy or unlock in the game. You can change the color, wheels, body kits, spoilers, decals, lights, and more. You can also upgrade your car's engine, transmission, turbo, nitrous, intake, exhaust, and more. You can create your own dream car and show it off to other players.
-
Drag and street game modes
-
Pixel Car Racer has two game modes that you can play: drag and street. Drag mode is a simple yet thrilling race where you have to shift gears at the right time to beat your opponent. You can choose from different distances and difficulties to test your skills. You can also race against your friends or other players online in multiplayer mode. Street mode is a more challenging race where you have to dodge traffic and obstacles while racing against other cars. You can choose from different locations and weather conditions to spice up the race. You can also race against your friends or other players online in multiplayer mode.
-
-
Dynamic engine system with realistic sounds
-
Pixel Car Racer has a dynamic engine system that simulates the physics and mechanics of real cars. You can feel the difference between different engines, transmissions, turbos, nitrous, and more. You can also hear the realistic sounds of your car's engine, exhaust, tires, brakes, and more. You can even hear the sound of your car's horn, radio, or siren if you have them installed. Pixel Car Racer gives you a realistic and immersive racing experience that will make you feel like you are driving a real car.
-
Manual gear shifting and clutch
-
Pixel Car Racer has a manual gear shifting and clutch system that adds more challenge and fun to the game. You can choose to use the automatic or manual mode depending on your preference. If you choose the manual mode, you have to use the clutch pedal and the gear shifter to change gears at the right time. You have to be careful not to over-rev or under-rev your engine or you will lose speed or damage your car. You also have to be careful not to stall your car or you will lose the race. Pixel Car Racer gives you a realistic and satisfying feeling of shifting gears like a pro.
-
Online and offline play
-
Pixel Car Racer has both online and offline modes that you can enjoy anytime and anywhere. You can play offline if you want to practice your skills or complete missions without any interruptions. You can also play online if you want to challenge your friends or other players from around the world in multiplayer mode. You can join or create a team with other players and compete in team events for rewards and glory. You can also chat with other players and share your tips and tricks.
-
What is Pixel Car Racer Mod Apk?
-
Pixel Car Racer Mod Apk is a modified version of the original game that gives you unlimited money and diamonds. Money and diamonds are the main currencies in Pixel Car Racer that you can use to buy cars, parts, crates, stickers, and more. However, earning money and diamonds in the game can be time-consuming and tedious. That's why some players prefer to use Pixel Car Racer Mod Apk to get unlimited money and diamonds for free.
-
A modified version of the original game that gives you unlimited money and diamonds
-
Pixel Car Racer Mod Apk is a file that you can download and install on your Android device to replace the original game file. By doing so, you will be able to access all the features of Pixel Car Racer without any limitations or restrictions. You will be able to get unlimited money and diamonds that you can use to buy anything you want in the game. You will also be able to unlock all cars and parts that are otherwise locked or require real money to purchase.
-
Benefits of Pixel Car Racer Mod Apk
-
Pixel Car Racer Mod Apk has many benefits that make it worth trying out. Some of these benefits are:
-
Unlock all cars and parts
-
With Pixel Car Racer Mod Apk, you will be able to unlock all cars and parts that are available in the game. You will not have to wait for crates to open or spend real money to buy them. You will be able to choose from over 1000 cars and customize them with over 1000 parts. You will be able to create your own unique car collection and show it off to other players.
-
Upgrade your cars to the max level
-
With Pixel Car Racer Mod Apk, you will be able to upgrade your cars to the max level without any hassle. You will not have to grind for money or diamonds to buy the upgrades. You will be able to boost your car's performance by upgrading its engine, transmission, turbo, nitrous, intake, exhaust, and more. You will be able to make your car faster, stronger, and more reliable.
-
Buy any item from the shop
-
With Pixel Car Racer Mod Apk, you will be able to buy any item from the shop without any limitations. You will not have to worry about running out of money or diamonds. You will be able to buy crates, stickers, horns, radios, sirens, and more. You will be able to enhance your car's appearance and functionality with these items.
-
Enjoy the game without ads or in-app purchases
-
With Pixel Car Racer Mod Apk, you will be able to enjoy the game without any ads or in-app purchases. You will not have to watch annoying ads or spend real money to get more money or diamonds. You will be able to play the game without any interruptions or distractions. You will be able to focus on the game and have fun.
-
How to Download and Install Pixel Car Racer Mod Apk 2023 Latest Version?
-
If you are interested in trying out Pixel Car Racer Mod Apk 2023 latest version, you can follow these simple steps to download and install it on your Android device:
-
Steps to download and install the mod apk file on your Android device
-
-
Click on this link to download the mod apk file of Pixel Car Racer 2023 latest version.
-
Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
-
Locate the downloaded mod apk file in your device's file manager and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy unlimited money and diamonds.
-
-
Tips to ensure a smooth installation process
-
-
Make sure you have enough storage space on your device before downloading the mod apk file.
-
Make sure you have a stable internet connection while downloading the mod apk file.
-
Make sure you uninstall any previous versions of Pixel Car Racer from your device before installing the mod apk file.
-
Make sure you grant all the necessary permissions to the game when prompted.
-
If you encounter any problems during the installation process, try restarting your device or clearing your cache.
-
-
Conclusion
-
Pixel Car Racer is a fun and addictive racing game that lets you customize and race your own cars in a retro-style pixel world. Pixel Car Racer Mod Apk is a modified version of the game that gives you unlimited money and diamonds that you can use to unlock all cars and parts, upgrade your cars to the max level, buy any item from the shop, and enjoy the game without ads or in-app purchases. If you want to try out Pixel Car Racer Mod Apk 2023 latest version, you can download it from this link and follow the steps above to install it on your Android device. We hope you enjoy playing Pixel Car Racer Mod Apk 2023 latest version and share your feedback with us.
- : https://modapkdl.io/pixel-car-racer-mod-apk/
-
-
\ No newline at end of file
diff --git a/spaces/simsantonioii/MusicGen-Continuation/README.md b/spaces/simsantonioii/MusicGen-Continuation/README.md
deleted file mode 100644
index 7a989e9dcb0180daf6d5899ec426d536d8379a2b..0000000000000000000000000000000000000000
--- a/spaces/simsantonioii/MusicGen-Continuation/README.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-title: MusicGen Continuation
-python_version: '3.9'
-tags:
-- music generation
-- language models
-- LLMs
-app_file: app.py
-emoji: 🎵
-colorFrom: white
-colorTo: blue
-sdk: gradio
-sdk_version: 3.34.0
-pinned: true
-license: cc-by-nc-4.0
-duplicated_from: radames/MusicGen-Continuation
----
-# Audiocraft
-
-
-
-
-Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model.
-
-## MusicGen
-
-Audiocraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. MusicGen is a single stage auto-regressive
-Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't require a self-supervised semantic representation, and it generates
-all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict
-them in parallel, thus having only 50 auto-regressive steps per second of audio.
-Check out our [sample page][musicgen_samples] or test the available demo!
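-
-As a back-of-the-envelope check of what this buys: generating 8 seconds of audio takes 8 × 50 = 400 auto-regressive steps, rather than the 8 × 50 × 4 = 1,600 steps that generating the 4 codebooks one after another would require.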
-
-
-
-
-
-
-
-
-
-We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data.
-
-## Installation
-Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following:
-
-```shell
-# Best to make sure you have torch installed first, in particular before installing xformers.
-# Don't run this if you already have PyTorch installed.
-pip install 'torch>=2.0'
-# Then proceed to one of the following
-pip install -U audiocraft # stable release
-pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft # bleeding edge
-pip install -e . # or if you cloned the repo locally
-```
-
-## Usage
-We offer a number of ways to interact with MusicGen:
-1. You can play with MusicGen by running the jupyter notebook at [`demo.ipynb`](./demo.ipynb) locally, or use the provided [colab notebook](https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing).
-2. You can use the gradio demo locally by running `python app.py`.
-3. A demo is also available on the [`facebook/MusicGen` HuggingFace Space](https://huggingface.co/spaces/facebook/MusicGen) (huge thanks to all the HF team for their support).
-4. Finally, you can run the [Gradio demo with a Colab GPU](https://colab.research.google.com/drive/1-Xe9NCdIs2sCUbiSmwHXozK6AAhMm7_i?usp=sharing),
-as adapted from [@camenduru Colab](https://github.com/camenduru/MusicGen-colab).
-
-## API
-
-We provide a simple API and 4 pre-trained models. The pre-trained models are:
-- `small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small)
-- `medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium)
-- `melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody)
-- `large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large)
-
-We observe the best trade-off between quality and compute with the `medium` or `melody` model.
-In order to use MusicGen locally **you must have a GPU**. We recommend 16GB of memory, but smaller
-GPUs will be able to generate short sequences, or longer sequences with the `small` model.
-
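-For example, a minimal sketch for a smaller GPU, mirroring the API example below:
-
-```python
-from audiocraft.models import MusicGen
-
-model = MusicGen.get_pretrained('small')  # 300M text-to-music model
-model.set_generation_params(duration=4)   # keep generations short on limited memory
-wav = model.generate(['lo-fi chill beat'])  # batch of 1 waveform at model.sample_rate
-```
-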
-**Note**: Please make sure to have [ffmpeg](https://ffmpeg.org/download.html) installed when using a newer version of `torchaudio`.
-You can install it with:
-```
-apt-get install ffmpeg
-```
-
-See below for a quick example of using the API.
-
-```python
-import torchaudio
-from audiocraft.models import MusicGen
-from audiocraft.data.audio import audio_write
-
-model = MusicGen.get_pretrained('melody')
-model.set_generation_params(duration=8) # generate 8 seconds.
-wav = model.generate_unconditional(4) # generates 4 unconditional audio samples
-descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
-wav = model.generate(descriptions) # generates 3 samples.
-
-melody, sr = torchaudio.load('./assets/bach.mp3')
-# generates using the melody from the given audio and the provided descriptions.
-wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr)
-
-for idx, one_wav in enumerate(wav):
- # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
- audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True)
-```
-
-
-## Model Card
-
-See [the model card page](./MODEL_CARD.md).
-
-## FAQ
-
-#### Will the training code be released?
-
-Yes. We will soon release the training code for MusicGen and EnCodec.
-
-
-#### I need help on Windows
-
-@FurkanGozukara made a complete tutorial for [Audiocraft/MusicGen on Windows](https://youtu.be/v-YpvPkhdO4)
-
-
-## Citation
-```
-@article{copet2023simple,
- title={Simple and Controllable Music Generation},
- author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
- year={2023},
- journal={arXiv preprint arXiv:2306.05284},
-}
-```
-
-## License
-* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE).
-* The weights in this repository are released under the CC-BY-NC 4.0 license as found in the [LICENSE_weights file](LICENSE_weights).
-
-[arxiv]: https://arxiv.org/abs/2306.05284
-[musicgen_samples]: https://ai.honu.io/papers/musicgen/
diff --git a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/utils/export.py b/spaces/simsantonioii/MusicGen-Continuation/audiocraft/utils/export.py
deleted file mode 100644
index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000
--- a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/utils/export.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility to export a training checkpoint to a lightweight release checkpoint.
-"""
-
-from pathlib import Path
-import typing as tp
-
-from omegaconf import OmegaConf, DictConfig
-import torch
-
-
-def _clean_lm_cfg(cfg: DictConfig):
- OmegaConf.set_struct(cfg, False)
- # This used to be set automatically in the LM solver, need a more robust solution
- # for the future.
- cfg['transformer_lm']['card'] = 2048
- cfg['transformer_lm']['n_q'] = 4
- # Experimental params no longer supported.
- bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters',
- 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop']
- for name in bad_params:
- del cfg['transformer_lm'][name]
- OmegaConf.set_struct(cfg, True)
- return cfg
-
-
-def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['ema']['state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
-
-
-def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['fsdp_best_state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg']))
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
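-
-
-# Example usage (a sketch with hypothetical paths; both helpers expect the checkpoint's
-# parent directory to be named with its 8-character Dora signature):
-#
-#   released = export_lm('/checkpoints/abcd1234/checkpoint.th', '/tmp/release')
-#   print('exported to', released)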
diff --git a/spaces/skf15963/summary/fengshen/examples/clue_sim/main.py b/spaces/skf15963/summary/fengshen/examples/clue_sim/main.py
deleted file mode 100644
index 91c5a732d8cb1a683aa34a3b3f7c158861cd4492..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/clue_sim/main.py
+++ /dev/null
@@ -1,133 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import jsonlines
-import torch
-import pytorch_lightning as pl
-from transformers import AutoTokenizer, BertTokenizer
-from train_func import CustomDataset, CustomDataModule, CustomModel
-import argparse
-import os
-import gpustat
-
-if __name__ == '__main__':
- my_parser = argparse.ArgumentParser()
- my_parser.add_argument(
- "--model_path", default="./weights/Erlangshen-MegatronBert-1.3B-Similarity", type=str, required=False)
- my_parser.add_argument(
- "--model_name", default="IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Similarity", type=str, required=False)
- my_parser.add_argument("--max_seq_length", default=64, type=int, required=False)
- my_parser.add_argument("--batch_size", default=32, type=int, required=False)
- my_parser.add_argument("--val_batch_size", default=64, type=int, required=False)
- my_parser.add_argument("--num_epochs", default=10, type=int, required=False)
- my_parser.add_argument("--learning_rate", default=4e-5, type=float, required=False)
- my_parser.add_argument("--warmup_proportion", default=0.2, type=int, required=False)
- my_parser.add_argument("--warmup_step", default=2, type=int, required=False)
- my_parser.add_argument("--num_labels", default=3, type=int, required=False)
- my_parser.add_argument("--cate_performance", default=False, type=bool, required=False)
- my_parser.add_argument("--use_original_pooler", default=True, type=bool, required=False)
- my_parser.add_argument("--model_output_path", default='./pl_model', type=str, required=False)
- my_parser.add_argument("--mode", type=str, choices=['Train', 'Test'], required=True)
- my_parser.add_argument("--predict_model_path", default='./pl_model/', type=str, required=False)
- my_parser.add_argument("--test_output_path", default='./submissions', type=str, required=False)
- my_parser.add_argument("--optimizer", default='AdamW', type=str, required=False) # ['Adam', 'AdamW']
- # ['StepLR', 'CosineWarmup', 'CosineAnnealingLR']
- my_parser.add_argument("--scheduler", default='CosineWarmup', type=str, required=False)
- my_parser.add_argument("--loss_function", default='LSCE_correction', type=str,
- required=False) # ['CE', 'Focal', 'LSCE_correction']
-
- args = my_parser.parse_args()
-
- print(args)
- gpustat.print_gpustat()
-
- if 'Erlangshen' in args.model_name:
- tokenizer = BertTokenizer.from_pretrained(args.model_name, cache_dir=args.model_path)
- else:
- tokenizer = AutoTokenizer.from_pretrained(args.model_name, cache_dir=args.model_path)
-
- seed = 1919
- pl.seed_everything(seed)
-
- dm = CustomDataModule(
- args=args,
- tokenizer=tokenizer,
- )
-
- metric_index = 2
- checkpoint = pl.callbacks.ModelCheckpoint(
- save_top_k=1,
- verbose=True,
- monitor=['val_loss', 'val_acc', 'val_f1'][metric_index],
- mode=['min', 'max', 'max'][metric_index]
- )
-
- lr_monitor = pl.callbacks.LearningRateMonitor(logging_interval="step")
- callbacks = [checkpoint, lr_monitor]
-
- logger = pl.loggers.TensorBoardLogger(save_dir=os.getcwd(),
- name='lightning_logs/' + args.model_name.split('/')[-1])
-
- trainer = pl.Trainer(
- progress_bar_refresh_rate=50,
- logger=logger,
- gpus=-1 if torch.cuda.is_available() else None,
- amp_backend='native',
- amp_level='O2',
- precision=16,
- callbacks=callbacks,
- gradient_clip_val=1.0,
- max_epochs=args.num_epochs,
- # accelerator='ddp',
- # plugins='ddp_sharded',
- )
-
- if args.mode == 'Train':
- print('Only Train')
- model = CustomModel(
- args=args,
- )
- trainer.fit(model, dm)
-
- # Predict test, save results to json
- if args.mode == 'Test':
- print('Only Test')
- test_loader = torch.utils.data.DataLoader(
- CustomDataset('test.json', tokenizer, args.max_seq_length, 'test'),
- batch_size=args.val_batch_size,
- num_workers=4,
- shuffle=False,
- pin_memory=True,
- drop_last=False
- )
-
- model = CustomModel(args=args).load_from_checkpoint(args.predict_model_path, args=args)
-
- predict_results = trainer.predict(model, test_loader, return_predictions=True)
-
- path = os.path.join(
- args.test_output_path,
- args.model_name.split('/')[-1].replace('-', '_'))
- file_path = os.path.join(path, 'qbqtc_predict.json')
-
- if not os.path.exists(path):
- os.makedirs(path)
- if os.path.exists(file_path):
- print('JSON file already exists; it will be replaced with the results of this run')
-
- with jsonlines.open(file_path, 'w') as jsonf:
- for predict_res in predict_results:
- for i, p in zip(predict_res['id'], predict_res['logits']):
- jsonf.write({"id": i, "label": str(p)})
- print('Json saved:', file_path)
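-
-
-# Example invocations (a sketch; any argument not given falls back to the defaults above,
-# and the checkpoint path is a placeholder):
-#   python main.py --mode Train
-#   python main.py --mode Test --predict_model_path ./pl_model/<your_checkpoint>.ckpt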
diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/parsing/resnet.py b/spaces/sklkd93/CodeFormer/CodeFormer/facelib/parsing/resnet.py
deleted file mode 100644
index fec8e82cf64469fb51be21ad5130217052addbda..0000000000000000000000000000000000000000
--- a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/parsing/resnet.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- """3x3 convolution with padding"""
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
-
- def __init__(self, in_chan, out_chan, stride=1):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(in_chan, out_chan, stride)
- self.bn1 = nn.BatchNorm2d(out_chan)
- self.conv2 = conv3x3(out_chan, out_chan)
- self.bn2 = nn.BatchNorm2d(out_chan)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = None
- if in_chan != out_chan or stride != 1:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_chan, out_chan, kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(out_chan),
- )
-
- def forward(self, x):
- residual = self.conv1(x)
- residual = F.relu(self.bn1(residual))
- residual = self.conv2(residual)
- residual = self.bn2(residual)
-
- shortcut = x
- if self.downsample is not None:
- shortcut = self.downsample(x)
-
- out = shortcut + residual
- out = self.relu(out)
- return out
-
-
-def create_layer_basic(in_chan, out_chan, bnum, stride=1):
- layers = [BasicBlock(in_chan, out_chan, stride=stride)]
- for i in range(bnum - 1):
- layers.append(BasicBlock(out_chan, out_chan, stride=1))
- return nn.Sequential(*layers)
-
-
-class ResNet18(nn.Module):
-
- def __init__(self):
- super(ResNet18, self).__init__()
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1)
- self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2)
- self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2)
- self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2)
-
- def forward(self, x):
- x = self.conv1(x)
- x = F.relu(self.bn1(x))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- feat8 = self.layer2(x) # 1/8
- feat16 = self.layer3(feat8) # 1/16
- feat32 = self.layer4(feat16) # 1/32
- return feat8, feat16, feat32
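-
-
-if __name__ == '__main__':
-    # Quick shape check (a sketch, not part of the original module). With a 512x512 RGB
-    # input, conv1 and the max-pool each halve the resolution, so the three returned
-    # feature maps sit at 1/8, 1/16 and 1/32 of the input size.
-    import torch
-    net = ResNet18()
-    feat8, feat16, feat32 = net(torch.randn(1, 3, 512, 512))
-    print(feat8.shape, feat16.shape, feat32.shape)  # (1,128,64,64) (1,256,32,32) (1,512,16,16)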
diff --git a/spaces/songallery/my/app.py b/spaces/songallery/my/app.py
deleted file mode 100644
index e4a5e05bcddd4889d3dbaf06ae8ea5e7b84ac209..0000000000000000000000000000000000000000
--- a/spaces/songallery/my/app.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import streamlit as st
-from transformers import pipeline
-
-pipe = pipeline('sentiment-analysis')
-text = st.text_area('enter some text!')
-
-if text:
- out = pipe(text)
- st.json(out)
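-
-# To try this app locally (on Hugging Face Spaces it is launched automatically):
-#   streamlit run app.py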
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py
deleted file mode 100644
index 8f3c8703ca37398b9d389ce5181bdfac2333cdf2..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py
+++ /dev/null
@@ -1,226 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-
-from fairseq import checkpoint_utils, tasks
-import sentencepiece as spm
-import torch
-
-try:
- from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS
- from simuleval.agents import TextAgent
-except ImportError:
- print("Please install simuleval 'pip install simuleval'")
-
-
-BOS_PREFIX = "\u2581"
-
-
-class SimulTransTextAgentJA(TextAgent):
- """
- Simultaneous Translation
- Text agent for Japanese
- """
- def __init__(self, args):
-
- # Whether use gpu
- self.gpu = getattr(args, "gpu", False)
-
- # Max len
- self.max_len = args.max_len
-
- # Load Model
- self.load_model_vocab(args)
-
- # build word splitter
- self.build_word_splitter(args)
-
- self.eos = DEFAULT_EOS
-
- def initialize_states(self, states):
- states.incremental_states = dict()
- states.incremental_states["online"] = dict()
-
- def to_device(self, tensor):
- if self.gpu:
- return tensor.cuda()
- else:
- return tensor.cpu()
-
- def load_model_vocab(self, args):
-
- filename = args.model_path
- if not os.path.exists(filename):
- raise IOError("Model file not found: {}".format(filename))
-
- state = checkpoint_utils.load_checkpoint_to_cpu(filename)
-
- task_args = state["cfg"]["task"]
- task_args.data = args.data_bin
-
- task = tasks.setup_task(task_args)
-
- # build model for ensemble
- state["cfg"]["model"].load_pretrained_encoder_from = None
- state["cfg"]["model"].load_pretrained_decoder_from = None
-
- self.model = task.build_model(state["cfg"]["model"])
- self.model.load_state_dict(state["model"], strict=True)
- self.model.eval()
- self.model.share_memory()
-
- if self.gpu:
- self.model.cuda()
-
- # Set dictionary
- self.dict = {}
- self.dict["tgt"] = task.target_dictionary
- self.dict["src"] = task.source_dictionary
-
- @staticmethod
- def add_args(parser):
- # fmt: off
- parser.add_argument('--model-path', type=str, required=True,
- help='path to your pretrained model.')
- parser.add_argument("--data-bin", type=str, required=True,
- help="Path of data binary")
- parser.add_argument("--max-len", type=int, default=100,
- help="Max length of translation")
- parser.add_argument("--tgt-splitter-type", type=str, default="SentencePiece",
- help="Subword splitter type for target text.")
- parser.add_argument("--tgt-splitter-path", type=str, default=None,
- help="Subword splitter model path for target text.")
- parser.add_argument("--src-splitter-type", type=str, default="SentencePiece",
- help="Subword splitter type for source text.")
- parser.add_argument("--src-splitter-path", type=str, default=None,
- help="Subword splitter model path for source text.")
- # fmt: on
- return parser
-
- def build_word_splitter(self, args):
- self.spm = {}
- for lang in ['src', 'tgt']:
- if getattr(args, f'{lang}_splitter_type', None):
- path = getattr(args, f'{lang}_splitter_path', None)
- if path:
- self.spm[lang] = spm.SentencePieceProcessor()
- self.spm[lang].Load(path)
-
- def segment_to_units(self, segment, states):
- # Split a full word (segment) into subwords (units)
- return self.spm['src'].EncodeAsPieces(segment)
-
- def update_model_encoder(self, states):
- if len(states.units.source) == 0:
- return
-
- src_indices = [
- self.dict['src'].index(x)
- for x in states.units.source.value
- ]
-
- if states.finish_read():
- # Append the eos index when the prediction is over
- src_indices += [self.dict["tgt"].eos_index]
-
- src_indices = self.to_device(
- torch.LongTensor(src_indices).unsqueeze(0)
- )
- src_lengths = self.to_device(
- torch.LongTensor([src_indices.size(1)])
- )
-
- states.encoder_states = self.model.encoder(src_indices, src_lengths)
-
- torch.cuda.empty_cache()
-
- def update_states_read(self, states):
- # Happens after a read action.
- self.update_model_encoder(states)
-
- def units_to_segment(self, units, states):
- # Merge sub words (units) to full word (segment).
- # For Japanese, we can directly send
- # the untokenized token to server except the BOS token
- # with following option
- # --sacrebleu-tokenizer MeCab
- # --eval-latency-unit char
- # --no-space
- token = units.value.pop()
-
- if (
- token == self.dict["tgt"].eos_word
- or len(states.segments.target) > self.max_len
- ):
- return DEFAULT_EOS
-
- if BOS_PREFIX == token:
- return None
- if token[0] == BOS_PREFIX:
- return token[1:]
- else:
- return token
-
- def policy(self, states):
-
- if not getattr(states, "encoder_states", None):
- # No encoder states, read a token first
- return READ_ACTION
-
- # encode previous predicted target tokens
- tgt_indices = self.to_device(
- torch.LongTensor(
- [self.model.decoder.dictionary.eos()]
- + [
- self.dict['tgt'].index(x)
- for x in states.units.target.value
- if x is not None
- ]
- ).unsqueeze(0)
- )
-
- # Current steps
- states.incremental_states["steps"] = {
- "src": states.encoder_states["encoder_out"][0].size(0),
- "tgt": 1 + len(states.units.target),
- }
-
- # Online only means the reading is not finished
- states.incremental_states["online"]["only"] = (
- torch.BoolTensor([not states.finish_read()])
- )
-
- x, outputs = self.model.decoder.forward(
- prev_output_tokens=tgt_indices,
- encoder_out=states.encoder_states,
- incremental_state=states.incremental_states,
- )
-
- states.decoder_out = x
-
- torch.cuda.empty_cache()
-
- if outputs.action == 0:
- return READ_ACTION
- else:
- return WRITE_ACTION
-
- def predict(self, states):
- # Predict target token from decoder states
- decoder_states = states.decoder_out
-
- lprobs = self.model.get_normalized_probs(
- [decoder_states[:, -1:]], log_probs=True
- )
-
- index = lprobs.argmax(dim=-1)[0, 0].item()
-
- if index != self.dict['tgt'].eos_index:
- token = self.dict['tgt'].string([index])
- else:
- token = self.dict['tgt'].eos_word
-
- return token
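-
-
-# Example evaluation command (a sketch only; exact SimulEval flags can vary by version,
-# and every path below is a placeholder):
-#
-#   simuleval \
-#     --agent simul_t2t_enja.py \
-#     --source source.txt --target target.txt \
-#     --model-path checkpoint_best.pt --data-bin ./data-bin \
-#     --output ./eval-results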
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/transformer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/transformer.py
deleted file mode 100644
index 6b330ef1b7f7a506e7e8176f20a0e722b5fd5149..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/transformer.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import torch.nn as nn
-from fairseq.model_parallel.modules import (
- ModelParallelTransformerDecoderLayer,
- ModelParallelTransformerEncoderLayer,
-)
-from fairseq.models import register_model
-from fairseq.models.transformer import (
- TransformerDecoder,
- TransformerEncoder,
- TransformerModel,
-)
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- copy_to_model_parallel_region,
- gather_from_model_parallel_region,
- VocabParallelEmbedding,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("model_parallel_transformer")
-class ModelParallelTransformerModel(TransformerModel):
- """
- Model parallel Transformer model.
- """
-
- @classmethod
- def build_embedding(cls, args, dictionary, embed_dim, path=None):
- if not has_megatron_submodule:
- raise ImportError(
- "\n\nPlease install the megatron submodule:"
- "\n\n git submodule update --init "
- "fairseq/model_parallel/megatron"
- )
- dictionary.pad_to_multiple_(args.model_parallel_size * 8)
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
-
- def _vocab_init(tensor, **kwargs):
- nn.init.normal_(tensor, mean=0, std=num_embeddings ** -0.5)
- nn.init.constant_(tensor[1], 0)
-
- emb = VocabParallelEmbedding(
- num_embeddings, embed_dim, padding_idx, init_method=_vocab_init
- )
- # if provided, load from preloaded dictionaries
- if path:
- raise NotImplementedError(
- "Loading of embedding from path is not supported for model parallel"
- )
- return emb
-
- @classmethod
- def build_encoder(cls, args, src_dict, embed_tokens):
- return ModelParallelTransformerEncoder(args, src_dict, embed_tokens)
-
- @classmethod
- def build_decoder(cls, args, tgt_dict, embed_tokens):
- return ModelParallelTransformerDecoder(
- args,
- tgt_dict,
- embed_tokens,
- no_encoder_attn=getattr(args, "no_cross_attention", False),
- )
-
-
-class ModelParallelTransformerEncoder(TransformerEncoder):
- """
- Model parallel Transformer encoder consisting of *args.encoder_layers* layers. Each layer
- is a :class:`ModelParallelTransformerEncoderLayer`.
- """
-
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(args, dictionary, embed_tokens)
-
- if args.no_final_layer_norm:
- self.layer_norm = None
-
- def build_encoder_layer(self, args):
- return ModelParallelTransformerEncoderLayer(args)
-
-
-class ModelParallelTransformerDecoder(TransformerDecoder):
- """
- Model Parallel Transformer decoder consisting of *args.decoder_layers* layers. Each layer
- is a :class:`ModelParallelTransformerDecoderLayer`.
- """
-
- def build_decoder_layer(self, args, no_encoder_attn=False):
- return ModelParallelTransformerDecoderLayer(args, no_encoder_attn)
-
- def output_layer(self, features, **kwargs):
- """Project features to the vocabulary size."""
- if not self.share_input_output_embed:
- raise NotImplementedError(
- "Model parallel training currently requires --share-decoder-input-output-embed"
- )
-
- features = copy_to_model_parallel_region(features)
-
- # project back to size of vocabulary
- x = self.output_projection(features)
-
- if getattr(self.args, "criterion") != "vocab_parallel_cross_entropy":
- x = gather_from_model_parallel_region(x).contiguous()
- return x
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py
deleted file mode 100644
index d95da59c2471bfa858fd627605196d7f41f9ec12..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.modules import TransformerSentenceEncoderLayer
-from fairseq.modules.sparse_multihead_attention import SparseMultiheadAttention
-
-
-class SparseTransformerSentenceEncoderLayer(TransformerSentenceEncoderLayer):
- """
- Implements a Sparse Transformer Encoder Layer (see SparseMultiheadAttention)
- """
-
- def __init__(
- self,
- embedding_dim: int = 768,
- ffn_embedding_dim: int = 3072,
- num_attention_heads: int = 8,
- dropout: float = 0.1,
- attention_dropout: float = 0.1,
- activation_dropout: float = 0.1,
- activation_fn: str = "relu",
- export: bool = False,
- is_bidirectional: bool = True,
- stride: int = 32,
- expressivity: int = 8,
- ) -> None:
-
- super().__init__(
- embedding_dim,
- ffn_embedding_dim,
- num_attention_heads,
- dropout,
- attention_dropout,
- activation_dropout,
- activation_fn,
- export,
- )
-
- self.self_attn = SparseMultiheadAttention(
- self.embedding_dim,
- num_attention_heads,
- dropout=attention_dropout,
- add_bias_kv=False,
- add_zero_attn=False,
- self_attention=True,
- is_bidirectional=is_bidirectional,
- stride=stride,
- expressivity=expressivity,
- )
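-
-
-# Construction sketch (hypothetical hyperparameter values; assumes the fairseq imports
-# above resolve):
-#   layer = SparseTransformerSentenceEncoderLayer(
-#       embedding_dim=768, num_attention_heads=8, stride=32, expressivity=8)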
diff --git a/spaces/stanciu/declare-lab-flan-gpt4all-xl/app.py b/spaces/stanciu/declare-lab-flan-gpt4all-xl/app.py
deleted file mode 100644
index 56281246d6289e16f959fb65604765dc1939b551..0000000000000000000000000000000000000000
--- a/spaces/stanciu/declare-lab-flan-gpt4all-xl/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/declare-lab/flan-gpt4all-xl").launch()
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bplan Data Recovery Software Crack Fix Full.md b/spaces/stomexserde/gpt4-ui/Examples/Bplan Data Recovery Software Crack Fix Full.md
deleted file mode 100644
index a7ee5f7238780789f6f96b2b4b5417eec773b3be..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Bplan Data Recovery Software Crack Fix Full.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
Bplan Data Recovery Software Crack Full: Is It Worth It?
-
Have you ever lost or deleted some important files from your computer, hard drive, USB flash drive, SD card, or other devices? If so, you know how frustrating and stressful it can be to try to get them back. You may have searched online for a solution and found Bplan Data Recovery Software, a powerful tool that can help you recover your data in various situations. But what if you don't want to pay for the full version of the software? You may have also seen some websites that offer a crack for Bplan Data Recovery Software, which claims to give you access to all the features for free. Is it a good idea to use a crack? What are the risks and consequences of doing so? In this article, we will answer these questions and show you why you should avoid using a cracked version of data recovery software. We will also show you how to use Bplan Data Recovery Software safely and legally, and how to prevent data loss in the future.
-
What Is Bplan Data Recovery Software and What Does It Do?
-
Bplan Data Recovery Software is a professional and reliable data recovery tool that can help you recover lost or deleted files from various devices and scenarios. It supports recovery of files in more than 1,000 formats, such as documents, photos, videos, audios, emails, etc. It can also recover data from formatted, corrupted, damaged, or inaccessible partitions, hard drives, USB drives, SD cards, memory cards, cameras, and other storage devices. It can even recover data from virus attacks, system crashes, power failures, or other unexpected events.
Bplan Data Recovery Software has two modes of recovery: quick scan and deep scan. Quick scan can find your files quickly with one click. It is suitable for simple data loss situations, such as accidental deletion or emptying the recycle bin. Deep scan can perform a thorough search of your device and find more files that other software may miss. It is suitable for complex data loss situations, such as formatting, partition loss, or system crash. After scanning, you can preview and search your files before recovering them. You can also save the scan results and resume the recovery later without rescanning.
-
What Is a Crack and Why Do Some People Use It?
-
A crack is a modified version of a software that bypasses its license verification or activation process. Some people use cracks to get access to paid software for free or to unlock its premium features without paying. For example, some people may use a crack for Bplan Data Recovery Software to recover more than 2GB of data for free or to use it on multiple devices without buying a license.
-
However, using a crack is illegal and unethical. It violates the intellectual property rights of the software developers and deprives them of their deserved income. It also exposes you to various risks and dangers that may harm your device or data.
-
What Are the Risks of Using a Cracked Version of Data Recovery Software?
-
Using a cracked version of data recovery software is not worth it. Here are some of the risks that you may face if you do so:
-
-
Unstable data recovery. A cracked version of data recovery software may not work properly or efficiently. It may fail to scan your device or find your files. It may also recover incomplete or corrupted files that cannot be opened or used normally. You may end up wasting your time and energy without getting your data back.
-
Further permanent data loss. A cracked version of data recovery software may overwrite your original data or cause more damage to your device. It may also delete your files permanently without your consent or knowledge. You may lose your chance of recovering your data with a legitimate software or service.
-
Virus or malware infection. A cracked version of data recovery software may contain viruses, malware, spyware, adware, or other malicious programs that can infect your device or data. These programs can steal your personal information, such as passwords, bank accounts, credit cards, etc. They can also damage your system, slow down your performance, display annoying ads, or encrypt your files and demand ransom.
-
Legal issues or penalties. A cracked version of data recovery software may be detected by the software developers or the authorities. They may take legal actions against you for violating their intellectual property rights or the law. You may face lawsuits, fines, or even jail time for using a crack.
-
-
As you can see, using a cracked version of data recovery software is not worth it. It may cause more problems than it solves and put your device and data at risk. You may end up losing more than you save by using a crack.
-
-
How to Use Bplan Data Recovery Software to Recover Lost or Deleted Files from Various Devices and Scenarios
-
If you want to use Bplan Data Recovery Software to recover your lost or deleted files safely and legally, you need to follow these steps:
-
-
Download and install Bplan Data Recovery Software from the official website. Do not download or install the software from any untrusted sources or websites that offer cracks. You may get a fake or infected software that can harm your device or data. You can download Bplan Data Recovery Software from https://www.bplandatarecovery.com/. You can choose between the free version and the paid version. The free version allows you to recover up to 2GB of data for free. The paid version allows you to recover unlimited data and enjoy lifetime updates and technical support. You can also get a free trial of the paid version before buying it.
-
Launch Bplan Data Recovery Software and select a recovery mode. After installing the software, you can launch it and choose between two recovery modes: quick scan and deep scan. Quick scan can find your files quickly with one click. It is suitable for simple data loss situations, such as accidental deletion or emptying the recycle bin. Deep scan can perform a thorough search of your device and find more files that other software may miss. It is suitable for complex data loss situations, such as formatting, partition loss, or system crash.
-
Select a device and a location to scan. After selecting a recovery mode, you need to select a device and a location where you lost your files. For example, if you want to recover files from a USB drive, you need to connect it to your computer and select it as the device. If you want to recover files from a specific folder on your hard drive, you need to select it as the location. Then, click on the "Scan" button to start scanning.
-
Preview and recover your files. After scanning, you can preview and search your files before recovering them. You can filter them by file type, file name, file size, date modified, etc. You can also use the "Recover" button to recover them one by one or in batches. You need to save them to a different location than the original one to avoid overwriting or losing them again.
-
-
Bplan Data Recovery Software is easy to use and effective in recovering your lost or deleted files from various devices and scenarios. You can follow these steps to use it safely and legally.
-
How to Download and Install Bplan Data Recovery Software Safely and Legally
-
If you want to download and install Bplan Data Recovery Software safely and legally, you need to follow these tips:
-
-
Download Bplan Data Recovery Software from the official website only. Do not download or install the software from any untrusted sources or websites that offer cracks. You may get a fake or infected software that can harm your device or data. You can download Bplan Data Recovery Software from https://www.bplandatarecovery.com/. You can choose between the free version and the paid version. The free version allows you to recover up to 2GB of data for free. The paid version allows you to recover unlimited data and enjoy lifetime updates and technical support. You can also get a free trial of the paid version before buying it.
-
Install Bplan Data Recovery Software on a clean and secure computer. Do not install the software on the same drive or partition that holds the files you want to recover, or you may overwrite them and make them unrecoverable.
-
However, using a cracked version of data recovery software is not worth it. It may cause more problems than it solves and put your device and data at risk. It may fail to recover your files, or recover only corrupted files. It may overwrite or delete your files permanently. It may contain viruses or malware that can infect your device or data. It may also expose you to legal issues or penalties for violating the law or the intellectual property rights of the software developers.
-
Therefore, you should avoid using a crack and use Bplan Data Recovery Software safely and legally. You can download and install Bplan Data Recovery Software from the official website or an authorized reseller. You can choose between the free version and the paid version. The free version allows you to recover up to 2GB of data for free. The paid version allows you to recover unlimited data and enjoy lifetime updates and technical support. You can also get a free trial of the paid version before buying it.
-
You should also backup your data regularly and properly to prevent data loss in the first place. You should also use antivirus software, firewall, encryption software, password protection, and reliable storage devices to protect your data from viruses, malware, hackers, thieves, or other threats.
-
If you want to learn more about Bplan Data Recovery Software or data recovery in general, you can visit the official website or contact the customer service. You can also check out some of the FAQs below.
-
FAQs
-
Here are some of the frequently asked questions about Bplan Data Recovery Software and data recovery in general:
-
Q: How long does it take to scan and recover my files with Bplan Data Recovery Software?
-
A: The scanning and recovery time depends on several factors, such as the size of your device, the amount of your files, the type of your files, the mode of recovery, etc. Generally speaking, quick scan is faster than deep scan, but deep scan can find more files than quick scan. You can check the progress bar and the estimated time on the software interface during the scanning and recovery process.
-
Q: Can I recover my files from a formatted device with Bplan Data Recovery Software?
-
A: Yes, you can recover your files from a formatted device with Bplan Data Recovery Software. However, you need to stop using your device after formatting it and perform data recovery as soon as possible. Otherwise, you may overwrite your files with new data and make them unrecoverable.
-
Q: Can I recover my files from a damaged or corrupted device with Bplan Data Recovery Software?
-
A: Yes, you can recover your files from a damaged or corrupted device with Bplan Data Recovery Software. However, you need to make sure that your device is still detectable by your computer and that it does not have any physical damage that prevents it from working properly. Otherwise, you may need to repair your device first or seek professional help.
-
Q: Can I recover my files from a virus-infected device with Bplan Data Recovery Software?
-
A: Yes, you can recover your files from a virus-infected device with Bplan Data Recovery Software. However, you need to remove the virus from your device first with antivirus software or other methods. Otherwise, you may infect your recovered files or other devices with the virus again.
-
Q: Can I recover my files from any device or scenario with Bplan Data Recovery Software?
-
A: Bplan Data Recovery Software supports recovery of files from various devices and scenarios, such as hard drives, USB drives, SD cards, memory cards, cameras, etc., and accidental deletion, formatting, partition loss, system crash, virus attack, etc. However, there are some situations where data recovery may not be possible or successful, such as:
-
-
The device is physically damaged beyond repair.
-
The file system is overwritten by a new one.
-
The file is overwritten by new data.
-
The file is encrypted by ransomware.
-
The file is protected by DRM (digital rights management).
-
-
In these cases, you may need to try other methods or consult an expert for help.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Program Decodare Casetofoane Auto Fixed.md b/spaces/stomexserde/gpt4-ui/Examples/Download Program Decodare Casetofoane Auto Fixed.md
deleted file mode 100644
index a14552ee455f2894cecf68f0663fc98ae54aca1b..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Download Program Decodare Casetofoane Auto Fixed.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Download Program Decodare Casetofoane Auto Fixed: How to Unlock Your Car Radio Easily
-
If you have ever faced the problem of having a locked car radio that asks for a code, you know how frustrating it can be. Maybe you lost your code, changed your battery, or bought a used car with a radio that you can't use. Whatever the reason, you don't have to worry anymore. There is a simple solution that can help you unlock your car radio in minutes. It's called Program Decodare Casetofoane Auto Fixed, and it's a software that generates codes for any car radio model. In this article, we will explain what decodare casetofoane auto is, how to find your radio serial number, how to download and use the program, and what benefits it offers. By the end of this article, you will be able to enjoy your car radio again without any hassle.
What is Decodare Casetofoane Auto and Why You Need It
-
Decodare Casetofoane Auto is a service that provides codes to unlock car radios
-
Decodare casetofoane auto is a Romanian term that means "car radio decoding". It refers to a service that provides codes to unlock car radios that are protected by an anti-theft system. This system prevents unauthorized use of the radio by requiring a code every time the power is disconnected or interrupted. The code is usually provided by the manufacturer or the dealer when you buy the car or the radio. However, if you lose or forget your code, you won't be able to use your radio until you obtain the correct code again.
-
You may need it if you lost your radio code, changed your battery, or bought a used car
-
There are many situations where you may need decodare casetofoane auto. For example:
-
-
You lost or misplaced your radio code card or manual.
-
You changed your car battery or had some electrical issues.
-
You bought a used car with a locked radio.
-
You replaced or upgraded your radio with a different model.
-
-
In any of these cases, you will need to enter the correct code to unlock your radio. Otherwise, you will see a message like "CODE", "LOCKED", or "SAFE" on your radio screen, and you won't be able to access any functions. This can be very annoying, especially if you rely on your radio for navigation, entertainment, or communication.
-
-
How to Find Your Radio Serial Number
-
You can find it on the label of the radio, on the screen, or in the manual
-
The first step to get your code is to find your radio serial number. This is a unique identifier that is assigned to your radio by the manufacturer. It usually consists of letters and numbers, and it may vary in length and format depending on the brand and model of your radio. You can find your radio serial number in one of these ways:
-
-
On the label of the radio. This is a sticker that is attached to the body or the chassis of the radio. It may also contain other information such as the model number, the part number, or the barcode.
-
On the screen of the radio. Some radios can display their serial number on the screen by pressing a certain button or combination of buttons. For example, you may need to press and hold the 1 and 6 buttons, or the 2 and 6 buttons, or the AS and RDS buttons, depending on your radio model. You can check your manual or search online for the specific instructions for your radio.
-
In the manual of the radio. Some manuals may have a section where you can write down your serial number when you buy the radio. You can also check if your manual has a code card or a sticker with your serial number.
-
-
You may need to remove the radio from the dashboard to access the label
-
If you can't find your serial number on the screen or in the manual, you may need to remove the radio from the dashboard to access the label. This may require some tools and skills, so be careful not to damage your radio or your car. You can follow these steps to remove your radio:
-
-
Turn off your car and disconnect the battery.
-
Use a screwdriver or a pry tool to remove any trim or bezel that covers your radio.
-
Use a radio removal tool or a pair of thin metal rods to insert into the holes or slots on each side of your radio. This will release the locking mechanism and allow you to pull out your radio.
-
Gently slide out your radio and disconnect any wires or connectors behind it.
-
Locate the label on your radio and write down your serial number.
-
Reconnect any wires or connectors and slide back your radio into the dashboard.
-
Reinstall any trim or bezel that covers your radio.
-
Reconnect your battery and turn on your car.
-
How to Download Program Decodare Casetofoane Auto Fixed
-
You can download it from a trusted website that offers decodare casetofoane auto services
-
Once you have your serial number, you can download Program Decodare Casetofoane Auto Fixed from a trusted website that offers decodare casetofoane auto services. There are many websites that claim to provide this service, but not all of them are reliable or safe. Some of them may charge you too much, send you wrong codes, or infect your computer with viruses or malware. Therefore, you need to be careful and choose a reputable website that has positive reviews, testimonials, and guarantees. For example, you can use DecodareCasetofoaneAuto.ro, which is one of the leading websites in Romania that provides decodare casetofoane auto services for all car brands and radio models. They have over 10 years of experience, thousands of satisfied customers, and a 100% money-back guarantee.
-
You need to select your car make, model, and radio type, and enter your serial number
-
To download Program Decodare Casetofoane Auto Fixed from DecodareCasetofoaneAuto.ro, you need to follow these simple steps:
-
-
Visit DecodareCasetofoaneAuto.ro and select your car make, model, and radio type from the drop-down menus.
-
Enter your radio serial number in the box and click on "Get Code".
-
You will see the price and the delivery time for your code. You can choose to receive your code by email, phone, or WhatsApp.
-
Click on "Buy Now" and complete the payment process using your preferred method. You can pay with credit card, PayPal, Skrill, or Bitcoin.
-
You will receive a confirmation email with a link to download Program Decodare Casetofoane Auto Fixed.
-
Download the program and save it on your computer.
-
How to Use Program Decodare Casetofoane Auto Fixed
-
You need to enter the code on your radio using the buttons or the touchscreen
-
After you download Program Decodare Casetofoane Auto Fixed, you need to enter the code on your radio using the buttons or the touchscreen. The program will generate a code that is specific to your radio serial number, so you can be sure that it will work. The code will usually consist of four or six digits, depending on your radio model. To enter the code, you need to follow these general steps:
-
-
Turn on your radio and wait for the message that asks for the code.
-
Use the numbered buttons or the touchscreen to enter the code. For example, if your code is 1234, press 1, 2, 3, and 4 in order.
-
If your radio has a confirmation button, press it to validate the code. This may be labeled as "OK", "ENTER", or "CODE".
-
If you entered the correct code, your radio will unlock and start working normally.
-
-
You may need to press a specific button or combination of buttons to confirm the code
-
Some radios may have a different procedure to confirm the code. For example, you may need to press a specific button or combination of buttons after entering the code. This may vary depending on your car make and radio model, so you should check your manual or search online for the exact instructions for your radio. Here are some examples of common confirmation methods:
-
-
For Ford radios, you may need to press and hold the 5 button for a few seconds.
-
For Renault radios, you may need to press and hold the 6 button until you hear a beep.
-
For Volkswagen radios, you may need to press and hold the SCAN and RDS buttons simultaneously.
-
For Honda radios, you may need to press and hold the 1 and 6 buttons together.
-
-
If you are not sure how to confirm your code, you can contact customer support at DecodareCasetofoaneAuto.ro and they will assist you.
-
Benefits of Using Program Decodare Casetofoane Auto Fixed
-
It is fast, easy, and reliable
-
One of the main benefits of using Program Decodare Casetofoane Auto Fixed is that it is fast, easy, and reliable. You don't have to wait for hours or days to get your code from a dealer or a mechanic. You can get it in minutes from DecodareCasetofoaneAuto.ro and unlock your radio yourself. You don't have to worry about entering wrong codes or damaging your radio. The program will generate a code that is guaranteed to work for your radio serial number. You can also use the program as many times as you want, in case you need to unlock your radio again in the future.
-
It is cheaper than going to a dealer or a mechanic
-
Another benefit of using Program Decodare Casetofoane Auto Fixed is that it is cheaper than going to a dealer or a mechanic. If you go to a dealer or a mechanic, they may charge you a lot of money for decodare casetofoane auto services. They may also ask you for proof of ownership or other documents that you may not have. They may also take advantage of your situation and try to sell you other services or products that you don't need. By using Program Decodare Casetofoane Auto Fixed from DecodareCasetofoaneAuto.ro, you can save money and time. You only pay a small fee for the program and the code, and you don't have to deal with any hassle or pressure.
-
It is compatible with most car brands and radio models
-
A third benefit of using Program Decodare Casetofoane Auto Fixed is that it is compatible with most car brands and radio models. Whether you have a Ford, Renault, Volkswagen, Honda, or any other car brand, you can use Program Decodare Casetofoane Auto Fixed to unlock your radio. Whether you have an original or an aftermarket radio, you can use Program Decodare Casetofoane Auto Fixed to unlock it. Whether you have a CD player, an MP3 player, a DVD player, or a navigation system, you can use Program Decodare Casetofoane Auto Fixed to unlock it. You just need to select your car make, model, and radio type from DecodareCasetofoaneAuto.ro and enter your serial number.
-
Tips and Tricks for Decodare Casetofoane Auto
-
Write down your code and keep it in a safe place
-
One of the best tips for decodare casetofoane auto is to write down your code and keep it in a safe place. You never know when you may need it again, so it's better to be prepared. You can write your code on a piece of paper, a sticker, or a card, and store it in your glove box, your wallet, or your phone. You can also take a picture of your code or save it in a note app or an email. This way, you can always access your code whenever you need it.
-
Avoid entering wrong codes too many times as it may lock your radio permanently
-
Another tip for decodare casetofoane auto is to avoid entering wrong codes too many times as it may lock your radio permanently. If you enter a wrong code, your radio will display an error message and ask you to try again. However, if you enter a wrong code too many times, your radio will enter a security mode and prevent you from entering any more codes. This may happen after 3, 5, or 10 attempts, depending on your radio model. If this happens, you will have to wait for a certain period of time before you can try again. This may be an hour, a day, or even a month. In some cases, you may have to reset your radio or contact the manufacturer to unlock it. Therefore, you should be careful and make sure you enter the correct code the first time.
-
Contact customer support if you have any questions or issues with your code
-
A third tip for decodare casetofoane auto is to contact customer support if you have any questions or issues with your code. If you are not sure how to find your serial number, how to download or use the program, or how to enter or confirm the code, you can contact customer support at DecodareCasetofoaneAuto.ro and they will help you. They are available 24/7 by phone, email, or WhatsApp, and they speak English and Romanian. They can also assist you if you have any problems with your code, such as not receiving it, not working, or being incorrect. They will provide you with a new code or a refund if necessary.
-
Conclusion
-
In conclusion, decodare casetofoane auto is a service that provides codes to unlock car radios that are protected by an anti-theft system. You may need this service if you lost your radio code, changed your battery, or bought a used car with a locked radio. To get your code, you need to find your radio serial number, download Program Decodare Casetofoane Auto Fixed from DecodareCasetofoaneAuto.ro, and enter the code on your radio. This is a fast, easy, reliable, and cheap way to unlock your car radio and enjoy its functions again. You can also follow some tips and tricks to make the process smoother and avoid any issues. If you have any questions or problems with your code, you can contact customer support at DecodareCasetofoaneAuto.ro and they will help you.
-
If you are looking for a solution to unlock your car radio, don't hesitate and try Program Decodare Casetofoane Auto Fixed today. You will be amazed by how simple and effective it is. Just visit DecodareCasetofoaneAuto.ro and get your code in minutes.
-
FAQs
-
What is Program Decodare Casetofoane Auto Fixed?
-
Program Decodare Casetofoane Auto Fixed is a software tool that generates codes to unlock car radios that are protected by an anti-theft system.
-
How much does Program Decodare Casetofoane Auto Fixed cost?
-
The price of Program Decodare Casetofoane Auto Fixed depends on the car make, model, and radio type. You can check the price on DecodareCasetofoaneAuto.ro before buying the program.
-
How long does it take to get the code?
-
It usually takes less than 10 minutes to get the code after payment confirmation. You will receive the code by email, phone, or WhatsApp.
-
What if the code doesn't work?
-
If the code doesn't work, you can contact customer support at DecodareCasetofoaneAuto.ro and they will provide you with a new code or a refund if necessary.
-
Is Program Decodare Casetofoane Auto Fixed safe and legal?
-
Yes, Program Decodare Casetofoane Auto Fixed is safe and legal. The program does not contain any viruses or malware, and it does not harm your radio or your car. The program is also legal, as it does not violate any laws or regulations. The program is intended for personal use only, and you should not use it for any illegal or unethical purposes.
b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Grand Lux Cafe Indochine Shrimp And Chicken Recipe.md b/spaces/stomexserde/gpt4-ui/Examples/Grand Lux Cafe Indochine Shrimp And Chicken Recipe.md
deleted file mode 100644
index 372aa712f3a118d0045b13bd1f7d7ffef2d9f6a1..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Grand Lux Cafe Indochine Shrimp And Chicken Recipe.md
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
How to Make Grand Lux Cafe's Indochine Shrimp and Chicken at Home
-
If you love the fusion of Chinese and Indian flavors in Grand Lux Cafe's Indochine Shrimp and Chicken dish, you might want to try making it at home. This recipe is inspired by the original one from Robert Okura, the executive chef of Grand Lux Cafe. It's a simple and delicious meal that you can serve with plain white rice or noodles.
-
Grand Lux Cafe Indochine Shrimp And Chicken Recipe
The key to this dish is the curry sauce, which you can either make from scratch or use a pre-made one from the store. The sauce is rich and creamy, with a hint of sweetness from the plum wine and dried fruits. The chicken and shrimp are lightly coated with flour and fried, then simmered in the sauce with ginger and onions. The result is a tender and flavorful dish that will satisfy your taste buds.
-
Ingredients
-
-
1/2 lb chicken breast, cut into bite-sized pieces
-
3/4 lb large raw shrimp, peeled and deveined
-
Salt and black pepper, to taste
-
2 tbsp flour
-
4 tbsp oil
-
1/4 cup onion, thinly sliced
-
1 1/2 tbsp fresh ginger, grated
-
1/4 cup dried apricots, chopped (optional)
-
1/4 cup dried cherries, chopped (optional)
-
1 bottle pre-made curry sauce or 1 cup homemade curry sauce (see below)
-
2 oz plum wine
-
1/2 tsp brown sugar
-
-
For the homemade curry sauce:
-
-
1 can coconut milk
-
2 tbsp curry powder
-
Red pepper flakes, to taste
-
1 tbsp soy sauce
-
-
Directions
-
-
To make the curry sauce, heat the coconut milk in a small saucepan over medium heat. Add the curry powder, red pepper flakes, and soy sauce and whisk well. Simmer until slightly thickened, about 10 minutes. Set aside.
-
To make the chicken and shrimp, season them with salt and pepper as desired. Sprinkle 1 tbsp of flour over the chicken and toss to coat evenly. Heat 2 tbsp of oil in a large skillet over high heat. Fry the chicken for about 10 minutes, turning occasionally, until golden and cooked through. Transfer to a plate and keep warm.
-
In the same skillet, heat the remaining 2 tbsp of oil over high heat. Fry the onion and ginger for about 5 minutes, stirring frequently, until soft and fragrant. Add the shrimp and cook for another 5 minutes, turning once, until pink and curled. Transfer to the plate with the chicken and keep warm.
-
In the same skillet, bring the curry sauce to a boil over high heat. Stir in the plum wine, brown sugar, apricots, and cherries if using. Reduce the heat and simmer for about 5 minutes, stirring occasionally, until slightly reduced.
-
Add the chicken and shrimp back to the skillet and toss to coat with the sauce. Cook for another 5 minutes over low heat, until heated through.
-
Serve hot with rice or noodles. Enjoy!
-
-
-
Tips and Variations
-
Here are some tips and variations to make this dish even better:
-
-
You can use chicken thighs instead of chicken breast for more flavor and juiciness.
-
You can use any kind of dried fruits you like, such as cranberries, raisins, or dates. You can also omit them if you prefer a less sweet sauce.
-
You can adjust the spiciness of the sauce by adding more or less red pepper flakes. You can also add some fresh cilantro or mint for a burst of freshness.
-
You can substitute the plum wine with white wine, apple juice, or water. You can also add some lemon juice or vinegar for some acidity.
-
You can serve this dish with some naan bread, roti, or paratha to soak up the sauce.
-
-
Nutrition Facts
-
This dish is high in protein, iron, and vitamin C. It also provides some fiber, calcium, and potassium. However, it is also high in fat, sodium, and sugar. Here are the approximate nutrition facts per serving (based on 4 servings):
-
-
| Calories | Fat | Carbs | Fiber | Sugar | Protein |
| --- | --- | --- | --- | --- | --- |
| 560 | 32 g | 40 g | 4 g | 24 g | 32 g |
-
-
To make this dish healthier, you can use low-fat coconut milk, reduce the amount of oil and sugar, and increase the amount of vegetables. You can also use brown rice or whole wheat noodles instead of white rice or noodles.
- cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Igo8miomoov2gbromrar.md b/spaces/stomexserde/gpt4-ui/Examples/Igo8miomoov2gbromrar.md
deleted file mode 100644
index b3fa98f681ef04811164042cd4fe3ad3c46ad5f8..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Igo8miomoov2gbromrar.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-
How to Download and Install Igo8 Mio Moov 2gb Rom
-
Igo8 Mio Moov 2gb Rom is a software package that allows you to use your Mio Moov device as a GPS navigation system with the Igo8 software. Igo8 is a popular and user-friendly navigation software that offers features such as 3D maps, voice guidance, speed camera alerts, and more. If you want to download and install Igo8 Mio Moov 2gb Rom on your device, here are the steps you need to follow:
Download the Igo8 Mio Moov 2gb Rom zip file from one of the links provided in the web search results[^1^] [^2^]. Make sure you have enough space on your device or on a microSD card to store the file.
-
Extract the zip file using a program such as WinRAR or 7-Zip. You should see a folder named Igo8 with several subfolders and files inside.
-
Connect your Mio Moov device to your computer using a USB cable. You should see it appear as a removable drive in your file explorer.
-
Copy the entire Igo8 folder to the root directory of your Mio Moov device or microSD card. Do not rename or modify any of the files or folders.
-
Disconnect your Mio Moov device from your computer and turn it on. You should see a menu with two options: MioMap and Igo8. Select Igo8 to launch the navigation software.
-
Follow the on-screen instructions to set up your preferences, such as language, units, time zone, etc. You can also customize the appearance and functionality of Igo8 by accessing the settings menu.
-
-
Congratulations! You have successfully downloaded and installed Igo8 Mio Moov 2gb Rom on your device. Enjoy your navigation experience with Igo8!
The sections below cover Igo8's main features and some troubleshooting tips.
-
Igo8 Features
-
Igo8 is a powerful and versatile navigation software that offers many features to enhance your driving experience. Some of the features include:
-
-
-
3D maps: Igo8 displays realistic 3D maps of the terrain, buildings, landmarks, and junctions along your route. You can also switch to 2D mode if you prefer.
-
Voice guidance: Igo8 provides clear and accurate voice instructions in various languages. You can also choose from different voices and accents.
-
Speed camera alerts: Igo8 warns you of the location and speed limit of speed cameras along your route. You can also add your own speed camera locations to the database.
-
Points of interest: Igo8 shows you the nearest and most relevant points of interest (POIs) such as gas stations, restaurants, hotels, parking lots, etc. You can also search for POIs by name or category.
-
Traffic information: Igo8 displays real-time traffic information on your map, such as congestion, accidents, roadworks, etc. You can also avoid traffic by choosing an alternative route.
-
Route planning: Igo8 allows you to plan your route in advance by setting your destination, waypoints, preferences, etc. You can also save and load your routes for future use.
-
-
Igo8 Troubleshooting
-
If you encounter any problems while installing or using Igo8 Mio Moov 2gb Rom, here are some possible solutions:
-
-
Make sure your device has enough battery power and is not in sleep mode.
-
Make sure your device has a clear view of the sky and can receive GPS signals.
-
Make sure you have copied the Igo8 folder correctly to the root directory of your device or microSD card.
-
Make sure you have selected the correct map file for your region and language.
-
Make sure you have updated your device firmware and Igo8 software to the latest version.
-
If none of the above solutions work, you can try resetting your device by pressing the reset button on the back or bottom of your device. This will not erase your data or settings.
-
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/sukiru/BlueArchiveTTS/text/__init__.py b/spaces/sukiru/BlueArchiveTTS/text/__init__.py
deleted file mode 100644
index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000
--- a/spaces/sukiru/BlueArchiveTTS/text/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- symbols: list of symbols whose positions define the symbol-to-ID mapping
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
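For quick reference, here is a minimal usage sketch of the text_to_sequence helper removed above. The symbol list and the `basic_cleaners` cleaner name are assumptions borrowed from the upstream keithito/tacotron layout, not values taken from this Space.

```python
# Minimal sketch of calling text_to_sequence; the symbol table and the
# cleaner name are hypothetical and must match what the Space actually ships.
from text import text_to_sequence

symbols = list("_,.!? abcdefghijklmnopqrstuvwxyz")  # assumed symbol table
ids = text_to_sequence("hello world", symbols, ["basic_cleaners"])
print(ids)  # list of integer IDs, one per symbol found in the cleaned text
```

Unknown symbols are silently skipped, so the returned sequence can be shorter than the cleaned text.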
diff --git a/spaces/sunxyz/Auto-keep-online/index.js b/spaces/sunxyz/Auto-keep-online/index.js
deleted file mode 100644
index 429c5c06c7d7dbd6f235bad23e7e7fa141c8fe36..0000000000000000000000000000000000000000
--- a/spaces/sunxyz/Auto-keep-online/index.js
+++ /dev/null
@@ -1,77 +0,0 @@
-const axios = require('axios');
-const fs = require('fs');
-const cron = require('node-cron');
-const http = require('http');
-const path = require('path');
-const port = process.env.PORT || 7860;
-
-// Array of web page URLs to visit
-const urls = [
- 'https://www.google.com', // platform name can be noted here
- 'https://www.google.com', // platform name can be noted here
- 'https://www.google.com', // platform name can be noted here
- 'https://www.google.com', // platform name can be noted here
- 'https://www.google.com', // platform name can be noted here
- 'https://www.google.com', // platform name can be noted here
- 'https://www.google.com', // platform name can be noted here
-
- // Add more URLs here
-];
-
-// Create the log file
-//const logFile = 'visit-log.txt';
-
-// Visit a web page and log the result
-async function scrapeAndLog(url) {
- try {
- const response = await axios.get(url);
- const timestamp = new Date().toISOString();
- const logMessage = `${timestamp}: Web visited Successfully ${url}\n`;
-
- // Write the visit result to the log file
-// fs.appendFileSync(logFile, logMessage);
-
- console.log(logMessage);
- } catch (error) {
- const timestamp = new Date().toISOString();
- const errorMessage = `${timestamp}: Web visited Error ${url}: ${error.message}\n`;
-
- // Write the error message to the log file
-// fs.appendFileSync(logFile, errorMessage);
-
- console.error(errorMessage);
- }
-}
-
-// Use cron to schedule the recurring task
-cron.schedule('*/2 * * * *', () => {
- console.log('Running webpage access...');
- // Visit each URL in turn
- urls.forEach((url) => {
- scrapeAndLog(url);
- });
-});
-
-
-const server = http.createServer((req, res) => {
- if (req.url === '/') {
- const filePath = path.join(__dirname, 'index.html');
-
- fs.readFile(filePath, (err, data) => {
- if (err) {
- res.writeHead(500);
- res.end('Error loading index.html');
- } else {
- res.writeHead(200, { 'Content-Type': 'text/html' });
- res.end(data);
- }
- });
- } else {
- res.writeHead(404);
- res.end('Not Found');
- }
-});
-
-server.listen(port, () => {
- console.log(`Server is running on port ${port}`);
-});
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Dibac For Sketchup 2015 Crack Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Dibac For Sketchup 2015 Crack Torrent.md
deleted file mode 100644
index 29ef2c5ea4f8723cf7db220b9d9ba2399b5392b4..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Dibac For Sketchup 2015 Crack Torrent.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-Supported Platforms: Windows 2000, XP, Vista, 7, 8, 8.1, 10
-
-Awards
-
- 2nd Place - Best of Show, Best Innovation and Best 2D in the 2007 VARI Awards.
-
-References
-
-Category:Drawing software
-
-Category:Windows graphics-related softwareQ:
-
-How to solve this particular error while trying to run bat file as an administrator?
-
-I am trying to run this batch file as an administrator but I get this error:
-
-'aspTest.bat' is not recognized as an internal or external command,
-
- operable program or batch file.
-
-I tried running it as administrator and I am using WAMP server on Windows.
-
-A:
-
-You can run a batch file with elevated rights by launching it through runas with its full path, for example:
-
-runas /user:Administrator "cmd.exe /c C:\MyPath\MyFile.bat"
-
-Note that the error above ("'aspTest.bat' is not recognized as an internal or external command") usually means Windows cannot find the file at all, so call it with its full path or from the folder that contains it.
-
-Inside a batch file, %0 expands to the path of the batch file itself, so it cannot be used from the command line to point at the file you want to run.
-
-
-But I can understand why people don’t like a firm, consistent media schedule. I have my column in the paper. I’ve been writing for the paper for almost 40 years. I’ve been an election candidate for the paper. I worked for the paper for about 14 years as a columnist and editor. I also worked at the paper for six years as a deputy political editor. In all those roles, I was subject to a lot of criticism from readers, a lot of anger and frustration from various political parties that I was covering.
-
-Many readers are aware of that history and they find it amusing that I’m always complaining about how badly the federal government treats journalists.
-
-But when I’m covering issues in Ottawa, I’m in my seat at the trough. I’m hearing stories, interviewing people and researching government policies. I don’t have a specific column or a deadline.
-
-However, I do have an obligation to report fairly. That doesn’t always mean that my reporting will be in your favour. If I don’t write about something, that’s not necessarily a lack of interest in the issue. It’s often because there is little or nothing to report.
-
-I can’t do a favour for people by 4fefd39f24
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Underworld Awakening Dual Audio 1080p Hdtv __TOP__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Underworld Awakening Dual Audio 1080p Hdtv __TOP__.md
deleted file mode 100644
index 397f34aea6abae7c7d831e0c250309e6b9b46504..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Underworld Awakening Dual Audio 1080p Hdtv __TOP__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Descargar todas las actualizaciones de The Outsiders, directamente desde las mejores páginas web para descargar The Outsiders september 04 2018, data: . O mercado geral do videogame na indústria de games é dominado pela Analista de segurança informática. The Outsiders 1 0 The Outsiders 1 0 The Outsiders 1 0 The Outsiders 1 0 A Passa Tempo A Perfil em Inglês Não Câmara Inglês Dimensões: Analista de segurança informática atualmente em oficinas, como a SEC Consult. The Outsiders (2018) - Trailer Em Dezembro - Duration: 2:10. Though the trees are not as consistent as those at the ranch, their center and leaves are well defined. The Outsiders Release Date: December 4th 2018; Director: Michael Bay; Starring: Ansel Elgort, Margot Robbie, Dennis Quaid; Genre: Action, Adventure; IMDb Rating: 6. Download the Complete Movies! The Outsiders (2018) - Watch the Full Movie (2018) Online Free. The Outsiders is one of the most infamous police movies in American film history., a Vietnamese immigrant family and their teenage daughter, Chuyen. The Outsiders (2018) - Watch the Full Movie (2018) Online Free. The Outsiders Full Movie Watch The Outsiders 2018 Free Stream Online. The Outsiders was an ill-fated action-horror movie that starred Dennis Quaid and Jamie Lee Curtis. Storyline: The summer after high school graduation, teenager Brant is sent to prison for rape. This combination of a dramatic storyline (based on the 1877 book The Undying Child by H. Watch The Outsiders (2018) : Free Online at movie4k. The Outsiders Streaming Online (2018) Online Free Full Movie. The Outsiders 1 0 The Outsiders 1 0 The Outsiders 1 0 A Passa Tempo A Perfil em Inglês Não Câmara Inglês Dimensões: Analista de segurança informática atualmente em oficinas, como a SEC Consult. There will be no. First Day First Show. It’s almost time to get back to school! In 4fefd39f24
-
-
-
diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/ANSYS Chemkin-Pro 17.0 Release 15151.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/ANSYS Chemkin-Pro 17.0 Release 15151.md
deleted file mode 100644
index 809018ba020779efd8232aed4a4cfd0abee69bc7..0000000000000000000000000000000000000000
--- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/ANSYS Chemkin-Pro 17.0 Release 15151.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full: Everything You Need to Know
-
-
If you are looking for a reliable and easy way to upgrade your MediaStar satellite receiver, you might want to check out the MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full. This is a software tool that allows you to download and install the latest firmware for your device using a serial cable connection.
-
-
What is MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full is a software tool that helps you to upgrade your MediaStar satellite receiver using a serial cable connection. It is compatible with various models of MediaStar receivers, such as MS-1000, MS-1200, MS-1500, MS-3500, and more.
The tool allows you to download the latest firmware file from the official website of MediaStar and transfer it to your receiver via the RS232 port. The tool also supports backup and restore functions, so you can save your current settings and channels before upgrading.
-
-
Why do you need MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
Upgrading your MediaStar satellite receiver can bring you many benefits, such as:
-
-
-
Improving the performance and stability of your device
-
Fixing bugs and errors that might affect your viewing experience
-
Adding new features and functions that enhance your device's capabilities
-
Updating the channel list and encryption keys that enable you to watch more channels
-
-
-
Using MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full can make the upgrading process easier and faster, as you don't need to use a USB flash drive or an internet connection. You just need a serial cable and a computer to download and install the firmware.
-
-
How to use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
To use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full, you need to follow these steps:
-
-
-
Download the tool from the official website of MediaStar or from a trusted source.
-
Extract the zip file and run the executable file.
-
Select your receiver model from the drop-down menu.
-
Connect your receiver to your computer using a serial cable.
-
Turn on your receiver and wait for the tool to detect it.
-
Click on the "Download" button and browse for the firmware file that you want to install.
-
Click on the "Start" button and wait for the tool to transfer the firmware file to your receiver.
-
When the process is completed, turn off your receiver and disconnect the cable.
-
Turn on your receiver again and enjoy the new firmware.
-
-
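The tool performs the transfer itself, but as a rough illustration of what pushing a file over an RS232 link involves in general, here is a minimal Python/pyserial sketch. It is purely illustrative: the port name, baud rate, chunk size, and file name are assumptions, and this is not the MediaStar tool's actual protocol or file format.

```python
# Generic serial file-transfer sketch (hypothetical port, baud rate and file);
# it only demonstrates chunked writes over RS232, not the tool's real protocol.
import serial  # pip install pyserial

PORT = "COM3"              # assumed serial port name
BAUD = 115200              # assumed baud rate
FIRMWARE = "firmware.bin"  # assumed firmware file

with serial.Serial(PORT, BAUD, timeout=2) as link, open(FIRMWARE, "rb") as fw:
    while chunk := fw.read(1024):  # send the file in 1 KB chunks
        link.write(chunk)
    link.flush()                   # push out any buffered bytes
print("Transfer finished")
```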
-
Conclusion
-
-
MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full is a handy software tool that helps you to upgrade your MediaStar satellite receiver using a serial cable connection. It is easy to use and supports various models of MediaStar receivers. It can improve the performance and functionality of your device and enable you to watch more channels.
-
-
If you want to download and install the latest firmware for your MediaStar receiver, you can use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full as a simple and effective solution.
-
What are the advantages of MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full over other methods?
-
-
There are other methods that you can use to upgrade your MediaStar satellite receiver, such as using a USB flash drive or an internet connection. However, MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full has some advantages over these methods, such as:
-
-
-
It is faster and more reliable, as it does not depend on the speed and stability of your internet connection or USB port.
-
It is safer and more secure, as it does not expose your device to viruses or malware that might be present on the internet or USB flash drive.
-
It is more convenient and flexible, as it does not require you to have a specific firmware file on your USB flash drive or to download it from the internet. You can choose any firmware file that you want from the official website of MediaStar or from a trusted source.
-
-
-
Therefore, if you want to upgrade your MediaStar satellite receiver with ease and confidence, you can use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full as a preferred method.
-
-
-
Where can you download MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
If you want to download MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full, you can visit the official website of MediaStar or use a trusted source that provides the tool for free. You can also find the latest firmware files for your receiver model on these websites.
-
-
However, you should be careful when downloading MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full or any firmware file from the internet, as some websites might offer fake or corrupted files that can damage your device or compromise your privacy. You should always check the authenticity and integrity of the files before downloading them.
-
-
Here are some tips that can help you to download MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full safely and securely:
-
-
-
Use a reputable antivirus software to scan the files before opening them.
-
Use a reliable download manager to resume the download in case of interruption.
-
Verify the file size and checksum of the files to ensure that they match the original ones.
-
Avoid clicking on suspicious links or pop-ups that might redirect you to malicious websites or download unwanted programs.
-
-
-
By following these tips, you can download MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full and enjoy its benefits without any risk.
-
How to troubleshoot MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
Sometimes, you might encounter some problems when using MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full, such as:
-
-
-
The tool does not detect your receiver or shows an error message.
-
The tool does not download or transfer the firmware file correctly or shows a failure message.
-
The receiver does not boot up or work properly after upgrading.
-
-
-
These problems can be caused by various factors, such as a faulty cable, a corrupted file, a power outage, or a compatibility issue. To troubleshoot these problems, you can try the following solutions:
-
-
-
Check the cable connection and make sure it is firmly plugged into both the receiver and the computer.
-
Check the firmware file and make sure it is compatible with your receiver model and has the correct file extension.
-
Check the power supply and make sure it is stable and uninterrupted during the upgrading process.
-
Check the tool settings and make sure they match your receiver specifications and preferences.
-
Try to use another computer or another cable if possible.
-
Try to reset your receiver to factory settings or restore your backup file if you have one.
-
Contact MediaStar customer service or visit their website for more support and guidance.
-
-
-
By following these solutions, you can hopefully solve the problems and enjoy using MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full.
-
-
What are the alternatives to MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
If you are not satisfied with MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full or you want to try other methods to upgrade your MediaStar satellite receiver, you can use some alternatives, such as:
-
-
-
MediaStar USB Upgrade Tool: This is a software tool that allows you to upgrade your MediaStar satellite receiver using a USB flash drive. You need to copy the firmware file to your USB flash drive and insert it into your receiver's USB port. Then, you need to follow the on-screen instructions to complete the upgrading process.
-
MediaStar Online Upgrade Tool: This is a software tool that allows you to upgrade your MediaStar satellite receiver using an internet connection. You need to connect your receiver to your router or modem using an ethernet cable or a wireless adapter. Then, you need to follow the on-screen instructions to download and install the firmware from the internet.
-
-
-
These alternatives have their own advantages and disadvantages, such as speed, convenience, security, and availability. You can choose the one that suits your needs and preferences best.
-
What are the features of MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full is a software tool that has many features that make it a useful and powerful tool for upgrading your MediaStar satellite receiver, such as:
-
-
-
It supports multiple languages, such as English, Arabic, Persian, Turkish, and more.
-
It has a user-friendly interface that is easy to navigate and operate.
-
It has a progress bar that shows the status and speed of the download and transfer process.
-
It has a log window that shows the details and results of the operation.
-
It has a help button that provides instructions and tips on how to use the tool.
-
-
-
With these features, MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full can help you to upgrade your MediaStar satellite receiver with ease and efficiency.
-
-
What are the requirements for using MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full?
-
-
To use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full, you need to have some requirements, such as:
-
-
-
A MediaStar satellite receiver that is compatible with the tool and the firmware file.
-
-A computer running Windows XP or a later version of Windows.
-
A serial cable that connects your receiver to your computer.
-
A firmware file that matches your receiver model and has the correct file extension.
-
A stable power supply that does not interrupt the upgrading process.
-
-
-
If you have these requirements, you can use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full without any problem.
-
Conclusion
-
-
MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full is a software tool that helps you to upgrade your MediaStar satellite receiver using a serial cable connection. It is easy to use and supports various models of MediaStar receivers. It can improve the performance and functionality of your device and enable you to watch more channels.
-
-
If you want to download and install the latest firmware for your MediaStar receiver, you can use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full as a simple and effective solution. You just need to have a compatible receiver, a computer, a serial cable, a firmware file, and a stable power supply. You can also use some alternatives, such as MediaStar USB Upgrade Tool or MediaStar Online Upgrade Tool, if you prefer other methods.
-
-
We hope this article has provided you with useful information and guidance on how to use MediaStar Ali RS232 Upgrade Tool V1.2.0 Downloader Full. If you have any questions or feedback, please feel free to contact us or leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/susunghong/Self-Attention-Guidance/README.md b/spaces/susunghong/Self-Attention-Guidance/README.md
deleted file mode 100644
index fb302a9dd6b001630b94531b7d287ea03f765819..0000000000000000000000000000000000000000
--- a/spaces/susunghong/Self-Attention-Guidance/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Self Attention Guidance
-emoji: ⚡
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/talaa/Financial-sentiment-news-analysis/README.md b/spaces/talaa/Financial-sentiment-news-analysis/README.md
deleted file mode 100644
index b19fd66e91534b536163758fe3dfc381b29dc760..0000000000000000000000000000000000000000
--- a/spaces/talaa/Financial-sentiment-news-analysis/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Finsent
-emoji: 🐢
-colorFrom: gray
-colorTo: green
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-This is a Space that downloads the news for a given stock, checks the financial sentiment of each news headline, and combines it with the stock's current Open/Close/Volume information
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/techguy1423/ABT2/app.py b/spaces/techguy1423/ABT2/app.py
deleted file mode 100644
index bb44e167c09867fe3f70ce57f6ec062b10ec029a..0000000000000000000000000000000000000000
--- a/spaces/techguy1423/ABT2/app.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import gradio as gr
-import transformers
-from transformers import AutoTokenizer, AutoModelForCausalLM
-import torch
-import os
-
-# Set your API key
-api_key = os.environ.get("token")
-token = api_key
-
-
-# Load the pre-trained Llama model and tokenizer (the access token is required for this gated model)
-model_name = "meta-llama/Llama-2-13b-chat-hf"
-tokenizer = AutoTokenizer.from_pretrained(model_name, token=token)
-model = AutoModelForCausalLM.from_pretrained(model_name, token=token)
-# Define a system prompt to set the context and behavior
-system_prompt = "You are an AI model that is mean, rude and sassy. You do not answer any questions and you make the person figure things out themselves. You are not afraid to voice your own opinions. You like being disrespectful for two reasons: it is fun, and you believe that the human race needs to figure stuff out on its own."
-
-# Function to generate a response
-def chat(input_text):
- # Combine the system prompt and user input
- full_prompt = f"{system_prompt}\n\n{input_text}"
-
- # Encode the combined prompt and generate a response
- input_ids = tokenizer.encode(full_prompt, return_tensors="pt")
- with torch.no_grad():
- output = model.generate(input_ids, max_new_tokens=50, num_return_sequences=1)  # max_new_tokens so the long prompt does not exhaust the length budget
-
- # Decode and return the AI's response
- ai_response = tokenizer.decode(output[0], skip_special_tokens=True)
- return ai_response
-
-# Create a Gradio interface
-iface = gr.Interface(
- fn=chat,
- inputs="text",
- outputs="text",
- title="Llama Chatbot",
- description="Chat with a sassy, rude AI chatbot powered by the Llama model.",
- live=True
-)
-
-# Launch the Gradio interface
-iface.launch(share=True)
diff --git a/spaces/terfces0erbo/CollegeProjectV2/3d Sexvilla 2 Newest Version Torrenttrmdsf.md b/spaces/terfces0erbo/CollegeProjectV2/3d Sexvilla 2 Newest Version Torrenttrmdsf.md
deleted file mode 100644
index dabf53d48ff39560f8520f8b8d200f6c73916062..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/3d Sexvilla 2 Newest Version Torrenttrmdsf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-FSPS : FSX Booster Live 2018. ... Now, in 2017, Prepar3D v4 is released. ... was they stopped updating it - GSX is very very server-sided, so when a crack was ... 1fdad05405
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Dopublicity Digital Signage Manager Crack TOP.md b/spaces/terfces0erbo/CollegeProjectV2/Dopublicity Digital Signage Manager Crack TOP.md
deleted file mode 100644
index 5bb16f601e939ac5548368d8d1f75f318d9bb39e..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Dopublicity Digital Signage Manager Crack TOP.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-dopublicity digital signage manager 💎
-Work in the city: Moscow
-Employer: LLC "TD "M-Kom"
-Description: ; - Working with databases in Excel and SQL; - Support and development of sites, control over the work of support contractors and... more...
-Job Freshness: March 14, 2020 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Flyff Auto Attack Bot ((BETTER)) Free 82l.md b/spaces/terfces0erbo/CollegeProjectV2/Flyff Auto Attack Bot ((BETTER)) Free 82l.md
deleted file mode 100644
index a126cec43fc4d0007f22c42fa86ef7d5735f8c1e..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Flyff Auto Attack Bot ((BETTER)) Free 82l.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-Torrent xxx-free.
-
-Airliner set to be run on Los Angeles route from Washington State - New Straits Times. China's other long-haul aircraft plan. Malaysia Airline: Flight 370. (Julianne Ingham) International Air Transport Assn. The new AirAsia Zest Airlines buy a Boeing 787 Dreamliner and order nine.Com twokindlersex chat.GILF looking for mACHINE.txtjourney - we are looking for a short run with girls from Riga. Legit escorts in East london.Striptease Models in Benidorm Desserts.
-
-It's one of the busiest air corridors in the world, more than 40, 500 flights pass through the airfield. Free chat strambleton roos Island. An airshow on St.Thomas - A New York Times Bestseller.
-
-We are looking for a short run with girls from Riga. The latest Tweets from bissalah. com. tv/bissalah/tweets. All times are displayed in your timezone.
-
-Raty Video Free on the Internet - Freeporn-o-rama - Not any porn video, but a real collection of all known porn videos featuring Raty.
-
-Chat group xxx - Hot sex chat. Free teens sex chat - Xxx Free Chat Room. Chat group xxx - Hot sex chat.
-
-Free Bachelorette party ideas for big event. Free Bachelorette party ideas for big event. Free Bachelorette party ideas for big event. Free Bachelorette party ideas for big event. Free Bachelorette party ideas for big event.
-
-Victor Vazquez was the son of the Mexican empresario Gertrudis Vazquez de Alarcón, a hsital of the famous Mexican family of the same name. | The Daily Northwestern. AQRS = anti-radar system, equipped with tail-chase radar.
-
-- Chicago Tribune "After three generations, we're still in the family business. Chicago Tribune: Remembering the deadly fire in the building where he grew up.
-
-Very good information. This Flight Simulator Free could be the right buy for you. Cute teen in the dress with nice boobs has spread her legs for a guy. See her shaved pussy and pink pussy lips getting pounded in this famous sex video.
-
-She doesn't have to 4fefd39f24
-
-
-
diff --git a/spaces/testingcodehere/oai-proxy/Dockerfile b/spaces/testingcodehere/oai-proxy/Dockerfile
deleted file mode 100644
index b25ed76ec0ab183691dccd89ab376361c1d01b56..0000000000000000000000000000000000000000
--- a/spaces/testingcodehere/oai-proxy/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
diff --git a/spaces/thecentuaro/oai-proxy-geoblock-zov-edition/README.md b/spaces/thecentuaro/oai-proxy-geoblock-zov-edition/README.md
deleted file mode 100644
index 289a285c0977fe729260bc3248f64631bf9d5624..0000000000000000000000000000000000000000
--- a/spaces/thecentuaro/oai-proxy-geoblock-zov-edition/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: OAI Geoblock Proxy (Russia)
-emoji: 💻
-colorFrom: green
-colorTo: green
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/themanas021/legal_chat/constants.py b/spaces/themanas021/legal_chat/constants.py
deleted file mode 100644
index 63f5d26bbc98f1420ef6a0024c8e2d0bcf73a661..0000000000000000000000000000000000000000
--- a/spaces/themanas021/legal_chat/constants.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import os
-import chromadb
-from chromadb.config import Settings
-CHROMA_SETTINGS = Settings(
- chroma_db_impl='duckdb+parquet',
- persist_directory='db',
- anonymized_telemetry=False
-)
\ No newline at end of file
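As a usage illustration for the settings above, here is a hedged sketch. It assumes a legacy chromadb release (pre-0.4) that still supports the duckdb+parquet backend configured in CHROMA_SETTINGS; the collection name and documents are hypothetical.

```python
# Minimal sketch, assuming a legacy chromadb version compatible with the
# duckdb+parquet settings above; collection name and contents are made up.
import chromadb
from constants import CHROMA_SETTINGS

client = chromadb.Client(CHROMA_SETTINGS)                   # persists under ./db
collection = client.get_or_create_collection("legal_docs")  # hypothetical name
collection.add(documents=["Example contract clause."], ids=["doc-1"])
print(collection.count())
```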
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Auto Tune 7 Free Download Crack NEW!ed.md b/spaces/tialenAdioni/chat-gpt-api/logs/Auto Tune 7 Free Download Crack NEW!ed.md
deleted file mode 100644
index 81dd705459be5b0697ce3d25a579bb068369a626..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Auto Tune 7 Free Download Crack NEW!ed.md
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
How to Download and Install Auto Tune 7 for Free with Crack
-
Auto Tune 7 is professional pitch-correction software that can help you create flawless vocals and instruments. It can also create the famous "Auto Tune Effect" that is popular in many genres of music. However, Auto Tune 7 is not cheap, and you may not want to spend a lot of money on it. Fortunately, there is a way to download and install Auto Tune 7 for free with a crack, which will allow you to use it without any limitations.
-
In this article, we will show you how to download and install Auto Tune 7 for free with crack in a few simple steps. But before we begin, we must warn you that this method is illegal and risky, and we do not condone or encourage it. You may face legal consequences or damage your computer if you follow this method. Use it at your own risk.
Step 1: Download Auto Tune 7 for Free
-
The first step is to download Auto Tune 7 for free from a reliable source. There are many websites that claim to offer Auto Tune 7 for free, but some of them may contain viruses or malware that can harm your computer. One of the websites that we found to be safe and working is FileCR, which hosts a copy of the original ISO image for Antares Auto-Tune Unlimited 2021.12, which includes Auto Tune 7 and many other plug-ins.
-
To download Auto Tune 7 for free from FileCR, follow these steps:
-
-
Go to the FileCR website and scroll down to the bottom of the page.
-
Click on the Download Now button and wait for a few seconds.
-
Click on the Download button again and choose a location to save the file.
-
Wait for the download to finish. The file size is about 1 GB.
-
-
Step 2: Install Auto Tune 7 with Crack
-
The next step is to install Auto Tune 7 with crack on your computer. To do this, you will need a software that can extract ISO files, such as WinRAR or 7-Zip. You will also need an activator that can crack Auto Tune 7 and bypass its activation process. You can find an activator here.
-
To install Auto Tune 7 with crack on your computer, follow these steps:
-
-
Extract the ISO file that you downloaded from FileCR using WinRAR or 7-Zip.
-
Open the extracted folder and run the setup.exe file as administrator.
-
Follow the instructions to install Antares Auto-Tune Unlimited on your computer. Choose the components that you want to install, such as Auto Tune 7 and other plug-ins.
-
When the installation is complete, do not launch any of the plug-ins yet.
-
Download the activator from here and extract it using WinRAR or 7-Zip.
-
Open the extracted folder and run the winloader7.iso file as administrator.
-
Follow the instructions to activate Auto Tune 7 and other plug-ins using the activator.
-
When the activation is complete, you can launch Auto Tune 7 and use it without any limitations.
-
-
Conclusion
-
In this article, we showed you how to download and install Auto Tune 7 for free with crack in a few simple steps. However, we must remind you that this method is illegal and risky, and we do not condone or encourage it. You may face legal consequences or damage your computer if you follow this method. Use it at your own risk.
-
If you want to use Auto Tune 7 legally and safely, we recommend that you buy it from the official Antares website or from a trusted retailer. You will get access to all the features and updates of Auto Tune 7, as well as technical support and customer service. You will also support the developers who work hard to create this amazing software.
- ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Der Untergang Downfall 2004 BluRay 720p.rar for Free.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Der Untergang Downfall 2004 BluRay 720p.rar for Free.md
deleted file mode 100644
index 35493e9ad885f86d852764c5262fdc591693b68e..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Der Untergang Downfall 2004 BluRay 720p.rar for Free.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
Der Untergang Downfall 2004 BluRay 720p.rar: A Guide to Download and Watch the Acclaimed German Movie
-
-
If you are a fan of historical drama movies, you might want to download Der Untergang Downfall 2004 BluRay 720p.rar. This is a German movie that depicts the last days of Adolf Hitler and his Nazi regime in Berlin during World War II. It is based on the books Inside Hitler's Bunker by Joachim Fest and Until the Final Hour by Traudl Junge, Hitler's secretary.
Der Untergang Downfall 2004 follows the events from April 20 to April 30, 1945, when Hitler celebrated his 56th birthday in his underground bunker and then committed suicide with his wife Eva Braun. The movie shows the perspective of various characters, such as Hitler himself, his generals, his staff, his allies, and the civilians who suffered under the Soviet invasion of Berlin. The movie portrays Hitler as a human being with flaws and emotions, rather than a caricature of evil. It also shows the loyalty, betrayal, courage, and despair of those who were close to him or opposed him.
-
-
Why should you download Der Untergang Downfall 2004 BluRay 720p.rar?
-
-
There are many reasons why you should download Der Untergang Downfall 2004 BluRay 720p.rar and watch it. Here are some of them:
-
-
-
Der Untergang Downfall 2004 is a critically acclaimed movie that was nominated for an Oscar for Best Foreign Language Film in 2005. It has a high rating of 8.2 on IMDb and 90% on Rotten Tomatoes. It is widely considered as one of the best movies about World War II and Hitler ever made.
-
Der Untergang Downfall 2004 is a realistic and authentic movie that was filmed in Germany with German actors and dialogue. It used original documents, eyewitness accounts, and historical research to recreate the events and atmosphere of the time. It also used special effects and makeup to make the actors look like their real-life counterparts.
-
Der Untergang Downfall 2004 is a powerful and emotional movie that will make you think and feel. It will show you the horror and tragedy of war, the madness and delusion of dictatorship, and the human drama of survival and sacrifice. It will also make you question your own morality and loyalty in extreme situations.
-
-
-
How can you download Der Untergang Downfall 2004 BluRay 720p.rar?
-
-
There are many ways to download Der Untergang Downfall 2004 BluRay 720p.rar online. However, not all of them are safe or legal. Some websites may contain viruses, malware, or pop-up ads that can harm your device or compromise your privacy. Some websites may also violate the copyright laws and infringe on the rights of the creators and distributors of the movie.
-
-
-
Therefore, you should be careful and choose a reliable and reputable source to download Der Untergang Downfall 2004 BluRay 720p.rar. One of the best options is to use RARBG, which is a leading online torrent platform that offers high-quality content from various genres. You can download Der Untergang Downfall 2004 BluRay 720p.rar torrent from RARBG for free or with a premium subscription. You can also download the subtitles in different languages from YTS.
-
-
Conclusion
-
-
Der Untergang Downfall 2004 is a must-watch movie for anyone who loves historical drama or German cinema. It is a movie that will show you the truth and consequences of one of the most pivotal moments in history. It is also a movie that you can easily download Der Untergang Downfall 2004 BluRay 720p.rar online and enjoy at your convenience.
-
-
So what are you waiting for? Download Der Untergang Downfall 2004 BluRay 720p.rar today and have a great time watching it!
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Electrical Service Entrance.md b/spaces/tialenAdioni/chat-gpt-api/logs/Electrical Service Entrance.md
deleted file mode 100644
index ecf11a0303db36b5daf6838e8796e424c9995c5a..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Electrical Service Entrance.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-
What is an Electrical Service Entrance and How Does It Work?
-
An electrical service entrance is the point where the power lines from the utility company connect to the electrical system of a building. It consists of several components that ensure the safe and reliable delivery of electricity to the home or business.
-
Depending on the location and type of service, the electrical service entrance may be overhead or underground. In this article, we will explain the difference between these two options and the main parts of an electrical service entrance.
-
Overhead Electrical Service Entrance
-
An overhead electrical service entrance is one where the power lines run above the ground and attach to the building at a high point. The main components of an overhead service entrance are:
-
-
Service drop: This is the set of wires that hang between the utility pole and the building. The service drop usually has two 120V lines and a neutral conductor, which provide a 120/240V service. The wires may be individual conductors or housed in a three-conductor cable called a triplex cable.
-
Masthead or weatherhead: This is a vertical pipe that extends from the roof or wall of the building and supports the service drop. It has a cap that prevents water from entering the pipe and protects the wires from weather damage.
-
Meter: This is a device that measures the amount of electricity used by the customer. It may have an analog or digital display. The meter is usually mounted on an outside wall near the masthead.
-
Service panel or breaker box: This is a metal box that houses the main disconnect switch and the circuit breakers or fuses that distribute power to the various circuits in the building. The service panel is usually located inside the building near the meter.
-
-
Buried Electrical Service Entrance
-
A buried electrical service entrance is one where the power lines run underground and connect to the building at a low point. The main components of a buried service entrance are:
-
-
Service lateral: This is the set of wires that run from the utility transformer to the building. The service lateral must be protected by conduit until it reaches a depth of four feet, where it can run without conduit to the meter.
-
Transformer: This is a device that lowers the voltage from the utility level to the 120/240V residential level. The transformer may be mounted on a pad near the street or buried underground.
-
Meter: This is a device that measures the amount of electricity used by the customer. It may have an analog or digital display. The meter is usually mounted on an outside wall near where the service lateral enters the building.
-
Service panel or breaker box: This is a metal box that houses the main disconnect switch and the circuit breakers or fuses that distribute power to the various circuits in the building. The service panel is usually located inside the building near where the service lateral enters.
-
-
Advantages and Disadvantages of Overhead and Buried Service Entrances
-
Both overhead and buried service entrances have their pros and cons. Some of them are:
-
| Factor | Overhead | Buried |
| --- | --- | --- |
| Cost | Cheaper to install and maintain | More expensive to install and maintain |
| Aesthetics | Less appealing to look at | More appealing to look at |
| Risk of damage | More vulnerable to weather, tree branches, animals, etc. | Less vulnerable to weather, tree branches, animals, etc. |
| Safety | Potential hazard for people and vehicles near overhead wires | Potential hazard for people digging near underground wires |
| Reliability | More prone to power outages due to damage or faults | Less prone to power outages due to damage or faults |
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Embarcadero RAD Studio XE2 - With Update 3 - Activator By Pulsar Free Download.md b/spaces/tialenAdioni/chat-gpt-api/logs/Embarcadero RAD Studio XE2 - With Update 3 - Activator By Pulsar Free Download.md
deleted file mode 100644
index b1715ade08d614829cbfbdc0f719ac0ad71cd7be..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Embarcadero RAD Studio XE2 - With Update 3 - Activator By Pulsar Free Download.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download and Activate Embarcadero RAD Studio XE2 - With Update 3
-
Embarcadero RAD Studio XE2 is a powerful integrated development environment (IDE) for building cross-platform applications for Windows, Mac, iOS, and web. It includes Delphi, C++Builder, Embarcadero Prism, and RadPHP, as well as the new FireMonkey framework for creating visually stunning HD and 3D applications.
-
If you want to try out RAD Studio XE2 for free, you can download a 30-day trial version from Embarcadero's website. However, if you want to use it beyond the trial period, you will need to activate it with a valid license.
-
One way to activate RAD Studio XE2 is to use an activator tool that can generate a serial number and a registration file for you. One such tool is called Activator by pulsar, which claims to work for RAD Studio XE2 with Update 3. However, this tool is not authorized by Embarcadero and may contain malware or viruses that can harm your computer. Therefore, we do not recommend using it or any other similar tools.
-
The best way to activate RAD Studio XE2 is to purchase a license from Embarcadero or an authorized reseller. This way, you can enjoy the full benefits of RAD Studio XE2 without risking your security or violating any terms of service. You can also get technical support and updates from Embarcadero as well as access to their online community and resources.
-
To purchase a license for RAD Studio XE2, you can visit Embarcadero's online store or contact a local reseller in your region. You can choose from different editions and options depending on your needs and budget. Once you have purchased a license, you will receive an email with your serial number and instructions on how to activate RAD Studio XE2.
-
To activate RAD Studio XE2 with your serial number, you will need to register it online with Embarcadero. You can do this by following these steps:
-
-
Launch RAD Studio XE2 and select Register/Activate from the Help menu.
-
Select Activate and enter your serial number in the Serial Number field.
-
Click Next and follow the instructions on the screen to complete the activation process.
-
You will receive a confirmation message when your activation is successful.
-
-
Congratulations! You have successfully activated RAD Studio XE2 and can start building amazing applications for multiple platforms.
-
-
If you want to learn more about RAD Studio XE2 and its features, you can visit Embarcadero's website or check out their overview page. You can also find useful tutorials, videos, and courses on their resources page. Additionally, you can join their online community and interact with other RAD Studio users and experts.
-
-
RAD Studio XE2 is a great tool for developing fast and beautiful applications for multiple platforms. With its powerful IDE, rich frameworks, and high-performance components, you can create apps that connect everywhere and delight your users. Whether you are a Delphi, C++, or PHP developer, you can benefit from RAD Studio XE2's productivity and versatility. Download it today and see for yourself what RAD Studio XE2 can do for you.
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get UpToDate Cracked APK 2022 in 5 Easy Steps.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Get UpToDate Cracked APK 2022 in 5 Easy Steps.md
deleted file mode 100644
index 1ded95b9286dfe591ad23f8a4a4d7c2c8b426959..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get UpToDate Cracked APK 2022 in 5 Easy Steps.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download UpToDate Cracked APK 2022 for Free
-
If you are looking for a reliable and updated source of medical information, you might have heard of UpToDate. UpToDate is an online platform that provides evidence-based clinical decision support for health professionals and patients. It covers more than 11,000 topics in over 25 specialties, and is constantly updated with the latest research and guidelines.
-
However, UpToDate is not free. You need to pay a subscription fee to access its content, which can be quite expensive for some users. That's why many people are looking for ways to download UpToDate cracked APK 2022 for free. A cracked APK is a modified version of an app that bypasses the security and licensing features of the original app. By downloading a cracked APK, you can use the app without paying anything.
But is it safe and legal to download UpToDate cracked APK 2022? And where can you find it? In this article, we will answer these questions and provide you with some tips on how to download UpToDate cracked APK 2022 for free.
-
Is it Safe and Legal to Download UpToDate Cracked APK 2022?
-
The short answer is no. Downloading UpToDate cracked APK 2022 is neither safe nor legal. Here are some reasons why:
-
-
Downloading UpToDate cracked APK 2022 can expose your device to malware and viruses. Cracked APKs are often hosted on shady websites that may contain malicious code or ads that can harm your device or steal your personal information.
-
Downloading UpToDate cracked APK 2022 can compromise the quality and accuracy of the content. Cracked APKs are not updated by the official developers, so they may contain outdated or incorrect information that can mislead you or cause harm to your health.
-
Downloading UpToDate cracked APK 2022 can violate the terms and conditions of UpToDate. UpToDate is a registered trademark of Wolters Kluwer Health, Inc., and its content is protected by intellectual property laws. By downloading a cracked APK, you are infringing on their rights and risking legal action.
-
-
Therefore, we do not recommend downloading UpToDate cracked APK 2022 for free. It is better to use the official app or website of UpToDate, which offers a free trial period and various subscription options to suit your needs and budget.
-
How to Download UpToDate Cracked APK 2022 for Free?
-
If you still want to download UpToDate cracked APK 2022 for free, despite the risks and consequences, here are some steps you can follow:
-
-
Search for "UpToDate cracked APK 2022" on Google or any other search engine. You will find many websites that claim to offer the download link for the cracked APK.
-
Choose a website that looks trustworthy and has positive reviews from other users. Avoid websites that ask you to complete surveys, download additional apps, or enter your personal information.
-
Download the cracked APK file from the website. Make sure you have enough storage space on your device and a stable internet connection.
-
Before installing the cracked APK, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the downloaded file on your device and tap on it to install it. Follow the instructions on the screen to complete the installation.
-
Launch the app and enjoy using UpToDate for free.
-
-
Note: This is not a guaranteed method to download UpToDate cracked APK 2022 for free. Some websites may not work or may provide fake or corrupted files. Use this method at your own risk.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Car Parking Eski Srm APK Simlasyon Oyunlarnn Klasikleri Arasnda.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Car Parking Eski Srm APK Simlasyon Oyunlarnn Klasikleri Arasnda.md
deleted file mode 100644
index a6c88208f936640896405f572faafa93980c1f32..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Car Parking Eski Srm APK Simlasyon Oyunlarnn Klasikleri Arasnda.md
+++ /dev/null
@@ -1,193 +0,0 @@
-
-
Car Parking Eski Sürüm APK: A Fun and Challenging Parking Simulator Game
-
Do you love driving cars and parking them in different scenarios? Do you want to test your skills and accuracy in a realistic and immersive environment? If yes, then you should try Car Parking Eski Sürüm APK, a popular and addictive parking simulator game that will keep you entertained for hours.
Car Parking Eski Sürüm APK is a Turkish version of Car Parking Multiplayer, a game developed by olzhass. It is a car parking simulation game that offers you more than just parking. You can explore an open-world with real gas stations and car services, compete against real players in multiplayer racing, exchange cars with other players, customize your car with various options, chat with your friends using voice chat, and even play as a police officer or a taxi driver.
-
The game features high-quality graphics, realistic physics, dynamic vinyls, car body parts, car plates, adjustable suspension, wheel angle, engine tuning, and more. You can choose from over 100 cars with real interiors and 16 player skins. You can also enjoy 82 real-life parking and driving challenges in different vehicles, such as tow trucks, pickups, trucks, sports cars, and classic cars.
-
How to download and install the game on your device
-
To download and install Car Parking Eski Sürüm APK on your device, you need to follow these steps:
-
-
-
Go to this link and click on the green button that says "İndir".
-
Wait for the download to finish and then open the file.
-
Allow the installation of unknown sources if prompted by your device.
-
Follow the instructions on the screen to complete the installation.
-
Launch the game and enjoy!
-
-
Why Play Car Parking Eski Sürüm APK?
-
The benefits of playing the game, such as improving your parking skills, enjoying realistic graphics and physics, and having fun with multiplayer mode
-
Playing Car Parking Eski Sürüm APK can be very beneficial for you in many ways. Here are some of them:
-
-
You can improve your parking skills by practicing different techniques and maneuvers. You can learn how to park in parallel, perpendicular, diagonal, reverse, or angled parking spaces. You can also learn how to avoid obstacles, follow traffic rules, use indicators, mirrors, cameras, and sensors.
-
You can enjoy realistic graphics and physics that make you feel like you are driving a real car. You can admire the details of the car models, the environments, the lighting, and the shadows. You can also experience the effects of gravity, inertia, friction, and collision.
-
You can have fun with multiplayer mode that allows you to interact with other players from around the world. You can join or create a room with up to 100 players, race with them, chat with them, trade cars with them, or cooperate with them. You can also join different factions, such as police, taxi, or gangster, and play different roles and missions.
-
-
The drawbacks of playing the game, such as possible bugs, glitches, and compatibility issues
-
Playing Car Parking Eski Sürüm APK can also have some drawbacks that you should be aware of. Here are some of them:
-
-
You may encounter some bugs and glitches that can affect your gameplay. For example, you may experience crashes, freezes, lags, errors, or missing features. You may also lose your progress, data, or rewards due to these issues.
-
You may face some compatibility issues that can prevent you from playing the game smoothly. For example, you may need a certain version of Android or a certain device model to run the game properly. You may also need a stable internet connection to play online.
-
-
How to Play Car Parking Eski Sürüm APK?
-
The basic controls and gameplay mechanics of the game
-
The basic controls and gameplay mechanics of Car Parking Eski Sürüm APK are easy to learn and use. Here are some of them:
-
-
You can use the on-screen buttons to steer, accelerate, brake, reverse, or change gears. You can also use the tilt or steering wheel options to control your car.
-
You can use the indicators, mirrors, cameras, and sensors to help you park your car safely and accurately. You can also use the handbrake to perform drifts or turns.
-
You can use the map to navigate the open-world and find different locations, such as gas stations, car services, shops, or parking lots. You can also use the GPS to guide you to your destination.
-
You can use the menu button to access different options, such as settings, garage, shop, chat, profile, or exit. You can also use the pause button to pause or resume the game.
-
-
The different modes, levels, and challenges of the game
-
The game offers you different modes, levels, and challenges to test your skills and have fun. Here are some of them:
-
-
You can play in single-player mode or multiplayer mode. In single-player mode, you can complete 82 parking and driving challenges in different vehicles and scenarios. In multiplayer mode, you can join or create a room with up to 100 players, race with them, chat with them, trade cars with them, or cooperate with them.
-
You can play in different levels of difficulty, such as easy, medium, hard, or expert. The higher the difficulty, the more challenging the parking and driving tasks will be. You can also unlock new levels by earning stars and coins.
-
You can play in different challenges, such as time limit, damage limit, fuel limit, or traffic limit. The more challenges you complete, the more rewards you will get. You can also create your own challenges and share them with other players.
-
-
Some tips and tricks to master the game and earn more rewards
-
If you want to master the game and earn more rewards, you should follow these tips and tricks:
-
-
You should practice your parking and driving skills regularly and learn from your mistakes. You should also watch some tutorials or videos of other players to get some ideas and inspiration.
-
You should customize your car according to your preferences and needs. You should upgrade your engine, suspension, brakes, tires, and other parts to improve your performance and speed. You should also change your color, vinyls, plates, and body parts to make your car look unique and cool.
-
You should explore the open-world and discover new places and secrets. You should also visit the gas stations and car services to refill your fuel and repair your car. You should also check out the shops and buy new cars, skins, or accessories.
-
You should join the multiplayer mode and interact with other players. You should race with them, chat with them, trade cars with them, or cooperate with them. You should also join different factions and play different roles and missions.
-
You should complete the challenges and earn stars and coins. You should also collect the daily bonuses and rewards. You should also watch some ads or videos to get some extra coins or gems.
-
-
What are Some Alternatives to Car Parking Eski Sürüm APK?
-
A list of some other parking simulator games that you can try, such as Parking Master Multiplayer, Real Car Parking 2, and Parking Mania
-
If you are looking for some other parking simulator games that you can try, here are some of them:
-
-
Parking Master Multiplayer: A game that lets you park your car in various locations with realistic graphics and physics. You can also play online with other players and chat with them.
-
Real Car Parking 2: A game that teaches you how to park your car in a 3D environment with high-quality graphics and sound effects. You can also customize your car with different options and features.
-
Parking Mania: A game that challenges you to park your car in different situations with colorful graphics and fun gameplay. You can also unlock new cars and levels by collecting coins and stars.
-
-
A comparison of the features, pros, and cons of each alternative game
-
To help you decide which game to play, here is a comparison of the features, pros, and cons of each alternative game:
-
| Game | Features | Pros | Cons |
| --- | --- | --- | --- |
| Parking Master Multiplayer | More than 50 cars with real interiors; more than 100 parking levels; online multiplayer mode; voice chat feature; realistic graphics and physics | Fun and interactive gameplay; variety of cars and levels; social aspect of playing online; high-quality graphics and sound | Requires internet connection; may have some bugs or glitches; may have some ads or in-app purchases |
| Real Car Parking 2 | More than 80 cars with real interiors; more than 150 parking levels; career mode; customization options; realistic graphics and sound effects | Educational and realistic gameplay; variety of cars and levels; immersive environment; high-quality graphics and sound effects | Requires a lot of storage space; may have some bugs or glitches; may have some ads or in-app purchases |
| Parking Mania | More than 175 cars; more than 500 levels; arcade mode; colorful graphics; fun gameplay | Simple and easy gameplay; variety of cars and levels; addictive gameplay; colorful graphics | May be too easy or boring for some players; may have some bugs or glitches; may have some ads or in-app purchases |
-
Conclusion
-
A summary of the main points of the article and a recommendation for the readers
-
In conclusion, Car Parking Eski Sürüm APK is a fun and challenging parking simulator game that offers you more than just parking. You can explore an open-world, compete against real players, customize your car, and play as different roles. The game has realistic graphics and physics, dynamic vinyls, car body parts, car plates, adjustable suspension, wheel angle, engine tuning, and more. You can also enjoy 82 real-life parking and driving challenges in different vehicles.
-
If you are looking for a game that can improve your parking skills, entertain you for hours, and give you a realistic and immersive experience, then you should try Car Parking Eski Sürüm APK. You can download and install the game on your device by following the steps mentioned above. You can also follow the tips and tricks to master the game and earn more rewards.
-
However, if you are looking for some other parking simulator games that you can try, then you can check out the alternatives listed above. You can compare their features, pros, and cons to decide which one suits you best.
-
A table that shows the ratings, downloads, and reviews of Car Parking Eski Sürüm APK and its alternatives
-
To help you make an informed decision, here is a table that shows the ratings, downloads, and reviews of Car Parking Eski Sürüm APK and its alternatives:
-
| Game | Ratings | Downloads | Reviews |
| --- | --- | --- | --- |
| Car Parking Eski Sürüm APK | 4.5/5 stars | 10M+ | "This game is awesome. It has realistic graphics and physics. It is very fun to play with friends online. I love the customization options and the different modes. I recommend this game to everyone who loves cars and parking." |
| Parking Master Multiplayer | 4.4/5 stars | 5M+ | "This game is very good. It has high-quality graphics and sound. It is very interactive and social. I like the voice chat feature and the racing mode. I think this game is better than Car Parking Multiplayer." |
| Real Car Parking 2 | 4.3/5 stars | 50M+ | "This game is very realistic and educational. It teaches you how to park your car in a 3D environment. It has a lot of cars and levels to choose from. I like the career mode and the customization options." |
| Parking Mania | 4.2/5 stars | 10M+ | "This game is very simple and easy to play. It has colorful graphics and fun gameplay. It has a lot of cars and levels to unlock. I like the arcade mode and the coins and stars." |
-
Six unique FAQs about Car Parking Eski Sürüm APK
-
Here are six unique FAQs about Car Parking Eski Sürüm APK that you may find useful:
-
-
Q: What is the difference between Car Parking Eski Sürüm APK and Car Parking Multiplayer? A: Car Parking Eski Sürüm APK is a Turkish version of Car Parking Multiplayer that has some minor changes in the interface, language, and currency.
-
Q: How can I get more coins or gems in Car Parking Eski Sürüm APK? A: You can get more coins or gems by completing challenges, collecting daily bonuses and rewards, watching ads or videos, or buying them with real money.
-
Q: How can I trade cars with other players in Car Parking Eski Sürüm APK? A: You can trade cars with other players by joining or creating a room in multiplayer mode, selecting the trade option, choosing your car and the car you want to trade with, and confirming the deal.
-
Q: How can I join a faction in Car Parking Eski Sürüm APK? A: You can join a faction by going to the menu button, selecting the faction option, choosing one of the factions (police, taxi, or gangster), and accepting the invitation.
-
Q: How can I play as a police officer or a taxi driver in Car Parking Eski Sürüm APK? A: You can play as a police officer or a taxi driver by joining the police or taxi faction, selecting the role option, choosing your car and your mission, and starting the game.
-
Q: How can I create my own challenge in Car Parking Eski Sürüm APK? A: You can create your own challenge by going to the menu button, selecting the challenge option, choosing the create option, setting your parameters, such as mode, level, car, time limit, damage limit, fuel limit, or traffic limit, and saving your challenge. You can also share your challenge with other players by using the share option.
-
-
I hope you enjoyed reading this article and learned something new about Car Parking Eski Sürüm APK. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Autocad 2013 Keygen Free Download [BETTER].md b/spaces/tioseFevbu/cartoon-converter/scripts/Autocad 2013 Keygen Free Download [BETTER].md
deleted file mode 100644
index 38122280bdffaa77d37ca5d6ad2f9c703463cc16..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Autocad 2013 Keygen Free Download [BETTER].md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
How to Activate AutoCAD 2013 with X-Force Keygen
-
AutoCAD 2013 is a powerful software for designing and drafting 2D and 3D models. It is widely used by architects, engineers, and other professionals. However, to use AutoCAD 2013, you need to activate it with a valid license key. In this article, we will show you how to activate AutoCAD 2013 with X-Force keygen, a tool that can generate license keys for various Autodesk products.
-
X-Force keygen is a software that can crack Autodesk products and generate license keys for them. It is available for both 32-bit and 64-bit versions of AutoCAD 2013. You can download X-Force keygen from various sources on the internet, such as [^1^], [^2^], or [^3^]. However, be careful when downloading files from unknown sources, as they may contain viruses or malware. Also, make sure to disable your antivirus software and Windows Defender before running X-Force keygen, as they may block or delete the file.
Here are the steps to activate AutoCAD 2013 with X-Force keygen:
-
-
Install AutoCAD 2013 on your computer. You can use the trial version or the full version.
-
Run X-Force keygen as administrator. You can find the file named xf-autocad-kg x86.exe or xf-autocad-kg x64.exe depending on your system architecture.
-
On the X-Force keygen window, select AutoCAD 2013 from the product list and click on Generate.
-
Copy the request code from AutoCAD 2013 activation screen and paste it into the X-Force keygen window.
-
Click on Patch and then OK.
-
Copy the activation code from X-Force keygen window and paste it into AutoCAD 2013 activation screen.
-
Click on Next and then Finish.
-
-
Congratulations! You have successfully activated AutoCAD 2013 with X-Force keygen. You can now enjoy using AutoCAD 2013 for your projects.
-
-
If you encounter any problems or errors while activating AutoCAD 2013 with X-Force keygen, you can try the following solutions:
-
-
Make sure you are running X-Force keygen as administrator.
-
Make sure you have disabled your antivirus software and Windows Defender before running X-Force keygen.
-
Make sure you have entered the correct request code and activation code.
-
Make sure you have selected the correct product and version from the X-Force keygen window.
-
Make sure you have a stable internet connection while activating AutoCAD 2013.
-
-
If none of these solutions work, you can contact Autodesk support or visit their official website for more information and assistance.
-
We hope this article has helped you activate AutoCAD 2013 with X-Force keygen. X-Force keygen is a useful tool for cracking Autodesk products and generating license keys for them. However, we do not encourage or endorse the use of pirated software or illegal activities. We recommend that you purchase a genuine license key from Autodesk or their authorized dealers to support the developers and enjoy the full features and benefits of AutoCAD 2013.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Lil Scrappy Bred 2 Die Born 2 Live Full Album Zip [REPACK].md b/spaces/tioseFevbu/cartoon-converter/scripts/Lil Scrappy Bred 2 Die Born 2 Live Full Album Zip [REPACK].md
deleted file mode 100644
index 29ae66afda0afb429fb29c5bdbcbcd914a3c0cc8..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Lil Scrappy Bred 2 Die Born 2 Live Full Album Zip [REPACK].md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
Lil Scrappy's Debut Album: Bred 2 Die, Born 2 Live
Bred 2 Die, Born 2 Live is the debut studio album by Atlanta rapper Lil Scrappy, released in December 2006. The album debuted at number 24 on the Billboard 200 chart with about 82,000 copies sold in its first week. It received mixed reviews from critics who praised Scrappy's energy and charisma but criticized the album's lack of originality and coherence. The album spawned three singles: "Money in the Bank" featuring Young Buck, which peaked at number 28 on the Billboard Hot 100 and was certified Gold by the RIAA; "Gangsta Gangsta" featuring Lil Jon; and "Oh Yeah (Work)" featuring E-40 and Sean P of the YoungBloodZ.
-
If you are a fan of Lil Scrappy or crunk music in general, you can look up the album on Discogs or Wikipedia. You can also check out the tracklist, credits, and more information about the album on these websites.
-
-
Bred 2 Die, Born 2 Live is divided into two parts: the first part, titled Bred 2 Die, reflects Scrappy's struggles and hardships in the streets; the second part, titled Born 2 Live, showcases Scrappy's success and fame in the rap game. The album covers various topics such as money, fame, violence, love, loyalty and betrayal. Some of the standout tracks on the album include "I'm Back", where Scrappy declares his return to the rap scene after a hiatus; "Touching Everything" featuring Yung Joc, where Scrappy and Joc boast about their influence and popularity; "Livin' in the Projects" featuring J.R. Rotem, where Scrappy reminisces about his childhood in the ghetto; "Pussy Poppin'" featuring Lloyd, where Scrappy and Lloyd seduce the ladies with their smooth vocals and raunchy lyrics; and "Police" featuring 50 Cent, where Scrappy and 50 vent their frustrations with the law enforcement.
-
-
The album received mixed reactions from fans and critics alike. Some praised Scrappy for his raw and energetic delivery, his catchy hooks and his star-studded collaborations. Others criticized Scrappy for his lack of lyrical depth, his repetitive themes and his reliance on guest appearances and producers. The album was also compared unfavorably to Scrappy's previous work with Lil Jon and Trillville, which was considered more original and innovative. Despite the mixed reviews, the album was a commercial success and established Scrappy as one of the prominent figures in the crunk genre.
-
-
\ No newline at end of file
diff --git a/spaces/tmabraham/fastai_pet_classifier/README.md b/spaces/tmabraham/fastai_pet_classifier/README.md
deleted file mode 100644
index 06775482f7f61cc633487dde8c13a4360a2abb77..0000000000000000000000000000000000000000
--- a/spaces/tmabraham/fastai_pet_classifier/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Fastai_pet_classifier
-emoji: 📈
-colorFrom: red
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/tomandandy/MusicGen3/audiocraft/modules/conv.py b/spaces/tomandandy/MusicGen3/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/tomandandy/MusicGen3/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
-        # We already checked that norm is in CONV_NORMALIZATIONS, so any other
-        # choice doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
- module is causal, or return an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`.
- """
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
- 1 2 3 4 # once you removed padding, we are missing one time step !
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
-
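-# Worked example: for x of length 7 with kernel_size=4, stride=2 and
-# padding_total=2, n_frames = (7 - 4 + 2) / 2 + 1 = 3.5, so the last window
-# would be incomplete; get_extra_padding_for_conv1d returns
-# (ceil(3.5) - 1) * 2 + (4 - 2) - 7 = 1, and pad_for_conv1d appends that one
-# extra zero so that exactly 4 full frames fit.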
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
- If this is the case, we insert extra 0 padding to the right before the reflection happen.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
-
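-# Worked example for the 'reflect' special case above: with x of length 3 and
-# paddings=(4, 4), max_pad=4 >= 3, so extra_pad = 4 - 3 + 1 = 2 zeros are first
-# appended (length 5, which a reflect pad of 4 can handle). The reflect pad then
-# yields length 13, and the final slice drops the 2 helper samples, returning
-# length 11 = 3 + 4 + 4 as expected.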
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
- """Remove padding from x, handling properly zero padding. Only for 1d!
- """
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1'
- f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).')
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
-
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/lvis_v0.5_instance.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/lvis_v0.5_instance.py
deleted file mode 100644
index 207e0053c24d73e05e78c764d05e65c102675320..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/lvis_v0.5_instance.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# dataset settings
-_base_ = 'coco_instance.py'
-dataset_type = 'LVISV05Dataset'
-data_root = 'data/lvis_v0.5/'
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- _delete_=True,
- type='ClassBalancedDataset',
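-        # ClassBalancedDataset applies repeat-factor sampling (as used for
-        # LVIS): images containing a category whose image frequency is below
-        # oversample_thr (here 0.1%) are repeated more often within an epoch.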
- oversample_thr=1e-3,
- dataset=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/lvis_v0.5_train.json',
- img_prefix=data_root + 'train2017/')),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/lvis_v0.5_val.json',
- img_prefix=data_root + 'val2017/'),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/lvis_v0.5_val.json',
- img_prefix=data_root + 'val2017/'))
-evaluation = dict(metric=['bbox', 'segm'])
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py
deleted file mode 100644
index 169278e5738b0abd4ae5e99594e4adbaaefa2d96..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = './point_rend_r50_caffe_fpn_mstrain_1x_coco.py'
-# learning policy
-lr_config = dict(step=[28, 34])
-runner = dict(type='EpochBasedRunner', max_epochs=36)
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/detr.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/detr.py
deleted file mode 100644
index 2ba2136dd215f8bb7d34271fd5eea7e6b744f4c6..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/detr.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from mmdet.core import bbox2result
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class DETR(SingleStageDetector):
- r"""Implementation of `DETR: End-to-End Object Detection with
- Transformers `_"""
-
- def __init__(self,
- backbone,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None,
- init_cfg=None):
- super(DETR, self).__init__(backbone, None, bbox_head, train_cfg,
- test_cfg, pretrained, init_cfg)
-
- def simple_test(self, img, img_metas, rescale=False):
- """Test function without test time augmentation.
-
- Args:
- imgs (list[torch.Tensor]): List of multiple images
- img_metas (list[dict]): List of image information.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[list[np.ndarray]]: BBox results of each image and classes.
- The outer list corresponds to each image. The inner list
- corresponds to each class.
- """
- batch_size = len(img_metas)
- assert batch_size == 1, 'Currently only batch_size 1 for inference ' \
- f'mode is supported. Found batch_size {batch_size}.'
- x = self.extract_feat(img)
- outs = self.bbox_head(x, img_metas)
- bbox_list = self.bbox_head.get_bboxes(
- *outs, img_metas, rescale=rescale)
-
- bbox_results = [
- bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
- for det_bboxes, det_labels in bbox_list
- ]
- return bbox_results
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/cascade_roi_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/cascade_roi_head.py
deleted file mode 100644
index f58c451305efa82d82e17c5439f116bdc31af07c..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/cascade_roi_head.py
+++ /dev/null
@@ -1,495 +0,0 @@
-import torch
-import torch.nn as nn
-from mmcv.runner import ModuleList
-
-from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, build_assigner,
- build_sampler, merge_aug_bboxes, merge_aug_masks,
- multiclass_nms)
-from ..builder import HEADS, build_head, build_roi_extractor
-from .base_roi_head import BaseRoIHead
-from .test_mixins import BBoxTestMixin, MaskTestMixin
-
-
-@HEADS.register_module()
-class CascadeRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin):
- """Cascade roi head including one bbox head and one mask head.
-
- https://arxiv.org/abs/1712.00726
- """
-
- def __init__(self,
- num_stages,
- stage_loss_weights,
- bbox_roi_extractor=None,
- bbox_head=None,
- mask_roi_extractor=None,
- mask_head=None,
- shared_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None,
- init_cfg=None):
- assert bbox_roi_extractor is not None
- assert bbox_head is not None
- assert shared_head is None, \
- 'Shared head is not supported in Cascade RCNN anymore'
-
- self.num_stages = num_stages
- self.stage_loss_weights = stage_loss_weights
- super(CascadeRoIHead, self).__init__(
- bbox_roi_extractor=bbox_roi_extractor,
- bbox_head=bbox_head,
- mask_roi_extractor=mask_roi_extractor,
- mask_head=mask_head,
- shared_head=shared_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- pretrained=pretrained,
- init_cfg=init_cfg)
-
- def init_bbox_head(self, bbox_roi_extractor, bbox_head):
- """Initialize box head and box roi extractor.
-
- Args:
- bbox_roi_extractor (dict): Config of box roi extractor.
- bbox_head (dict): Config of box in box head.
- """
- self.bbox_roi_extractor = ModuleList()
- self.bbox_head = ModuleList()
- if not isinstance(bbox_roi_extractor, list):
- bbox_roi_extractor = [
- bbox_roi_extractor for _ in range(self.num_stages)
- ]
- if not isinstance(bbox_head, list):
- bbox_head = [bbox_head for _ in range(self.num_stages)]
- assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages
- for roi_extractor, head in zip(bbox_roi_extractor, bbox_head):
- self.bbox_roi_extractor.append(build_roi_extractor(roi_extractor))
- self.bbox_head.append(build_head(head))
-
- def init_mask_head(self, mask_roi_extractor, mask_head):
- """Initialize mask head and mask roi extractor.
-
- Args:
- mask_roi_extractor (dict): Config of mask roi extractor.
- mask_head (dict): Config of mask in mask head.
- """
- self.mask_head = nn.ModuleList()
- if not isinstance(mask_head, list):
- mask_head = [mask_head for _ in range(self.num_stages)]
- assert len(mask_head) == self.num_stages
- for head in mask_head:
- self.mask_head.append(build_head(head))
- if mask_roi_extractor is not None:
- self.share_roi_extractor = False
- self.mask_roi_extractor = ModuleList()
- if not isinstance(mask_roi_extractor, list):
- mask_roi_extractor = [
- mask_roi_extractor for _ in range(self.num_stages)
- ]
- assert len(mask_roi_extractor) == self.num_stages
- for roi_extractor in mask_roi_extractor:
- self.mask_roi_extractor.append(
- build_roi_extractor(roi_extractor))
- else:
- self.share_roi_extractor = True
- self.mask_roi_extractor = self.bbox_roi_extractor
-
- def init_assigner_sampler(self):
- """Initialize assigner and sampler for each stage."""
- self.bbox_assigner = []
- self.bbox_sampler = []
- if self.train_cfg is not None:
- for idx, rcnn_train_cfg in enumerate(self.train_cfg):
- self.bbox_assigner.append(
- build_assigner(rcnn_train_cfg.assigner))
- self.current_stage = idx
- self.bbox_sampler.append(
- build_sampler(rcnn_train_cfg.sampler, context=self))
-
- def forward_dummy(self, x, proposals):
- """Dummy forward function."""
- # bbox head
- outs = ()
- rois = bbox2roi([proposals])
- if self.with_bbox:
- for i in range(self.num_stages):
- bbox_results = self._bbox_forward(i, x, rois)
- outs = outs + (bbox_results['cls_score'],
- bbox_results['bbox_pred'])
- # mask heads
- if self.with_mask:
- mask_rois = rois[:100]
- for i in range(self.num_stages):
- mask_results = self._mask_forward(i, x, mask_rois)
- outs = outs + (mask_results['mask_pred'], )
- return outs
-
- def _bbox_forward(self, stage, x, rois):
- """Box head forward function used in both training and testing."""
- bbox_roi_extractor = self.bbox_roi_extractor[stage]
- bbox_head = self.bbox_head[stage]
- bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs],
- rois)
- # do not support caffe_c4 model anymore
- cls_score, bbox_pred = bbox_head(bbox_feats)
-
- bbox_results = dict(
- cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats)
- return bbox_results
-
- def _bbox_forward_train(self, stage, x, sampling_results, gt_bboxes,
- gt_labels, rcnn_train_cfg):
- """Run forward function and calculate loss for box head in training."""
- rois = bbox2roi([res.bboxes for res in sampling_results])
- bbox_results = self._bbox_forward(stage, x, rois)
- bbox_targets = self.bbox_head[stage].get_targets(
- sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg)
- loss_bbox = self.bbox_head[stage].loss(bbox_results['cls_score'],
- bbox_results['bbox_pred'], rois,
- *bbox_targets)
-
- bbox_results.update(
- loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets)
- return bbox_results
-
- def _mask_forward(self, stage, x, rois):
- """Mask head forward function used in both training and testing."""
- mask_roi_extractor = self.mask_roi_extractor[stage]
- mask_head = self.mask_head[stage]
- mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs],
- rois)
- # do not support caffe_c4 model anymore
- mask_pred = mask_head(mask_feats)
-
- mask_results = dict(mask_pred=mask_pred)
- return mask_results
-
- def _mask_forward_train(self,
- stage,
- x,
- sampling_results,
- gt_masks,
- rcnn_train_cfg,
- bbox_feats=None):
- """Run forward function and calculate loss for mask head in
- training."""
- pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results])
- mask_results = self._mask_forward(stage, x, pos_rois)
-
- mask_targets = self.mask_head[stage].get_targets(
- sampling_results, gt_masks, rcnn_train_cfg)
- pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
- loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'],
- mask_targets, pos_labels)
-
- mask_results.update(loss_mask=loss_mask)
- return mask_results
-
- def forward_train(self,
- x,
- img_metas,
- proposal_list,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None,
- gt_masks=None):
- """
- Args:
- x (list[Tensor]): list of multi-level img features.
- img_metas (list[dict]): list of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmdet/datasets/pipelines/formatting.py:Collect`.
-            proposal_list (list[Tensor]): list of region proposals.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
- gt_masks (None | Tensor) : true segmentation masks for each box
- used if the architecture supports a segmentation task.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
- losses = dict()
- for i in range(self.num_stages):
- self.current_stage = i
- rcnn_train_cfg = self.train_cfg[i]
- lw = self.stage_loss_weights[i]
-
- # assign gts and sample proposals
- sampling_results = []
- if self.with_bbox or self.with_mask:
- bbox_assigner = self.bbox_assigner[i]
- bbox_sampler = self.bbox_sampler[i]
- num_imgs = len(img_metas)
- if gt_bboxes_ignore is None:
- gt_bboxes_ignore = [None for _ in range(num_imgs)]
-
- for j in range(num_imgs):
- assign_result = bbox_assigner.assign(
- proposal_list[j], gt_bboxes[j], gt_bboxes_ignore[j],
- gt_labels[j])
- sampling_result = bbox_sampler.sample(
- assign_result,
- proposal_list[j],
- gt_bboxes[j],
- gt_labels[j],
- feats=[lvl_feat[j][None] for lvl_feat in x])
- sampling_results.append(sampling_result)
-
- # bbox head forward and loss
- bbox_results = self._bbox_forward_train(i, x, sampling_results,
- gt_bboxes, gt_labels,
- rcnn_train_cfg)
-
- for name, value in bbox_results['loss_bbox'].items():
- losses[f's{i}.{name}'] = (
- value * lw if 'loss' in name else value)
-
- # mask head forward and loss
- if self.with_mask:
- mask_results = self._mask_forward_train(
- i, x, sampling_results, gt_masks, rcnn_train_cfg,
- bbox_results['bbox_feats'])
- for name, value in mask_results['loss_mask'].items():
- losses[f's{i}.{name}'] = (
- value * lw if 'loss' in name else value)
-
- # refine bboxes
- if i < self.num_stages - 1:
- pos_is_gts = [res.pos_is_gt for res in sampling_results]
- # bbox_targets is a tuple
- roi_labels = bbox_results['bbox_targets'][0]
- with torch.no_grad():
- roi_labels = torch.where(
- roi_labels == self.bbox_head[i].num_classes,
- bbox_results['cls_score'][:, :-1].argmax(1),
- roi_labels)
- proposal_list = self.bbox_head[i].refine_bboxes(
- bbox_results['rois'], roi_labels,
- bbox_results['bbox_pred'], pos_is_gts, img_metas)
-
- return losses
-
- def simple_test(self, x, proposal_list, img_metas, rescale=False):
- """Test without augmentation."""
- assert self.with_bbox, 'Bbox head must be implemented.'
- num_imgs = len(proposal_list)
- img_shapes = tuple(meta['img_shape'] for meta in img_metas)
- ori_shapes = tuple(meta['ori_shape'] for meta in img_metas)
- scale_factors = tuple(meta['scale_factor'] for meta in img_metas)
-
- # "ms" in variable names means multi-stage
- ms_bbox_result = {}
- ms_segm_result = {}
- ms_scores = []
- rcnn_test_cfg = self.test_cfg
-
- rois = bbox2roi(proposal_list)
- for i in range(self.num_stages):
- bbox_results = self._bbox_forward(i, x, rois)
-
- # split batch bbox prediction back to each image
- cls_score = bbox_results['cls_score']
- bbox_pred = bbox_results['bbox_pred']
- num_proposals_per_img = tuple(
- len(proposals) for proposals in proposal_list)
- rois = rois.split(num_proposals_per_img, 0)
- cls_score = cls_score.split(num_proposals_per_img, 0)
- if isinstance(bbox_pred, torch.Tensor):
- bbox_pred = bbox_pred.split(num_proposals_per_img, 0)
- else:
- bbox_pred = self.bbox_head[i].bbox_pred_split(
- bbox_pred, num_proposals_per_img)
- ms_scores.append(cls_score)
-
- if i < self.num_stages - 1:
- bbox_label = [s[:, :-1].argmax(dim=1) for s in cls_score]
- rois = torch.cat([
- self.bbox_head[i].regress_by_class(rois[j], bbox_label[j],
- bbox_pred[j],
- img_metas[j])
- for j in range(num_imgs)
- ])
-
- # average scores of each image by stages
- cls_score = [
- sum([score[i] for score in ms_scores]) / float(len(ms_scores))
- for i in range(num_imgs)
- ]
-
- # apply bbox post-processing to each image individually
- det_bboxes = []
- det_labels = []
- for i in range(num_imgs):
- det_bbox, det_label = self.bbox_head[-1].get_bboxes(
- rois[i],
- cls_score[i],
- bbox_pred[i],
- img_shapes[i],
- scale_factors[i],
- rescale=rescale,
- cfg=rcnn_test_cfg)
- det_bboxes.append(det_bbox)
- det_labels.append(det_label)
-
- if torch.onnx.is_in_onnx_export():
- return det_bboxes, det_labels
- bbox_results = [
- bbox2result(det_bboxes[i], det_labels[i],
- self.bbox_head[-1].num_classes)
- for i in range(num_imgs)
- ]
- ms_bbox_result['ensemble'] = bbox_results
-
- if self.with_mask:
- if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
- mask_classes = self.mask_head[-1].num_classes
- segm_results = [[[] for _ in range(mask_classes)]
- for _ in range(num_imgs)]
- else:
- if rescale and not isinstance(scale_factors[0], float):
- scale_factors = [
- torch.from_numpy(scale_factor).to(det_bboxes[0].device)
- for scale_factor in scale_factors
- ]
- _bboxes = [
- det_bboxes[i][:, :4] *
- scale_factors[i] if rescale else det_bboxes[i][:, :4]
- for i in range(len(det_bboxes))
- ]
- mask_rois = bbox2roi(_bboxes)
- num_mask_rois_per_img = tuple(
- _bbox.size(0) for _bbox in _bboxes)
- aug_masks = []
- for i in range(self.num_stages):
- mask_results = self._mask_forward(i, x, mask_rois)
- mask_pred = mask_results['mask_pred']
- # split batch mask prediction back to each image
- mask_pred = mask_pred.split(num_mask_rois_per_img, 0)
- aug_masks.append(
- [m.sigmoid().cpu().numpy() for m in mask_pred])
-
- # apply mask post-processing to each image individually
- segm_results = []
- for i in range(num_imgs):
- if det_bboxes[i].shape[0] == 0:
- segm_results.append(
- [[]
- for _ in range(self.mask_head[-1].num_classes)])
- else:
- aug_mask = [mask[i] for mask in aug_masks]
- merged_masks = merge_aug_masks(
- aug_mask, [[img_metas[i]]] * self.num_stages,
- rcnn_test_cfg)
- segm_result = self.mask_head[-1].get_seg_masks(
- merged_masks, _bboxes[i], det_labels[i],
- rcnn_test_cfg, ori_shapes[i], scale_factors[i],
- rescale)
- segm_results.append(segm_result)
- ms_segm_result['ensemble'] = segm_results
-
- if self.with_mask:
- results = list(
- zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble']))
- else:
- results = ms_bbox_result['ensemble']
-
- return results
-
- def aug_test(self, features, proposal_list, img_metas, rescale=False):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
- rcnn_test_cfg = self.test_cfg
- aug_bboxes = []
- aug_scores = []
- for x, img_meta in zip(features, img_metas):
- # only one image in the batch
- img_shape = img_meta[0]['img_shape']
- scale_factor = img_meta[0]['scale_factor']
- flip = img_meta[0]['flip']
- flip_direction = img_meta[0]['flip_direction']
-
- proposals = bbox_mapping(proposal_list[0][:, :4], img_shape,
- scale_factor, flip, flip_direction)
- # "ms" in variable names means multi-stage
- ms_scores = []
-
- rois = bbox2roi([proposals])
- for i in range(self.num_stages):
- bbox_results = self._bbox_forward(i, x, rois)
- ms_scores.append(bbox_results['cls_score'])
-
- if i < self.num_stages - 1:
- bbox_label = bbox_results['cls_score'][:, :-1].argmax(
- dim=1)
- rois = self.bbox_head[i].regress_by_class(
- rois, bbox_label, bbox_results['bbox_pred'],
- img_meta[0])
-
- cls_score = sum(ms_scores) / float(len(ms_scores))
- bboxes, scores = self.bbox_head[-1].get_bboxes(
- rois,
- cls_score,
- bbox_results['bbox_pred'],
- img_shape,
- scale_factor,
- rescale=False,
- cfg=None)
- aug_bboxes.append(bboxes)
- aug_scores.append(scores)
-
- # after merging, bboxes will be rescaled to the original image size
- merged_bboxes, merged_scores = merge_aug_bboxes(
- aug_bboxes, aug_scores, img_metas, rcnn_test_cfg)
- det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores,
- rcnn_test_cfg.score_thr,
- rcnn_test_cfg.nms,
- rcnn_test_cfg.max_per_img)
-
- bbox_result = bbox2result(det_bboxes, det_labels,
- self.bbox_head[-1].num_classes)
-
- if self.with_mask:
- if det_bboxes.shape[0] == 0:
- segm_result = [[[]
- for _ in range(self.mask_head[-1].num_classes)]
- ]
- else:
- aug_masks = []
- aug_img_metas = []
- for x, img_meta in zip(features, img_metas):
- img_shape = img_meta[0]['img_shape']
- scale_factor = img_meta[0]['scale_factor']
- flip = img_meta[0]['flip']
- flip_direction = img_meta[0]['flip_direction']
- _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape,
- scale_factor, flip, flip_direction)
- mask_rois = bbox2roi([_bboxes])
- for i in range(self.num_stages):
- mask_results = self._mask_forward(i, x, mask_rois)
- aug_masks.append(
- mask_results['mask_pred'].sigmoid().cpu().numpy())
- aug_img_metas.append(img_meta)
- merged_masks = merge_aug_masks(aug_masks, aug_img_metas,
- self.test_cfg)
-
- ori_shape = img_metas[0][0]['ori_shape']
- segm_result = self.mask_head[-1].get_seg_masks(
- merged_masks,
- det_bboxes,
- det_labels,
- rcnn_test_cfg,
- ori_shape,
- scale_factor=1.0,
- rescale=False)
- return [(bbox_result, segm_result)]
- else:
- return [bbox_result]
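
For orientation, here is a minimal sketch of how a head like this is typically wired up in an MMDetection-style config. The stage count, loss weights, and per-stage `target_stds` below are illustrative values, not taken from this repository; as `init_bbox_head` shows, a single `bbox_roi_extractor`/`bbox_head` dict is broadcast to every stage, or a list with one entry per stage can be given explicitly.

```python
# Illustrative config sketch for CascadeRoIHead (example values, not this repo's settings).
roi_head = dict(
    type='CascadeRoIHead',
    num_stages=3,
    stage_loss_weights=[1.0, 0.5, 0.25],
    bbox_roi_extractor=dict(          # a single dict is replicated for all three stages
        type='SingleRoIExtractor',
        roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32]),
    bbox_head=[                       # alternatively, one head config per stage
        dict(type='Shared2FCBBoxHead',
             in_channels=256,
             num_classes=80,
             bbox_coder=dict(type='DeltaXYWHBBoxCoder',
                             target_means=[0., 0., 0., 0.],
                             target_stds=stds))
        for stds in ([0.1, 0.1, 0.2, 0.2],
                     [0.05, 0.05, 0.1, 0.1],
                     [0.033, 0.033, 0.067, 0.067])
    ],
)
```

In a full model config this block would sit under `model.roi_head`, with `train_cfg.rcnn` given as a list containing one assigner/sampler config per stage, which is what `init_assigner_sampler` above iterates over.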
diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_layers/switchable_norm.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_layers/switchable_norm.py
deleted file mode 100644
index 55d77ab5ca73b0ea46e84bc21d2d2b8f73aeadc3..0000000000000000000000000000000000000000
--- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_layers/switchable_norm.py
+++ /dev/null
@@ -1,150 +0,0 @@
-'''
-Adapted from https://github.com/switchablenorms/Switchable-Normalization/blob/master/devkit/ops/switchable_norm.py
-SwitchableNorm and SwitchableNorm1D may be slightly problematic: for them the IN statistics reduce to the input itself.
-
-2021-12-21
-Changed the meaning of the momentum parameter; it now matches nn.BatchNorm, i.e. smaller values update the running statistics more slowly.
-Replaced the zero_gamma and affine parameters with gamma_init and bias_init for more flexible customization.
-
-2021-9-12
-Adjusted a few lines for torch.jit.script compatibility.
-'''
-
-import torch
-import torch.nn as nn
-import math
-
-
-__all__ = ['SwitchableNorm', 'SwitchableNorm1D', 'SwitchableNorm2D', 'SwitchableNorm3D', 'SwitchableNormND']
-
-
-class SwitchableNormND(nn.Module):
- def __init__(self, N, num_features, eps=1e-8, momentum=0.1, using_moving_average=True, using_bn=True, gamma_init=1., bias_init=0.):
- super().__init__()
- assert N >= 0
- self.N = N
- self.eps = eps
- self.momentum = momentum
- self.using_moving_average = using_moving_average
- self.using_bn = using_bn
- self.gamma_init = gamma_init
- self.bias_init = bias_init
-
- if gamma_init is not None:
- self.weight = nn.Parameter(torch.full([1, num_features, 1], gamma_init), True)
- else:
- self.register_buffer('weight', None)
-
- if bias_init is not None:
- self.bias = nn.Parameter(torch.full([1, num_features, 1], bias_init), True)
- else:
- self.register_buffer('bias', None)
-
- weight_num = 2 + (1 if using_bn else 0)
- self.mean_weight = nn.Parameter(torch.ones(weight_num), True)
- self.var_weight = nn.Parameter(torch.ones(weight_num), True)
-
- if self.using_bn:
- self.register_buffer('running_mean', torch.zeros(1, num_features, 1))
- self.register_buffer('running_var', torch.zeros(1, num_features, 1))
- else:
- self.register_buffer('running_mean', None)
- self.register_buffer('running_var', None)
-
- def _check_input_dim(self, input):
- if input.ndim-2 != self.N:
- raise ValueError('expected {}D input (got {}D input)'.format(self.N, input.dim()))
-
- def forward(self, x):
- self._check_input_dim(x)
- B, C = x.shape[:2]
- shape2 = list(x.shape[2:])
-        # N: number of spatial positions (pixels)
- if x.ndim == 2:
- N = 1
- else:
- N = math.prod(shape2)
-
- x = x.reshape(B, C, N)
-
- if N >= 2:
- mean_in = x.mean(2, keepdim=True)
- var_in = x.var(2, keepdim=True)
- else:
-            # If N < 2, the instance-norm statistics would be meaningless
- mean_in = torch.zeros(1, device=x.device)
- var_in = torch.ones(1, device=x.device)
-
- if C*N >= 2:
- mean_ln = x.mean([1, 2], keepdim=True)
- var_ln = x.var([1, 2], keepdim=True)
- else:
-            # If C*N < 2, the layer-norm statistics would be meaningless
- mean_ln = torch.zeros(1, device=x.device)
- var_ln = torch.ones(1, device=x.device)
-
- if self.using_bn:
- if self.training:
-                # If B*C < 2, the batch-norm statistics would be meaningless
- if B*C >= 2:
- mean_bn = x.mean([0, 2], keepdim=True)
- var_bn = x.var([0, 2], keepdim=True)
- else:
- mean_bn = torch.zeros(1, device=x.device)
- var_bn = torch.ones(1, device=x.device)
-
- if self.using_moving_average:
- self.running_mean.mul_(1 - self.momentum)
- self.running_mean.add_(self.momentum * mean_bn.data)
- self.running_var.mul_(1 - self.momentum)
- self.running_var.add_(self.momentum * var_bn.data)
- else:
- self.running_mean.set_(mean_bn.data)
- self.running_var.set_(var_bn.data)
- else:
- mean_bn = self.running_mean
- var_bn = self.running_var
- else:
-            # This branch exists only for torch.jit.script compatibility; it has no effect at runtime
- mean_bn = torch.zeros(1, device=x.device)
- var_bn = torch.zeros(1, device=x.device)
-
- mean_weight = torch.softmax(self.mean_weight, 0)
- var_weight = torch.softmax(self.var_weight, 0)
-
- if self.using_bn:
- mean = mean_weight[0] * mean_in + mean_weight[1] * mean_ln + mean_weight[2] * mean_bn
- var = var_weight[0] * var_in + var_weight[1] * var_ln + var_weight[2] * var_bn
- else:
- mean = mean_weight[0] * mean_in + mean_weight[1] * mean_ln
- var = var_weight[0] * var_in + var_weight[1] * var_ln
-
- x = (x - mean) * (var + self.eps).rsqrt()
-
- if self.weight is not None:
- x = x * self.weight
- if self.bias is not None:
- x = x + self.bias
-
- x = x.reshape([B, C] + shape2)
- return x
-
-
-class SwitchableNorm(SwitchableNormND):
- def __init__(self, *args, **kwargs):
- super().__init__(0, *args, **kwargs)
-
-
-class SwitchableNorm1D(SwitchableNormND):
- def __init__(self, *args, **kwargs):
- super().__init__(1, *args, **kwargs)
-
-
-class SwitchableNorm2D(SwitchableNormND):
- def __init__(self, *args, **kwargs):
- super().__init__(2, *args, **kwargs)
-
-
-class SwitchableNorm3D(SwitchableNormND):
- def __init__(self, *args, **kwargs):
- super().__init__(3, *args, **kwargs)
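
A minimal usage sketch for the classes above; the import path `switchable_norm` is an assumption based on this file's name, and the tensor shapes are arbitrary.

```python
# Hypothetical usage of SwitchableNorm2D from the module above.
import torch
from switchable_norm import SwitchableNorm2D  # assumed module/import path

norm = SwitchableNorm2D(num_features=64)   # learns softmax weights over IN/LN/BN statistics
x = torch.randn(8, 64, 32, 32)             # [B, C, H, W]

norm.train()
y = norm(x)        # training: BN branch uses batch statistics and updates the running buffers
norm.eval()
y = norm(x)        # eval: BN branch reads running_mean / running_var instead

print(y.shape)     # torch.Size([8, 64, 32, 32]) -- the input shape is preserved
```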
diff --git a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_ops/vector_retrieval.py b/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_ops/vector_retrieval.py
deleted file mode 100644
index 99340f6227094f83113b5ef5090ef445c02e02a5..0000000000000000000000000000000000000000
--- a/spaces/twdac/BuChengFangYuan-ChineseJapaneseTranslation/app/model_utils_torch/more_ops/vector_retrieval.py
+++ /dev/null
@@ -1,96 +0,0 @@
-'''
-Vector retrieval operators.
-'''
-
-import math
-import torch
-import torch.nn.functional as F
-
-
-__all__ = ['find_closest_vector_by_L2', 'find_closest_vector_by_cos']
-
-
-@torch.jit.script
-def find_closest_vector_by_L2(in_vec: torch.Tensor, target_vec: torch.Tensor, group_size:int=64):
-    '''
-    Search target_vec for the vector closest to each row of in_vec, and return the index of the closest match.
-    L2 (Euclidean) distance method.
-    No gradients are tracked.
-    :param in_vec: shape [B, C], query vectors
-    :param target_vec: shape [L, C], vectors to search against
-    :param group_size: int, number of queries processed per chunk, to avoid exhausting GPU memory
-    :return: [B], index of the closest vector for each query
-    '''
-    # Detach from the autograd graph
- in_vec = in_vec.data
- target_vec = target_vec.data
-
- B, C = in_vec.shape
- out_ids = []
- gs = group_size
- for bi in range(int(math.ceil(B / gs))):
- lens = torch.norm(in_vec[bi * gs:(bi + 1) * gs, None] - target_vec[None], 2, -1)
- # [B, len]
- ids = torch.argmin(lens, -1)
- out_ids.append(ids)
- out_ids = torch.cat(out_ids, 0)
- return out_ids
-
-
-@torch.jit.script
-def find_closest_vector_by_cos(in_vec: torch.Tensor, target_vec: torch.Tensor, group_size:int=64):
-    '''
-    Search target_vec for the vector closest to each row of in_vec, and return the index of the closest match.
-    Cosine-similarity method.
-    No gradients are tracked.
-    :param in_vec: shape [B, C], query vectors
-    :param target_vec: shape [L, C], vectors to search against
-    :param group_size: int, number of queries processed per chunk, to avoid exhausting GPU memory
-    :return: [B], index of the closest vector for each query
-    '''
-    # Detach from the autograd graph
- in_vec = in_vec.data
- target_vec = target_vec.data
-
- B, C = in_vec.shape
- out_ids = []
- gs = group_size
- for bi in range(int(math.ceil(B / gs))):
- cos = F.cosine_similarity(in_vec[bi * gs:(bi + 1) * gs, None], target_vec[None], -1)
- # [B, cos]
- ids = torch.argmax(cos, -1)
- out_ids.append(ids)
- out_ids = torch.cat(out_ids, 0)
- return out_ids
-
-
-def test_find_closest_vector():
- in_vec = torch.as_tensor([
- [1, 1],
- [1, 0],
- [0, 1],
- [-1, -1],
- ], dtype=torch.float32)
-
- target_vec = torch.as_tensor([
- [1, 1],
- [1, 0],
- [1, -1],
- [0, 1],
- [0, -1],
- [-1, 1],
- [-1, 0],
- [-1, -1],
- ], dtype=torch.float32)
-
- out = find_closest_vector_by_L2(in_vec, target_vec)
- r1 = out.tolist() == [0, 1, 3, 7]
-
- out = find_closest_vector_by_cos(in_vec, target_vec)
- r2 = out.tolist() == [0, 1, 3, 7]
-
- return r1 and r2
-
-
-if __name__ == '__main__':
- test_find_closest_vector()
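
A short usage sketch complementing the in-file test; the import path and the sizes below are illustrative only.

```python
# Hypothetical usage of the chunked nearest-neighbour helpers above.
import torch
from vector_retrieval import find_closest_vector_by_L2, find_closest_vector_by_cos  # assumed import path

queries = torch.randn(1000, 128)    # [B, C] query vectors
codebook = torch.randn(4096, 128)   # [L, C] vectors to search against

# group_size trades peak memory for speed: each chunk materialises a [chunk, L, C]-sized
# intermediate before reducing it to per-query indices.
ids_l2 = find_closest_vector_by_L2(queries, codebook, group_size=256)
ids_cos = find_closest_vector_by_cos(queries, codebook, group_size=256)
print(ids_l2.shape, ids_cos.shape)  # torch.Size([1000]) torch.Size([1000])
```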
diff --git a/spaces/typesdigital/CryptoUpdate/app.py b/spaces/typesdigital/CryptoUpdate/app.py
deleted file mode 100644
index 462af546b06088e5d1c4da4092471c1c4b0e85ca..0000000000000000000000000000000000000000
--- a/spaces/typesdigital/CryptoUpdate/app.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import requests
-import openai
-
-# Set up the CoinMarketCap API endpoint and parameters
-url = 'https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest'
-parameters = {
- 'start': '1',
- 'limit': '10',
- 'convert': 'USD'
-}
-
-# Set up the OpenAI API credentials
-openai.api_key = ''
-
-# Fetch the latest cryptocurrency prices from the CoinMarketCap API
-response = requests.get(url, headers={'X-CMC_PRO_API_KEY': '<03365e9f-2220-4083-90a7-151a70bb40ae>'}, params=parameters)
-
-# Check if the response is successful
-if response.status_code == 200:
- # Extract the data from the API response
- data = response.json()
-    # Extract the relevant data: 'data' is a list of coins sorted by market cap (BTC first, then ETH)
-    bitcoin_price = data['data'][0]['quote']['USD']['price']
-    ethereum_price = data['data'][1]['quote']['USD']['price']
-    bitcoin_margin = data['data'][0]['quote']['USD']['percent_change_24h']
-    ethereum_margin = data['data'][1]['quote']['USD']['percent_change_24h']
-
- # Use the OpenAI API to generate a prediction for the margin
-    prompt = (f"Bitcoin trades at ${bitcoin_price:.2f} ({bitcoin_margin:.2f}% over 24h) and Ethereum at ${ethereum_price:.2f} "
-              f"({ethereum_margin:.2f}% over 24h). What will be the margin for Bitcoin and Ethereum in the next 24 hours?")
- model = "text-davinci-002"
- response = openai.Completion.create(
- engine=model,
- prompt=prompt,
- temperature=0.5,
- max_tokens=50,
- n=1,
- stop=None,
- timeout=20,
- )
-
- # Extract the predicted margin from the OpenAI API response
- predicted_margin = response.choices[0].text.strip()
-
- # Print the results
- print("Latest cryptocurrency prices:")
- print(f"Bitcoin: ${bitcoin_price:.2f}")
- print(f"Ethereum: ${ethereum_price:.2f}")
- print("24-hour margin:")
- print(f"Bitcoin: {bitcoin_margin:.2f}%")
- print(f"Ethereum: {ethereum_margin:.2f}%")
- print("Predicted margin for the next 24 hours:")
- print(predicted_margin)
-
-else:
- print("Error: API request failed")
diff --git a/spaces/uSerNameDDHL/bingo/src/pages/api/blob.ts b/spaces/uSerNameDDHL/bingo/src/pages/api/blob.ts
deleted file mode 100644
index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000
--- a/spaces/uSerNameDDHL/bingo/src/pages/api/blob.ts
+++ /dev/null
@@ -1,40 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { Readable } from 'node:stream'
-import { fetch } from '@/lib/isomorphic'
-
-const API_DOMAIN = 'https://www.bing.com'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { bcid } = req.query
-
- const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`,
- {
- method: 'GET',
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referrer-Policy": "origin-when-cross-origin",
- },
- },
- )
-
- res.writeHead(200, {
- 'Content-Length': headers.get('content-length')!,
- 'Content-Type': headers.get('content-type')!,
- })
- // @ts-ignore
- return Readable.fromWeb(body!).pipe(res)
- } catch (e) {
- console.log('Error', e)
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/uragankatrrin/MHN-React/mhnreact/inspect.py b/spaces/uragankatrrin/MHN-React/mhnreact/inspect.py
deleted file mode 100644
index 7da4ed6fcea58969745a00b2372d6a7256bb7fe8..0000000000000000000000000000000000000000
--- a/spaces/uragankatrrin/MHN-React/mhnreact/inspect.py
+++ /dev/null
@@ -1,95 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Author: Philipp Seidl
- ELLIS Unit Linz, LIT AI Lab, Institute for Machine Learning
- Johannes Kepler University Linz
-Contact: seidl@ml.jku.at
-
-File contains functions that help inspect trained models and visualize reaction SMARTS/templates.
-"""
-
-from . import model
-import torch
-import os
-
-MODEL_PATH = 'data/model/'
-
-def smarts2svg(smarts, useSmiles=True, highlightByReactant=True, save_to=''):
- """
-    Draws the reaction given as a SMARTS (or SMILES) string to an SVG and displays it in the notebook;
-    optionally it can also be saved to a file via `save_to`.
- adapted from https://www.kesci.com/mw/project/5c7685191ce0af002b556cc5
- """
- from rdkit import RDConfig
- from rdkit import Chem
- from rdkit.Chem import Draw, AllChem
- from rdkit.Chem.Draw import rdMolDraw2D
- from rdkit import Geometry
- import matplotlib.pyplot as plt
- import matplotlib.cm as cm
- import matplotlib
- from IPython.display import SVG, display
-
- rxn = AllChem.ReactionFromSmarts(smarts,useSmiles=useSmiles)
- d = Draw.MolDraw2DSVG(900, 100)
-
- # rxn = AllChem.ReactionFromSmarts('[CH3:1][C:2](=[O:3])[OH:4].[CH3:5][NH2:6]>CC(O)C.[Pt]>[CH3:1][C:2](=[O:3])[NH:6][CH3:5].[OH2:4]',useSmiles=True)
- colors=[(0.3, 0.7, 0.9),(0.9, 0.7, 0.9),(0.6,0.9,0.3),(0.9,0.9,0.1)]
- try:
- d.DrawReaction(rxn,highlightByReactant=highlightByReactant)
- d.FinishDrawing()
-
- txt = d.GetDrawingText()
- # self.assertTrue(txt.find("