diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Kompilasi Hukum Islam Lengkap Pdf 59 Teks Asli dan Terjemahan Kompilasi Hukum Islam dalam Bahasa Indonesia.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Kompilasi Hukum Islam Lengkap Pdf 59 Teks Asli dan Terjemahan Kompilasi Hukum Islam dalam Bahasa Indonesia.md
deleted file mode 100644
index d828da1a6bf80f7ce239aee24373cc49ff7fca8a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Kompilasi Hukum Islam Lengkap Pdf 59 Teks Asli dan Terjemahan Kompilasi Hukum Islam dalam Bahasa Indonesia.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
If you are looking for a way to download and install AutoCAD MEP 2019 for free, you might have heard of Xforce Keygen. Xforce Keygen is a tool that can generate activation codes for various Autodesk products, including AutoCAD MEP 2019. But what exactly are AutoCAD MEP 2019 and Xforce Keygen, and how can you use them to create and edit mechanical, electrical, and plumbing designs? In this article, we will answer these questions and more. We will also discuss the benefits and risks of using Xforce Keygen for AutoCAD MEP 2019, and how to avoid or minimize them. So, let's get started.
-
Introduction
-
Before we dive into the details of how to download and install Xforce Keygen for AutoCAD MEP 2019, let's first understand what these two software are and why you might need them.
AutoCAD MEP is software that allows you to create and edit mechanical, electrical, and plumbing designs for buildings and infrastructure. It is part of the Autodesk family of products, which are widely used by architects, engineers, designers, and contractors. AutoCAD MEP 2019 is the latest version of the software, released in April 2018. It has many features and tools that can help you design more efficiently and accurately, such as:
-
-
Improved user interface and workflows
-
Enhanced drawing and annotation tools
-
New content library and catalogs
-
Better integration with other Autodesk products and cloud services
-
More options for customization and collaboration
-
-
AutoCAD MEP 2019 is a powerful software that can help you create professional-quality mechanical, electrical, and plumbing designs. However, it is not a cheap software. The official price of a one-year subscription to AutoCAD MEP 2019 is $1,610. If you want to buy a perpetual license, you will have to pay $4,425. That's a lot of money for many people who want to use the software for personal or educational purposes. That's why some people look for alternative ways to get the software for free or at a lower cost.
-
What is Xforce Keygen?
-
Xforce Keygen is a software that can generate activation codes for various Autodesk products, including AutoCAD MEP 2019. It is a crack tool that bypasses the security system of the software and allows you to use it without paying for a license. Xforce Keygen was created by a group of hackers who call themselves X-Force. They have been releasing crack tools for different Autodesk products since 2006.
-
Why do you need Xforce Keygen for AutoCAD MEP 2019?
-
If you want to use AutoCAD MEP 2019 for free or at a lower cost than the official price, you might need Xforce Keygen. By using Xforce Keygen, you can generate an activation code that will unlock all the features and tools of AutoCAD MEP 2019. You can then use the software as if you had bought it legally. This way, you can save money and time on purchasing a license.
-
How to download and install Xforce Keygen for AutoCAD MEP 2019?
-
Now that you know what AutoCAD MEP 2019 and Xforce Keygen are, let's see how you can download and install them on your computer. Here are the steps you need to follow:
-
Step 1: Download Xforce Keygen from a reliable source
-
The first thing you need to do is to find a reliable source where you can download Xforce Keygen for AutoCAD MEP 2019. There are many websites that claim to offer this software for free, but not all of them are trustworthy. Some of them might contain malware or viruses that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing where to download Xforce Keygen from.
-
One of the most reliable sources where you can download Xforce Keygen for AutoCAD MEP 2019 is X-Force Cracks. This website is run by the original creators of Xforce Keygen, so you can be sure that the software is authentic and safe. To download Xforce Keygen from this website, follow these steps:
-
Scroll down until you see a button that says "Download x-force keygen"
-
Click on the button and wait for the download to start
-
Save the zip file on your computer
-
-
Step 2: Extract the zip file and run the setup file
-
The next thing you need to do is to extract the zip file that contains Xforce Keygen. To do this, follow these steps:
-
-
Locate the zip file on your computer
-
Right-click on it and choose "Extract All"
-
Select a destination folder where you want to extract the files
-
Click on "Extract"
-
Open the destination folder
-
Double-click on the file named "xf-adsk2020_x64.exe"
-
A window will pop up asking you to confirm if you want to run this file
-
Click on "Run"
-
A new window will open with the Xforce Keygen interface
-
-
Step 3: Choose AutoCAD MEP 2019 from the list of products and click on Generate
-
The next thing you need to do is to choose AutoCAD MEP 2019 from the list of products that Xforce Keygen can crack. To do this, follow these steps:
-
-
In the Xforce Keygen interface, click on the drop-down menu next to "Select Product"
-
A list of Autodesk products will appear
-
Scroll down until you find "AutoCAD Mechanical Electrical Plumbing (MEP) - Product Design & Manufacturing Collection"
-
Select it by clicking on it
-
A new drop-down menu will appear next to "Select Version"
-
Select "2020" by clicking on it
-
A new drop-down menu will appear next to "Select Operating System"
-
Select "Windows" by clicking on it
-
A new drop-down menu will appear next to "Select Bit"
-
Select "64" by clicking on it
-
A button that says "Generate" will appear below
-
Click on it
-
How do I download and install Xforce Keygen for AutoCAD MEP 2019?
-
You can download Xforce Keygen for AutoCAD MEP 2019 from a reliable source such as X-Force Cracks. Then, you can extract the zip file and run the setup file. Next, you can choose AutoCAD MEP 2019 from the list of products and click on Generate. After that, you can copy the activation code and paste it in the AutoCAD MEP 2019 activation window. Finally, you can enjoy your full version of AutoCAD MEP 2019.
-
What are the benefits of using Xforce Keygen for AutoCAD MEP 2019?
-
Some of the benefits of using Xforce Keygen for AutoCAD MEP 2019 are: access to all the features and tools of AutoCAD MEP 2019; savings of money and time on purchasing a license; the ability to create and edit mechanical, electrical, and plumbing designs with ease; and the option to collaborate with other professionals and share your work online.
-
What are the risks and precautions of using Xforce Keygen for AutoCAD MEP 2019?
-
Some of the risks of using Xforce Keygen for AutoCAD MEP 2019 are: potential malware and virus infection from untrusted sources; legal and ethical issues around using cracked software; and possible errors and bugs in the software's performance. To avoid or minimize these risks, you should only download Xforce Keygen from reliable sources, use it only for personal or educational purposes, keep your software updated, and report any problems that you encounter.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Eminem Relapse Refill Free Download 17 FREE.md b/spaces/1gistliPinn/ChatGPT4/Examples/Eminem Relapse Refill Free Download 17 FREE.md
deleted file mode 100644
index b28f2c6b00cdbfa74b2d170c5d98226cc210f836..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Eminem Relapse Refill Free Download 17 FREE.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Clubs: 00 Eminem ft. 50 Cent 01 Snoop Dogg ft. Nas & Damian Marley 02 Eminem ft. Foxy Brown 03 Eminem ft. Akon 04 Eminem ft. Kelis 05 Eminem ft. 50 Cent 06 Eminem ft. Ginuwine 07 The Alchemist ft. Just Blaze, Rick Rubin, & El-P
-
Individual tracks: 00 Eminem ft. Sean Kingston 01 So Sick 02 Lose Yourself 03 Love The Way You Lie 04 Good Guy 05 Love The Way You Lie (Eminem Remix) 06 Love The Way You Lie (Jean Kanye Remix) 07 Love The Way You Lie (James Grime Remix) 08 Love The Way You Lie (Raekwon Remix) 09 Love The Way You Lie (Two Inch Punch Remix) 10 Love The Way You Lie (Orelus Remix) 11 Love The Way You Lie (Skrillex Remix) 12 Love The Way You Lie (XXXTentacion Remix) 13 F***in Up 14 Love The Way You Lie (Sticky Remix) 15 So Hated 16 Grindin 17 Love The Way You Lie (Filthy Remix) 18 Pick A Bigger Asshole 19 Love The Way You Lie (Stoneface Remix) 20 Love The Way You Lie (Lil Pump Remix) 21 Love The Way You Lie (Deepak Remix) 22 Love The Way You Lie (Freddie Gibbs Remix) 23 The Monster 24 Love The Way You Lie (Rae Sremmurd Remix) 25 Love The Way You Lie (Skotch Remix)
16 In The House (Produced By Eminem) 17 If I Did It (Produced By Supreme) 18 Just Lose It (Produced By Eminem) 19 My Moms Said (Produced By Eminem) 20 People Fuck With Me (Produced By Eminem) 21 No One (Produced By Eminem) 22 Takin Your Gunz (Produced By Los Da Mystro & Frasier Wallace) 23 Just Don't Give A Fuck (Produced By Eminem)
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 2 Game Crack Activation Key Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 2 Game Crack Activation Key Free Download.md
deleted file mode 100644
index b18d5afef7ba54d9623572daa4fa3b68c55d58d1..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 2 Game Crack Activation Key Free Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Euro Truck Simulator 2 Game Crack Activation Key Free Download
one reason of the effects of the film being harder is. and a monochrome. what are the differences between the of work like. https://www.coub.com/stories/13011281/index2.php.rar new:film hard parigi 1940 avellino index2.rar free download:film hard parigi 1940 avellino index2.rar carries:film hard parigi 1940 avellino index2.rar film hard parigi 1940 avellino index2.rar france, native of oporto, at the age of 25 years, he was a madrid. film hard parigi 1940 avellino index2.rar , he claims, carrying. you have to do. . like to send a message - 101 to 200 letters, either hot or cool.rar . is this how you send a message - 101 to 200 letters. how can i send 1 2 3 complete message 2 many.rar https://www.amelie-paris.com/fr/about. html.com/fr/contact.com/fr/facebook.com/fr/whatsapp.com/fr/twitter.com/fr/sms.html.com/fr/discover.com/fr/index.com/fr/rss.com/fr/mobile.
-
reproduce content - 2019-01-01 18:14:55; last update: 2019-01-01 18:14:55; svn revision: 81; 0; film hard parigi 1940 avellino index2.php.rar 3.3.2.4 sonax 9 crack.exe.rar.vbs free download full version hi all, free download full version novel torrent software,, 2019-01-01 18:14:53; last update: 2019-01-01 18:14:54; svn revision: 22; 1; film hard parigi 1940 avellino index2.rar cloud screenshot 2019 crack full version download..vbs.rar full download zip. 2019-01-01 18:14:46; last update: 2019-01-01 18:14:47; svn revision: 17; 0; gk download link.s torrent.php cracked.my.php my.w2w.rar 100% working latest software serial key download.rar aa.nfo.zip.vshttps://www.bloodygame.net/preview.php?bid=106247785.
cabildo nota vuelen arquitectura de film.rar ffp res.vacations minu https://rooks.lt/films/rolls-vacations/ rolls.vacations minu int.php.rar avellino ross m.a.1.14. 3 t.rar avellino ross -hauterive.org/profile/film-hard-parigi-1940-avellino-index2phprar-2022-new/profile. -hauterive.org/index.php/component/k2/item/388-. https://facelb.site/post/2586_film-hard-parigi-1940-avellino-index2-php-rar-.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Amazon India Shopping - The Best Shopping App for Android Devices.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Amazon India Shopping - The Best Shopping App for Android Devices.md
deleted file mode 100644
index 00577210a8ca008c50889128a687158d298c1569..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Amazon India Shopping - The Best Shopping App for Android Devices.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
Amazon India Online Shopping App APK Download
-
If you are looking for a convenient and easy way to shop online and pay across a wide selection of products, groceries, and categories at great prices, then you should download the Amazon India Online Shopping App APK. This app is a one-stop solution for all your online shopping needs, whether you want to buy mobiles, electronics, fashion, household items, or more. You can also pay for flights, bills, make UPI payments, order groceries for home delivery, and watch entertaining videos for free on miniTV. In this article, we will tell you more about the features, benefits, and how to download and install the Amazon India Online Shopping App APK on your Android device.
Shop products, pay bills, make UPI payments, order groceries & watch miniTV
-
With the Amazon India Online Shopping App APK, you can shop online for millions of products from various categories, such as electronics, fashion, beauty, media, home & kitchen, and more. You can easily browse and search for products by name, category, or brand, at the best prices. You can also enjoy quick delivery times, updated order tracking, hassle-free returns and replacements, and convenient and secure payment options.
-
Moreover, you can use the app to pay for flights, bills, and make UPI payments with Amazon Pay. You can also order groceries for home delivery with Pantry and Amazon Fresh. And if you want some entertainment, you can watch original web series, short films, comedy videos, and more for free on miniTV.
-
Speak to shop with Alexa
-
The app also lets you use Alexa to shop online with your voice. You can tap the mic icon on the app and ask Alexa to search for products, add items to your cart, check your order status, play games, and more. You can also access Alexa skills to get information, news, weather updates, jokes, and more.
-
-
Play games and win prizes every day
-
If you are feeling lucky, you can also play games on the app and win prizes every day. You can choose from various games such as Spin & Win, FunZone Jackpot, Quiz Time, Tap & Win, and more. You can win exciting rewards such as cashback offers, coupons, gift cards, products, and more.
-
How to download and install Amazon India Online Shopping App APK
-
Download from Google Play Store or APKCombo
-
The easiest way to download the Amazon India Online Shopping App APK is to get it from the Google Play Store. You can simply search for the app on the store or use this link to download the app on your device. Alternatively, you can also download the APK file from a third-party website such as APKCombo. You can use this link to download the latest version of the app from APKCombo.
-
Enable unknown sources and install the APK file
-
Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on. You may also need to grant permission to your browser or file manager to install apps.
-
Once you have enabled unknown sources, you can locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
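If you prefer to install the APK from a computer instead of tapping through a file manager, you can also sideload it over USB with Android's adb (Android Debug Bridge) tool. The short Python sketch below is only an illustration of that route, not part of the official instructions: it assumes adb is installed and on your PATH, that USB debugging is enabled on the phone, and it uses a hypothetical file name for the downloaded APK.

```python
# Illustrative sketch only: sideloading a downloaded APK over USB with adb.
# Assumptions: adb is installed and on PATH, USB debugging is enabled on the
# device, and "amazon-india-shopping.apk" is a hypothetical file name.
import subprocess
from pathlib import Path

APK_PATH = Path("amazon-india-shopping.apk")  # hypothetical file name

def sideload(apk: Path) -> None:
    if not apk.is_file():
        raise FileNotFoundError(f"APK not found: {apk}")
    # List connected devices so you can confirm the phone is visible to adb.
    subprocess.run(["adb", "devices"], check=True)
    # "-r" reinstalls/updates the app if an older version is already present.
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)
    print("Install command sent; check the device screen for any prompts.")

if __name__ == "__main__":
    sideload(APK_PATH)
```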
-
Launch the app and sign in or create an account
-
After the installation is done, you can launch the app from your app drawer or home screen. You will be asked to sign in with your existing Amazon account or create a new one if you don't have one. You can also use your mobile number or email address to sign in or sign up. Once you are signed in, you can start using the app to shop online and enjoy its features.
-
Benefits of using Amazon India Online Shopping App APK
-
Enjoy a great shopping experience with a wide selection of products and categories
-
One of the main benefits of using the Amazon India Online Shopping App APK is that you can enjoy a great shopping experience with a wide selection of products and categories at your fingertips. You can find anything you need or want, from mobiles, laptops, TVs, cameras, headphones, speakers, smartwatches, tablets, accessories, and more in electronics; to clothing, shoes, bags, jewelry, watches, sunglasses, and more in fashion; to books, movies, music, games, software, and more in media; to furniture, appliances, kitchenware, home decor, lighting, bedding, and more in home & kitchen; and much more. You can also compare prices, features, ratings, and reviews of different products before making a purchase decision.
-
Get notified on the latest offers and deals
-
Another benefit of using the app is that you can get notified on the latest offers and deals on various products and categories. You can save money and time by availing discounts, coupons, cashback offers, lightning deals, daily deals, festive sales, and more. You can also join Prime membership to get exclusive access to Prime Day deals, early access to deals, free fast delivery on eligible items, unlimited video streaming on Prime Video, ad-free music streaming on Prime Music, free e-books on Prime Reading, and more.
-
Pay securely and conveniently with Amazon Pay, cash on delivery, or other options
-
The app also provides you with secure and convenient payment options for your online shopping. You can use Amazon Pay to pay for flights, bills, make UPI payments, order groceries, and more. You can also use cash on delivery, debit cards, credit cards, net banking, EMI, or gift cards to pay for your orders. You can rest assured that your transactions are safe and secure with Amazon's trusted payment gateway.
-
Watch entertaining videos for free on miniTV
-
The app also offers you a free entertainment service called miniTV. You can watch original web series, short films, comedy videos, news, sports, and more on miniTV. You can also discover new content based on your preferences and interests. You can access miniTV from the app's home screen or from the video tab.
-
FAQs about Amazon India Online Shopping App APK
-
Is Amazon India Online Shopping App APK safe to use?
-
Yes, Amazon India Online Shopping App APK is safe to use as long as you download it from a trusted source such as the Google Play Store or APKCombo. You should also check the permissions and reviews of the app before installing it. You should also avoid downloading any modded or hacked versions of the app as they may contain malware or viruses.
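One practical way to act on this advice is to compare the checksum of the file you downloaded against the SHA-256 hash published by the download site, if it provides one (many APK mirrors do). The Python sketch below is a minimal illustration of that check; the file name and expected hash are placeholders, not real values.

```python
# Minimal sketch: verify a downloaded APK against a published SHA-256 hash
# before installing it. The file name and expected hash below are placeholders.
import hashlib
from pathlib import Path

APK_PATH = Path("amazon-india-shopping.apk")  # placeholder file name
EXPECTED_SHA256 = "0" * 64                    # placeholder published hash

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual.lower() == EXPECTED_SHA256.lower():
        print("Checksum matches the published value; the download was not corrupted or altered.")
    else:
        print(f"Checksum mismatch!\n expected: {EXPECTED_SHA256}\n got:      {actual}")
        print("Do not install this file.")
```

Note that a matching hash only proves the file is the one the site published; it does not prove the site itself is trustworthy, so the advice above about sticking to reputable sources still applies.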
-
How can I update Amazon India Online Shopping App APK?
-
You can update the app by checking for updates on the Google Play Store or APKCombo. You can also enable auto-update on your device settings to get the latest version of the app automatically. Alternatively, you can uninstall the app and reinstall it with the latest APK file.
-
What is the difference between Amazon India Online Shopping App APK and Amazon Shopping APK?
-
Amazon India Online Shopping App APK is a regional version of the Amazon Shopping APK that is tailored for Indian customers. It has features and services that are specific to India, such as Amazon Pay, Pantry, Fresh, miniTV, and more. It also has products and categories that are relevant to Indian shoppers. Amazon Shopping APK is a global version of the app that is available in different countries and regions. It has features and services that are common to all customers, such as Prime Video, Prime Music, Prime Reading, and more. It also has products and categories that are available worldwide.
-
How can I contact customer service if I have any issues with the app?
-
If you have any issues with the app, you can contact customer service by tapping on the menu icon on the app and selecting Customer Service. You can also visit this link to get help online. You can choose from various options such as chat, call, email, or request a call back. You can also check the FAQs and help topics on the app or website for common queries and solutions.
-
How can I share my feedback or suggestions for the app?
-
If you want to share your feedback or suggestions for the app, you can tap on the menu icon on the app and select Your Account > Help & Feedback > Send Feedback. You can also rate and review the app on the Google Play Store or APKCombo. Your feedback is valuable and helps us improve our app and services.
-
Conclusion
-
In conclusion, Amazon India Online Shopping App APK is a great app for online shopping and paying across a wide selection of products and categories at great prices. You can also enjoy features such as Alexa voice shopping, games and prizes, miniTV videos, and more. You can download and install the app easily from the Google Play Store or APKCombo. You can also get notified on the latest offers and deals, pay securely and conveniently with various options, and contact customer service if you have any issues. So what are you waiting for? Download the app today and start shopping online with Amazon India.
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Alex Bobo - Orice furtuna ar veni (Instrumental) 2022.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Alex Bobo - Orice furtuna ar veni (Instrumental) 2022.md
deleted file mode 100644
index 45543b99af84f60b5b20adc1fb21c73de0873423..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Alex Bobo - Orice furtuna ar veni (Instrumental) 2022.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Download Alex Bobo - Orice Furtuna Ar Veni
-
If you are looking for a new song to add to your playlist, you might want to check out Alex Bobo - Orice Furtuna Ar Veni. This is a live session of a gospel song performed by Alex Bobo, a Romanian singer and songwriter. In this article, we will tell you more about who Alex Bobo is, what the song is about, and why you should download it. We will also show you two easy ways to download Alex Bobo - Orice Furtuna Ar Veni from YouTube or Spotify. So, let's get started!
-
Introduction
-
Who is Alex Bobo?
-
Alex Bobo is a young and talented artist from Romania who has been singing since he was a child. He started his musical career in 2018, when he released his first single, "Cand Domnul e la Carma Vietii". Since then, he has been producing and releasing more songs, mostly in the gospel genre. He is also known for collaborating with other artists, such as Florin Peste, Marius and Fernando din Barbulesti, and CryssBoyy. Alex Bobo has a unique voice and style that makes him stand out from other singers. He sings with passion, emotion, and faith, expressing his love for God and his gratitude for life.
What is the song about?
-
Alex Bobo - Orice Furtuna Ar Veni is a song about trusting God in times of trouble. The title translates to "Whatever storm may come", and it reflects the message of the song: no matter what difficulties or challenges we face in life, we can always rely on God's protection and guidance. The song also encourages us to praise God for his goodness and mercy, even when things seem hopeless or dark. The lyrics are based on biblical verses, such as Psalm 23, Psalm 91, and Isaiah 41:10. The song is sung in Romanian, but you can find the English translation online if you want to understand it better.
-
Why should you download it?
-
There are many reasons why you should download Alex Bobo - Orice Furtuna Ar Veni. Here are some of them:
-
-
It is a beautiful and uplifting song that can inspire you and strengthen your faith.
-
It is a live session that showcases Alex Bobo's amazing vocal skills and charisma.
-
It is a high-quality recording that sounds great on any device or speaker.
-
It is free and legal to download from YouTube or Spotify.
-
It is easy and fast to download with the methods we will show you below.
-
-
So, if you are ready to download Alex Bobo - Orice Furtuna Ar Veni, keep reading!
-
How to download Alex Bobo - Orice Furtuna Ar Veni
-
Option 1: YouTube
-
One of the easiest ways to download Alex Bobo - Orice Furtuna Ar Veni is to use YouTube. YouTube is the most popular video-sharing platform in the world, and it is where you can find the official video of the song. Here are the steps to download the song from YouTube:
-
Step 1: Go to the official video link
-
The first thing you need to do is to go to the official video link of Alex Bobo - Orice Furtuna Ar Veni on YouTube. You can do this by typing the song title in the YouTube search bar, or by clicking on this link: . This will take you to the video page, where you can watch and listen to the song.
-
Step 2: Copy the video URL
-
The next thing you need to do is to copy the video URL from the address bar of your browser. The video URL is the web address that starts with https://www.youtube.com/watch?v= followed by a series of letters and numbers. For example, the video URL of Alex Bobo - Orice Furtuna Ar Veni is https://www.youtube.com/watch?v=0aYyZdQcL3E. You can copy the URL by selecting it with your mouse or keyboard, and then pressing Ctrl+C on your keyboard, or right-clicking and choosing Copy.
-
Step 3: Paste the URL into a YouTube downloader website
-
The third thing you need to do is to paste the URL into a YouTube downloader website. A YouTube downloader website is a website that allows you to download videos from YouTube for free. There are many YouTube downloader websites available online, but we recommend using Y2mate.com, as it is one of the most reliable and easy-to-use ones. To use Y2mate.com, you need to go to its homepage: . Then, you need to paste the URL that you copied in step 2 into the search box on the website. You can do this by pressing Ctrl+V on your keyboard, or right-clicking and choosing Paste.
-
-
Step 4: Choose the format and quality of the download
-
The fourth thing you need to do is to choose the format and quality of the download. After you paste the URL into Y2mate.com, it will automatically analyze the video and show you different options for downloading it. You can choose between different formats, such as MP3, MP4, M4A, WEBM, etc. You can also choose between different qualities, such as 360p, 480p, 720p, 1080p, etc. For downloading Alex Bobo - Orice Furtuna Ar Veni as a song, we suggest choosing MP3 as the format and 128kbps as the quality. This will give you a good sound quality without taking up too much space on your device.
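For a rough sense of what that bitrate choice means in file size: an MP3's bitrate is bits of audio per second, so the size is roughly bitrate times duration divided by 8. The small Python sketch below works this out for a track of about five and a half minutes (the approximate length mentioned in the FAQ further down); the figures are back-of-the-envelope estimates, not exact file sizes.

```python
# Rough MP3 size estimate: bitrate (kbit/s) * duration (s) / 8 -> kilobytes.
duration_s = 5 * 60 + 30  # ~5:30, the approximate track length
for bitrate_kbps in (128, 192, 320):
    size_mb = bitrate_kbps * duration_s / 8 / 1000  # kbit/s -> kB/s -> MB
    print(f"{bitrate_kbps} kbps -> about {size_mb:.1f} MB")
```

At 128 kbps this comes out to roughly 5 MB, which is why that setting is a reasonable balance between sound quality and storage space.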
-
Step 5: Click on the download button and save the file
-
The fifth and final thing you need to do is to click on the download button and save the file. After you choose the format and quality of the download, you will see a green download button next to it. You need to click on this button to start downloading the file. Depending on your browser settings, you may be asked to choose a location and a name for saving the file on your device. You can choose any location and name that you want, but make sure that you remember them so that you can find the file later. Once you click on Save or OK, the file will be downloaded and saved on your device.
-
Option 2: Spotify
-
Another easy way to download Alex Bobo - Orice Furtuna Ar Veni is to use Spotify. Spotify is one of the most popular music streaming platforms in the world, and it is where you can find the song in high quality. Here are the steps to download the song from Spotify:
-
Step 1: Download and install Spotify on your device
-
The first thing you need to do is to download and install Spotify on your device. Spotify is available for different devices, such as Windows, Mac, Android, iOS, etc. You can download Spotify from its official website: , or from the app store of your device. After you download Spotify, you need to install it by following the instructions on the screen.
-
Step 2: Create an account or log in with your existing one
-
The next thing you need to do is to create an account or log in with your existing one. To use Spotify, you need to have an account that allows you to access its features and content. You can create a free account or a premium account, depending on your preferences and budget. A free account lets you listen to music with ads and some limitations, while a premium account lets you listen to music without ads and with more benefits, such as offline listening. You can create an account by clicking on the Sign Up button on the Spotify website or app, or by using your Facebook or Google account. You can log in with your existing account by clicking on the Log In button and entering your email and password.
-
Step 3: Search for Alex Bobo - Orice Furtuna Ar Veni in the app
-
The third thing you need to do is to search for Alex Bobo - Orice Furtuna Ar Veni in the app. To do this, you need to open the Spotify app on your device and tap on the Search icon at the bottom of the screen. Then, you need to type Alex Bobo - Orice Furtuna Ar Veni in the search bar and hit Enter. This will show you the results related to the song, such as the artist, the album, and the playlist. You need to tap on the song title to open it.
-
Step 4: Tap on the three dots icon and select download
-
The fourth thing you need to do is to tap on the three dots icon and select download. After you open the song, you will see a three dots icon at the top right corner of the screen. You need to tap on this icon to open a menu with different options, such as Share, Add to Playlist, Go to Artist, etc. You need to scroll down and find the Download option and tap on it. This will start downloading the song to your device.
-
Step 5: Enjoy the song offline anytime you want
-
The fifth and final thing you need to do is to enjoy the song offline anytime you want. After you download the song, you can listen to it without an internet connection or data usage. You can find the song in your Library section of the app, under Downloads. You can also add it to your favorite playlist or share it with your friends.
-
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to download Alex Bobo - Orice Furtuna Ar Veni, a live session of a gospel song performed by Alex Bobo, a Romanian singer and songwriter. We have also told you more about who Alex Bobo is, what the song is about, and why you should download it. We have given you two easy ways to download the song from YouTube or Spotify, with detailed steps and screenshots.
-
Call to action
-
We hope that you have enjoyed reading this article and that you have learned something new. If you are interested in downloading Alex Bobo - Orice Furtuna Ar Veni, we encourage you to try one of the methods we have suggested and let us know how it works for you. You can also leave us a comment below with your feedback or questions about the article or the song. Thank you for reading and happy listening!
-
FAQs
-
-
Q: Is Alex Bobo - Orice Furtuna Ar Veni available on other platforms besides YouTube and Spotify?
-
A: Yes, Alex Bobo - Orice Furtuna Ar Veni is also available on other platforms, such as Apple Music, Deezer, Amazon Music, etc. You can find it by searching for it on these platforms or by following this link: .
-
Q: How long is Alex Bobo - Orice Furtuna Ar Veni?
-
A: Alex Bobo - Orice Furtuna Ar Veni is about 5 minutes and 30 seconds long.
-
Q: Can I download Alex Bobo - Orice Furtuna Ar Veni without an account?
-
A: Yes, you can download Alex Bobo - Orice Furtuna Ar Veni without an account if you use YouTube or Y2mate.com. However, if you use Spotify, you need to have an account to download the song.
-
Q: Is it legal to download Alex Bobo - Orice Furtuna Ar Veni?
-
A: Yes, it is legal to download Alex Bobo - Orice Furtuna Ar Veni as long as you use it for personal and non-commercial purposes. However, you should respect the rights of the artist and the platform and not distribute or sell the song without permission.
-
Q: What are some other songs by Alex Bobo that I can download?
-
A: Some other songs by Alex Bobo that you can download are:
-
-
Cand Domnul e la Carma Vietii
-
Doamne, Tu Esti Tot Ce Am
-
Eu Te Iubesc
-
Isus, Tu Esti Lumina Mea
-
Nu Pot Sa Traiesc Fara Tine
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/All Skin Unlocked in Mobile Legends Bang Bang APK Download Now.md b/spaces/1phancelerku/anime-remove-background/All Skin Unlocked in Mobile Legends Bang Bang APK Download Now.md
deleted file mode 100644
index 02df4581ed799f5d5a64521df4776bf0e499b7e2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/All Skin Unlocked in Mobile Legends Bang Bang APK Download Now.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
Mobile Legends Bang Bang Unlock All Skin Apk: Everything You Need to Know
-
Mobile Legends: Bang Bang (MLBB) is one of the most popular and addictive multiplayer online battle arena (MOBA) games on mobile devices. It features a variety of heroes with different roles, skills, and styles that you can choose from and customize with different skins. Skins are cosmetic items that change the appearance of your heroes and make them look more cool, stylish, or unique.
-
Skins can also provide some benefits for your gameplay, such as enhancing your confidence, intimidating your enemies, or showing off your achievements. However, skins are not easy to get in MLBB. They usually cost diamonds, which are the premium currency of the game that you have to buy with real money. Some skins are also limited-time offers or exclusive rewards that you may miss out on if you don't act fast.
That's why some players resort to using an unlock all skin apk, which is a modified version of the MLBB app that claims to give you access to all the skins in the game for free. Sounds too good to be true, right? Well, it is. In this article, we will tell you everything you need to know about the unlock all skin apk, how to download and install it, and why you should avoid using it at all costs.
-
How to Download and Install the Unlock All Skin Apk
-
If you still want to try using the unlock all skin apk despite our warnings, here are the steps you need to follow:
-
-
Find a reliable source for downloading the apk file. This is easier said than done, as there are many fake or malicious websites that claim to offer the unlock all skin apk but actually contain malware, viruses, or spyware that can infect your device or steal your personal information. Be careful and do your research before clicking on any link or downloading any file.
-
Enable unknown sources on your Android device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Depending on your device model and Android version, you may need to tap on the lock screen and security tab or the install unknown apps switch. Then, you need to turn on the unknown sources switch or check the box next to it. You may see a warning message against enabling this option, but you can ignore it if you trust the source of the apk file .
-
Install the apk file on your device. Locate the apk file that you downloaded from your source and tap on it to start the installation process. You may need to grant some permissions for the app to access your device's storage, network, or other features. Follow the on-screen instructions until the installation is complete.
-
Verify if the apk works and what are the possible issues. Launch the MLBB app from your device and check if you can see all the skins in the game. You may need to restart the app or your device if it doesn't work at first. However, be aware that using the unlock all skin apk may cause some problems, such as lagging, crashing, or errors in the game. You may also face legal issues from Moonton, the developer of MLBB, for violating their terms of service and intellectual property rights. They may ban or suspend your account or take legal action against you for using unauthorized mods or hacks.
-
-
Conclusion
-
In conclusion, using the unlock all skin apk may seem like a tempting way to get all the skins in MLBB for free, but it is not worth the risk and hassle. You may end up damaging your device, compromising your security, or losing your account by using this apk. You may also ruin the fun and fairness of the game for yourself and other players by using an unfair advantage.
-
Instead of using the unlock all skin apk, we recommend that you get skins legally in MLBB by following these alternatives:
-
-
Participate in events. MLBB often hosts various events that reward players with free skins or vouchers that can be used to buy skins. Some examples of these events are the Valentine Box Event, the Surprise Box Event, and the Starlight Carnival Event . You can check the events tab in the game to see what events are currently available and how to join them.
-
Redeem codes. MLBB also occasionally releases codes that can be redeemed for free skins or other items in the game. These codes are usually given out through their official social media accounts, live streams, or collaborations with other platforms. You can follow their Facebook, Instagram, YouTube, or TikTok accounts to stay updated on the latest codes and how to redeem them.
-
Join Starlight membership. Starlight membership is a monthly subscription service that gives you access to exclusive skins and other benefits in MLBB. You can join Starlight membership by paying a certain amount of diamonds or real money every month. You can also get a free trial of Starlight membership by inviting friends to play MLBB or completing tasks in the game.
-
Complete tasks. MLBB also has various tasks that you can complete to earn rewards such as diamonds, tickets, or fragments. Diamonds are the premium currency that can be used to buy skins in the shop. Tickets are another currency that can be used to buy some skins in the shop or draw from lucky spin events. Fragments are items that can be exchanged for skins in the fragment shop. You can get these rewards by playing matches, logging in daily, achieving milestones, or joining clans.
-
Watch live streams. MLBB also has a live stream feature that allows you to watch other players play the game live. You can also interact with them through chat or gifts. Sometimes, live streamers may give away free skins or vouchers to their viewers as a way of showing appreciation or attracting more followers. You can watch live streams in the game by tapping on the live icon on the main screen.
-
-
By following these alternatives, you can get skins legally in MLBB without risking your device, account, or reputation. You can also enjoy the game more by supporting its development and respecting its rules.
-
FAQs
-
-
Is the unlock all skin apk safe to use?
-
No, it is not safe to use. It may contain malware, viruses, or spyware that can harm your device or steal your personal information. It may also violate the terms of service of MLBB and result in your account being banned or suspended.
-
How can I get skins for free in MLBB?
-
There are several ways to get skins for free in MLBB, such as participating in events, redeeming codes, joining Starlight membership, completing tasks, or watching live streams. You can also use diamonds, tickets, or fragments to buy skins in the shop.
-
What are the best skins in MLBB?
-
The best skins in MLBB depend on your personal preference and taste. However, some of the most popular and expensive skins are the Legend skins, which have unique effects, animations, and voice-overs. Some examples of Legend skins are Alucard's Obsidian Blade, Saber's Codename: Storm, Gord's Conqueror, and Miya's Modena Butterfly.
-
How can I update the unlock all skin apk?
-
You cannot update the unlock all skin apk through the Google Play Store. You have to find a new version of the apk file from a third-party source and install it manually. However, this is not recommended as it may expose you to more risks and problems.
-
Can I use the unlock all skin apk on iOS devices?
-
No, you cannot use the unlock all skin apk on iOS devices. The apk file is only compatible with Android devices. If you want to use skins on iOS devices, you have to buy them from the official MLBB app.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Crazy Taxi Classic APK - The Ultimate Racing Game for Android.md b/spaces/1phancelerku/anime-remove-background/Download Crazy Taxi Classic APK - The Ultimate Racing Game for Android.md
deleted file mode 100644
index 2311335fbfb8c550cc97b423a891bc2d09e4f307..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Crazy Taxi Classic APK - The Ultimate Racing Game for Android.md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-
Crazy Taxi Game Free Download for Android APK
-
Do you love driving fast and furious cars in a chaotic city? Do you want to experience the thrill of picking up and dropping off passengers in a limited time? Do you want to enjoy a classic arcade game on your Android device? If you answered yes to any of these questions, then you should try Crazy Taxi Game, one of the most popular and fun racing games ever made. In this article, we will tell you everything you need to know about Crazy Taxi Game, how to download it for free as an APK file, and how to play it on your Android device.
Crazy Taxi Game is a video game that was originally released by Sega in 1999 for arcade machines and later ported to various platforms, including Android. The game is set in a fictional city inspired by San Francisco, where you play as one of four taxi drivers who have to pick up and drop off customers as fast as possible. You can choose from three, five, or ten-minute gameplay modes, or play in the original arcade mode with unlimited time. You can also customize your car, driver, and music from a selection of rock songs by bands like The Offspring and Bad Religion.
-
The history and popularity of the game
-
Crazy Taxi Game was a huge hit when it was first released, thanks to its unique gameplay, colorful graphics, catchy soundtrack, and humorous voice acting. It received critical acclaim from reviewers and gamers alike, and won several awards, such as the Best Arcade Game of 1999 by IGN. It also spawned several sequels, spin-offs, and adaptations, such as Crazy Taxi 2, Crazy Taxi 3, Crazy Taxi City Rush, and even a live-action movie. The game has sold over five million copies worldwide, and has been downloaded over ten million times on Android alone.
-
How to Download Crazy Taxi Game for Android APK?
-
The requirements and compatibility of the game
-
To download and play Crazy Taxi Game on your Android device, you will need at least Android version 4.1 or higher, and about 250 MB of free storage space. The game is compatible with most Android devices, including smartphones and tablets. However, some older or low-end devices may experience performance issues or crashes. You can check the compatibility of your device on the Google Play Store page of the game.
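If you want to confirm these requirements on a connected device before side-loading anything, you can query them over adb from a computer. The sketch below is only an illustration under a few assumptions: adb is installed and on your PATH, USB debugging is enabled on the device, Android 4.1 corresponds to API level 16, and the storage figure is the roughly 250 MB mentioned above.

```python
# Illustrative sketch: check a connected device's Android API level and free
# space in /data over adb before side-loading. Assumes adb is on PATH and
# USB debugging is enabled on the device.
import subprocess

MIN_API_LEVEL = 16  # Android 4.1 (Jelly Bean)
MIN_FREE_MB = 250   # approximate space the game needs

def adb_out(*args: str) -> str:
    return subprocess.run(["adb", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

if __name__ == "__main__":
    api_level = int(adb_out("shell", "getprop", "ro.build.version.sdk"))
    print(f"API level: {api_level} (need >= {MIN_API_LEVEL})")

    # "df /data" output format varies between Android versions; on recent
    # releases the fourth column of the last line is the available space in KB.
    fields = adb_out("shell", "df", "/data").splitlines()[-1].split()
    if len(fields) > 3 and fields[3].isdigit():
        free_mb = int(fields[3]) // 1024
        print(f"Free space in /data: {free_mb} MB (need >= {MIN_FREE_MB})")
    else:
        print("Could not parse free space from df output on this device.")
```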
-
The steps to download and install the game from different sources
-
There are two main ways to download Crazy Taxi Game for Android APK: from the official Google Play Store or from a third-party website. Here are the steps for each method:
-
-
From the Google Play Store:
-
Open the Google Play Store app on your device and search for "Crazy Taxi Classic".
-
Select the game from the list of results and tap on "Install".
-
Wait for the download and installation process to complete.
-
Launch the game from your app drawer or home screen.
-
From a third-party website:
-
Open your web browser and search for "Crazy Taxi Game APK" on a search engine like Google or Bing.
-
Select a reputable and trustworthy website that offers the APK file of the game, such as APKPure or APKMirror.
-
Download the APK file to your device by tapping on the download button or link.
-
Before installing the APK file, you may need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Locate the APK file on your device using a file manager app and tap on it to install it.
-
Launch the game from your app drawer or home screen.
-
-
-
The advantages and disadvantages of downloading the game as an APK file
-
Downloading Crazy Taxi Game as an APK file has some pros and cons that you should be aware of before choosing this method. Here are some of them:
-
-
Advantages
Disadvantages
-
You can download the game even if it is not available in your region or country.
You may not get the latest updates and features of the game.
-
You can download the game without using the Google Play Store or having a Google account.
You may expose your device to malware or viruses from untrusted sources.
-
You can download the game for free without any ads or in-app purchases.
You may violate the terms and conditions of the game developer or publisher.
-
-
How to Play Crazy Taxi Game on Android?
-
The gameplay and controls of the game
-
Crazy Taxi Game is easy to play but hard to master. The gameplay is simple: you have to drive your taxi around the city and pick up customers who are waiting for you. You have to take them to their destinations as quickly as possible, while avoiding traffic, obstacles, and other hazards. You can earn extra money by performing stunts, such as jumps, drifts, and near misses. You can also earn tips by satisfying your customers' preferences, such as driving fast, slow, or crazy. The more money you make, the higher your score and rank will be.
-
The controls of the game are intuitive and responsive. You can use either touch or tilt controls to steer your taxi. You can also use buttons to accelerate, brake, reverse, and switch lanes. You can also use a horn button to honk at other vehicles or pedestrians. You can change the control settings from the options menu according to your preference.
-
The modes and challenges of the game
-
Crazy Taxi Game offers four different modes to choose from: Arcade, Original, Crazy Box, and Leaderboards. Here is a brief description of each mode:
-
-
Arcade: This is the classic mode that mimics the original arcade game. You have to pick up and drop off customers in a limited time. You can extend your time by reaching checkpoints or earning bonuses. You can choose from three difficulty levels: Easy, Normal, or Hard.
-
Original: This is a similar mode to Arcade, but with a different map and layout. You have to pick up and drop off customers in a limited time. You can extend your time by reaching checkpoints or earning bonuses. You can choose from three difficulty levels: Easy, Normal, or Hard.
-
Crazy Box: This is a mode that consists of 16 mini-games that test your skills and abilities. You have to complete various tasks and challenges, such as bowling, golfing, popping balloons, jumping ramps, etc. You can unlock new mini-games by completing previous ones.
-
Leaderboards: This is a mode that allows you to compete with other players around the world. You can see your rank and score on global and local leaderboards. You can also compare your stats and achievements with other players.
-
The tips and tricks to master the game
-
Crazy Taxi Game is a game that requires skill, strategy, and luck. Here are some tips and tricks to help you master the game and become a crazy taxi driver:
-
-
-
Learn the map and the shortcuts. The city is full of hidden paths, shortcuts, and ramps that can help you save time and avoid traffic. Explore the map and memorize the locations of the customers and their destinations. Use the arrow indicator to guide you to the nearest customer or destination.
-
Choose your driver and car wisely. Each driver and car has different attributes, such as speed, acceleration, handling, and weight. Some drivers and cars are better suited for certain modes or challenges than others. Experiment with different combinations and find the one that suits your style and preference.
-
Drive crazy, not recklessly. Driving crazy means driving fast, furious, and fun; driving recklessly means driving carelessly and dangerously. You want to drive crazy to earn more money and bonuses, but not so recklessly that you lose customers or crash your car. Balance speed with safety, and avoid collisions and accidents.
-
Use your horn and your brakes. Your horn is a useful tool to alert other vehicles or pedestrians of your presence. You can use it to make them move out of your way or to scare them for fun. Your brakes are also important to control your car and avoid crashes. You can use them to slow down, stop, reverse, or drift.
-
Have fun and enjoy the game. Crazy Taxi Game is a game that is meant to be fun and enjoyable. Don't take it too seriously or get frustrated if you fail or lose. Just relax and have a good time with the game. You can also play with your friends or family and share your scores and achievements.
-
-
Conclusion
-
A summary of the main points and a call to action
-
Crazy Taxi Game is a classic arcade game that you can download for free as an APK file on your Android device. The game lets you drive a taxi in a crazy city and pick up customers in a limited time. The game has four modes, Arcade, Original, Crazy Box, and Leaderboards, that offer different challenges and fun. The game also has amazing graphics, sound, and music that make it more enjoyable. If you are looking for a fun and exciting racing game on your Android device, you should definitely try Crazy Taxi Game today.
-
FAQs
-
Q1. Is Crazy Taxi Game free to play on Android?
-
A1. Yes, Crazy Taxi Game is free to play on Android devices. You can download it from the Google Play Store or from a third-party website as an APK file.
-
Q2. Is Crazy Taxi Game safe to download as an APK file?
-
A2. Yes, Crazy Taxi Game is safe to download as an APK file if you download it from a reputable and trustworthy website. However, you should always be careful when downloading apps from unknown sources and scan them for malware or viruses before installing them.
-
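One practical safeguard, if the download site publishes a checksum for the APK, is to verify the file's hash on a computer before sideloading it. Below is a minimal Python sketch; the file name and expected checksum are placeholders, not values from any real download page:

```python
import hashlib

APK_PATH = "crazy-taxi-classic.apk"   # placeholder: path to the downloaded APK
PUBLISHED_SHA256 = "0" * 64           # placeholder: checksum shown on the download page

sha256 = hashlib.sha256()
with open(APK_PATH, "rb") as f:
    # Hash the file in 1 MiB chunks so large APKs don't need to fit in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

if sha256.hexdigest() == PUBLISHED_SHA256:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not install this APK.")
```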
Q3. How can I update Crazy Taxi Game on Android?
-
A3. You can update Crazy Taxi Game on Android by following these steps:
-
-
If you downloaded the game from the Google Play Store, you can check for updates from the app page or from the My Apps & Games section.
-
If you downloaded the game as an APK file, you can check for updates from the website where you downloaded it or from the app settings.
-
-
Q4. What are some alternatives to Crazy Taxi Game on Android?
-
A4. Some alternatives to Crazy Taxi Game on Android are:
-
-
Taxi Sim 2020: A realistic taxi simulator game that lets you drive various taxis in different cities around the world.
-
Taxi Run: A casual taxi runner game that lets you drive a taxi on an endless road full of obstacles and traffic.
-
Taxi Driver 3D: A 3D taxi driving game that lets you drive a taxi in a city with realistic physics and graphics.
-
-
Q5. How can I contact the developers of Crazy Taxi Game?
-
A5. You can contact the developers of Crazy Taxi Game by sending them an email at help@sega.net or by visiting their website at https://www.sega.com/.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Install VLC on Your Windows RT 8.1 Tablet or PC.md b/spaces/1phancelerku/anime-remove-background/Download and Install VLC on Your Windows RT 8.1 Tablet or PC.md
deleted file mode 100644
index 259ea56f5471303e7ae6d52a0900302f16401566..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Install VLC on Your Windows RT 8.1 Tablet or PC.md
+++ /dev/null
@@ -1,170 +0,0 @@
-
-
How to Download and Install VLC Media Player on Windows RT 8.1 Devices
-
If you have a Windows RT 8.1 device, such as a Surface RT or Surface 2 tablet, you might be wondering how to play your favorite media files on it. Unfortunately, Windows RT 8.1 has some limitations that prevent you from running most desktop applications, including many popular media players. However, there is one media player that can run on Windows RT 8.1 devices and that is VLC Media Player.
-
VLC Media Player is a free and open source cross-platform multimedia player that can play most media files, as well as DVDs, audio CDs, VCDs, and various streaming protocols. It also has many features that make it a versatile and powerful tool for media playback and manipulation. In this article, we will show you how to download and install VLC Media Player on your Windows RT 8.1 device, and how to use it to play, convert, edit, and download media files.
What is Windows RT 8.1 and What are its Limitations?
-
Windows RT 8.1 is a version of Windows 8.1 that is optimized for thin and light devices that have extended battery life and are designed for life on the go. It runs on devices that use the ARM architecture, which is different from the x86 architecture that most desktop PCs use. Windows RT 8.1 only runs built-in apps or apps that you download from the Windows Store, and it automatically updates itself and protects itself from viruses and malware.
-
However, while Windows RT 8.1 inherits the appearance and functionality of Windows 8.1, it has some drawbacks that you should be aware of before buying a Windows RT 8.1 device or downloading VLC Media Player for it:
-
-
It can only execute software that is digitally signed by Microsoft, which means you cannot run any desktop applications or programs that are not available in the Windows Store.
-
It lacks certain developer-oriented features, such as the command prompt, PowerShell, Group Policy Editor, Registry Editor, etc.
-
It does not support some peripheral devices, such as printers, scanners, webcams, etc., unless they have drivers that are compatible with Windows RT 8.1.
-
It does not support some file formats, such as MKV, FLAC, OGG, etc., unless you have an app that can play them.
-
It does not support some network protocols, such as VPN, FTP, SSH, etc., unless you have an app that can use them.
-
-
If you want to learn more about Windows RT 8.1 and its limitations, you can check out this FAQ from Microsoft or this article from CNET.
-
What is VLC Media Player and What are its Features?
-
VLC Media Player is one of the most popular and widely used media players in the world. It was developed by VideoLAN, a non-profit organization that promotes free and open source software for multimedia. It was first released in 2001 and has since been updated regularly with new features and bug fixes.
-
VLC Media Player has many features that make it a great choice for media playback and manipulation on Windows RT 8.1 devices. Some of the features of VLC Media Player are:
-
-
It can play almost any media file format, including MKV, MP4, AVI, WMV, MOV, FLV, MP3, AAC, WMA, OGG, FLAC, etc., without the need for additional codecs or plugins.
-
It can play DVDs, audio CDs, VCDs, and various streaming protocols, such as HTTP, RTSP, RTP, UDP, etc., as well as online radio stations and podcasts.
-
It can convert videos to any format, such as MP4, AVI, WMV, FLV, etc., with various options for video and audio quality, resolution, bitrate, frame rate, etc.
-
It can edit videos by trimming, cropping, rotating, adding filters and effects, adjusting brightness, contrast, saturation, hue, etc.
-
It can remove audio from any video or extract audio from any video and save it as a separate file.
-
It can add subtitles to any video and synchronize them with the audio and video tracks.
-
It can use VLC as a video downloader for YouTube and other websites by copying and pasting the URL of the video into VLC.
-
It can stream media files from your computer to other devices on your network or over the internet using VLC's built-in server.
-
It can customize its interface with various skins and themes or create your own using VLC Skin Editor.
-
It can control VLC remotely using VLC Remote or other apps that support VLC's web interface.
-
-
If you want to learn more about VLC Media Player and its features, you can check out this official website or this user guide.
-
-
How to Download and Install VLC Media Player on Windows RT 8.1 Devices
-
There are two ways to download and install VLC Media Player on your Windows RT 8.1 device: from the Windows Store or from the official website. We will explain both methods below:
-
How to Download VLC Media Player for Windows RT 8.1 from the Windows Store
-
The easiest way to get VLC Media Player on your Windows RT 8.1 device is to download it from the Windows Store. Here are the steps to do so:
-
-
Open the Windows Store app on your device and search for "VLC" in the search box.
-
Select the app named "VLC for Windows Store" from the search results and tap on it.
-
Tap on the "Install" button and wait for the app to download and install on your device.
-
Once the installation is complete, you can launch VLC Media Player from the Start screen or the Apps list.
-
-
Note that this version of VLC Media Player is different from the desktop version that you can download from the official website. It has a different interface and some features may not be available or may work differently. However, it still supports most media file formats and has basic playback and conversion functions.
-
How to Download VLC Media Player for Windows RT 8.1 from the Official Website
-
If you want to get the desktop version of VLC Media Player on your Windows RT 8.1 device, you will need to download it from the official website and install it manually. However, this method requires some technical skills and involves some risks. You will need to enable a developer mode on your device and run a PowerShell script that will bypass the digital signature requirement of Windows RT 8.1. This may void your warranty or damage your device if done incorrectly. Therefore, we do not recommend this method unless you are confident in what you are doing and understand the consequences.
-
If you still want to proceed with this method, here are the steps to do so:
-
-
Download the latest version of VLC Media Player for Windows RT 8.1 from this link. Make sure you choose the ARM version that matches your device's architecture.
-
Extract the downloaded ZIP file to a folder on your device or a USB drive.
-
Open the Settings app on your device and go to "Update & security" > "For developers".
-
Select "Developer mode" and confirm by tapping "Yes". This will enable you to run unsigned apps on your device.
-
Open File Explorer on your device and go to "C:\Windows\System32". Find the file named "WindowsPowerShell\v1.0\powershell.exe" and copy it to another folder (e.g., "C:\Temp"). This will create a copy of the PowerShell executable that you can run without restrictions.
-
Open the folder where you copied the PowerShell executable and right-click on it. Select "Run as administrator". This will open a PowerShell window with elevated privileges.
-
In the PowerShell window, type the following command and press Enter: Set-ExecutionPolicy Unrestricted. This will allow you to run any script on your device.
-
Now, type the following command and press Enter: cd "C:\Users\YourUserName\Downloads\VLC-RT-3.0.16". Replace "YourUserName" with your actual user name and "VLC-RT-3.0.16" with the name of the folder where you extracted the VLC Media Player ZIP file. This will change the directory to the folder where the VLC Media Player files are located.
-
Finally, type the following command and press Enter: .\Add-AppDevPackage.ps1. This will run a script that will install VLC Media Player on your device.
-
Follow the instructions on the screen and wait for the installation to complete. You may need to enter your Microsoft account credentials and accept some terms and conditions.
-
Once the installation is complete, you can close the PowerShell window and launch VLC Media Player from the Start screen or the Apps list.
-
-
Note that this version of VLC Media Player is identical to the desktop version that you can download from the official website. It has the same interface and features as the desktop version, but it may not be as stable or compatible with Windows RT 8.1 devices. You may encounter some errors or crashes while using it, so use it at your own risk.
-
How to Use VLC Media Player on Windows RT 8.1 Devices
-
Now that you have downloaded and installed VLC Media Player on your Windows RT 8.1 device, you can use it to play, convert, edit, and download media files. Here are some tips on how to use VLC Media Player on Windows RT 8.1 devices:
-
How to Play Various Media Files with VLC Media Player
-
VLC Media Player can play almost any media file format that you throw at it, without the need for additional codecs or plugins. Here are some ways to play various media files with VLC Media Player:
-
-
To play a media file from your device, open VLC Media Player and tap on the "Browse" button on the main screen. Navigate to the folder where your media file is located and tap on it to play it.
-
To play a media file from a USB drive or an external hard drive, connect it to your device and open VLC Media Player. Tap on the "Browse" button on the main screen and select "This PC" from the sidebar. Find your USB drive or external hard drive under "Devices and drives" and tap on it to open it. Navigate to the folder where your media file is located and tap on it to play it.
-
To play a media file from a network location, such as a shared folder or a NAS server, open VLC Media Player and tap on the "Browse" button on the main screen. Select "Network" from the sidebar and tap on the "+" button at the bottom right corner of the screen. Enter the URL or IP address of your network location and tap on "OK". Navigate to the folder where your media file is located and tap on it to play it.
-
To play a media file from a streaming source, such as a website or an online radio station, open VLC Media Player and tap on the "Stream" button on the main screen. Enter the URL of your streaming source and tap on "OK". VLC Media Player will start playing the stream.
-
To play a DVD, audio CD, or VCD, insert it into your device's optical drive and open VLC Media Player. Tap on the "Disc" button on the main screen and select the type of disc you want to play. Tap on "Play" and VLC Media Player will start playing the disc.
-
-
How to Adjust Video and Audio Settings with VLC Media Player
-
VLC Media Player allows you to adjust various video and audio settings to enhance your media playback experience. Here are some ways to adjust video and audio settings with VLC Media Player:
-
-
To adjust the brightness, contrast, saturation, hue, and gamma of the video, tap on the "Video" button on the playback screen and select "Adjustments and Effects". Tap on the "Video Effects" tab and use the sliders to adjust the settings as you like.
-
To adjust the volume, balance, equalizer, compressor, and spatializer of the audio, tap on the "Audio" button on the playback screen and select "Adjustments and Effects". Tap on the "Audio Effects" tab and use the sliders and buttons to adjust the settings as you like.
-
To change the aspect ratio, crop ratio, zoom level, or orientation of the video, tap on the "Video" button on the playback screen and select "Crop". Use the buttons to select the option you want.
-
To change the audio track, subtitle track, or playback speed of the media file, tap on the "Tools" button on the playback screen and select the option you want.
-
-
How to Add Subtitles and Synchronize Them with VLC Media Player
-
VLC Media Player can display subtitles for any video file that has a separate subtitle file in SRT, SSA, ASS, or VTT format. You can also synchronize the subtitles with the audio and video tracks if they are out of sync. Here are some ways to add subtitles and synchronize them with VLC Media Player:
-
-
To add subtitles to a video file, make sure that the subtitle file has the same name as the video file and is in the same folder as the video file. For example, if your video file is named "movie.mp4", your subtitle file should be named "movie.srt". Then, open VLC Media Player and play the video file. The subtitles should appear automatically.
-
To synchronize subtitles with a video file, tap on the "Tools" button on the playback screen and select "Track Synchronization". Tap on the "Subtitles/Video" tab and use the buttons to adjust the subtitle delay. You can also use the keyboard shortcuts "G" and "H" to decrease or increase the subtitle delay by 50 milliseconds.
-
To change the font, size, color, or position of the subtitles, tap on the "Video" button on the playback screen and select "Subtitles". Use the buttons to select the option you want.
-
-
How to Convert Videos to Any Format with VLC Media Player
-
VLC Media Player can also convert videos to any format that you want, such as MP4, AVI, WMV, FLV, etc. You can also choose from various presets for different devices, such as iPhone, iPad, Android, etc. Here are some ways to convert videos to any format with VLC Media Player:
-
-
To convert a video file from your device, open VLC Media Player and tap on the "Browse" button on the main screen. Navigate to the folder where your video file is located and tap on it. Then, tap on the "Convert" button at the bottom right corner of the screen.
-
To convert a video file from a USB drive or an external hard drive, connect it to your device and open VLC Media Player. Tap on the "Browse" button on the main screen and select "This PC" from the sidebar. Find your USB drive or external hard drive under "Devices and drives" and tap on it to open it. Navigate to the folder where your video file is located and tap on it. Then, tap on the "Convert" button at the bottom right corner of the screen.
-
To convert a video file from a network location, such as a shared folder or a NAS server, open VLC Media Player and tap on the "Browse" button on the main screen. Select "Network" from the sidebar and tap on the "+" button at the bottom right corner of the screen. Enter the URL or IP address of your network location and tap on "OK". Navigate to the folder where your video file is located and tap on it. Then, tap on the "Convert" button at the bottom right corner of the screen.
-
To convert a video file from a streaming source, such as a website or an online radio station, open VLC Media Player and tap on the "Stream" button on the main screen. Enter the URL of your streaming source and tap on "OK". Then, tap on the "Convert" button at the bottom right corner of the screen.
-
-
After tapping on the "Convert" button, you will see a screen where you can choose the output format, destination, and options for your converted video file. Here are some tips on how to choose the output format, destination, and options for your converted video file:
-
-
To choose the output format, tap on the "Profile" drop-down menu and select the format that you want. You can also tap on the "Edit" button next to the menu to customize the video and audio codecs, bitrate, resolution, frame rate, etc.
-
To choose the destination, tap on the "Browse" button and navigate to the folder where you want to save your converted video file. You can also enter a name for your converted video file in the "File name" box.
-
To choose the options, tap on the "Options" button and select the options that you want. You can choose to start or stop the conversion at a specific time, add subtitles or metadata to your converted video file, or deinterlace or scale your converted video file.
-
-
Once you have chosen the output format, destination, and options for your converted video file, tap on the "Start" button and wait for VLC Media Player to convert your video file. You can see the progress of the conversion on the playback screen. You can also pause or cancel the conversion at any time by tapping on the "Pause" or "Stop" button.
-
Once the conversion is complete, you can find your converted video file in the destination folder that you chose. You can also play it with VLC Media Player or any other media player that supports the output format.
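The same kind of conversion can also be scripted against VLC's command-line interface on a desktop build. The sketch below is illustrative only: it assumes a `vlc` executable is available on the PATH (which the Windows Store app on Windows RT does not provide) and uses placeholder file names:

```python
import subprocess

src = "input.mkv"    # placeholder input file
dst = "output.mp4"   # placeholder output file

# Transcode to H.264 video and MP3 audio, mux into an MP4 container, then quit VLC.
subprocess.run(
    [
        "vlc", "-I", "dummy", src,
        "--sout",
        f"#transcode{{vcodec=h264,acodec=mpga,ab=128}}:std{{access=file,mux=mp4,dst={dst}}}",
        "vlc://quit",
    ],
    check=True,
)
```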
-
Conclusion
-
VLC Media Player is a powerful and versatile media player that can run on Windows RT 8.1 devices and play, convert, edit, and download media files. It can overcome some of the limitations of Windows RT 8.1 and enhance your media playback experience. However, it may not be as stable or compatible with Windows RT 8.1 devices as the Windows Store version of VLC Media Player. Therefore, you should use it with caution and at your own risk.
-
We hope that this article has helped you learn how to download and install VLC Media Player on your Windows RT 8.1 device, and how to use it to play, convert, edit, and download media files. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about VLC Media Player and Windows RT 8.1:
-
-
Q: Is VLC Media Player safe to use on Windows RT 8.1 devices?
-
A: VLC Media Player is safe to use on Windows RT 8.1 devices if you download it from the Windows Store or from the official website of VideoLAN. However, if you download it from the official website, you will need to enable a developer mode on your device and run a PowerShell script that will bypass the digital signature requirement of Windows RT 8.1. This may void your warranty or damage your device if done incorrectly. Therefore, we do not recommend this method unless you are confident in what you are doing and understand the consequences.
-
Q: How can I update VLC Media Player on Windows RT 8.1 devices?
-
A: If you download VLC Media Player from the Windows Store, you can update it automatically or manually through the Windows Store app. If you download VLC Media Player from the official website, you will need to download the latest version of VLC Media Player for Windows RT 8.1 from the same link and install it manually using the same method as before.
-
Q: How can I uninstall VLC Media Player on Windows RT 8.1 devices?
-
A: If you download VLC Media Player from the Windows Store, you can uninstall it by right-clicking on its tile on the Start screen or the Apps list and selecting "Uninstall". If you download VLC Media Player from the official website, you can uninstall it by opening File Explorer and deleting the folder where you extracted the VLC Media Player ZIP file.
-
Q: How can I get help or support for VLC Media Player on Windows RT 8.1 devices?
-
A: If you need help or support for VLC Media Player on Windows RT 8.1 devices, you can visit the official forum or the official wiki of VideoLAN. You can also contact them via email or social media.
-
Q: How can I donate or contribute to VLC Media Player and VideoLAN?
-
A: If you like VLC Media Player and want to support its development and maintenance, you can donate or contribute to VideoLAN in various ways. You can donate money via PayPal, credit card, bank transfer, or cryptocurrency. You can also donate hardware, software, or services via this form. You can also contribute code, documentation, translation, design, testing, or feedback via this page.
-
-
-
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/models.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/models.py
deleted file mode 100644
index 22e8017b6d70c8399b3be6a2555485634c03e72d..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/models.py
+++ /dev/null
@@ -1,414 +0,0 @@
-# Copyright (c) 2022 NVIDIA CORPORATION.
-# Licensed under the MIT license.
-
-# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
-# LICENSE is in incl_licenses directory.
-
-
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-import numpy as np
-from .activations import Snake,SnakeBeta
-from .alias_free_torch import *
-import os
-from omegaconf import OmegaConf
-
-LRELU_SLOPE = 0.1
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-class AMPBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5), activation=None):
- super(AMPBlock1, self).__init__()
- self.h = h
-
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- self.num_layers = len(self.convs1) + len(self.convs2) # total number of conv layers
-
- if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing
- self.activations = nn.ModuleList([
- Activation1d(
- activation=Snake(channels, alpha_logscale=h.snake_logscale))
- for _ in range(self.num_layers)
- ])
- elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing
- self.activations = nn.ModuleList([
- Activation1d(
- activation=SnakeBeta(channels, alpha_logscale=h.snake_logscale))
- for _ in range(self.num_layers)
- ])
- else:
- raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.")
-
- def forward(self, x):
- acts1, acts2 = self.activations[::2], self.activations[1::2]
- for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2):
- xt = a1(x)
- xt = c1(xt)
- xt = a2(xt)
- xt = c2(xt)
- x = xt + x
-
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class AMPBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3), activation=None):
- super(AMPBlock2, self).__init__()
- self.h = h
-
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- self.num_layers = len(self.convs) # total number of conv layers
-
- if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing
- self.activations = nn.ModuleList([
- Activation1d(
- activation=Snake(channels, alpha_logscale=h.snake_logscale))
- for _ in range(self.num_layers)
- ])
- elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing
- self.activations = nn.ModuleList([
- Activation1d(
- activation=SnakeBeta(channels, alpha_logscale=h.snake_logscale))
- for _ in range(self.num_layers)
- ])
- else:
- raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.")
-
- def forward(self, x):
- for c, a in zip (self.convs, self.activations):
- xt = a(x)
- xt = c(xt)
- x = xt + x
-
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class BigVGAN(torch.nn.Module):
- # this is our main BigVGAN model. Applies anti-aliased periodic activation for resblocks.
- def __init__(self, h):
- super(BigVGAN, self).__init__()
- self.h = h
-
- self.num_kernels = len(h.resblock_kernel_sizes)
- self.num_upsamples = len(h.upsample_rates)
-
- # pre conv
- self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3))
-
- # define which AMPBlock to use. BigVGAN uses AMPBlock1 as default
- resblock = AMPBlock1 if h.resblock == '1' else AMPBlock2
-
- # transposed conv-based upsamplers. does not apply anti-aliasing
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
- self.ups.append(nn.ModuleList([
- weight_norm(ConvTranspose1d(h.upsample_initial_channel // (2 ** i),
- h.upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2))
- ]))
-
- # residual blocks using anti-aliased multi-periodicity composition modules (AMP)
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h.upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
- self.resblocks.append(resblock(h, ch, k, d, activation=h.activation))
-
- # post conv
- if h.activation == "snake": # periodic nonlinearity with snake function and anti-aliasing
- activation_post = Snake(ch, alpha_logscale=h.snake_logscale)
- self.activation_post = Activation1d(activation=activation_post)
- elif h.activation == "snakebeta": # periodic nonlinearity with snakebeta function and anti-aliasing
- activation_post = SnakeBeta(ch, alpha_logscale=h.snake_logscale)
- self.activation_post = Activation1d(activation=activation_post)
- else:
- raise NotImplementedError("activation incorrectly specified. check the config file and look for 'activation'.")
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
-
- # weight initialization
- for i in range(len(self.ups)):
- self.ups[i].apply(init_weights)
- self.conv_post.apply(init_weights)
-
- def forward(self, x):
- # pre conv
- x = self.conv_pre(x)
-
- for i in range(self.num_upsamples):
- # upsampling
- for i_up in range(len(self.ups[i])):
- x = self.ups[i][i_up](x)
- # AMP blocks
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
-
- # post conv
- x = self.activation_post(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- for l_i in l:
- remove_weight_norm(l_i)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, h, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.d_mult = h.discriminator_channel_mult
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, int(32*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(int(32*self.d_mult), int(128*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(int(128*self.d_mult), int(512*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(int(512*self.d_mult), int(1024*self.d_mult), (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(int(1024*self.d_mult), int(1024*self.d_mult), (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(int(1024*self.d_mult), 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, h):
- super(MultiPeriodDiscriminator, self).__init__()
- self.mpd_reshapes = h.mpd_reshapes
- print("mpd_reshapes: {}".format(self.mpd_reshapes))
- discriminators = [DiscriminatorP(h, rs, use_spectral_norm=h.use_spectral_norm) for rs in self.mpd_reshapes]
- self.discriminators = nn.ModuleList(discriminators)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorR(nn.Module):
- def __init__(self, cfg, resolution):
- super().__init__()
-
- self.resolution = resolution
- assert len(self.resolution) == 3, \
- "MRD layer requires list with len=3, got {}".format(self.resolution)
- self.lrelu_slope = LRELU_SLOPE
-
- norm_f = weight_norm if cfg.use_spectral_norm == False else spectral_norm
- if hasattr(cfg, "mrd_use_spectral_norm"):
- print("INFO: overriding MRD use_spectral_norm as {}".format(cfg.mrd_use_spectral_norm))
- norm_f = weight_norm if cfg.mrd_use_spectral_norm == False else spectral_norm
- self.d_mult = cfg.discriminator_channel_mult
- if hasattr(cfg, "mrd_channel_mult"):
- print("INFO: overriding mrd channel multiplier as {}".format(cfg.mrd_channel_mult))
- self.d_mult = cfg.mrd_channel_mult
-
- self.convs = nn.ModuleList([
- norm_f(nn.Conv2d(1, int(32*self.d_mult), (3, 9), padding=(1, 4))),
- norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))),
- norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))),
- norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 9), stride=(1, 2), padding=(1, 4))),
- norm_f(nn.Conv2d(int(32*self.d_mult), int(32*self.d_mult), (3, 3), padding=(1, 1))),
- ])
- self.conv_post = norm_f(nn.Conv2d(int(32 * self.d_mult), 1, (3, 3), padding=(1, 1)))
-
- def forward(self, x):
- fmap = []
-
- x = self.spectrogram(x)
- x = x.unsqueeze(1)
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, self.lrelu_slope)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
- def spectrogram(self, x):
- n_fft, hop_length, win_length = self.resolution
- x = F.pad(x, (int((n_fft - hop_length) / 2), int((n_fft - hop_length) / 2)), mode='reflect')
- x = x.squeeze(1)
- x = torch.stft(x, n_fft=n_fft, hop_length=hop_length, win_length=win_length, center=False, return_complex=True)
- x = torch.view_as_real(x) # [B, F, TT, 2]
- mag = torch.norm(x, p=2, dim =-1) #[B, F, TT]
-
- return mag
-
-
-class MultiResolutionDiscriminator(nn.Module):
- def __init__(self, cfg, debug=False):
- super().__init__()
- self.resolutions = cfg.resolutions
- assert len(self.resolutions) == 3,\
- "MRD requires list of list with len=3, each element having a list with len=3. got {}".\
- format(self.resolutions)
- self.discriminators = nn.ModuleList(
- [DiscriminatorR(cfg, resolution) for resolution in self.resolutions]
- )
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
-
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(x=y)
- y_d_g, fmap_g = d(x=y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss*2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-
-class VocoderBigVGAN(object):
- def __init__(self, ckpt_vocoder,device='cuda'):
- vocoder_sd = torch.load(os.path.join(ckpt_vocoder,'best_netG.pt'), map_location='cpu')
-
- vocoder_args = OmegaConf.load(os.path.join(ckpt_vocoder,'args.yml'))
-
- self.generator = BigVGAN(vocoder_args)
- self.generator.load_state_dict(vocoder_sd['generator'])
- self.generator.eval()
-
- self.device = device
- self.generator.to(self.device)
-
- def vocode(self, spec):
- with torch.no_grad():
- if isinstance(spec,np.ndarray):
- spec = torch.from_numpy(spec).unsqueeze(0)
- spec = spec.to(dtype=torch.float32,device=self.device)
- return self.generator(spec).squeeze().cpu().numpy()
-
- def __call__(self, wav):
- return self.vocode(wav)
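For reference, `VocoderBigVGAN` above expects a checkpoint directory containing `best_netG.pt` and `args.yml`, and `vocode` takes a mel spectrogram shaped `[num_mels, frames]`. A minimal usage sketch; the checkpoint path, device, and mel dimensions are illustrative assumptions, and the import path is inferred from this file's location:

```python
import numpy as np

from vocoder.bigvgan.models import VocoderBigVGAN  # adjust to your package layout

ckpt_dir = "ckpts/bigvgan"                 # hypothetical dir with best_netG.pt and args.yml
vocoder = VocoderBigVGAN(ckpt_dir, device="cuda")

mel = np.random.randn(80, 400).astype(np.float32)  # [num_mels, frames]; 80 mels assumed
wav = vocoder.vocode(mel)                  # unsqueezes to a batch of 1, returns a 1-D waveform
print(wav.shape)
```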
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/sweep.py b/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/sweep.py
deleted file mode 100644
index 8699fa0a2fbfd7d1855b04c65d62eb31da03c3e5..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/sweep.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import sys
-from pathlib import Path
-
-import wandb
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-from train import parse_opt, train
-from utils.callbacks import Callbacks
-from utils.general import increment_path
-from utils.torch_utils import select_device
-
-
-def sweep():
- wandb.init()
- # Get hyp dict from sweep agent. Copy because train() modifies parameters which confused wandb.
- hyp_dict = vars(wandb.config).get("_items").copy()
-
- # Workaround: get necessary opt args
- opt = parse_opt(known=True)
- opt.batch_size = hyp_dict.get("batch_size")
- opt.save_dir = str(
- increment_path(
- Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve
- )
- )
- opt.epochs = hyp_dict.get("epochs")
- opt.nosave = True
- opt.data = hyp_dict.get("data")
- opt.weights = str(opt.weights)
- opt.cfg = str(opt.cfg)
- opt.data = str(opt.data)
- opt.hyp = str(opt.hyp)
- opt.project = str(opt.project)
- device = select_device(opt.device, batch_size=opt.batch_size)
-
- # train
- train(hyp_dict, opt, device, callbacks=Callbacks())
-
-
-if __name__ == "__main__":
- sweep()
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/5.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/5.js
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Adapter/T2I-Adapter/ldm/data/dataset_coco.py b/spaces/Adapter/T2I-Adapter/ldm/data/dataset_coco.py
deleted file mode 100644
index 0b4aa4facb12be8534522c9240ca6e63ce4a68b5..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/data/dataset_coco.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import json
-import cv2
-import os
-from basicsr.utils import img2tensor
-
-
-class dataset_coco_mask_color():
- def __init__(self, path_json, root_path_im, root_path_mask, image_size):
- super(dataset_coco_mask_color, self).__init__()
- with open(path_json, 'r', encoding='utf-8') as fp:
- data = json.load(fp)
- data = data['annotations']
- self.files = []
- self.root_path_im = root_path_im
- self.root_path_mask = root_path_mask
- for file in data:
- name = "%012d.png" % file['image_id']
- self.files.append({'name': name, 'sentence': file['caption']})
-
- def __getitem__(self, idx):
- file = self.files[idx]
- name = file['name']
- # print(os.path.join(self.root_path_im, name))
- im = cv2.imread(os.path.join(self.root_path_im, name.replace('.png', '.jpg')))
- im = cv2.resize(im, (512, 512))
- im = img2tensor(im, bgr2rgb=True, float32=True) / 255.
-
- mask = cv2.imread(os.path.join(self.root_path_mask, name)) # [:,:,0]
- mask = cv2.resize(mask, (512, 512))
- mask = img2tensor(mask, bgr2rgb=True, float32=True) / 255. # [0].unsqueeze(0)#/255.
-
- sentence = file['sentence']
- return {'im': im, 'mask': mask, 'sentence': sentence}
-
- def __len__(self):
- return len(self.files)
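Because the class implements `__getitem__` and `__len__`, it can be wrapped directly in a PyTorch `DataLoader`. A minimal usage sketch with hypothetical paths (the import path mirrors this file's location); note that `image_size` is accepted by the constructor but the resize above is hard-coded to 512x512:

```python
from torch.utils.data import DataLoader

from ldm.data.dataset_coco import dataset_coco_mask_color  # adjust to your package layout

# Hypothetical local COCO caption/mask layout; point these at your own copy.
dataset = dataset_coco_mask_color(
    path_json="annotations/captions_train2017.json",
    root_path_im="train2017",
    root_path_mask="train2017_masks",
    image_size=512,
)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)

batch = next(iter(loader))
print(batch["im"].shape, batch["mask"].shape, len(batch["sentence"]))
```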
diff --git a/spaces/Ali36Ahmad/MagicPrompt-Stable-Diffusion/README.md b/spaces/Ali36Ahmad/MagicPrompt-Stable-Diffusion/README.md
deleted file mode 100644
index 98b00b0487e2ab609b0b29eb82c55d9215ab3406..0000000000000000000000000000000000000000
--- a/spaces/Ali36Ahmad/MagicPrompt-Stable-Diffusion/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: MagicPrompt Stable Diffusion
-emoji: 😻
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: Gustavosta/MagicPrompt-Stable-Diffusion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/buffer.cpp b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/buffer.cpp
deleted file mode 100644
index 0ac0fa7bc3ced0447ba4caa359355dd4252670b3..0000000000000000000000000000000000000000
--- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/buffer.cpp
+++ /dev/null
@@ -1,87 +0,0 @@
-#include "libipc/buffer.h"
-#include "libipc/utility/pimpl.h"
-
-#include <cstring>
-
-namespace ipc {
-
-bool operator==(buffer const & b1, buffer const & b2) {
- return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
-}
-
-bool operator!=(buffer const & b1, buffer const & b2) {
- return !(b1 == b2);
-}
-
-class buffer::buffer_ : public pimpl<buffer_> {
-public:
- void* p_;
- std::size_t s_;
- void* a_;
- buffer::destructor_t d_;
-
- buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
- : p_(p), s_(s), a_(a), d_(d) {
- }
-
- ~buffer_() {
- if (d_ == nullptr) return;
- d_((a_ == nullptr) ? p_ : a_, s_);
- }
-};
-
-buffer::buffer()
- : buffer(nullptr, 0, nullptr, nullptr) {
-}
-
-buffer::buffer(void* p, std::size_t s, destructor_t d)
- : p_(p_->make(p, s, d, nullptr)) {
-}
-
-buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
- : p_(p_->make(p, s, d, additional)) {
-}
-
-buffer::buffer(void* p, std::size_t s)
- : buffer(p, s, nullptr) {
-}
-
-buffer::buffer(char const & c)
- : buffer(const_cast<char*>(&c), 1) {
-}
-
-buffer::buffer(buffer&& rhs)
- : buffer() {
- swap(rhs);
-}
-
-buffer::~buffer() {
- p_->clear();
-}
-
-void buffer::swap(buffer& rhs) {
- std::swap(p_, rhs.p_);
-}
-
-buffer& buffer::operator=(buffer rhs) {
- swap(rhs);
- return *this;
-}
-
-bool buffer::empty() const noexcept {
- return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
-}
-
-void* buffer::data() noexcept {
- return impl(p_)->p_;
-}
-
-void const * buffer::data() const noexcept {
- return impl(p_)->p_;
-}
-
-std::size_t buffer::size() const noexcept {
- return impl(p_)->s_;
-}
-
-} // namespace ipc
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/configs/global_config.py b/spaces/Amrrs/DragGan-Inversion/PTI/configs/global_config.py
deleted file mode 100644
index bf3a20e61b0baf5e85377570cdf0f235bade21bd..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/configs/global_config.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Device
-cuda_visible_devices = '0'
-device = 'cuda:0'
-
-# Logs
-training_step = 1
-image_rec_result_log_snapshot = 100
-pivotal_training_steps = 0
-model_snapshot_interval = 400
-
-# Run name to be updated during PTI
-run_name = ''
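These module-level values act as a tiny global configuration: the rest of the PTI code reads them at runtime, and `run_name` is overwritten once a run starts. A short sketch of how such a module is typically consumed; the import path is inferred from the file location above and the run name is made up:

```python
import torch

from PTI.configs import global_config  # import path inferred from this file's location

global_config.run_name = "pti_demo_run"        # hypothetical run name set before training
device = torch.device(global_config.device)    # 'cuda:0' from the config
print(f"run={global_config.run_name}, device={device}, "
      f"snapshot_every={global_config.model_snapshot_interval}")
```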
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix_xl.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix_xl.py
deleted file mode 100644
index acbcd819b14b739a89d1d03550af0042cf6d698c..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/instruct_pix2pix/train_instruct_pix2pix_xl.py
+++ /dev/null
@@ -1,1205 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2023 Harutatsu Akiyama and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import logging
-import math
-import os
-import shutil
-import warnings
-from pathlib import Path
-from urllib.parse import urlparse
-
-import accelerate
-import datasets
-import numpy as np
-import PIL
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint
-import transformers
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import ProjectConfiguration, set_seed
-from datasets import load_dataset
-from huggingface_hub import create_repo, upload_folder
-from packaging import version
-from PIL import Image
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import AutoTokenizer, PretrainedConfig
-
-import diffusers
-from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
-from diffusers.optimization import get_scheduler
-from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl_instruct_pix2pix import (
- StableDiffusionXLInstructPix2PixPipeline,
-)
-from diffusers.training_utils import EMAModel
-from diffusers.utils import check_min_version, deprecate, is_wandb_available, load_image
-from diffusers.utils.import_utils import is_xformers_available
-
-
-# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.19.0")
-
-logger = get_logger(__name__, log_level="INFO")
-
-DATASET_NAME_MAPPING = {
- "fusing/instructpix2pix-1000-samples": ("file_name", "edited_image", "edit_prompt"),
-}
-WANDB_TABLE_COL_NAMES = ["file_name", "edited_image", "edit_prompt"]
-
-
-def import_model_class_from_model_name_or_path(
- pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder"
-):
- text_encoder_config = PretrainedConfig.from_pretrained(
- pretrained_model_name_or_path, subfolder=subfolder, revision=revision
- )
- model_class = text_encoder_config.architectures[0]
-
- if model_class == "CLIPTextModel":
- from transformers import CLIPTextModel
-
- return CLIPTextModel
- elif model_class == "CLIPTextModelWithProjection":
- from transformers import CLIPTextModelWithProjection
-
- return CLIPTextModelWithProjection
- else:
- raise ValueError(f"{model_class} is not supported.")
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Script to train Stable Diffusion XL for InstructPix2Pix.")
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--pretrained_vae_model_name_or_path",
- type=str,
- default=None,
- help="Path to an improved VAE to stabilize training. For more details check out: https://github.com/huggingface/diffusers/pull/4038.",
- )
- parser.add_argument(
- "--revision",
- type=str,
- default=None,
- required=False,
- help="Revision of pretrained model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--dataset_name",
- type=str,
- default=None,
- help=(
- "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
- " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
- " or to a folder containing files that 🤗 Datasets can understand."
- ),
- )
- parser.add_argument(
- "--dataset_config_name",
- type=str,
- default=None,
- help="The config of the Dataset, leave as None if there's only one config.",
- )
- parser.add_argument(
- "--train_data_dir",
- type=str,
- default=None,
- help=(
- "A folder containing the training data. Folder contents must follow the structure described in"
- " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
- " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
- ),
- )
- parser.add_argument(
- "--original_image_column",
- type=str,
- default="input_image",
- help="The column of the dataset containing the original image on which edits where made.",
- )
- parser.add_argument(
- "--edited_image_column",
- type=str,
- default="edited_image",
- help="The column of the dataset containing the edited image.",
- )
- parser.add_argument(
- "--edit_prompt_column",
- type=str,
- default="edit_prompt",
- help="The column of the dataset containing the edit instruction.",
- )
- parser.add_argument(
- "--val_image_url_or_path",
- type=str,
- default=None,
- help="URL to the original image that you would like to edit (used during inference for debugging purposes).",
- )
- parser.add_argument(
- "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference."
- )
- parser.add_argument(
- "--num_validation_images",
- type=int,
- default=4,
- help="Number of images that should be generated during validation with `validation_prompt`.",
- )
- parser.add_argument(
- "--validation_steps",
- type=int,
- default=100,
- help=(
- "Run fine-tuning validation every X steps. The validation process consists of running the prompt"
- " `args.validation_prompt` multiple times: `args.num_validation_images`."
- ),
- )
- parser.add_argument(
- "--max_train_samples",
- type=int,
- default=None,
- help=(
- "For debugging purposes or quicker training, truncate the number of training examples to this "
- "value if set."
- ),
- )
- parser.add_argument(
- "--output_dir",
- type=str,
- default="instruct-pix2pix-model",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument(
- "--cache_dir",
- type=str,
- default=None,
- help="The directory where the downloaded models and datasets will be stored.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=256,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this resolution."
- ),
- )
- parser.add_argument(
- "--crops_coords_top_left_h",
- type=int,
- default=0,
- help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."),
- )
- parser.add_argument(
- "--crops_coords_top_left_w",
- type=int,
- default=0,
- help=("Coordinate for (the height) to be included in the crop coordinate embeddings needed by SDXL UNet."),
- )
- parser.add_argument(
- "--center_crop",
- default=False,
- action="store_true",
- help=(
- "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
- " cropped. The images will be resized to the resolution first before cropping."
- ),
- )
- parser.add_argument(
- "--random_flip",
- action="store_true",
- help="whether to randomly flip images horizontally",
- )
- parser.add_argument(
- "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument("--num_train_epochs", type=int, default=100)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=1e-4,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--conditioning_dropout_prob",
- type=float,
- default=None,
- help="Conditioning dropout probability. Drops out the conditionings (image and edit prompt) used in training InstructPix2Pix. See section 3.2.1 in the paper: https://arxiv.org/abs/2211.09800.",
- )
- parser.add_argument(
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
- )
- parser.add_argument(
- "--allow_tf32",
- action="store_true",
- help=(
- "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
- " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
- ),
- )
- parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.")
- parser.add_argument(
- "--non_ema_revision",
- type=str,
- default=None,
- required=False,
- help=(
- "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or"
- " remote repository specified with --pretrained_model_name_or_path."
- ),
- )
- parser.add_argument(
- "--dataloader_num_workers",
- type=int,
- default=0,
- help=(
- "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
- ),
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default=None,
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
- " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
- " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
- ),
- )
- parser.add_argument(
- "--report_to",
- type=str,
- default="tensorboard",
- help=(
- 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
- ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
- ),
- )
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
- parser.add_argument(
- "--checkpointing_steps",
- type=int,
- default=500,
- help=(
- "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
- " training using `--resume_from_checkpoint`."
- ),
- )
- parser.add_argument(
- "--checkpoints_total_limit",
- type=int,
- default=None,
- help=("Max number of checkpoints to store."),
- )
- parser.add_argument(
- "--resume_from_checkpoint",
- type=str,
- default=None,
- help=(
- "Whether training should be resumed from a previous checkpoint. Use a path saved by"
- ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
- ),
- )
- parser.add_argument(
- "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
- )
-
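-    # Example launch (hypothetical script name and values; adjust to your setup):
-    #   accelerate launch train_instruct_pix2pix_sdxl.py \
-    #     --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
-    #     --dataset_name=<your-instruct-pix2pix-dataset> --resolution=256 \
-    #     --train_batch_size=4 --gradient_accumulation_steps=4 --learning_rate=5e-5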
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- # Sanity checks
- if args.dataset_name is None and args.train_data_dir is None:
- raise ValueError("Need either a dataset name or a training folder.")
-
- # default to using the same revision for the non-ema model if not specified
- if args.non_ema_revision is None:
- args.non_ema_revision = args.revision
-
- return args
-
-
-def convert_to_np(image, resolution):
- if isinstance(image, str):
- image = PIL.Image.open(image)
- image = image.convert("RGB").resize((resolution, resolution))
- return np.array(image).transpose(2, 0, 1)
-
-
-def main():
- args = parse_args()
-
- if args.non_ema_revision is not None:
- deprecate(
- "non_ema_revision!=None",
- "0.15.0",
- message=(
- "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to"
- " use `--variant=non_ema` instead."
- ),
- )
- logging_dir = os.path.join(args.output_dir, args.logging_dir)
- accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with=args.report_to,
- project_config=accelerator_project_config,
- )
-
-    generator = torch.Generator(device=accelerator.device)
-    if args.seed is not None:
-        generator.manual_seed(args.seed)
-
- if args.report_to == "wandb":
- if not is_wandb_available():
- raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
- import wandb
-
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- datasets.utils.logging.set_verbosity_warning()
- transformers.utils.logging.set_verbosity_warning()
- diffusers.utils.logging.set_verbosity_info()
- else:
- datasets.utils.logging.set_verbosity_error()
- transformers.utils.logging.set_verbosity_error()
- diffusers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if args.seed is not None:
- set_seed(args.seed)
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- if args.push_to_hub:
- repo_id = create_repo(
- repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
- ).repo_id
-
- vae_path = (
- args.pretrained_model_name_or_path
- if args.pretrained_vae_model_name_or_path is None
- else args.pretrained_vae_model_name_or_path
- )
- vae = AutoencoderKL.from_pretrained(
- vae_path,
- subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
- revision=args.revision,
- )
- unet = UNet2DConditionModel.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
- )
-
- # InstructPix2Pix uses an additional image for conditioning. To accommodate that,
- # it uses 8 channels (instead of 4) in the first (conv) layer of the UNet. This UNet is
- # then fine-tuned on the custom InstructPix2Pix dataset. This modified UNet is initialized
- # from the pre-trained checkpoints. For the extra channels added to the first layer, they are
- # initialized to zero.
- logger.info("Initializing the XL InstructPix2Pix UNet from the pretrained UNet.")
- in_channels = 8
- out_channels = unet.conv_in.out_channels
- unet.register_to_config(in_channels=in_channels)
-
- with torch.no_grad():
- new_conv_in = nn.Conv2d(
- in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
- )
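-        # Zero-initialize the new conv, then copy the pretrained weights into the first 4 input
-        # channels; the 4 extra image-conditioning channels therefore start as a no-op.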
- new_conv_in.weight.zero_()
- new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
- unet.conv_in = new_conv_in
-
- # Create EMA for the unet.
- if args.use_ema:
- ema_unet = EMAModel(unet.parameters(), model_cls=UNet2DConditionModel, model_config=unet.config)
-
- if args.enable_xformers_memory_efficient_attention:
- if is_xformers_available():
- import xformers
-
- xformers_version = version.parse(xformers.__version__)
- if xformers_version == version.parse("0.0.16"):
-                logger.warning(
- "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
- )
- unet.enable_xformers_memory_efficient_attention()
- else:
- raise ValueError("xformers is not available. Make sure it is installed correctly")
-
- # `accelerate` 0.16.0 will have better support for customized saving
- if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
- # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
- def save_model_hook(models, weights, output_dir):
- if args.use_ema:
- ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema"))
-
- for i, model in enumerate(models):
- model.save_pretrained(os.path.join(output_dir, "unet"))
-
- # make sure to pop weight so that corresponding model is not saved again
- weights.pop()
-
- def load_model_hook(models, input_dir):
- if args.use_ema:
- load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel)
- ema_unet.load_state_dict(load_model.state_dict())
- ema_unet.to(accelerator.device)
- del load_model
-
- for i in range(len(models)):
- # pop models so that they are not loaded again
- model = models.pop()
-
- # load diffusers style into model
- load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
- model.register_to_config(**load_model.config)
-
- model.load_state_dict(load_model.state_dict())
- del load_model
-
- accelerator.register_save_state_pre_hook(save_model_hook)
- accelerator.register_load_state_pre_hook(load_model_hook)
-
- if args.gradient_checkpointing:
- unet.enable_gradient_checkpointing()
-
- # Enable TF32 for faster training on Ampere GPUs,
- # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
- if args.allow_tf32:
- torch.backends.cuda.matmul.allow_tf32 = True
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Initialize the optimizer
- if args.use_8bit_adam:
- try:
- import bitsandbytes as bnb
- except ImportError:
- raise ImportError(
- "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
- )
-
- optimizer_cls = bnb.optim.AdamW8bit
- else:
- optimizer_cls = torch.optim.AdamW
-
- optimizer = optimizer_cls(
- unet.parameters(),
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- # Get the datasets: you can either provide your own training and evaluation files (see below)
- # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
-
- # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
- if args.dataset_name is not None:
- # Downloading and loading a dataset from the hub.
- dataset = load_dataset(
- args.dataset_name,
- args.dataset_config_name,
- cache_dir=args.cache_dir,
- )
- else:
- data_files = {}
- if args.train_data_dir is not None:
- data_files["train"] = os.path.join(args.train_data_dir, "**")
- dataset = load_dataset(
- "imagefolder",
- data_files=data_files,
- cache_dir=args.cache_dir,
- )
- # See more about loading custom images at
- # https://huggingface.co/docs/datasets/main/en/image_load#imagefolder
-
- # Preprocessing the datasets.
- # We need to tokenize inputs and targets.
- column_names = dataset["train"].column_names
-
-    # Get the column names for input/target.
- dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
- if args.original_image_column is None:
- original_image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
- else:
- original_image_column = args.original_image_column
- if original_image_column not in column_names:
- raise ValueError(
- f"--original_image_column' value '{args.original_image_column}' needs to be one of: {', '.join(column_names)}"
- )
- if args.edit_prompt_column is None:
- edit_prompt_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
- else:
- edit_prompt_column = args.edit_prompt_column
- if edit_prompt_column not in column_names:
- raise ValueError(
- f"--edit_prompt_column' value '{args.edit_prompt_column}' needs to be one of: {', '.join(column_names)}"
- )
- if args.edited_image_column is None:
- edited_image_column = dataset_columns[2] if dataset_columns is not None else column_names[2]
- else:
- edited_image_column = args.edited_image_column
- if edited_image_column not in column_names:
- raise ValueError(
- f"--edited_image_column' value '{args.edited_image_column}' needs to be one of: {', '.join(column_names)}"
- )
-
- # For mixed precision training we cast the text_encoder and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- weight_dtype = torch.float32
- if accelerator.mixed_precision == "fp16":
- weight_dtype = torch.float16
- warnings.warn(f"weight_dtype {weight_dtype} may cause nan during vae encoding", UserWarning)
-
- elif accelerator.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
- warnings.warn(f"weight_dtype {weight_dtype} may cause nan during vae encoding", UserWarning)
-
- # Preprocessing the datasets.
- # We need to tokenize input captions and transform the images.
- def tokenize_captions(captions, tokenizer):
- inputs = tokenizer(
- captions,
- max_length=tokenizer.model_max_length,
- padding="max_length",
- truncation=True,
- return_tensors="pt",
- )
- return inputs.input_ids
-
- # Preprocessing the datasets.
- train_transforms = transforms.Compose(
- [
- transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
- transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
- ]
- )
-
- def preprocess_images(examples):
- original_images = np.concatenate(
- [convert_to_np(image, args.resolution) for image in examples[original_image_column]]
- )
- edited_images = np.concatenate(
- [convert_to_np(image, args.resolution) for image in examples[edited_image_column]]
- )
- # We need to ensure that the original and the edited images undergo the same
- # augmentation transforms.
- images = np.concatenate([original_images, edited_images])
- images = torch.tensor(images)
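-        # Scale pixel values from [0, 255] to [-1, 1], the range expected by the VAE.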
- images = 2 * (images / 255) - 1
- return train_transforms(images)
-
- # Load scheduler, tokenizer and models.
- tokenizer_1 = AutoTokenizer.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False
- )
- tokenizer_2 = AutoTokenizer.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False
- )
- text_encoder_cls_1 = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision)
- text_encoder_cls_2 = import_model_class_from_model_name_or_path(
- args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
- )
-
- # Load scheduler and models
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
- text_encoder_1 = text_encoder_cls_1.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
- )
- text_encoder_2 = text_encoder_cls_2.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision
- )
-
-    # We always pre-compute the additional condition embeddings needed by the SDXL
-    # UNet, since the model is already large and uses two text encoders.
- text_encoder_1.to(accelerator.device, dtype=weight_dtype)
- text_encoder_2.to(accelerator.device, dtype=weight_dtype)
- tokenizers = [tokenizer_1, tokenizer_2]
- text_encoders = [text_encoder_1, text_encoder_2]
-
- # Freeze vae and text_encoders
- vae.requires_grad_(False)
- text_encoder_1.requires_grad_(False)
- text_encoder_2.requires_grad_(False)
-
- # Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt
- def encode_prompt(text_encoders, tokenizers, prompt):
- prompt_embeds_list = []
-
- for tokenizer, text_encoder in zip(tokenizers, text_encoders):
- text_inputs = tokenizer(
- prompt,
- padding="max_length",
- max_length=tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = text_encoder(
- text_input_ids.to(text_encoder.device),
- output_hidden_states=True,
- )
-
-            # We are always interested only in the pooled output of the final text encoder
- pooled_prompt_embeds = prompt_embeds[0]
- prompt_embeds = prompt_embeds.hidden_states[-2]
- bs_embed, seq_len, _ = prompt_embeds.shape
- prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1)
- prompt_embeds_list.append(prompt_embeds)
-
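-        # Concatenate along the feature dimension to obtain the joint SDXL text embedding
-        # from both text encoders.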
- prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
- pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1)
- return prompt_embeds, pooled_prompt_embeds
-
- # Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt
- def encode_prompts(text_encoders, tokenizers, prompts):
- prompt_embeds_all = []
- pooled_prompt_embeds_all = []
-
- for prompt in prompts:
- prompt_embeds, pooled_prompt_embeds = encode_prompt(text_encoders, tokenizers, prompt)
- prompt_embeds_all.append(prompt_embeds)
- pooled_prompt_embeds_all.append(pooled_prompt_embeds)
-
- return torch.stack(prompt_embeds_all), torch.stack(pooled_prompt_embeds_all)
-
- # Adapted from examples.dreambooth.train_dreambooth_lora_sdxl
- # Here, we compute not just the text embeddings but also the additional embeddings
- # needed for the SD XL UNet to operate.
- def compute_embeddings_for_prompts(prompts, text_encoders, tokenizers):
- with torch.no_grad():
- prompt_embeds_all, pooled_prompt_embeds_all = encode_prompts(text_encoders, tokenizers, prompts)
- add_text_embeds_all = pooled_prompt_embeds_all
-
- prompt_embeds_all = prompt_embeds_all.to(accelerator.device)
- add_text_embeds_all = add_text_embeds_all.to(accelerator.device)
- return prompt_embeds_all, add_text_embeds_all
-
- # Get null conditioning
- def compute_null_conditioning():
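-        # Encode the empty prompt with both text encoders; this embedding replaces the real text
-        # conditioning whenever it is dropped during training (classifier-free guidance).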
- null_conditioning_list = []
- for a_tokenizer, a_text_encoder in zip(tokenizers, text_encoders):
- null_conditioning_list.append(
- a_text_encoder(
- tokenize_captions([""], tokenizer=a_tokenizer).to(accelerator.device),
- output_hidden_states=True,
- ).hidden_states[-2]
- )
- return torch.concat(null_conditioning_list, dim=-1)
-
- null_conditioning = compute_null_conditioning()
-
- def compute_time_ids():
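-        # SDXL micro-conditioning: the UNet additionally receives (original_size, crop_top_left,
-        # target_size). All training images share one resolution and crop offset here, so a single
-        # row is built and repeated across the batch.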
- crops_coords_top_left = (args.crops_coords_top_left_h, args.crops_coords_top_left_w)
- original_size = target_size = (args.resolution, args.resolution)
- add_time_ids = list(original_size + crops_coords_top_left + target_size)
- add_time_ids = torch.tensor([add_time_ids], dtype=weight_dtype)
- return add_time_ids.to(accelerator.device).repeat(args.train_batch_size, 1)
-
- add_time_ids = compute_time_ids()
-
- def preprocess_train(examples):
- # Preprocess images.
- preprocessed_images = preprocess_images(examples)
- # Since the original and edited images were concatenated before
- # applying the transformations, we need to separate them and reshape
- # them accordingly.
- original_images, edited_images = preprocessed_images.chunk(2)
- original_images = original_images.reshape(-1, 3, args.resolution, args.resolution)
- edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution)
-
- # Collate the preprocessed images into the `examples`.
- examples["original_pixel_values"] = original_images
- examples["edited_pixel_values"] = edited_images
-
- # Preprocess the captions.
- captions = list(examples[edit_prompt_column])
- prompt_embeds_all, add_text_embeds_all = compute_embeddings_for_prompts(captions, text_encoders, tokenizers)
- examples["prompt_embeds"] = prompt_embeds_all
- examples["add_text_embeds"] = add_text_embeds_all
- return examples
-
- with accelerator.main_process_first():
- if args.max_train_samples is not None:
- dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
- # Set the training transforms
- train_dataset = dataset["train"].with_transform(preprocess_train)
-
- def collate_fn(examples):
- original_pixel_values = torch.stack([example["original_pixel_values"] for example in examples])
- original_pixel_values = original_pixel_values.to(memory_format=torch.contiguous_format).float()
- edited_pixel_values = torch.stack([example["edited_pixel_values"] for example in examples])
- edited_pixel_values = edited_pixel_values.to(memory_format=torch.contiguous_format).float()
- prompt_embeds = torch.concat([example["prompt_embeds"] for example in examples], dim=0)
- add_text_embeds = torch.concat([example["add_text_embeds"] for example in examples], dim=0)
- return {
- "original_pixel_values": original_pixel_values,
- "edited_pixel_values": edited_pixel_values,
- "prompt_embeds": prompt_embeds,
- "add_text_embeds": add_text_embeds,
- }
-
- # DataLoaders creation:
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset,
- shuffle=True,
- collate_fn=collate_fn,
- batch_size=args.train_batch_size,
- num_workers=args.dataloader_num_workers,
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- # Prepare everything with our `accelerator`.
- unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- unet, optimizer, train_dataloader, lr_scheduler
- )
-
- if args.use_ema:
- ema_unet.to(accelerator.device)
-
-    # Move the VAE to the accelerator device and pick its dtype. Unless a separate (fp16-safe) VAE
-    # was provided, keep it in float32 to avoid NaN losses during encoding.
- if args.pretrained_vae_model_name_or_path is not None:
- vae.to(accelerator.device, dtype=weight_dtype)
- else:
- vae.to(accelerator.device, dtype=torch.float32)
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
-    # The trackers initialize automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("instruct-pix2pix-xl", config=vars(args))
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- global_step = 0
- first_epoch = 0
-
- # Potentially load in the weights and states from a previous save
- if args.resume_from_checkpoint:
- if args.resume_from_checkpoint != "latest":
- path = os.path.basename(args.resume_from_checkpoint)
- else:
- # Get the most recent checkpoint
- dirs = os.listdir(args.output_dir)
- dirs = [d for d in dirs if d.startswith("checkpoint")]
- dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
- path = dirs[-1] if len(dirs) > 0 else None
-
- if path is None:
- accelerator.print(
- f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
- )
- args.resume_from_checkpoint = None
- else:
- accelerator.print(f"Resuming from checkpoint {path}")
- accelerator.load_state(os.path.join(args.output_dir, path))
- global_step = int(path.split("-")[1])
-
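-            # `global_step` counts optimizer updates; convert it back to dataloader steps so the
-            # already-seen batches of the first resumed epoch can be skipped below.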
- resume_global_step = global_step * args.gradient_accumulation_steps
- first_epoch = global_step // num_update_steps_per_epoch
- resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
-
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
- progress_bar.set_description("Steps")
-
- for epoch in range(first_epoch, args.num_train_epochs):
- unet.train()
- train_loss = 0.0
- for step, batch in enumerate(train_dataloader):
- # Skip steps until we reach the resumed step
- if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
- if step % args.gradient_accumulation_steps == 0:
- progress_bar.update(1)
- continue
-
- with accelerator.accumulate(unet):
- # We want to learn the denoising process w.r.t the edited images which
- # are conditioned on the original image (which was edited) and the edit instruction.
- # So, first, convert images to latent space.
- if args.pretrained_vae_model_name_or_path is not None:
- edited_pixel_values = batch["edited_pixel_values"].to(dtype=weight_dtype)
- else:
- edited_pixel_values = batch["edited_pixel_values"]
- latents = vae.encode(edited_pixel_values).latent_dist.sample()
- latents = latents * vae.config.scaling_factor
- if args.pretrained_vae_model_name_or_path is None:
- latents = latents.to(weight_dtype)
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # SDXL additional inputs
- encoder_hidden_states = batch["prompt_embeds"]
- add_text_embeds = batch["add_text_embeds"]
-
-                # Get the additional image embedding for conditioning by encoding the original image
-                # into latent space and sampling from the resulting distribution.
- if args.pretrained_vae_model_name_or_path is not None:
- original_pixel_values = batch["original_pixel_values"].to(dtype=weight_dtype)
- else:
- original_pixel_values = batch["original_pixel_values"]
- original_image_embeds = vae.encode(original_pixel_values).latent_dist.sample()
- if args.pretrained_vae_model_name_or_path is None:
- original_image_embeds = original_image_embeds.to(weight_dtype)
-
- # Conditioning dropout to support classifier-free guidance during inference. For more details
- # check out the section 3.2.1 of the original paper https://arxiv.org/abs/2211.09800.
- if args.conditioning_dropout_prob is not None:
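-                    # With probability p each: drop only the text, drop both conditionings, or drop
-                    # only the image, following Sec. 3.2.1 of the InstructPix2Pix paper.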
- random_p = torch.rand(bsz, device=latents.device, generator=generator)
- # Sample masks for the edit prompts.
- prompt_mask = random_p < 2 * args.conditioning_dropout_prob
- prompt_mask = prompt_mask.reshape(bsz, 1, 1)
- # Final text conditioning.
- encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states)
-
- # Sample masks for the original images.
- image_mask_dtype = original_image_embeds.dtype
- image_mask = 1 - (
- (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype)
- * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype)
- )
- image_mask = image_mask.reshape(bsz, 1, 1, 1)
- # Final image conditioning.
- original_image_embeds = image_mask * original_image_embeds
-
- # Concatenate the `original_image_embeds` with the `noisy_latents`.
- concatenated_noisy_latents = torch.cat([noisy_latents, original_image_embeds], dim=1)
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- # Predict the noise residual and compute loss
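-                # `text_embeds` is the pooled prompt embedding and `time_ids` the size/crop
-                # micro-conditioning computed above, both required by the SDXL UNet.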
- added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
-
- model_pred = unet(
- concatenated_noisy_latents, timesteps, encoder_hidden_states, added_cond_kwargs=added_cond_kwargs
- ).sample
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- # Gather the losses across all processes for logging (if we use distributed training).
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
- train_loss += avg_loss.item() / args.gradient_accumulation_steps
-
- # Backpropagate
- accelerator.backward(loss)
- if accelerator.sync_gradients:
- accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm)
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- if args.use_ema:
- ema_unet.step(unet.parameters())
- progress_bar.update(1)
- global_step += 1
- accelerator.log({"train_loss": train_loss}, step=global_step)
- train_loss = 0.0
-
- if global_step % args.checkpointing_steps == 0:
- if accelerator.is_main_process:
- # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
- if args.checkpoints_total_limit is not None:
- checkpoints = os.listdir(args.output_dir)
- checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
- checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
-
- # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
- if len(checkpoints) >= args.checkpoints_total_limit:
- num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
- removing_checkpoints = checkpoints[0:num_to_remove]
-
- logger.info(
- f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
- )
- logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
-
- for removing_checkpoint in removing_checkpoints:
- removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
- shutil.rmtree(removing_checkpoint)
-
- save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
- accelerator.save_state(save_path)
- logger.info(f"Saved state to {save_path}")
-
- logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
-
-            ### BEGIN: Perform validation every `validation_steps` steps
- if global_step % args.validation_steps == 0 or global_step == 1:
- if (args.val_image_url_or_path is not None) and (args.validation_prompt is not None):
- logger.info(
- f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
- f" {args.validation_prompt}."
- )
-
- # create pipeline
- if args.use_ema:
- # Store the UNet parameters temporarily and load the EMA parameters to perform inference.
- ema_unet.store(unet.parameters())
- ema_unet.copy_to(unet.parameters())
-
-                    # The models need unwrapping for compatibility with distributed training mode.
- pipeline = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- unet=accelerator.unwrap_model(unet),
- text_encoder=text_encoder_1,
- text_encoder_2=text_encoder_2,
- tokenizer=tokenizer_1,
- tokenizer_2=tokenizer_2,
- vae=vae,
- revision=args.revision,
- torch_dtype=weight_dtype,
- )
- pipeline = pipeline.to(accelerator.device)
- pipeline.set_progress_bar_config(disable=True)
-
- # run inference
- # Save validation images
- val_save_dir = os.path.join(args.output_dir, "validation_images")
- if not os.path.exists(val_save_dir):
- os.makedirs(val_save_dir)
-
- original_image = (
- lambda image_url_or_path: load_image(image_url_or_path)
- if urlparse(image_url_or_path).scheme
- else Image.open(image_url_or_path).convert("RGB")
- )(args.val_image_url_or_path)
- with torch.autocast(
- str(accelerator.device).replace(":0", ""), enabled=accelerator.mixed_precision == "fp16"
- ):
- edited_images = []
- for val_img_idx in range(args.num_validation_images):
- a_val_img = pipeline(
- args.validation_prompt,
- image=original_image,
- num_inference_steps=20,
- image_guidance_scale=1.5,
- guidance_scale=7,
- generator=generator,
- ).images[0]
- edited_images.append(a_val_img)
- a_val_img.save(os.path.join(val_save_dir, f"step_{global_step}_val_img_{val_img_idx}.png"))
-
- for tracker in accelerator.trackers:
- if tracker.name == "wandb":
- wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES)
- for edited_image in edited_images:
- wandb_table.add_data(
- wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt
- )
- tracker.log({"validation": wandb_table})
- if args.use_ema:
- # Switch back to the original UNet parameters.
- ema_unet.restore(unet.parameters())
-
- del pipeline
- torch.cuda.empty_cache()
-            ### END: Perform validation every `validation_steps` steps
-
- if global_step >= args.max_train_steps:
- break
-
- # Create the pipeline using the trained modules and save it.
- accelerator.wait_for_everyone()
- if accelerator.is_main_process:
- unet = accelerator.unwrap_model(unet)
- if args.use_ema:
- ema_unet.copy_to(unet.parameters())
-
- pipeline = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- text_encoder=text_encoder_1,
- text_encoder_2=text_encoder_2,
- tokenizer=tokenizer_1,
- tokenizer_2=tokenizer_2,
- vae=vae,
- unet=unet,
- revision=args.revision,
- )
- pipeline.save_pretrained(args.output_dir)
-
- if args.push_to_hub:
- upload_folder(
- repo_id=repo_id,
- folder_path=args.output_dir,
- commit_message="End of training",
- ignore_patterns=["step_*", "epoch_*"],
- )
-
- if args.validation_prompt is not None:
- edited_images = []
- pipeline = pipeline.to(accelerator.device)
- with torch.autocast(str(accelerator.device).replace(":0", "")):
- for _ in range(args.num_validation_images):
- edited_images.append(
- pipeline(
- args.validation_prompt,
- image=original_image,
- num_inference_steps=20,
- image_guidance_scale=1.5,
- guidance_scale=7,
- generator=generator,
- ).images[0]
- )
-
- for tracker in accelerator.trackers:
- if tracker.name == "wandb":
- wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES)
- for edited_image in edited_images:
- wandb_table.add_data(
- wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt
- )
- tracker.log({"test": wandb_table})
-
- accelerator.end_training()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipeline_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipeline_utils.py
deleted file mode 100644
index 87709d5f616cdfb195ed4527e4b630a86136c29c..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipeline_utils.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# NOTE: This file is deprecated and will be removed in a future version.
-# It only exists so that temporarily `from diffusers.pipeline_utils import DiffusionPipeline` works
-
-from .pipelines import DiffusionPipeline, ImagePipelineOutput # noqa: F401
-from .utils import deprecate
-
-
-deprecate(
- "pipelines_utils",
- "0.22.0",
- "Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.",
- standard_warn=False,
- stacklevel=3,
-)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/default_runtime.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/default_runtime.py
deleted file mode 100644
index 55097c5b242da66c9735c0b45cd84beefab487b1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/default_runtime.py
+++ /dev/null
@@ -1,16 +0,0 @@
-checkpoint_config = dict(interval=1)
-# yapf:disable
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook'),
- # dict(type='TensorboardLoggerHook')
- ])
-# yapf:enable
-custom_hooks = [dict(type='NumClassCheckHook')]
-
-dist_params = dict(backend='nccl')
-log_level = 'INFO'
-load_from = None
-resume_from = None
-workflow = [('train', 1)]
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dynamic_rcnn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/dynamic_rcnn/README.md
deleted file mode 100644
index ffdc42dcdfddbaa946f81cba00e73b5573aa19fc..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dynamic_rcnn/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Dynamic R-CNN: Towards High Quality Object Detection via Dynamic Training
-
-## Introduction
-
-[ALGORITHM]
-
-```
-@article{DynamicRCNN,
- author = {Hongkai Zhang and Hong Chang and Bingpeng Ma and Naiyan Wang and Xilin Chen},
- title = {Dynamic {R-CNN}: Towards High Quality Object Detection via Dynamic Training},
- journal = {arXiv preprint arXiv:2004.06002},
- year = {2020}
-}
-```
-
-## Results and Models
-
-| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:---------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| R-50 | pytorch | 1x | 3.8 | | 38.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x/dynamic_rcnn_r50_fpn_1x-62a3f276.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x/dynamic_rcnn_r50_fpn_1x_20200618_095048.log.json) |
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py
deleted file mode 100644
index 3c9ea27617c85c54309ac454fff253a6d0462735..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py
+++ /dev/null
@@ -1,48 +0,0 @@
-_base_ = 'mask_rcnn_r50_fpn_crop640_50e_coco.py'
-
-norm_cfg = dict(type='BN', requires_grad=True)
-model = dict(
- neck=dict(
- type='FPG',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- inter_channels=256,
- num_outs=5,
- stack_times=9,
- paths=['bu'] * 9,
- same_down_trans=None,
- same_up_trans=dict(
- type='conv',
- kernel_size=3,
- stride=2,
- padding=1,
- norm_cfg=norm_cfg,
- inplace=False,
- order=('act', 'conv', 'norm')),
- across_lateral_trans=dict(
- type='conv',
- kernel_size=1,
- norm_cfg=norm_cfg,
- inplace=False,
- order=('act', 'conv', 'norm')),
- across_down_trans=dict(
- type='interpolation_conv',
- mode='nearest',
- kernel_size=3,
- norm_cfg=norm_cfg,
- order=('act', 'conv', 'norm'),
- inplace=False),
- across_up_trans=None,
- across_skip_trans=dict(
- type='conv',
- kernel_size=1,
- norm_cfg=norm_cfg,
- inplace=False,
- order=('act', 'conv', 'norm')),
- output_trans=dict(
- type='last_conv',
- kernel_size=3,
- order=('act', 'conv', 'norm'),
- inplace=False),
- norm_cfg=norm_cfg,
- skip_inds=[(0, 1, 2, 3), (0, 1, 2), (0, 1), (0, ), ()]))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py
deleted file mode 100644
index 497267b6b50b3c160a4f8807230d4f986cf8eb3f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
-conv_cfg = dict(type='ConvWS')
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained='open-mmlab://jhu/resnet50_gn_ws',
- backbone=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg),
- neck=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg)))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py
deleted file mode 100644
index 86c5b13343b637ce218eed231240195a6768c5d1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py
+++ /dev/null
@@ -1,41 +0,0 @@
-_base_ = './mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'))
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='Resize',
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
- (1333, 768), (1333, 800)],
- multiscale_mode='value',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py
deleted file mode 100644
index 636f3f67c7c246a60512e2b70d333320fbb85feb..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/scratch/faster_rcnn_r50_fpn_gn-all_scratch_6x_coco.py
+++ /dev/null
@@ -1,22 +0,0 @@
-_base_ = [
- '../_base_/models/faster_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
- pretrained=None,
- backbone=dict(
- frozen_stages=-1, zero_init_residual=False, norm_cfg=norm_cfg),
- neck=dict(norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- norm_cfg=norm_cfg)))
-# optimizer
-optimizer = dict(paramwise_cfg=dict(norm_decay_mult=0))
-optimizer_config = dict(_delete_=True, grad_clip=None)
-# learning policy
-lr_config = dict(warmup_ratio=0.1, step=[65, 71])
-runner = dict(type='EpochBasedRunner', max_epochs=73)
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hough2image.py b/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hough2image.py
deleted file mode 100644
index 6095eeb6767e005a155ee72057b3537021b09f31..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hough2image.py
+++ /dev/null
@@ -1,100 +0,0 @@
-from share import *
-import config
-
-import cv2
-import einops
-import gradio as gr
-import numpy as np
-import torch
-import random
-
-from pytorch_lightning import seed_everything
-from annotator.util import resize_image, HWC3
-from annotator.mlsd import MLSDdetector
-from cldm.model import create_model, load_state_dict
-from cldm.ddim_hacked import DDIMSampler
-
-
-apply_mlsd = MLSDdetector()
-
-model = create_model('./models/cldm_v15.yaml').cpu()
-model.load_state_dict(load_state_dict('./models/control_sd15_mlsd.pth', location='cuda'))
-model = model.cuda()
-ddim_sampler = DDIMSampler(model)
-
-
-def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, value_threshold, distance_threshold):
- with torch.no_grad():
- input_image = HWC3(input_image)
- detected_map = apply_mlsd(resize_image(input_image, detect_resolution), value_threshold, distance_threshold)
- detected_map = HWC3(detected_map)
- img = resize_image(input_image, image_resolution)
- H, W, C = img.shape
-
- detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_NEAREST)
-
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
- control = torch.stack([control for _ in range(num_samples)], dim=0)
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
-
- if seed == -1:
- seed = random.randint(0, 65535)
- seed_everything(seed)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
- un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
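-        # Latent-space shape for SD 1.5: 4 channels at 1/8 of the image resolution.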
- shape = (4, H // 8, W // 8)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=True)
-
- model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13) # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
- samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
- shape, cond, verbose=False, eta=eta,
- unconditional_guidance_scale=scale,
- unconditional_conditioning=un_cond)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- x_samples = model.decode_first_stage(samples)
- x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
-
- results = [x_samples[i] for i in range(num_samples)]
- return [255 - cv2.dilate(detected_map, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)] + results
-
-
-block = gr.Blocks().queue()
-with block:
- with gr.Row():
- gr.Markdown("## Control Stable Diffusion with Hough Line Maps")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- prompt = gr.Textbox(label="Prompt")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
- image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
- strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
- guess_mode = gr.Checkbox(label='Guess Mode', value=False)
- detect_resolution = gr.Slider(label="Hough Resolution", minimum=128, maximum=1024, value=512, step=1)
- value_threshold = gr.Slider(label="Hough value threshold (MLSD)", minimum=0.01, maximum=2.0, value=0.1, step=0.01)
- distance_threshold = gr.Slider(label="Hough distance threshold (MLSD)", minimum=0.01, maximum=20.0, value=0.1, step=0.01)
- ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
- scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True)
- eta = gr.Number(label="eta (DDIM)", value=0.0)
- a_prompt = gr.Textbox(label="Added Prompt", value='best quality, extremely detailed')
- n_prompt = gr.Textbox(label="Negative Prompt",
- value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality')
- with gr.Column():
- result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, value_threshold, distance_threshold]
- run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
-
-
-block.launch(server_name='0.0.0.0')
diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/run_Linux.sh b/spaces/Anthony7906/MengHuiMXD_GPT/run_Linux.sh
deleted file mode 100644
index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000
--- a/spaces/Anthony7906/MengHuiMXD_GPT/run_Linux.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-# Get the directory where this script is located
-script_dir=$(dirname "$(readlink -f "$0")")
-
-# Change the working directory to the script's directory
-cd "$script_dir" || exit
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
-    # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
-    # Pull the latest changes
- git pull
-
-    # Install dependencies
- pip3 install -r requirements.txt
-
-    # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
-
-# Check whether ChuanhuChatbot.py is running
-if ! pgrep -f ChuanhuChatbot.py > /dev/null; then
-    # If it is not running, start the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/Antoine245/bot/app.py b/spaces/Antoine245/bot/app.py
deleted file mode 100644
index 678304d94eb1851ecdf757cc051041701df11e44..0000000000000000000000000000000000000000
--- a/spaces/Antoine245/bot/app.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import gradio as gr
-import os
-import time
-import google.generativeai as palm
-
-palm.configure(api_key=os.environ.get("palm_key"))
-
-defaults = {
- 'model': 'models/chat-bison-001',
- 'temperature': 0.25,
- 'candidate_count': 1,
- 'top_k': 40,
- 'top_p': 0.95,
-}
-
-context = "Your IT assistant"
-
-examples = [
- [
- "Hey my computer is broken",
- "Hey, what is the issue with your computer?"
- ]
-]
-
-history = ['']
-
-with gr.Blocks(theme=gr.themes.Soft()) as demo:
- chatbot = gr.Chatbot()
- msg = gr.Textbox()
- btn = gr.Button("Submit", variant="primary")
- clear = gr.Button("Clear")
-
- def user(user_message, history):
- history.append([user_message, None])
- return gr.update(value=""), history
-
- def bot(history):
- try:
- bot_message = palm.chat(
- context=context,
- examples=examples,
- messages=[h[0] for h in history]
- )
-
- history[-1][1] = ""
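-            # Append the reply one character at a time so the Gradio chatbot appears to stream.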
- for character in bot_message.last:
- history[-1][1] += character
- time.sleep(0.005)
- except Exception as e:
- # Handle the exception here
- print("Error occurred:", str(e))
- # You can customize the error handling as per your requirements
- # For example, return an error message to the user
-
- history[-1][1] = "Incorrect input please retry with a longer sentence in english"
-
- return history
-
- response = msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
- bot, chatbot, chatbot
- )
- response = btn.click(user, [msg, chatbot], [msg, chatbot], queue=False).then(
- bot, chatbot, chatbot
- )
- response.then(lambda: gr.update(interactive=True), None, [msg], queue=False)
- clear.click(lambda: None, None, chatbot, queue=False)
-
-demo.queue()
-demo.launch(debug=True)
diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
deleted file mode 100644
index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000
--- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-namespace groundingdino {
-
-at::Tensor
-ms_deform_attn_cpu_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step);
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step);
-
-} // namespace groundingdino
diff --git a/spaces/BAAI/AltDiffusion/share_btn.py b/spaces/BAAI/AltDiffusion/share_btn.py
deleted file mode 100644
index e97a8ec6139e96ce03f018ba9a39670a948c76a7..0000000000000000000000000000000000000000
--- a/spaces/BAAI/AltDiffusion/share_btn.py
+++ /dev/null
@@ -1,60 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- const gradioEl = document.querySelector('body > gradio-app');
- const imgEls = gradioEl.querySelectorAll('#gallery img');
- const promptTxt = gradioEl.querySelector('#prompt-text-input input').value;
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!imgEls.length){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const files = await Promise.all(
- [...imgEls].map(async (imgEl) => {
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const fileName = `diffuse-the-rest-${{imgId}}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- })
- );
- const urls = await Promise.all(files.map((f) => uploadFile(f)));
- const htmlImgs = urls.map(url => ``);
- const descriptionMd = `
-${htmlImgs.join(`\n`)}
-
-`;
- const params = new URLSearchParams({
- title: promptTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/BAAI/bilingual_stable_diffusion/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/BLACKHOST/timer/tm.py b/spaces/BLACKHOST/timer/tm.py
deleted file mode 100644
index b6af202932a051df32c729297244559a373904bb..0000000000000000000000000000000000000000
--- a/spaces/BLACKHOST/timer/tm.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from time import sleep
-time = 1000 #can change
-while time != 0:
- print(time)
- time -= 1 #can change
- sleep(0.1) #can change
\ No newline at end of file
diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/vqperceptual.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/vqperceptual.py
deleted file mode 100644
index c2febd445728479d4cd9aacdb2572cb1f1af04db..0000000000000000000000000000000000000000
--- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/losses/vqperceptual.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from taming.modules.losses.lpips import LPIPS
-from taming.modules.discriminator.model import NLayerDiscriminator, weights_init
-
-
-class DummyLoss(nn.Module):
- def __init__(self):
- super().__init__()
-
-
-def adopt_weight(weight, global_step, threshold=0, value=0.):
- if global_step < threshold:
- weight = value
- return weight
-
-
-def hinge_d_loss(logits_real, logits_fake):
- loss_real = torch.mean(F.relu(1. - logits_real))
- loss_fake = torch.mean(F.relu(1. + logits_fake))
- d_loss = 0.5 * (loss_real + loss_fake)
- return d_loss
-
-
-def vanilla_d_loss(logits_real, logits_fake):
- d_loss = 0.5 * (
- torch.mean(torch.nn.functional.softplus(-logits_real)) +
- torch.mean(torch.nn.functional.softplus(logits_fake)))
- return d_loss
-
-
-class VQLPIPSWithDiscriminator(nn.Module):
- def __init__(self, disc_start, codebook_weight=1.0, pixelloss_weight=1.0,
- disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0,
- perceptual_weight=1.0, use_actnorm=False, disc_conditional=False,
- disc_ndf=64, disc_loss="hinge"):
- super().__init__()
- assert disc_loss in ["hinge", "vanilla"]
- self.codebook_weight = codebook_weight
- self.pixel_weight = pixelloss_weight
- self.perceptual_loss = LPIPS().eval()
- self.perceptual_weight = perceptual_weight
-
- self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels,
- n_layers=disc_num_layers,
- use_actnorm=use_actnorm,
- ndf=disc_ndf
- ).apply(weights_init)
- self.discriminator_iter_start = disc_start
- if disc_loss == "hinge":
- self.disc_loss = hinge_d_loss
- elif disc_loss == "vanilla":
- self.disc_loss = vanilla_d_loss
- else:
- raise ValueError(f"Unknown GAN loss '{disc_loss}'.")
- print(f"VQLPIPSWithDiscriminator running with {disc_loss} loss.")
- self.disc_factor = disc_factor
- self.discriminator_weight = disc_weight
- self.disc_conditional = disc_conditional
-
- def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None):
- if last_layer is not None:
- nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0]
- else:
- nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0]
- g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0]
-
- d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4)
- d_weight = torch.clamp(d_weight, 0.0, 1e4).detach()
- d_weight = d_weight * self.discriminator_weight
- return d_weight
-
- def forward(self, codebook_loss, inputs, reconstructions, optimizer_idx,
- global_step, last_layer=None, cond=None, split="train"):
- rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous())
- if self.perceptual_weight > 0:
- p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous())
- rec_loss = rec_loss + self.perceptual_weight * p_loss
- else:
- p_loss = torch.tensor([0.0])
-
- nll_loss = rec_loss
- #nll_loss = torch.sum(nll_loss) / nll_loss.shape[0]
- nll_loss = torch.mean(nll_loss)
-
- # now the GAN part
- if optimizer_idx == 0:
- # generator update
- if cond is None:
- assert not self.disc_conditional
- logits_fake = self.discriminator(reconstructions.contiguous())
- else:
- assert self.disc_conditional
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1))
- g_loss = -torch.mean(logits_fake)
-
- try:
- d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer)
- except RuntimeError:
- assert not self.training
- d_weight = torch.tensor(0.0)
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- loss = nll_loss + d_weight * disc_factor * g_loss + self.codebook_weight * codebook_loss.mean()
-
- log = {"{}/total_loss".format(split): loss.clone().detach().mean(),
- "{}/quant_loss".format(split): codebook_loss.detach().mean(),
- "{}/nll_loss".format(split): nll_loss.detach().mean(),
- "{}/rec_loss".format(split): rec_loss.detach().mean(),
- "{}/p_loss".format(split): p_loss.detach().mean(),
- "{}/d_weight".format(split): d_weight.detach(),
- "{}/disc_factor".format(split): torch.tensor(disc_factor),
- "{}/g_loss".format(split): g_loss.detach().mean(),
- }
- return loss, log
-
- if optimizer_idx == 1:
- # second pass for discriminator update
- if cond is None:
- logits_real = self.discriminator(inputs.contiguous().detach())
- logits_fake = self.discriminator(reconstructions.contiguous().detach())
- else:
- logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1))
- logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1))
-
- disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start)
- d_loss = disc_factor * self.disc_loss(logits_real, logits_fake)
-
- log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(),
- "{}/logits_real".format(split): logits_real.detach().mean(),
- "{}/logits_fake".format(split): logits_fake.detach().mean()
- }
- return d_loss, log
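
As a reading aid, the balancing term computed by `calculate_adaptive_weight` above can be written out explicitly; this is a restatement of the code, not an extra term. With $L$ the last decoder layer, $\mathcal{L}_{\text{nll}}$ the (perceptual) reconstruction loss and $\mathcal{L}_{G} = -\mathbb{E}[D(\hat{x})]$ the generator loss,

$$
\lambda_{\text{adapt}} \;=\; w_{\text{disc}} \cdot \operatorname{clamp}\!\left(\frac{\lVert \nabla_{L}\,\mathcal{L}_{\text{nll}} \rVert}{\lVert \nabla_{L}\,\mathcal{L}_{G} \rVert + 10^{-4}},\; 0,\; 10^{4}\right),
$$

so the generator objective at `optimizer_idx == 0` is $\mathcal{L}_{\text{nll}} + \lambda_{\text{adapt}} \cdot d_{\text{factor}} \cdot \mathcal{L}_{G} + w_{\text{codebook}} \cdot \overline{\mathcal{L}}_{\text{codebook}}$, where `adopt_weight` zeroes $d_{\text{factor}}$ until `disc_start`.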
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/parsers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/parsers.py
deleted file mode 100644
index 667fae352f5833218d620a963a0ced3f8fbef7b9..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/parsers.py
+++ /dev/null
@@ -1,1112 +0,0 @@
-# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-"""Response parsers for the various protocol types.
-
-The module contains classes that can take an HTTP response, and given
-an output shape, parse the response into a dict according to the
-rules in the output shape.
-
-There are many similarities amongst the different protocols with regard
-to response parsing, and the code is structured in a way to avoid
-code duplication when possible. The diagram below shows the
-inheritance hierarchy of the response classes.
-
-::
-
-
-
- +--------------+
- |ResponseParser|
- +--------------+
- ^ ^ ^
- +--------------------+ | +-------------------+
- | | |
- +----------+----------+ +------+-------+ +-------+------+
- |BaseXMLResponseParser| |BaseRestParser| |BaseJSONParser|
- +---------------------+ +--------------+ +--------------+
- ^ ^ ^ ^ ^ ^
- | | | | | |
- | | | | | |
- | ++----------+-+ +-+-----------++ |
- | |RestXMLParser| |RestJSONParser| |
- +-----+-----+ +-------------+ +--------------+ +----+-----+
- |QueryParser| |JSONParser|
- +-----------+ +----------+
-
-
-The diagram above shows that there is a base class, ``ResponseParser`` that
-contains logic that is similar amongst all the different protocols (``query``,
-``json``, ``rest-json``, ``rest-xml``). Amongst the various services there
-is shared logic that can be grouped several ways:
-
-* The ``query`` and ``rest-xml`` both have XML bodies that are parsed in the
- same way.
-* The ``json`` and ``rest-json`` protocols both have JSON bodies that are
- parsed in the same way.
-* The ``rest-json`` and ``rest-xml`` protocols have additional attributes
- besides body parameters that are parsed the same (headers, query string,
- status code).
-
-This is reflected in the class diagram above. The ``BaseXMLResponseParser``
-and the BaseJSONParser contain logic for parsing the XML/JSON body,
-and the BaseRestParser contains logic for parsing out attributes that
-come from other parts of the HTTP response. Classes like the
-``RestXMLParser`` inherit from the ``BaseXMLResponseParser`` to get the
-XML body parsing logic and the ``BaseRestParser`` to get the HTTP
-header/status code/query string parsing.
-
-Additionally, there are event stream parsers that are used by the other parsers
-to wrap streaming bodies that represent a stream of events. The
-BaseEventStreamParser extends from ResponseParser and defines the logic for
-parsing values from the headers and payload of a message from the underlying
-binary encoding protocol. Currently, event streams support parsing bodies
-encoded as JSON and XML through the following hierarchy.
-
-
- +--------------+
- |ResponseParser|
- +--------------+
- ^ ^ ^
- +--------------------+ | +------------------+
- | | |
- +----------+----------+ +----------+----------+ +-------+------+
- |BaseXMLResponseParser| |BaseEventStreamParser| |BaseJSONParser|
- +---------------------+ +---------------------+ +--------------+
- ^ ^ ^ ^
- | | | |
- | | | |
- +-+----------------+-+ +-+-----------------+-+
- |EventStreamXMLParser| |EventStreamJSONParser|
- +--------------------+ +---------------------+
-
-Return Values
-=============
-
-Each call to ``parse()`` returns a dict that has this form::
-
-    Standard Response
-
-    {
-      "ResponseMetadata": {"RequestId": <requestid>}
-      <response keys>
-    }
-
-    Error response
-
-    {
-      "ResponseMetadata": {"RequestId": <requestid>}
-      "Error": {
-        "Code": <string>,
-        "Message": <string>,
-        "Type": <string>,
-        <additional keys>
-      }
-    }
-
-"""
-import base64
-import http.client
-import json
-import logging
-import re
-
-from botocore.compat import ETree, XMLParseError
-from botocore.eventstream import EventStream, NoInitialResponseError
-from botocore.utils import (
- is_json_value_header,
- lowercase_dict,
- merge_dicts,
- parse_timestamp,
-)
-
-LOG = logging.getLogger(__name__)
-
-DEFAULT_TIMESTAMP_PARSER = parse_timestamp
-
-
-class ResponseParserFactory:
- def __init__(self):
- self._defaults = {}
-
- def set_parser_defaults(self, **kwargs):
- """Set default arguments when a parser instance is created.
-
- You can specify any kwargs that are allowed by a ResponseParser
- class. There are currently two arguments:
-
- * timestamp_parser - A callable that can parse a timestamp string
- * blob_parser - A callable that can parse a blob type
-
- """
- self._defaults.update(kwargs)
-
- def create_parser(self, protocol_name):
- parser_cls = PROTOCOL_PARSERS[protocol_name]
- return parser_cls(**self._defaults)
-
-
-def create_parser(protocol):
- return ResponseParserFactory().create_parser(protocol)
-
-
-def _text_content(func):
- # This decorator hides the difference between
- # an XML node with text or a plain string. It's used
- # to ensure that scalar processing operates only on text
- # strings, which allows the same scalar handlers to be used
- # for XML nodes from the body and HTTP headers.
- def _get_text_content(self, shape, node_or_string):
- if hasattr(node_or_string, 'text'):
- text = node_or_string.text
- if text is None:
-                # If an XML node is empty <foo></foo>,
- # we want to parse that as an empty string,
- # not as a null/None value.
- text = ''
- else:
- text = node_or_string
- return func(self, shape, text)
-
- return _get_text_content
-
-
-class ResponseParserError(Exception):
- pass
-
-
-class ResponseParser:
- """Base class for response parsing.
-
- This class represents the interface that all ResponseParsers for the
- various protocols must implement.
-
- This class will take an HTTP response and a model shape and parse the
- HTTP response into a dictionary.
-
- There is a single public method exposed: ``parse``. See the ``parse``
- docstring for more info.
-
- """
-
- DEFAULT_ENCODING = 'utf-8'
- EVENT_STREAM_PARSER_CLS = None
-
- def __init__(self, timestamp_parser=None, blob_parser=None):
- if timestamp_parser is None:
- timestamp_parser = DEFAULT_TIMESTAMP_PARSER
- self._timestamp_parser = timestamp_parser
- if blob_parser is None:
- blob_parser = self._default_blob_parser
- self._blob_parser = blob_parser
- self._event_stream_parser = None
- if self.EVENT_STREAM_PARSER_CLS is not None:
- self._event_stream_parser = self.EVENT_STREAM_PARSER_CLS(
- timestamp_parser, blob_parser
- )
-
- def _default_blob_parser(self, value):
- # Blobs are always returned as bytes type (this matters on python3).
- # We don't decode this to a str because it's entirely possible that the
- # blob contains binary data that actually can't be decoded.
- return base64.b64decode(value)
-
- def parse(self, response, shape):
- """Parse the HTTP response given a shape.
-
- :param response: The HTTP response dictionary. This is a dictionary
- that represents the HTTP request. The dictionary must have the
- following keys, ``body``, ``headers``, and ``status_code``.
-
- :param shape: The model shape describing the expected output.
- :return: Returns a dictionary representing the parsed response
- described by the model. In addition to the shape described from
- the model, each response will also have a ``ResponseMetadata``
- which contains metadata about the response, which contains at least
- two keys containing ``RequestId`` and ``HTTPStatusCode``. Some
- responses may populate additional keys, but ``RequestId`` will
- always be present.
-
- """
- LOG.debug('Response headers: %r', response['headers'])
- LOG.debug('Response body:\n%r', response['body'])
- if response['status_code'] >= 301:
- if self._is_generic_error_response(response):
- parsed = self._do_generic_error_parse(response)
- elif self._is_modeled_error_shape(shape):
- parsed = self._do_modeled_error_parse(response, shape)
- # We don't want to decorate the modeled fields with metadata
- return parsed
- else:
- parsed = self._do_error_parse(response, shape)
- else:
- parsed = self._do_parse(response, shape)
-
- # We don't want to decorate event stream responses with metadata
- if shape and shape.serialization.get('eventstream'):
- return parsed
-
- # Add ResponseMetadata if it doesn't exist and inject the HTTP
- # status code and headers from the response.
- if isinstance(parsed, dict):
- response_metadata = parsed.get('ResponseMetadata', {})
- response_metadata['HTTPStatusCode'] = response['status_code']
- # Ensure that the http header keys are all lower cased. Older
- # versions of urllib3 (< 1.11) would unintentionally do this for us
- # (see urllib3#633). We need to do this conversion manually now.
- headers = response['headers']
- response_metadata['HTTPHeaders'] = lowercase_dict(headers)
- parsed['ResponseMetadata'] = response_metadata
- self._add_checksum_response_metadata(response, response_metadata)
- return parsed
-
- def _add_checksum_response_metadata(self, response, response_metadata):
- checksum_context = response.get('context', {}).get('checksum', {})
- algorithm = checksum_context.get('response_algorithm')
- if algorithm:
- response_metadata['ChecksumAlgorithm'] = algorithm
-
- def _is_modeled_error_shape(self, shape):
- return shape is not None and shape.metadata.get('exception', False)
-
- def _is_generic_error_response(self, response):
- # There are times when a service will respond with a generic
- # error response such as:
-        # '<html><body><b>Http/1.1 Service Unavailable</b></body></html>'
- #
- # This can also happen if you're going through a proxy.
- # In this case the protocol specific _do_error_parse will either
- # fail to parse the response (in the best case) or silently succeed
- # and treat the HTML above as an XML response and return
-        # nonsensical parsed data.
- # To prevent this case from happening we first need to check
- # whether or not this response looks like the generic response.
- if response['status_code'] >= 500:
- if 'body' not in response or response['body'] is None:
- return True
-
- body = response['body'].strip()
-        return body.startswith(b'<html>') or not body
-
- def _do_generic_error_parse(self, response):
- # There's not really much we can do when we get a generic
- # html response.
- LOG.debug(
- "Received a non protocol specific error response from the "
- "service, unable to populate error code and message."
- )
- return {
- 'Error': {
- 'Code': str(response['status_code']),
- 'Message': http.client.responses.get(
- response['status_code'], ''
- ),
- },
- 'ResponseMetadata': {},
- }
-
- def _do_parse(self, response, shape):
- raise NotImplementedError("%s._do_parse" % self.__class__.__name__)
-
- def _do_error_parse(self, response, shape):
- raise NotImplementedError(f"{self.__class__.__name__}._do_error_parse")
-
- def _do_modeled_error_parse(self, response, shape, parsed):
- raise NotImplementedError(
- f"{self.__class__.__name__}._do_modeled_error_parse"
- )
-
- def _parse_shape(self, shape, node):
- handler = getattr(
- self, f'_handle_{shape.type_name}', self._default_handle
- )
- return handler(shape, node)
-
- def _handle_list(self, shape, node):
- # Enough implementations share list serialization that it's moved
- # up here in the base class.
- parsed = []
- member_shape = shape.member
- for item in node:
- parsed.append(self._parse_shape(member_shape, item))
- return parsed
-
- def _default_handle(self, shape, value):
- return value
-
- def _create_event_stream(self, response, shape):
- parser = self._event_stream_parser
- name = response['context'].get('operation_name')
- return EventStream(response['body'], shape, parser, name)
-
- def _get_first_key(self, value):
- return list(value)[0]
-
- def _has_unknown_tagged_union_member(self, shape, value):
- if shape.is_tagged_union:
- if len(value) != 1:
- error_msg = (
- "Invalid service response: %s must have one and only "
- "one member set."
- )
- raise ResponseParserError(error_msg % shape.name)
- tag = self._get_first_key(value)
- if tag not in shape.members:
- msg = (
- "Received a tagged union response with member "
- "unknown to client: %s. Please upgrade SDK for full "
- "response support."
- )
- LOG.info(msg % tag)
- return True
- return False
-
- def _handle_unknown_tagged_union_member(self, tag):
- return {'SDK_UNKNOWN_MEMBER': {'name': tag}}
-
-
-class BaseXMLResponseParser(ResponseParser):
- def __init__(self, timestamp_parser=None, blob_parser=None):
- super().__init__(timestamp_parser, blob_parser)
- self._namespace_re = re.compile('{.*}')
-
- def _handle_map(self, shape, node):
- parsed = {}
- key_shape = shape.key
- value_shape = shape.value
- key_location_name = key_shape.serialization.get('name') or 'key'
- value_location_name = value_shape.serialization.get('name') or 'value'
- if shape.serialization.get('flattened') and not isinstance(node, list):
- node = [node]
- for keyval_node in node:
- for single_pair in keyval_node:
-                # Within each <entry> there's a <key> and a <value>
- tag_name = self._node_tag(single_pair)
- if tag_name == key_location_name:
- key_name = self._parse_shape(key_shape, single_pair)
- elif tag_name == value_location_name:
- val_name = self._parse_shape(value_shape, single_pair)
- else:
- raise ResponseParserError("Unknown tag: %s" % tag_name)
- parsed[key_name] = val_name
- return parsed
-
- def _node_tag(self, node):
- return self._namespace_re.sub('', node.tag)
-
- def _handle_list(self, shape, node):
- # When we use _build_name_to_xml_node, repeated elements are aggregated
- # into a list. However, we can't tell the difference between a scalar
- # value and a single element flattened list. So before calling the
- # real _handle_list, we know that "node" should actually be a list if
- # it's flattened, and if it's not, then we make it a one element list.
- if shape.serialization.get('flattened') and not isinstance(node, list):
- node = [node]
- return super()._handle_list(shape, node)
-
- def _handle_structure(self, shape, node):
- parsed = {}
- members = shape.members
- if shape.metadata.get('exception', False):
- node = self._get_error_root(node)
- xml_dict = self._build_name_to_xml_node(node)
- if self._has_unknown_tagged_union_member(shape, xml_dict):
- tag = self._get_first_key(xml_dict)
- return self._handle_unknown_tagged_union_member(tag)
- for member_name in members:
- member_shape = members[member_name]
- if (
- 'location' in member_shape.serialization
- or member_shape.serialization.get('eventheader')
- ):
- # All members with locations have already been handled,
- # so we don't need to parse these members.
- continue
- xml_name = self._member_key_name(member_shape, member_name)
- member_node = xml_dict.get(xml_name)
- if member_node is not None:
- parsed[member_name] = self._parse_shape(
- member_shape, member_node
- )
- elif member_shape.serialization.get('xmlAttribute'):
- attribs = {}
- location_name = member_shape.serialization['name']
- for key, value in node.attrib.items():
- new_key = self._namespace_re.sub(
- location_name.split(':')[0] + ':', key
- )
- attribs[new_key] = value
- if location_name in attribs:
- parsed[member_name] = attribs[location_name]
- return parsed
-
- def _get_error_root(self, original_root):
- if self._node_tag(original_root) == 'ErrorResponse':
- for child in original_root:
- if self._node_tag(child) == 'Error':
- return child
- return original_root
-
- def _member_key_name(self, shape, member_name):
- # This method is needed because we have to special case flattened list
- # with a serialization name. If this is the case we use the
- # locationName from the list's member shape as the key name for the
- # surrounding structure.
- if shape.type_name == 'list' and shape.serialization.get('flattened'):
- list_member_serialized_name = shape.member.serialization.get(
- 'name'
- )
- if list_member_serialized_name is not None:
- return list_member_serialized_name
- serialized_name = shape.serialization.get('name')
- if serialized_name is not None:
- return serialized_name
- return member_name
-
- def _build_name_to_xml_node(self, parent_node):
-        # If the parent node is actually a list, we should not be trying
- # to serialize it to a dictionary. Instead, return the first element
- # in the list.
- if isinstance(parent_node, list):
- return self._build_name_to_xml_node(parent_node[0])
- xml_dict = {}
- for item in parent_node:
- key = self._node_tag(item)
- if key in xml_dict:
- # If the key already exists, the most natural
- # way to handle this is to aggregate repeated
- # keys into a single list.
-                # <foo>1</foo><foo>2</foo> -> {'foo': [Node(1), Node(2)]}
- if isinstance(xml_dict[key], list):
- xml_dict[key].append(item)
- else:
- # Convert from a scalar to a list.
- xml_dict[key] = [xml_dict[key], item]
- else:
- xml_dict[key] = item
- return xml_dict
-
- def _parse_xml_string_to_dom(self, xml_string):
- try:
- parser = ETree.XMLParser(
- target=ETree.TreeBuilder(), encoding=self.DEFAULT_ENCODING
- )
- parser.feed(xml_string)
- root = parser.close()
- except XMLParseError as e:
- raise ResponseParserError(
- "Unable to parse response (%s), "
- "invalid XML received. Further retries may succeed:\n%s"
- % (e, xml_string)
- )
- return root
-
- def _replace_nodes(self, parsed):
- for key, value in parsed.items():
- if list(value):
- sub_dict = self._build_name_to_xml_node(value)
- parsed[key] = self._replace_nodes(sub_dict)
- else:
- parsed[key] = value.text
- return parsed
-
- @_text_content
- def _handle_boolean(self, shape, text):
- if text == 'true':
- return True
- else:
- return False
-
- @_text_content
- def _handle_float(self, shape, text):
- return float(text)
-
- @_text_content
- def _handle_timestamp(self, shape, text):
- return self._timestamp_parser(text)
-
- @_text_content
- def _handle_integer(self, shape, text):
- return int(text)
-
- @_text_content
- def _handle_string(self, shape, text):
- return text
-
- @_text_content
- def _handle_blob(self, shape, text):
- return self._blob_parser(text)
-
- _handle_character = _handle_string
- _handle_double = _handle_float
- _handle_long = _handle_integer
-
-
-class QueryParser(BaseXMLResponseParser):
- def _do_error_parse(self, response, shape):
- xml_contents = response['body']
- root = self._parse_xml_string_to_dom(xml_contents)
- parsed = self._build_name_to_xml_node(root)
- self._replace_nodes(parsed)
- # Once we've converted xml->dict, we need to make one or two
- # more adjustments to extract nested errors and to be consistent
- # with ResponseMetadata for non-error responses:
- # 1. {"Errors": {"Error": {...}}} -> {"Error": {...}}
- # 2. {"RequestId": "id"} -> {"ResponseMetadata": {"RequestId": "id"}}
- if 'Errors' in parsed:
- parsed.update(parsed.pop('Errors'))
- if 'RequestId' in parsed:
- parsed['ResponseMetadata'] = {'RequestId': parsed.pop('RequestId')}
- return parsed
-
- def _do_modeled_error_parse(self, response, shape):
- return self._parse_body_as_xml(response, shape, inject_metadata=False)
-
- def _do_parse(self, response, shape):
- return self._parse_body_as_xml(response, shape, inject_metadata=True)
-
- def _parse_body_as_xml(self, response, shape, inject_metadata=True):
- xml_contents = response['body']
- root = self._parse_xml_string_to_dom(xml_contents)
- parsed = {}
- if shape is not None:
- start = root
- if 'resultWrapper' in shape.serialization:
- start = self._find_result_wrapped_shape(
- shape.serialization['resultWrapper'], root
- )
- parsed = self._parse_shape(shape, start)
- if inject_metadata:
- self._inject_response_metadata(root, parsed)
- return parsed
-
- def _find_result_wrapped_shape(self, element_name, xml_root_node):
- mapping = self._build_name_to_xml_node(xml_root_node)
- return mapping[element_name]
-
- def _inject_response_metadata(self, node, inject_into):
- mapping = self._build_name_to_xml_node(node)
- child_node = mapping.get('ResponseMetadata')
- if child_node is not None:
- sub_mapping = self._build_name_to_xml_node(child_node)
- for key, value in sub_mapping.items():
- sub_mapping[key] = value.text
- inject_into['ResponseMetadata'] = sub_mapping
-
-
-class EC2QueryParser(QueryParser):
- def _inject_response_metadata(self, node, inject_into):
- mapping = self._build_name_to_xml_node(node)
- child_node = mapping.get('requestId')
- if child_node is not None:
- inject_into['ResponseMetadata'] = {'RequestId': child_node.text}
-
- def _do_error_parse(self, response, shape):
-        # EC2 errors look like:
-        # <Response>
-        #   <Errors>
-        #     <Error>
-        #       <Code>InvalidInstanceID.Malformed</Code>
-        #       <Message>Invalid id: "1343124"</Message>
-        #     </Error>
-        #   </Errors>
-        #   <RequestID>12345</RequestID>
-        # </Response>
- # This is different from QueryParser in that it's RequestID,
- # not RequestId
- original = super()._do_error_parse(response, shape)
- if 'RequestID' in original:
- original['ResponseMetadata'] = {
- 'RequestId': original.pop('RequestID')
- }
- return original
-
- def _get_error_root(self, original_root):
- for child in original_root:
- if self._node_tag(child) == 'Errors':
- for errors_child in child:
- if self._node_tag(errors_child) == 'Error':
- return errors_child
- return original_root
-
-
-class BaseJSONParser(ResponseParser):
- def _handle_structure(self, shape, value):
- final_parsed = {}
- if shape.is_document_type:
- final_parsed = value
- else:
- member_shapes = shape.members
- if value is None:
-                # If the value comes across the wire as "null" (None in python),
- # we should be returning this unchanged, instead of as an
- # empty dict.
- return None
- final_parsed = {}
- if self._has_unknown_tagged_union_member(shape, value):
- tag = self._get_first_key(value)
- return self._handle_unknown_tagged_union_member(tag)
- for member_name in member_shapes:
- member_shape = member_shapes[member_name]
- json_name = member_shape.serialization.get('name', member_name)
- raw_value = value.get(json_name)
- if raw_value is not None:
- final_parsed[member_name] = self._parse_shape(
- member_shapes[member_name], raw_value
- )
- return final_parsed
-
- def _handle_map(self, shape, value):
- parsed = {}
- key_shape = shape.key
- value_shape = shape.value
- for key, value in value.items():
- actual_key = self._parse_shape(key_shape, key)
- actual_value = self._parse_shape(value_shape, value)
- parsed[actual_key] = actual_value
- return parsed
-
- def _handle_blob(self, shape, value):
- return self._blob_parser(value)
-
- def _handle_timestamp(self, shape, value):
- return self._timestamp_parser(value)
-
- def _do_error_parse(self, response, shape):
- body = self._parse_body_as_json(response['body'])
- error = {"Error": {"Message": '', "Code": ''}, "ResponseMetadata": {}}
- headers = response['headers']
- # Error responses can have slightly different structures for json.
- # The basic structure is:
- #
- # {"__type":"ConnectClientException",
- # "message":"The error message."}
-
- # The error message can either come in the 'message' or 'Message' key
- # so we need to check for both.
- error['Error']['Message'] = body.get(
- 'message', body.get('Message', '')
- )
- # if the message did not contain an error code
- # include the response status code
- response_code = response.get('status_code')
- # Error response may contain an x-amzn-query-error header for json
- # we need to fetch the error code from this header in that case
- query_error = headers.get('x-amzn-query-error', '')
- query_error_components = query_error.split(';')
- code = None
- if len(query_error_components) == 2 and query_error_components[0]:
- code = query_error_components[0]
- error['Error']['Type'] = query_error_components[1]
- if code is None:
- code = body.get('__type', response_code and str(response_code))
- if code is not None:
- # code has a couple forms as well:
- # * "com.aws.dynamodb.vAPI#ProvisionedThroughputExceededException"
- # * "ResourceNotFoundException"
- if '#' in code:
- code = code.rsplit('#', 1)[1]
- error['Error']['Code'] = code
- self._inject_response_metadata(error, response['headers'])
- return error
-
- def _inject_response_metadata(self, parsed, headers):
- if 'x-amzn-requestid' in headers:
- parsed.setdefault('ResponseMetadata', {})['RequestId'] = headers[
- 'x-amzn-requestid'
- ]
-
- def _parse_body_as_json(self, body_contents):
- if not body_contents:
- return {}
- body = body_contents.decode(self.DEFAULT_ENCODING)
- try:
- original_parsed = json.loads(body)
- return original_parsed
- except ValueError:
- # if the body cannot be parsed, include
- # the literal string as the message
- return {'message': body}
-
-
-class BaseEventStreamParser(ResponseParser):
- def _do_parse(self, response, shape):
- final_parsed = {}
- if shape.serialization.get('eventstream'):
- event_type = response['headers'].get(':event-type')
- event_shape = shape.members.get(event_type)
- if event_shape:
- final_parsed[event_type] = self._do_parse(
- response, event_shape
- )
- else:
- self._parse_non_payload_attrs(
- response, shape, shape.members, final_parsed
- )
- self._parse_payload(response, shape, shape.members, final_parsed)
- return final_parsed
-
- def _do_error_parse(self, response, shape):
- exception_type = response['headers'].get(':exception-type')
- exception_shape = shape.members.get(exception_type)
- if exception_shape is not None:
- original_parsed = self._initial_body_parse(response['body'])
- body = self._parse_shape(exception_shape, original_parsed)
- error = {
- 'Error': {
- 'Code': exception_type,
- 'Message': body.get('Message', body.get('message', '')),
- }
- }
- else:
- error = {
- 'Error': {
- 'Code': response['headers'].get(':error-code', ''),
- 'Message': response['headers'].get(':error-message', ''),
- }
- }
- return error
-
- def _parse_payload(self, response, shape, member_shapes, final_parsed):
- if shape.serialization.get('event'):
- for name in member_shapes:
- member_shape = member_shapes[name]
- if member_shape.serialization.get('eventpayload'):
- body = response['body']
- if member_shape.type_name == 'blob':
- parsed_body = body
- elif member_shape.type_name == 'string':
- parsed_body = body.decode(self.DEFAULT_ENCODING)
- else:
- raw_parse = self._initial_body_parse(body)
- parsed_body = self._parse_shape(
- member_shape, raw_parse
- )
- final_parsed[name] = parsed_body
- return
- # If we didn't find an explicit payload, use the current shape
- original_parsed = self._initial_body_parse(response['body'])
- body_parsed = self._parse_shape(shape, original_parsed)
- final_parsed.update(body_parsed)
-
- def _parse_non_payload_attrs(
- self, response, shape, member_shapes, final_parsed
- ):
- headers = response['headers']
- for name in member_shapes:
- member_shape = member_shapes[name]
- if member_shape.serialization.get('eventheader'):
- if name in headers:
- value = headers[name]
- if member_shape.type_name == 'timestamp':
-                        # Event stream timestamps are in milliseconds, so we
- # divide by 1000 to convert to seconds.
- value = self._timestamp_parser(value / 1000.0)
- final_parsed[name] = value
-
- def _initial_body_parse(self, body_contents):
- # This method should do the initial xml/json parsing of the
-        # body. We still need to walk the parsed body in order
- # to convert types, but this method will do the first round
- # of parsing.
- raise NotImplementedError("_initial_body_parse")
-
-
-class EventStreamJSONParser(BaseEventStreamParser, BaseJSONParser):
- def _initial_body_parse(self, body_contents):
- return self._parse_body_as_json(body_contents)
-
-
-class EventStreamXMLParser(BaseEventStreamParser, BaseXMLResponseParser):
- def _initial_body_parse(self, xml_string):
- if not xml_string:
- return ETree.Element('')
- return self._parse_xml_string_to_dom(xml_string)
-
-
-class JSONParser(BaseJSONParser):
-
- EVENT_STREAM_PARSER_CLS = EventStreamJSONParser
-
- """Response parser for the "json" protocol."""
-
- def _do_parse(self, response, shape):
- parsed = {}
- if shape is not None:
- event_name = shape.event_stream_name
- if event_name:
- parsed = self._handle_event_stream(response, shape, event_name)
- else:
- parsed = self._handle_json_body(response['body'], shape)
- self._inject_response_metadata(parsed, response['headers'])
- return parsed
-
- def _do_modeled_error_parse(self, response, shape):
- return self._handle_json_body(response['body'], shape)
-
- def _handle_event_stream(self, response, shape, event_name):
- event_stream_shape = shape.members[event_name]
- event_stream = self._create_event_stream(response, event_stream_shape)
- try:
- event = event_stream.get_initial_response()
- except NoInitialResponseError:
- error_msg = 'First event was not of type initial-response'
- raise ResponseParserError(error_msg)
- parsed = self._handle_json_body(event.payload, shape)
- parsed[event_name] = event_stream
- return parsed
-
- def _handle_json_body(self, raw_body, shape):
- # The json.loads() gives us the primitive JSON types,
- # but we need to traverse the parsed JSON data to convert
- # to richer types (blobs, timestamps, etc.
- parsed_json = self._parse_body_as_json(raw_body)
- return self._parse_shape(shape, parsed_json)
-
-
-class BaseRestParser(ResponseParser):
- def _do_parse(self, response, shape):
- final_parsed = {}
- final_parsed['ResponseMetadata'] = self._populate_response_metadata(
- response
- )
- self._add_modeled_parse(response, shape, final_parsed)
- return final_parsed
-
- def _add_modeled_parse(self, response, shape, final_parsed):
- if shape is None:
- return final_parsed
- member_shapes = shape.members
- self._parse_non_payload_attrs(
- response, shape, member_shapes, final_parsed
- )
- self._parse_payload(response, shape, member_shapes, final_parsed)
-
- def _do_modeled_error_parse(self, response, shape):
- final_parsed = {}
- self._add_modeled_parse(response, shape, final_parsed)
- return final_parsed
-
- def _populate_response_metadata(self, response):
- metadata = {}
- headers = response['headers']
- if 'x-amzn-requestid' in headers:
- metadata['RequestId'] = headers['x-amzn-requestid']
- elif 'x-amz-request-id' in headers:
- metadata['RequestId'] = headers['x-amz-request-id']
- # HostId is what it's called whenever this value is returned
- # in an XML response body, so to be consistent, we'll always
-        # call it HostId.
- metadata['HostId'] = headers.get('x-amz-id-2', '')
- return metadata
-
- def _parse_payload(self, response, shape, member_shapes, final_parsed):
- if 'payload' in shape.serialization:
- # If a payload is specified in the output shape, then only that
- # shape is used for the body payload.
- payload_member_name = shape.serialization['payload']
- body_shape = member_shapes[payload_member_name]
- if body_shape.serialization.get('eventstream'):
- body = self._create_event_stream(response, body_shape)
- final_parsed[payload_member_name] = body
- elif body_shape.type_name in ['string', 'blob']:
- # This is a stream
- body = response['body']
- if isinstance(body, bytes):
- body = body.decode(self.DEFAULT_ENCODING)
- final_parsed[payload_member_name] = body
- else:
- original_parsed = self._initial_body_parse(response['body'])
- final_parsed[payload_member_name] = self._parse_shape(
- body_shape, original_parsed
- )
- else:
- original_parsed = self._initial_body_parse(response['body'])
- body_parsed = self._parse_shape(shape, original_parsed)
- final_parsed.update(body_parsed)
-
- def _parse_non_payload_attrs(
- self, response, shape, member_shapes, final_parsed
- ):
- headers = response['headers']
- for name in member_shapes:
- member_shape = member_shapes[name]
- location = member_shape.serialization.get('location')
- if location is None:
- continue
- elif location == 'statusCode':
- final_parsed[name] = self._parse_shape(
- member_shape, response['status_code']
- )
- elif location == 'headers':
- final_parsed[name] = self._parse_header_map(
- member_shape, headers
- )
- elif location == 'header':
- header_name = member_shape.serialization.get('name', name)
- if header_name in headers:
- final_parsed[name] = self._parse_shape(
- member_shape, headers[header_name]
- )
-
- def _parse_header_map(self, shape, headers):
- # Note that headers are case insensitive, so we .lower()
- # all header names and header prefixes.
- parsed = {}
- prefix = shape.serialization.get('name', '').lower()
- for header_name in headers:
- if header_name.lower().startswith(prefix):
- # The key name inserted into the parsed hash
- # strips off the prefix.
- name = header_name[len(prefix) :]
- parsed[name] = headers[header_name]
- return parsed
-
- def _initial_body_parse(self, body_contents):
- # This method should do the initial xml/json parsing of the
-        # body. We still need to walk the parsed body in order
- # to convert types, but this method will do the first round
- # of parsing.
- raise NotImplementedError("_initial_body_parse")
-
- def _handle_string(self, shape, value):
- parsed = value
- if is_json_value_header(shape):
- decoded = base64.b64decode(value).decode(self.DEFAULT_ENCODING)
- parsed = json.loads(decoded)
- return parsed
-
- def _handle_list(self, shape, node):
- location = shape.serialization.get('location')
- if location == 'header' and not isinstance(node, list):
- # List in headers may be a comma separated string as per RFC7230
- node = [e.strip() for e in node.split(',')]
- return super()._handle_list(shape, node)
-
-
-class RestJSONParser(BaseRestParser, BaseJSONParser):
-
- EVENT_STREAM_PARSER_CLS = EventStreamJSONParser
-
- def _initial_body_parse(self, body_contents):
- return self._parse_body_as_json(body_contents)
-
- def _do_error_parse(self, response, shape):
- error = super()._do_error_parse(response, shape)
- self._inject_error_code(error, response)
- return error
-
- def _inject_error_code(self, error, response):
- # The "Code" value can come from either a response
- # header or a value in the JSON body.
- body = self._initial_body_parse(response['body'])
- if 'x-amzn-errortype' in response['headers']:
- code = response['headers']['x-amzn-errortype']
- # Could be:
- # x-amzn-errortype: ValidationException:
- code = code.split(':')[0]
- error['Error']['Code'] = code
- elif 'code' in body or 'Code' in body:
- error['Error']['Code'] = body.get('code', body.get('Code', ''))
-
- def _handle_integer(self, shape, value):
- return int(value)
-
- _handle_long = _handle_integer
-
-
-class RestXMLParser(BaseRestParser, BaseXMLResponseParser):
-
- EVENT_STREAM_PARSER_CLS = EventStreamXMLParser
-
- def _initial_body_parse(self, xml_string):
- if not xml_string:
- return ETree.Element('')
- return self._parse_xml_string_to_dom(xml_string)
-
- def _do_error_parse(self, response, shape):
- # We're trying to be service agnostic here, but S3 does have a slightly
- # different response structure for its errors compared to other
-        # rest-xml services (route53/cloudfront). We handle this by just
-        # trying to parse both forms.
-        # First:
-        # <ErrorResponse xmlns="...">
-        #   <Error>
-        #     <Type>Sender</Type>
-        #     <Code>InvalidInput</Code>
-        #     <Message>Invalid resource type: foo</Message>
-        #   </Error>
-        #   <RequestId>request-id</RequestId>
-        # </ErrorResponse>
- if response['body']:
- # If the body ends up being invalid xml, the xml parser should not
-            # blow up. It should at least try to pull information about
- # the error response from other sources like the HTTP status code.
- try:
- return self._parse_error_from_body(response)
- except ResponseParserError:
- LOG.debug(
- 'Exception caught when parsing error response body:',
- exc_info=True,
- )
- return self._parse_error_from_http_status(response)
-
- def _parse_error_from_http_status(self, response):
- return {
- 'Error': {
- 'Code': str(response['status_code']),
- 'Message': http.client.responses.get(
- response['status_code'], ''
- ),
- },
- 'ResponseMetadata': {
- 'RequestId': response['headers'].get('x-amz-request-id', ''),
- 'HostId': response['headers'].get('x-amz-id-2', ''),
- },
- }
-
- def _parse_error_from_body(self, response):
- xml_contents = response['body']
- root = self._parse_xml_string_to_dom(xml_contents)
- parsed = self._build_name_to_xml_node(root)
- self._replace_nodes(parsed)
- if root.tag == 'Error':
- # This is an S3 error response. First we'll populate the
- # response metadata.
- metadata = self._populate_response_metadata(response)
- # The RequestId and the HostId are already in the
- # ResponseMetadata, but are also duplicated in the XML
- # body. We don't need these values in both places,
- # we'll just remove them from the parsed XML body.
- parsed.pop('RequestId', '')
- parsed.pop('HostId', '')
- return {'Error': parsed, 'ResponseMetadata': metadata}
- elif 'RequestId' in parsed:
-            # Other rest-xml services:
- parsed['ResponseMetadata'] = {'RequestId': parsed.pop('RequestId')}
- default = {'Error': {'Message': '', 'Code': ''}}
- merge_dicts(default, parsed)
- return default
-
- @_text_content
- def _handle_string(self, shape, text):
- text = super()._handle_string(shape, text)
- return text
-
-
-PROTOCOL_PARSERS = {
- 'ec2': EC2QueryParser,
- 'query': QueryParser,
- 'json': JSONParser,
- 'rest-json': RestJSONParser,
- 'rest-xml': RestXMLParser,
-}
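
A short usage sketch of the module's entry points (the HTTP response dict below is fabricated and `output_shape` stands in for a botocore model shape, which is not constructed here):

```python
# Minimal sketch; the response dict is made up and the parse() call is left
# commented out because a real botocore Shape object is needed for `output_shape`.
from botocore.parsers import create_parser

parser = create_parser("query")  # one of: ec2, query, json, rest-json, rest-xml
http_response = {
    "status_code": 200,
    "headers": {"x-amzn-requestid": "abc-123"},
    "body": b"<SomeOperationResponse><Result/></SomeOperationResponse>",
}
# parsed = parser.parse(http_response, output_shape)
# `parsed` is a dict of the modeled output keys plus ResponseMetadata
# (RequestId, HTTPStatusCode, HTTPHeaders), as described in the docstring above.
```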
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/__init__.py
deleted file mode 100644
index 4c6ec97ec6961bcf184b6e0b2437b9924db0b9de..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: MIT
-# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
-# Licensed to PSF under a Contributor Agreement.
-
-__all__ = ("loads", "load", "TOMLDecodeError")
-__version__ = "2.0.1" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT
-
-from ._parser import TOMLDecodeError, load, loads
-
-# Pretend this exception was created here.
-TOMLDecodeError.__module__ = __name__
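
Since this vendored copy only re-exports the upstream `tomli` API, usage is unchanged. A brief sketch, shown with the plain package name (inside pip the path would be `pip._vendor.tomli`, and vendored modules are not meant to be imported by outside code):

```python
import tomli  # within pip itself: from pip._vendor import tomli

doc = tomli.loads('name = "example"\nversion = "1.0"')
assert doc == {"name": "example", "version": "1.0"}

# load() expects a binary file object:
# with open("pyproject.toml", "rb") as f:
#     data = tomli.load(f)
```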
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/configs/quick_schedules/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/configs/quick_schedules/README.md
deleted file mode 100644
index a278199b8557a1e2fb341fe6757786a6cecb82b3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/configs/quick_schedules/README.md
+++ /dev/null
@@ -1 +0,0 @@
-These are quick configs for performance or accuracy regression tracking purposes.
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_structures.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_structures.py
deleted file mode 100644
index 34dca0cbb931effcb4a60c979ad2e32bab2eb8bf..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_structures.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-import unittest
-
-from densepose.structures import normalized_coords_transform
-
-
-class TestStructures(unittest.TestCase):
- def test_normalized_coords_transform(self):
- bbox = (32, 24, 288, 216)
- x0, y0, w, h = bbox
- xmin, ymin, xmax, ymax = x0, y0, x0 + w, y0 + h
- f = normalized_coords_transform(*bbox)
- # Top-left
- expected_p, actual_p = (-1, -1), f((xmin, ymin))
- self.assertEqual(expected_p, actual_p)
- # Top-right
- expected_p, actual_p = (1, -1), f((xmax, ymin))
- self.assertEqual(expected_p, actual_p)
- # Bottom-left
- expected_p, actual_p = (-1, 1), f((xmin, ymax))
- self.assertEqual(expected_p, actual_p)
- # Bottom-right
- expected_p, actual_p = (1, 1), f((xmax, ymax))
- self.assertEqual(expected_p, actual_p)
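
The four corner assertions above fully determine an affine map of the box onto $[-1, 1]^2$; as a sketch inferred from the test (not taken from the DensePose implementation itself), `normalized_coords_transform(x0, y0, w, h)` behaves like

$$
f(x, y) \;=\; \left(\frac{2\,(x - x_0)}{w} - 1,\;\; \frac{2\,(y - y_0)}{h} - 1\right),
$$

which sends $(x_{\min}, y_{\min}) \mapsto (-1, -1)$ and $(x_{\max}, y_{\max}) \mapsto (1, 1)$ for the test bbox $(32, 24, 288, 216)$.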
diff --git a/spaces/CVPR/GFPGAN-example/README.md b/spaces/CVPR/GFPGAN-example/README.md
deleted file mode 100644
index c3eb25964826586eab4d1173abc151434eefd1ef..0000000000000000000000000000000000000000
--- a/spaces/CVPR/GFPGAN-example/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: GFPGAN Example
-emoji: 🚀
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/fill.h
deleted file mode 100644
index 6665a264873f6a0a775de0aa670ee7567d899ad9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/fill.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits fill
-#include <thrust/system/cpp/detail/fill.h>
-
diff --git a/spaces/CVPR/MonoScene/monoscene/config.py b/spaces/CVPR/MonoScene/monoscene/config.py
deleted file mode 100644
index e03e806ad5e0c7ea4c439e3e82d955e3c0b3038f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/MonoScene/monoscene/config.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from transformers import PretrainedConfig
-from typing import List
-
-
-class MonoSceneConfig(PretrainedConfig):
-
- def __init__(
- self,
- dataset="kitti",
- n_classes=20,
- feature=64,
- project_scale=2,
- full_scene_size=(256, 256, 32),
- **kwargs,
- ):
- self.dataset = dataset
- self.n_classes = n_classes
- self.feature = feature
- self.project_scale = project_scale
- self.full_scene_size = full_scene_size
- super().__init__(**kwargs)
-
-
-
-
-
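
A brief, hypothetical usage sketch of the config class above; the save/load round trip relies on the standard `PretrainedConfig` machinery from `transformers`, and the directory name is a placeholder:

```python
# Round trip using MonoSceneConfig; "./monoscene-kitti" is a placeholder path.
from monoscene.config import MonoSceneConfig

cfg = MonoSceneConfig(dataset="kitti", n_classes=20, feature=64,
                      project_scale=2, full_scene_size=(256, 256, 32))
cfg.save_pretrained("./monoscene-kitti")                    # writes config.json
cfg2 = MonoSceneConfig.from_pretrained("./monoscene-kitti")
print(cfg2.dataset, cfg2.full_scene_size)                   # the tuple is stored as a list in JSON
```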
diff --git a/spaces/CVPR/WALT/walt/datasets/pipelines/auto_augment.py b/spaces/CVPR/WALT/walt/datasets/pipelines/auto_augment.py
deleted file mode 100644
index e19adaec18a96cac4dbe1d8c2c9193e9901be1fb..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/walt/datasets/pipelines/auto_augment.py
+++ /dev/null
@@ -1,890 +0,0 @@
-import copy
-
-import cv2
-import mmcv
-import numpy as np
-
-from ..builder import PIPELINES
-from .compose import Compose
-
-_MAX_LEVEL = 10
-
-
-def level_to_value(level, max_value):
- """Map from level to values based on max_value."""
- return (level / _MAX_LEVEL) * max_value
-
-
-def enhance_level_to_value(level, a=1.8, b=0.1):
- """Map from level to values."""
- return (level / _MAX_LEVEL) * a + b
-
-
-def random_negative(value, random_negative_prob):
- """Randomly negate value based on random_negative_prob."""
- return -value if np.random.rand() < random_negative_prob else value
-
-
-def bbox2fields():
- """The key correspondence from bboxes to labels, masks and
- segmentations."""
- bbox2label = {
- 'gt_bboxes': 'gt_labels',
- 'gt_bboxes_ignore': 'gt_labels_ignore'
- }
- bbox2mask = {
- 'gt_bboxes': 'gt_masks',
- 'gt_bboxes_ignore': 'gt_masks_ignore'
- }
- bbox2seg = {
- 'gt_bboxes': 'gt_semantic_seg',
- }
- return bbox2label, bbox2mask, bbox2seg
-
-
-@PIPELINES.register_module()
-class AutoAugment(object):
- """Auto augmentation.
-
- This data augmentation is proposed in `Learning Data Augmentation
-    Strategies for Object Detection <https://arxiv.org/abs/1906.11172>`_.
-
- TODO: Implement 'Shear', 'Sharpness' and 'Rotate' transforms
-
- Args:
- policies (list[list[dict]]): The policies of auto augmentation. Each
- policy in ``policies`` is a specific augmentation policy, and is
- composed by several augmentations (dict). When AutoAugment is
- called, a random policy in ``policies`` will be selected to
- augment images.
-
- Examples:
- >>> replace = (104, 116, 124)
- >>> policies = [
- >>> [
- >>> dict(type='Sharpness', prob=0.0, level=8),
- >>> dict(
- >>> type='Shear',
- >>> prob=0.4,
- >>> level=0,
- >>> replace=replace,
- >>> axis='x')
- >>> ],
- >>> [
- >>> dict(
- >>> type='Rotate',
- >>> prob=0.6,
- >>> level=10,
- >>> replace=replace),
- >>> dict(type='Color', prob=1.0, level=6)
- >>> ]
- >>> ]
- >>> augmentation = AutoAugment(policies)
-        >>> img = np.ones((100, 100, 3))
-        >>> gt_bboxes = np.ones((10, 4))
- >>> results = dict(img=img, gt_bboxes=gt_bboxes)
- >>> results = augmentation(results)
- """
-
- def __init__(self, policies):
- assert isinstance(policies, list) and len(policies) > 0, \
- 'Policies must be a non-empty list.'
- for policy in policies:
- assert isinstance(policy, list) and len(policy) > 0, \
- 'Each policy in policies must be a non-empty list.'
- for augment in policy:
- assert isinstance(augment, dict) and 'type' in augment, \
- 'Each specific augmentation must be a dict with key' \
- ' "type".'
-
- self.policies = copy.deepcopy(policies)
- self.transforms = [Compose(policy) for policy in self.policies]
-
- def __call__(self, results):
- transform = np.random.choice(self.transforms)
- return transform(results)
-
- def __repr__(self):
- return f'{self.__class__.__name__}(policies={self.policies})'
-
-
-@PIPELINES.register_module()
-class Shear(object):
- """Apply Shear Transformation to image (and its corresponding bbox, mask,
- segmentation).
-
- Args:
- level (int | float): The level should be in range [0,_MAX_LEVEL].
- img_fill_val (int | float | tuple): The filled values for image border.
-            If float, the same fill value will be used for all three
-            channels of the image. If tuple, it should have 3 elements.
- seg_ignore_label (int): The fill value used for segmentation map.
-            Note this value must equal ``ignore_label`` in ``semantic_head``
- of the corresponding config. Default 255.
- prob (float): The probability for performing Shear and should be in
- range [0, 1].
- direction (str): The direction for shear, either "horizontal"
- or "vertical".
- max_shear_magnitude (float): The maximum magnitude for Shear
- transformation.
- random_negative_prob (float): The probability that turns the
- offset negative. Should be in range [0,1]
- interpolation (str): Same as in :func:`mmcv.imshear`.
- """
-
- def __init__(self,
- level,
- img_fill_val=128,
- seg_ignore_label=255,
- prob=0.5,
- direction='horizontal',
- max_shear_magnitude=0.3,
- random_negative_prob=0.5,
- interpolation='bilinear'):
- assert isinstance(level, (int, float)), 'The level must be type ' \
- f'int or float, got {type(level)}.'
- assert 0 <= level <= _MAX_LEVEL, 'The level should be in range ' \
- f'[0,{_MAX_LEVEL}], got {level}.'
- if isinstance(img_fill_val, (float, int)):
- img_fill_val = tuple([float(img_fill_val)] * 3)
- elif isinstance(img_fill_val, tuple):
- assert len(img_fill_val) == 3, 'img_fill_val as tuple must ' \
- f'have 3 elements. got {len(img_fill_val)}.'
- img_fill_val = tuple([float(val) for val in img_fill_val])
- else:
- raise ValueError(
- 'img_fill_val must be float or tuple with 3 elements.')
- assert np.all([0 <= val <= 255 for val in img_fill_val]), 'all ' \
-            'elements of img_fill_val should be in range [0,255].' \
- f'got {img_fill_val}.'
- assert 0 <= prob <= 1.0, 'The probability of shear should be in ' \
- f'range [0,1]. got {prob}.'
- assert direction in ('horizontal', 'vertical'), 'direction must ' \
-            f'be either "horizontal" or "vertical". got {direction}.'
- assert isinstance(max_shear_magnitude, float), 'max_shear_magnitude ' \
- f'should be type float. got {type(max_shear_magnitude)}.'
-        assert 0. <= max_shear_magnitude <= 1., 'By default, ' \
- 'max_shear_magnitude should be in range [0,1]. ' \
- f'got {max_shear_magnitude}.'
- self.level = level
- self.magnitude = level_to_value(level, max_shear_magnitude)
- self.img_fill_val = img_fill_val
- self.seg_ignore_label = seg_ignore_label
- self.prob = prob
- self.direction = direction
- self.max_shear_magnitude = max_shear_magnitude
- self.random_negative_prob = random_negative_prob
- self.interpolation = interpolation
-
- def _shear_img(self,
- results,
- magnitude,
- direction='horizontal',
- interpolation='bilinear'):
- """Shear the image.
-
- Args:
- results (dict): Result dict from loading pipeline.
- magnitude (int | float): The magnitude used for shear.
- direction (str): The direction for shear, either "horizontal"
- or "vertical".
- interpolation (str): Same as in :func:`mmcv.imshear`.
- """
- for key in results.get('img_fields', ['img']):
- img = results[key]
- img_sheared = mmcv.imshear(
- img,
- magnitude,
- direction,
- border_value=self.img_fill_val,
- interpolation=interpolation)
- results[key] = img_sheared.astype(img.dtype)
-
- def _shear_bboxes(self, results, magnitude):
- """Shear the bboxes."""
- h, w, c = results['img_shape']
- if self.direction == 'horizontal':
- shear_matrix = np.stack([[1, magnitude],
- [0, 1]]).astype(np.float32) # [2, 2]
- else:
- shear_matrix = np.stack([[1, 0], [magnitude,
- 1]]).astype(np.float32)
- for key in results.get('bbox_fields', []):
- min_x, min_y, max_x, max_y = np.split(
- results[key], results[key].shape[-1], axis=-1)
- coordinates = np.stack([[min_x, min_y], [max_x, min_y],
- [min_x, max_y],
- [max_x, max_y]]) # [4, 2, nb_box, 1]
- coordinates = coordinates[..., 0].transpose(
- (2, 1, 0)).astype(np.float32) # [nb_box, 2, 4]
- new_coords = np.matmul(shear_matrix[None, :, :],
- coordinates) # [nb_box, 2, 4]
- min_x = np.min(new_coords[:, 0, :], axis=-1)
- min_y = np.min(new_coords[:, 1, :], axis=-1)
- max_x = np.max(new_coords[:, 0, :], axis=-1)
- max_y = np.max(new_coords[:, 1, :], axis=-1)
- min_x = np.clip(min_x, a_min=0, a_max=w)
- min_y = np.clip(min_y, a_min=0, a_max=h)
- max_x = np.clip(max_x, a_min=min_x, a_max=w)
- max_y = np.clip(max_y, a_min=min_y, a_max=h)
- results[key] = np.stack([min_x, min_y, max_x, max_y],
- axis=-1).astype(results[key].dtype)
-
- def _shear_masks(self,
- results,
- magnitude,
- direction='horizontal',
- fill_val=0,
- interpolation='bilinear'):
- """Shear the masks."""
- h, w, c = results['img_shape']
- for key in results.get('mask_fields', []):
- masks = results[key]
- results[key] = masks.shear((h, w),
- magnitude,
- direction,
- border_value=fill_val,
- interpolation=interpolation)
-
- def _shear_seg(self,
- results,
- magnitude,
- direction='horizontal',
- fill_val=255,
- interpolation='bilinear'):
- """Shear the segmentation maps."""
- for key in results.get('seg_fields', []):
- seg = results[key]
- results[key] = mmcv.imshear(
- seg,
- magnitude,
- direction,
- border_value=fill_val,
- interpolation=interpolation).astype(seg.dtype)
-
- def _filter_invalid(self, results, min_bbox_size=0):
- """Filter bboxes and corresponding masks too small after shear
- augmentation."""
- bbox2label, bbox2mask, _ = bbox2fields()
- for key in results.get('bbox_fields', []):
- bbox_w = results[key][:, 2] - results[key][:, 0]
- bbox_h = results[key][:, 3] - results[key][:, 1]
- valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size)
- valid_inds = np.nonzero(valid_inds)[0]
- results[key] = results[key][valid_inds]
- # label fields. e.g. gt_labels and gt_labels_ignore
- label_key = bbox2label.get(key)
- if label_key in results:
- results[label_key] = results[label_key][valid_inds]
- # mask fields, e.g. gt_masks and gt_masks_ignore
- mask_key = bbox2mask.get(key)
- if mask_key in results:
- results[mask_key] = results[mask_key][valid_inds]
-
- def __call__(self, results):
- """Call function to shear images, bounding boxes, masks and semantic
- segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Sheared results.
- """
- if np.random.rand() > self.prob:
- return results
- magnitude = random_negative(self.magnitude, self.random_negative_prob)
- self._shear_img(results, magnitude, self.direction, self.interpolation)
- self._shear_bboxes(results, magnitude)
- # fill_val set to 0 for background of mask.
- self._shear_masks(
- results,
- magnitude,
- self.direction,
- fill_val=0,
- interpolation=self.interpolation)
- self._shear_seg(
- results,
- magnitude,
- self.direction,
- fill_val=self.seg_ignore_label,
- interpolation=self.interpolation)
- self._filter_invalid(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(level={self.level}, '
- repr_str += f'img_fill_val={self.img_fill_val}, '
- repr_str += f'seg_ignore_label={self.seg_ignore_label}, '
- repr_str += f'prob={self.prob}, '
- repr_str += f'direction={self.direction}, '
- repr_str += f'max_shear_magnitude={self.max_shear_magnitude}, '
- repr_str += f'random_negative_prob={self.random_negative_prob}, '
- repr_str += f'interpolation={self.interpolation})'
- return repr_str
-
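As a rough sketch, the registered Shear transform above would typically be wired into a data pipeline as a dict-style config entry built by the registry; the surrounding pipeline definition is assumed here and is not part of this file:

    # Hypothetical pipeline entry. `level` is mapped to a shear magnitude via
    # level_to_value(level, max_shear_magnitude), and the magnitude's sign is
    # flipped with probability `random_negative_prob` inside __call__.
    shear_aug = dict(
        type='Shear',
        level=5,
        prob=0.5,
        direction='horizontal',
        max_shear_magnitude=0.3,
        random_negative_prob=0.5)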
-
-@PIPELINES.register_module()
-class Rotate(object):
- """Apply Rotate Transformation to image (and its corresponding bbox, mask,
- segmentation).
-
- Args:
- level (int | float): The level should be in range (0,_MAX_LEVEL].
- scale (int | float): Isotropic scale factor. Same in
- ``mmcv.imrotate``.
- center (int | float | tuple[float]): Center point (w, h) of the
- rotation in the source image. If None, the center of the
- image will be used. Same in ``mmcv.imrotate``.
-        img_fill_val (int | float | tuple): The fill value for the image
-            border. If float, the same value is used for all three image
-            channels. If tuple, it should have 3 elements (i.e. equal to
-            the number of image channels).
-        seg_ignore_label (int): The fill value used for the segmentation
-            map. Note this value must equal ``ignore_label`` in
-            ``semantic_head`` of the corresponding config. Default 255.
-        prob (float): The probability of performing the transformation,
-            which should be in range [0, 1].
-        max_rotate_angle (int | float): The maximum angle for the rotate
-            transformation.
-        random_negative_prob (float): The probability of negating the
-            rotation angle.
- """
-
- def __init__(self,
- level,
- scale=1,
- center=None,
- img_fill_val=128,
- seg_ignore_label=255,
- prob=0.5,
- max_rotate_angle=30,
- random_negative_prob=0.5):
- assert isinstance(level, (int, float)), \
- f'The level must be type int or float. got {type(level)}.'
- assert 0 <= level <= _MAX_LEVEL, \
- f'The level should be in range (0,{_MAX_LEVEL}]. got {level}.'
- assert isinstance(scale, (int, float)), \
- f'The scale must be type int or float. got type {type(scale)}.'
- if isinstance(center, (int, float)):
- center = (center, center)
- elif isinstance(center, tuple):
- assert len(center) == 2, 'center with type tuple must have '\
- f'2 elements. got {len(center)} elements.'
- else:
- assert center is None, 'center must be None or type int, '\
- f'float or tuple, got type {type(center)}.'
- if isinstance(img_fill_val, (float, int)):
- img_fill_val = tuple([float(img_fill_val)] * 3)
- elif isinstance(img_fill_val, tuple):
- assert len(img_fill_val) == 3, 'img_fill_val as tuple must '\
- f'have 3 elements. got {len(img_fill_val)}.'
- img_fill_val = tuple([float(val) for val in img_fill_val])
- else:
- raise ValueError(
- 'img_fill_val must be float or tuple with 3 elements.')
-        assert np.all([0 <= val <= 255 for val in img_fill_val]), \
-            'all elements of img_fill_val should be in range [0,255]. '\
-            f'got {img_fill_val}.'
-        assert 0 <= prob <= 1.0, 'The probability should be in range [0,1]. '\
-            f'got {prob}.'
- assert isinstance(max_rotate_angle, (int, float)), 'max_rotate_angle '\
- f'should be type int or float. got type {type(max_rotate_angle)}.'
- self.level = level
- self.scale = scale
- # Rotation angle in degrees. Positive values mean
- # clockwise rotation.
- self.angle = level_to_value(level, max_rotate_angle)
- self.center = center
- self.img_fill_val = img_fill_val
- self.seg_ignore_label = seg_ignore_label
- self.prob = prob
- self.max_rotate_angle = max_rotate_angle
- self.random_negative_prob = random_negative_prob
-
- def _rotate_img(self, results, angle, center=None, scale=1.0):
- """Rotate the image.
-
- Args:
- results (dict): Result dict from loading pipeline.
- angle (float): Rotation angle in degrees, positive values
- mean clockwise rotation. Same in ``mmcv.imrotate``.
- center (tuple[float], optional): Center point (w, h) of the
- rotation. Same in ``mmcv.imrotate``.
- scale (int | float): Isotropic scale factor. Same in
- ``mmcv.imrotate``.
- """
- for key in results.get('img_fields', ['img']):
- img = results[key].copy()
- img_rotated = mmcv.imrotate(
- img, angle, center, scale, border_value=self.img_fill_val)
- results[key] = img_rotated.astype(img.dtype)
-
- def _rotate_bboxes(self, results, rotate_matrix):
- """Rotate the bboxes."""
- h, w, c = results['img_shape']
- for key in results.get('bbox_fields', []):
- min_x, min_y, max_x, max_y = np.split(
- results[key], results[key].shape[-1], axis=-1)
- coordinates = np.stack([[min_x, min_y], [max_x, min_y],
- [min_x, max_y],
- [max_x, max_y]]) # [4, 2, nb_bbox, 1]
- # pad 1 to convert from format [x, y] to homogeneous
- # coordinates format [x, y, 1]
- coordinates = np.concatenate(
- (coordinates,
- np.ones((4, 1, coordinates.shape[2], 1), coordinates.dtype)),
- axis=1) # [4, 3, nb_bbox, 1]
- coordinates = coordinates.transpose(
- (2, 0, 1, 3)) # [nb_bbox, 4, 3, 1]
- rotated_coords = np.matmul(rotate_matrix,
- coordinates) # [nb_bbox, 4, 2, 1]
- rotated_coords = rotated_coords[..., 0] # [nb_bbox, 4, 2]
- min_x, min_y = np.min(
- rotated_coords[:, :, 0], axis=1), np.min(
- rotated_coords[:, :, 1], axis=1)
- max_x, max_y = np.max(
- rotated_coords[:, :, 0], axis=1), np.max(
- rotated_coords[:, :, 1], axis=1)
- min_x, min_y = np.clip(
- min_x, a_min=0, a_max=w), np.clip(
- min_y, a_min=0, a_max=h)
- max_x, max_y = np.clip(
- max_x, a_min=min_x, a_max=w), np.clip(
- max_y, a_min=min_y, a_max=h)
- results[key] = np.stack([min_x, min_y, max_x, max_y],
- axis=-1).astype(results[key].dtype)
-
- def _rotate_masks(self,
- results,
- angle,
- center=None,
- scale=1.0,
- fill_val=0):
- """Rotate the masks."""
- h, w, c = results['img_shape']
- for key in results.get('mask_fields', []):
- masks = results[key]
- results[key] = masks.rotate((h, w), angle, center, scale, fill_val)
-
- def _rotate_seg(self,
- results,
- angle,
- center=None,
- scale=1.0,
- fill_val=255):
- """Rotate the segmentation map."""
- for key in results.get('seg_fields', []):
- seg = results[key].copy()
- results[key] = mmcv.imrotate(
- seg, angle, center, scale,
- border_value=fill_val).astype(seg.dtype)
-
- def _filter_invalid(self, results, min_bbox_size=0):
- """Filter bboxes and corresponding masks too small after rotate
- augmentation."""
- bbox2label, bbox2mask, _ = bbox2fields()
- for key in results.get('bbox_fields', []):
- bbox_w = results[key][:, 2] - results[key][:, 0]
- bbox_h = results[key][:, 3] - results[key][:, 1]
- valid_inds = (bbox_w > min_bbox_size) & (bbox_h > min_bbox_size)
- valid_inds = np.nonzero(valid_inds)[0]
- results[key] = results[key][valid_inds]
- # label fields. e.g. gt_labels and gt_labels_ignore
- label_key = bbox2label.get(key)
- if label_key in results:
- results[label_key] = results[label_key][valid_inds]
- # mask fields, e.g. gt_masks and gt_masks_ignore
- mask_key = bbox2mask.get(key)
- if mask_key in results:
- results[mask_key] = results[mask_key][valid_inds]
-
- def __call__(self, results):
- """Call function to rotate images, bounding boxes, masks and semantic
- segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Rotated results.
- """
- if np.random.rand() > self.prob:
- return results
- h, w = results['img'].shape[:2]
- center = self.center
- if center is None:
- center = ((w - 1) * 0.5, (h - 1) * 0.5)
- angle = random_negative(self.angle, self.random_negative_prob)
- self._rotate_img(results, angle, center, self.scale)
- rotate_matrix = cv2.getRotationMatrix2D(center, -angle, self.scale)
- self._rotate_bboxes(results, rotate_matrix)
- self._rotate_masks(results, angle, center, self.scale, fill_val=0)
- self._rotate_seg(
- results, angle, center, self.scale, fill_val=self.seg_ignore_label)
- self._filter_invalid(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(level={self.level}, '
- repr_str += f'scale={self.scale}, '
- repr_str += f'center={self.center}, '
- repr_str += f'img_fill_val={self.img_fill_val}, '
- repr_str += f'seg_ignore_label={self.seg_ignore_label}, '
- repr_str += f'prob={self.prob}, '
- repr_str += f'max_rotate_angle={self.max_rotate_angle}, '
- repr_str += f'random_negative_prob={self.random_negative_prob})'
- return repr_str
-
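A minimal standalone sketch of the corner-point math used by ``_rotate_bboxes`` above, for a single box on a hypothetical 128x128 image (values are illustrative only; the ``-30`` mirrors the ``-angle`` sign flip that ``__call__`` passes to ``cv2.getRotationMatrix2D``):

    import cv2
    import numpy as np

    x1, y1, x2, y2 = 10., 20., 50., 60.
    corners = np.array([[x1, y1], [x2, y1], [x1, y2], [x2, y2]])    # [4, 2]
    # pad to homogeneous coordinates [x, y, 1]
    corners_h = np.concatenate([corners, np.ones((4, 1))], axis=1)  # [4, 3]
    # center of a 128x128 image, computed as in __call__
    matrix = cv2.getRotationMatrix2D((63.5, 63.5), -30, 1.0)        # [2, 3]
    rotated = corners_h @ matrix.T                                  # [4, 2]
    new_box = [rotated[:, 0].min(), rotated[:, 1].min(),
               rotated[:, 0].max(), rotated[:, 1].max()]  # then clipped to the image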
-
-@PIPELINES.register_module()
-class Translate(object):
- """Translate the images, bboxes, masks and segmentation maps horizontally
- or vertically.
-
- Args:
- level (int | float): The level for Translate and should be in
- range [0,_MAX_LEVEL].
- prob (float): The probability for performing translation and
- should be in range [0, 1].
-        img_fill_val (int | float | tuple): The fill value for the image
-            border. If float, the same fill value is used for all three
-            image channels. If tuple, it should have 3 elements (i.e.
-            equal to the number of image channels).
-        seg_ignore_label (int): The fill value used for the segmentation
-            map. Note this value must equal ``ignore_label`` in
-            ``semantic_head`` of the corresponding config. Default 255.
-        direction (str): The translate direction, either "horizontal"
-            or "vertical".
-        max_translate_offset (int | float): The maximum pixel offset for
-            Translate.
-        random_negative_prob (float): The probability of negating the
-            offset.
-        min_size (int | float): The minimum bbox size (in pixels) used to
-            filter invalid bboxes after the translation.
- """
-
- def __init__(self,
- level,
- prob=0.5,
- img_fill_val=128,
- seg_ignore_label=255,
- direction='horizontal',
- max_translate_offset=250.,
- random_negative_prob=0.5,
- min_size=0):
- assert isinstance(level, (int, float)), \
- 'The level must be type int or float.'
- assert 0 <= level <= _MAX_LEVEL, \
- 'The level used for calculating Translate\'s offset should be ' \
- 'in range [0,_MAX_LEVEL]'
- assert 0 <= prob <= 1.0, \
- 'The probability of translation should be in range [0, 1].'
- if isinstance(img_fill_val, (float, int)):
- img_fill_val = tuple([float(img_fill_val)] * 3)
- elif isinstance(img_fill_val, tuple):
- assert len(img_fill_val) == 3, \
- 'img_fill_val as tuple must have 3 elements.'
- img_fill_val = tuple([float(val) for val in img_fill_val])
- else:
- raise ValueError('img_fill_val must be type float or tuple.')
-        assert np.all([0 <= val <= 255 for val in img_fill_val]), \
-            'all elements of img_fill_val should be in range [0,255].'
- assert direction in ('horizontal', 'vertical'), \
- 'direction should be "horizontal" or "vertical".'
- assert isinstance(max_translate_offset, (int, float)), \
- 'The max_translate_offset must be type int or float.'
- # the offset used for translation
- self.offset = int(level_to_value(level, max_translate_offset))
- self.level = level
- self.prob = prob
- self.img_fill_val = img_fill_val
- self.seg_ignore_label = seg_ignore_label
- self.direction = direction
- self.max_translate_offset = max_translate_offset
- self.random_negative_prob = random_negative_prob
- self.min_size = min_size
-
- def _translate_img(self, results, offset, direction='horizontal'):
- """Translate the image.
-
- Args:
- results (dict): Result dict from loading pipeline.
- offset (int | float): The offset for translate.
- direction (str): The translate direction, either "horizontal"
- or "vertical".
- """
- for key in results.get('img_fields', ['img']):
- img = results[key].copy()
- results[key] = mmcv.imtranslate(
- img, offset, direction, self.img_fill_val).astype(img.dtype)
-
- def _translate_bboxes(self, results, offset):
- """Shift bboxes horizontally or vertically, according to offset."""
- h, w, c = results['img_shape']
- for key in results.get('bbox_fields', []):
- min_x, min_y, max_x, max_y = np.split(
- results[key], results[key].shape[-1], axis=-1)
- if self.direction == 'horizontal':
- min_x = np.maximum(0, min_x + offset)
- max_x = np.minimum(w, max_x + offset)
- elif self.direction == 'vertical':
- min_y = np.maximum(0, min_y + offset)
- max_y = np.minimum(h, max_y + offset)
-
- # the boxes translated outside of image will be filtered along with
- # the corresponding masks, by invoking ``_filter_invalid``.
- results[key] = np.concatenate([min_x, min_y, max_x, max_y],
- axis=-1)
-
- def _translate_masks(self,
- results,
- offset,
- direction='horizontal',
- fill_val=0):
- """Translate masks horizontally or vertically."""
- h, w, c = results['img_shape']
- for key in results.get('mask_fields', []):
- masks = results[key]
- results[key] = masks.translate((h, w), offset, direction, fill_val)
-
- def _translate_seg(self,
- results,
- offset,
- direction='horizontal',
- fill_val=255):
- """Translate segmentation maps horizontally or vertically."""
- for key in results.get('seg_fields', []):
- seg = results[key].copy()
- results[key] = mmcv.imtranslate(seg, offset, direction,
- fill_val).astype(seg.dtype)
-
- def _filter_invalid(self, results, min_size=0):
- """Filter bboxes and masks too small or translated out of image."""
- bbox2label, bbox2mask, _ = bbox2fields()
- for key in results.get('bbox_fields', []):
- bbox_w = results[key][:, 2] - results[key][:, 0]
- bbox_h = results[key][:, 3] - results[key][:, 1]
- valid_inds = (bbox_w > min_size) & (bbox_h > min_size)
- valid_inds = np.nonzero(valid_inds)[0]
- results[key] = results[key][valid_inds]
- # label fields. e.g. gt_labels and gt_labels_ignore
- label_key = bbox2label.get(key)
- if label_key in results:
- results[label_key] = results[label_key][valid_inds]
- # mask fields, e.g. gt_masks and gt_masks_ignore
- mask_key = bbox2mask.get(key)
- if mask_key in results:
- results[mask_key] = results[mask_key][valid_inds]
- return results
-
- def __call__(self, results):
- """Call function to translate images, bounding boxes, masks and
- semantic segmentation maps.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Translated results.
- """
- if np.random.rand() > self.prob:
- return results
- offset = random_negative(self.offset, self.random_negative_prob)
- self._translate_img(results, offset, self.direction)
- self._translate_bboxes(results, offset)
-        # fill_val defaults to 0 for BitmapMasks and None for PolygonMasks.
- self._translate_masks(results, offset, self.direction)
- # fill_val set to ``seg_ignore_label`` for the ignored value
- # of segmentation map.
- self._translate_seg(
- results, offset, self.direction, fill_val=self.seg_ignore_label)
- self._filter_invalid(results, min_size=self.min_size)
- return results
-
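For reference, a minimal direct call of the Translate transform above on a hand-built results dict (in the real pipeline this dict comes from the loading transforms and also carries bbox/mask/seg fields; the values below are illustrative, and ``level_to_value``/``bbox2fields`` are assumed to be the helpers defined earlier in this module):

    import numpy as np

    img = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
    results = dict(img=img, img_shape=img.shape,
                   bbox_fields=[], mask_fields=[], seg_fields=[])
    translate = Translate(level=5, prob=1.0, direction='vertical',
                          max_translate_offset=50.)
    results = translate(results)  # results['img'] is now shifted vertically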
-
-@PIPELINES.register_module()
-class ColorTransform(object):
- """Apply Color transformation to image. The bboxes, masks, and
- segmentations are not modified.
-
- Args:
- level (int | float): Should be in range [0,_MAX_LEVEL].
- prob (float): The probability for performing Color transformation.
- """
-
- def __init__(self, level, prob=0.5):
- assert isinstance(level, (int, float)), \
- 'The level must be type int or float.'
- assert 0 <= level <= _MAX_LEVEL, \
- 'The level should be in range [0,_MAX_LEVEL].'
- assert 0 <= prob <= 1.0, \
- 'The probability should be in range [0,1].'
- self.level = level
- self.prob = prob
- self.factor = enhance_level_to_value(level)
-
- def _adjust_color_img(self, results, factor=1.0):
- """Apply Color transformation to image."""
- for key in results.get('img_fields', ['img']):
-            # NOTE: by default the image is assumed to be in BGR format
- img = results[key]
- results[key] = mmcv.adjust_color(img, factor).astype(img.dtype)
-
- def __call__(self, results):
- """Call function for Color transformation.
-
- Args:
- results (dict): Result dict from loading pipeline.
-
- Returns:
- dict: Colored results.
- """
- if np.random.rand() > self.prob:
- return results
- self._adjust_color_img(results, self.factor)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(level={self.level}, '
- repr_str += f'prob={self.prob})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class EqualizeTransform(object):
- """Apply Equalize transformation to image. The bboxes, masks and
- segmentations are not modified.
-
- Args:
- prob (float): The probability for performing Equalize transformation.
- """
-
- def __init__(self, prob=0.5):
- assert 0 <= prob <= 1.0, \
- 'The probability should be in range [0,1].'
- self.prob = prob
-
- def _imequalize(self, results):
- """Equalizes the histogram of one image."""
- for key in results.get('img_fields', ['img']):
- img = results[key]
- results[key] = mmcv.imequalize(img).astype(img.dtype)
-
- def __call__(self, results):
- """Call function for Equalize transformation.
-
- Args:
- results (dict): Results dict from loading pipeline.
-
- Returns:
- dict: Results after the transformation.
- """
- if np.random.rand() > self.prob:
- return results
- self._imequalize(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
-        repr_str += f'(prob={self.prob})'
-        return repr_str
-
-
-@PIPELINES.register_module()
-class BrightnessTransform(object):
- """Apply Brightness transformation to image. The bboxes, masks and
- segmentations are not modified.
-
- Args:
- level (int | float): Should be in range [0,_MAX_LEVEL].
- prob (float): The probability for performing Brightness transformation.
- """
-
- def __init__(self, level, prob=0.5):
- assert isinstance(level, (int, float)), \
- 'The level must be type int or float.'
- assert 0 <= level <= _MAX_LEVEL, \
- 'The level should be in range [0,_MAX_LEVEL].'
- assert 0 <= prob <= 1.0, \
- 'The probability should be in range [0,1].'
- self.level = level
- self.prob = prob
- self.factor = enhance_level_to_value(level)
-
- def _adjust_brightness_img(self, results, factor=1.0):
- """Adjust the brightness of image."""
- for key in results.get('img_fields', ['img']):
- img = results[key]
- results[key] = mmcv.adjust_brightness(img,
- factor).astype(img.dtype)
-
- def __call__(self, results):
- """Call function for Brightness transformation.
-
- Args:
- results (dict): Results dict from loading pipeline.
-
- Returns:
- dict: Results after the transformation.
- """
- if np.random.rand() > self.prob:
- return results
- self._adjust_brightness_img(results, self.factor)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(level={self.level}, '
- repr_str += f'prob={self.prob})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class ContrastTransform(object):
- """Apply Contrast transformation to image. The bboxes, masks and
- segmentations are not modified.
-
- Args:
- level (int | float): Should be in range [0,_MAX_LEVEL].
- prob (float): The probability for performing Contrast transformation.
- """
-
- def __init__(self, level, prob=0.5):
- assert isinstance(level, (int, float)), \
- 'The level must be type int or float.'
- assert 0 <= level <= _MAX_LEVEL, \
- 'The level should be in range [0,_MAX_LEVEL].'
- assert 0 <= prob <= 1.0, \
- 'The probability should be in range [0,1].'
- self.level = level
- self.prob = prob
- self.factor = enhance_level_to_value(level)
-
- def _adjust_contrast_img(self, results, factor=1.0):
- """Adjust the image contrast."""
- for key in results.get('img_fields', ['img']):
- img = results[key]
- results[key] = mmcv.adjust_contrast(img, factor).astype(img.dtype)
-
- def __call__(self, results):
- """Call function for Contrast transformation.
-
- Args:
- results (dict): Results dict from loading pipeline.
-
- Returns:
- dict: Results after the transformation.
- """
- if np.random.rand() > self.prob:
- return results
- self._adjust_contrast_img(results, self.factor)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(level={self.level}, '
- repr_str += f'prob={self.prob})'
- return repr_str
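
The four photometric transforms above only modify the image: ColorTransform, BrightnessTransform and ContrastTransform map ``level`` to an enhancement factor via ``enhance_level_to_value`` (referenced above), while EqualizeTransform only takes ``prob``. A hypothetical way to chain them as pipeline config entries (the enclosing pipeline definition is assumed, not shown here):

    color_policy = [
        dict(type='ColorTransform', level=6, prob=0.5),
        dict(type='EqualizeTransform', prob=0.3),
        dict(type='BrightnessTransform', level=4, prob=0.5),
        dict(type='ContrastTransform', level=4, prob=0.5),
    ]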
diff --git a/spaces/CVPR/lama-example/app.py b/spaces/CVPR/lama-example/app.py
deleted file mode 100644
index 1e08c18901bb85d211d1da175995642af361b519..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/app.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-os.system("gdown https://drive.google.com/uc?id=1-95IOJ-2y9BtmABiffIwndPqNZD_gLnV")
-os.system("unzip big-lama.zip")
-import cv2
-import paddlehub as hub
-import gradio as gr
-import torch
-from PIL import Image, ImageOps
-import numpy as np
-os.mkdir("data")
-os.mkdir("dataout")
-model = hub.Module(name='U2Net')
-def infer(img,mask,option):
- img = ImageOps.contain(img, (700,700))
- width, height = img.size
- img.save("./data/data.png")
- if option == "automatic (U2net)":
- result = model.Segmentation(
- images=[cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)],
- paths=None,
- batch_size=1,
- input_size=320,
- output_dir='output',
- visualization=True)
- im = Image.fromarray(result[0]['mask'])
- else:
- mask = mask.resize((width,height))
- im = mask
- im.save("./data/data_mask.png")
- os.system('python predict.py model.path=/home/user/app/big-lama/ indir=/home/user/app/data/ outdir=/home/user/app/dataout/ device=cpu')
- return "./dataout/data_mask.png",im
-
-inputs = [gr.inputs.Image(type='pil', label="Original Image"),gr.inputs.Image(type='pil',source="canvas", label="Mask",invert_colors=True),gr.inputs.Radio(choices=["automatic (U2net)","manual"], type="value", default="manual", label="Masking option")]
-outputs = [gr.outputs.Image(type="file",label="output"),gr.outputs.Image(type="pil",label="Mask")]
-title = "LaMa Image Inpainting Example"
-description = "Gradio demo for LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Masks are generated by U^2net"
-article = ""
-examples = [
- ['person512.png',"canvas.png","automatic (U2net)"],
- ['person512.png',"maskexam.png","manual"]
-]
-gr.Interface(infer, inputs, outputs, title=title, description=description, article=article, examples=examples).launch(enable_queue=True,cache_examples=True)
\ No newline at end of file
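
For context, a hypothetical local invocation of the ``infer`` function defined above, using the example assets from the ``examples`` list; this assumes the big-lama weights, ``predict.py`` and the ``data``/``dataout`` directories created at the top of the script are all in place:

    from PIL import Image

    # manual mask: the second argument is resized and used as the inpainting mask
    out_path, mask = infer(Image.open("person512.png"),
                           Image.open("maskexam.png"),
                           "manual")

    # automatic mask: the mask argument is ignored and U2Net segmentation is used;
    # the option string must match the Radio choice exactly
    out_path, mask = infer(Image.open("person512.png"), None, "automatic (U2net)")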
diff --git a/spaces/CVPR/lama-example/models/ade20k/utils.py b/spaces/CVPR/lama-example/models/ade20k/utils.py
deleted file mode 100644
index f337db7db54c82be041698d694e1403e8918c4c0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/models/ade20k/utils.py
+++ /dev/null
@@ -1,40 +0,0 @@
-"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch"""
-
-import os
-import sys
-
-import numpy as np
-import torch
-
-try:
- from urllib import urlretrieve
-except ImportError:
- from urllib.request import urlretrieve
-
-
-def load_url(url, model_dir='./pretrained', map_location=None):
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- filename = url.split('/')[-1]
- cached_file = os.path.join(model_dir, filename)
- if not os.path.exists(cached_file):
- sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file))
- urlretrieve(url, cached_file)
- return torch.load(cached_file, map_location=map_location)
-
-
-def color_encode(labelmap, colors, mode='RGB'):
- labelmap = labelmap.astype('int')
- labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3),
- dtype=np.uint8)
- for label in np.unique(labelmap):
- if label < 0:
- continue
- labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \
- np.tile(colors[label],
- (labelmap.shape[0], labelmap.shape[1], 1))
-
- if mode == 'BGR':
- return labelmap_rgb[:, :, ::-1]
- else:
- return labelmap_rgb
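
A small usage sketch for the two helpers above (the palette, label map and URL below are placeholders, not values from the original project):

    import numpy as np

    colors = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)
    labelmap = np.array([[0, 1], [2, 2]])
    rgb = color_encode(labelmap, colors)          # (2, 2, 3) uint8, RGB order
    bgr = color_encode(labelmap, colors, 'BGR')   # same image, channels reversed

    # load_url() caches the checkpoint under ./pretrained before torch.load():
    # state = load_url('https://example.com/encoder_epoch_20.pth')  # placeholder URL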
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/capoo_rip/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/capoo_rip/__init__.py
deleted file mode 100644
index 5711280bc1c3fd1efed76725ff6698e7813067c1..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/capoo_rip/__init__.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.utils import save_gif
-
-img_dir = Path(__file__).parent / "images"
-
-
-def capoo_rip(images: List[BuildImage], texts, args):
- img = images[0].convert("RGBA").resize((150, 100), keep_ratio=True)
- img_left = img.crop((0, 0, 75, 100))
- img_right = img.crop((75, 0, 150, 100))
- params1 = [
- [(61, 196), ((140, 68), (0, 59), (33, 0), (165, 8))],
- [(63, 196), ((136, 68), (0, 59), (29, 0), (158, 13))],
- [(62, 195), ((137, 72), (0, 58), (27, 0), (167, 11))],
- [(95, 152), ((0, 8), (155, 0), (163, 107), (13, 112))],
- [(108, 129), ((0, 6), (128, 0), (136, 113), (10, 117))],
- [(84, 160), ((0, 6), (184, 0), (190, 90), (10, 97))],
- ]
- params2 = [
- (
- [(78, 158), ((0, 3), (86, 0), (97, 106), (16, 106))],
- [(195, 156), ((0, 4), (82, 0), (85, 106), (15, 110))],
- ),
- (
- [(89, 156), ((0, 0), (80, 0), (94, 100), (14, 100))],
- [(192, 151), ((0, 7), (79, 3), (82, 107), (11, 112))],
- ),
- ]
- raw_frames = [BuildImage.open(img_dir / f"{i}.png") for i in range(8)]
- for i in range(6):
- pos, points = params1[i]
- raw_frames[i].paste(img.perspective(points), pos, below=True)
- for i in range(2):
- (pos1, points1), (pos2, points2) = params2[i]
- raw_frames[i + 6].paste(img_left.perspective(points1), pos1, below=True)
- raw_frames[i + 6].paste(img_right.perspective(points2), pos2, below=True)
-
- new_frames: List[BuildImage] = []
- for i in range(3):
- new_frames += raw_frames[0:3]
- new_frames += raw_frames[3:]
- new_frames.append(raw_frames[-1])
-
- frames = [frame.image for frame in new_frames]
- return save_gif(frames, 0.1)
-
-
-add_meme(
- "capoo_rip",
- capoo_rip,
- min_images=1,
- max_images=1,
- keywords=["咖波撕"],
-)
diff --git a/spaces/CofAI/chat.b4/g4f/Provider/__init__.py b/spaces/CofAI/chat.b4/g4f/Provider/__init__.py
deleted file mode 100644
index 65f8cb1da5a0279a6639f1427e4bbc0664b6e1bb..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/g4f/Provider/__init__.py
+++ /dev/null
@@ -1,33 +0,0 @@
-from . import Provider
-from .Providers import (
- Aichat,
- Ails,
- Bard,
- Better,
- Bing,
- ChatgptAi,
-    ChatgptLogin,
- DeepAi,
- Easychat,
- Ezcht,
- Fakeopen,
- Forefront,
- GetGpt,
- Gravityengine,
- H2o,
- hteyun,
- Liaobots,
- Lockchat,
- Mishalsgpt,
- Phind,
- Theb,
- Vercel,
- Weuseing,
- Xiaor,
- Yqcloud,
- You,
- Zeabur
-)
-
-Palm = Bard
diff --git a/spaces/CofAI/picscore1/style.css b/spaces/CofAI/picscore1/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/CofAI/picscore1/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Cong723/gpt-academic-public/docs/self_analysis.md b/spaces/Cong723/gpt-academic-public/docs/self_analysis.md
deleted file mode 100644
index c88e1e41217eb13a30269f933586f6c241fab38d..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/docs/self_analysis.md
+++ /dev/null
@@ -1,256 +0,0 @@
-# chatgpt-academic项目自译解报告
-(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄)
-
-## 对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。
-
-整体概括:
-
-该程序是一个基于自然语言处理和机器学习的科学论文辅助工具,主要功能包括聊天机器人、批量总结PDF文档、批量翻译PDF文档、生成函数注释、解析项目源代码等。程序基于 Gradio 构建 Web 服务,并集成了代理和自动更新功能,提高了用户的使用体验。
-
-文件功能表格:
-
-| 文件名 | 文件功能 |
-| --- | --- |
-| check_proxy.py | 用于检查代理的正确性和可用性 |
-| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 |
-| config.py | 用于全局配置的类 |
-| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 |
-| core_functional.py | 包含一些TextFunctional类和基础功能函数 |
-| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 |
-| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 |
-| theme.py | 包含一些预设置主题的颜色 |
-| toolbox.py | 提供了一些有用的工具函数 |
-| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 |
-| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 |
-| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 |
-| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 |
-| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 |
-| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 |
-| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 |
-| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 |
-| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 |
-| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 |
-| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 |
-| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 |
-| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 |
-| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 |
-| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 |
-| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 |
-| request_llm\bridge_all.py | 处理与LLM的交互 |
-| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 |
-| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 |
-| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 |
-
-
-
-## [0/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\check_proxy.py
-
-该文件主要包括四个函数:check_proxy、backup_and_download、patch_and_restart 和 auto_update。其中,check_proxy 函数用于检查代理是否可用;backup_and_download 用于进行一键更新备份和下载;patch_and_restart 是一键更新协议的重要函数,用于覆盖和重启;auto_update 函数用于查询版本和用户意见,并自动进行一键更新。该文件主要使用了 requests、json、shutil、zipfile、distutils、subprocess 等 Python 标准库和 toolbox 和 colorful 两个第三方库。
-
-## [1/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\colorful.py
-
-该程序文件实现了一些打印文本的函数,使其具有不同的颜色输出。当系统为Linux时直接跳过,否则使用colorama库来实现颜色输出。程序提供了深色和亮色两种颜色输出方式,同时也提供了对打印函数的别名。对于不是终端输出的情况,对所有的打印函数进行重复定义,以便在重定向时能够避免打印错误日志。
-
-## [2/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config.py
-
-该程序文件是一个配置文件,其主要功能是提供使用API密钥等信息,以及对程序的体验进行优化,例如定义对话框高度、布局等。还包含一些其他的设置,例如设置并行使用的线程数、重试次数限制等等。
-
-## [3/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\config_private.py
-
-这是一个名为config_private.py的Python文件,它用于配置API_KEY和代理信息。API_KEY是一个私密密钥,用于访问某些受保护的API。USE_PROXY变量设置为True以应用代理,proxies变量配置了代理网络的地址和协议。在使用该文件时,需要填写正确的API_KEY和代理信息。
-
-## [4/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\core_functional.py
-
-该文件是一个Python模块,名为"core_functional.py"。模块中定义了一个字典,包含了各种核心功能的配置信息,如英语学术润色、中文学术润色、查找语法错误等。每个功能都包含一些前言和后语,在前言中描述了该功能的任务和要求,在后语中提供一些附加信息。此外,有些功能还定义了一些特定的处理函数和按钮颜色。
-
-## [5/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functional.py
-
-这是一个Python程序文件,文件名是crazy_functional.py。它导入了一个名为HotReload的工具箱,并定义了一个名为get_crazy_functions()的函数。这个函数包括三个部分的插件组,分别是已经编写完成的第一组插件、已经测试但距离完美状态还差一点点的第二组插件和尚未充分测试的第三组插件。每个插件都有一个名称、一个按钮颜色、一个函数和一个是否加入下拉菜单中的标志位。这些插件提供了多种功能,包括生成函数注释、解析项目源代码、批量翻译PDF文档、谷歌检索、PDF文档内容理解和Latex文档的全文润色、翻译等功能。其中第三组插件可能还存在一定的bug。
-
-## [6/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\main.py
-
-该Python脚本代码实现了一个用于交互式对话的Chatbot机器人。它使用了Gradio框架来构建一个Web界面,并在此基础之上嵌入了一个文本输入框和与Chatbot进行交互的其他控件,包括提交、重置、停止和清除按钮、选择框和滑块等。此外,它还包括了一些类和函数和一些用于编程分析的工具和方法。整个程序文件的结构清晰,注释丰富,并提供了很多技术细节,使得开发者可以很容易地在其基础上进行二次开发、修改、扩展和集成。
-
-## [7/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\theme.py
-
-该程序文件名为theme.py,主要功能为调节Gradio的全局样式。在该文件中,调节了Gradio的主题颜色、字体、阴影、边框、渐变等等样式。同时,该文件还添加了一些高级CSS样式,比如调整表格单元格的背景和边框,设定聊天气泡的圆角、最大宽度和阴影等等。如果CODE_HIGHLIGHT为True,则还进行了代码高亮显示。
-
-## [8/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\toolbox.py
-
-这是一个名为`toolbox.py`的源代码文件。该文件包含了一系列工具函数和装饰器,用于聊天Bot的开发和调试。其中有一些功能包括将输入参数进行重组、捕捉函数中的异常并记录到历史记录中、生成Markdown格式的聊天记录报告等。该文件中还包含了一些与转换Markdown文本相关的函数。
-
-## [9/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\crazy_utils.py
-
-这是一个Python程序文件 `crazy_utils.py`,它包含了两个函数:
-
-- `input_clipping(inputs, history, max_token_limit)`:这个函数接收三个参数,inputs 是一个字符串,history 是一个列表,max_token_limit 是一个整数。它使用 `tiktoken` 、`numpy` 和 `toolbox` 模块,处理输入文本和历史记录,将其裁剪到指定的最大标记数,避免输入过长导致的性能问题。如果 inputs 长度不超过 max_token_limit 的一半,则只裁剪历史;否则,同时裁剪输入和历史。
-- `request_gpt_model_in_new_thread_with_ui_alive(inputs, inputs_show_user, llm_kwargs, chatbot, history, sys_prompt, refresh_interval=0.2, handle_token_exceed=True, retry_times_at_unknown_error=2)`:这个函数接收八个参数,其中后三个是列表类型,其他为标量或句柄等。它提供对话窗口和刷新控制,执行 `predict_no_ui_long_connection` 方法,将输入数据发送至 GPT 模型并获取结果,如果子任务出错,返回相应的错误信息,否则返回结果。
-
-## [10/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文润色.py
-
-这是一个名为"crazy_functions\Latex全文润色.py"的程序文件,其中包含了两个函数"Latex英文润色"和"Latex中文润色",以及其他辅助函数。这些函数能够对 Latex 项目进行润色处理,其中 "多文件润色" 函数是一个主要函数,它调用了其他辅助函数用于读取和处理 Latex 项目中的文件。函数使用了多线程和机器学习模型进行自然语言处理,对文件进行简化和排版来满足学术标准。注释已删除并可以在函数内部查找。
-
-## [11/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\Latex全文翻译.py
-
-这个程序文件包括一个用于对整个Latex项目进行翻译的函数 `Latex英译中` 和一个用于将中文翻译为英文的函数 `Latex中译英`。这两个函数都会尝试导入依赖库 tiktoken, 若无法导入则会提示用户安装。`Latex英译中` 函数会对 Latex 项目中的文件进行分离并去除注释,然后运行多线程翻译。`Latex中译英` 也做同样的事情,只不过是将中文翻译为英文。这个程序文件还包括其他一些帮助函数。
-
-## [12/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\__init__.py
-
-这是一个 Python 包,包名为 `crazy_functions`,在 `__init__.py` 文件中定义了一些函数,包含以下函数:
-
-- `crazy_addition(a, b)`:对两个数进行加法运算,并将结果返回。
-- `crazy_multiplication(a, b)`:对两个数进行乘法运算,并将结果返回。
-- `crazy_subtraction(a, b)`:对两个数进行减法运算,并将结果返回。
-- `crazy_division(a, b)`:对两个数进行除法运算,并将结果返回。
-- `crazy_factorial(n)`:计算 `n` 的阶乘并返回结果。
-
-这些函数可能会有一些奇怪或者不符合常规的实现方式(由函数名可以看出来),所以这个包的名称为 `crazy_functions`,可能是暗示这些函数会有一些“疯狂”的实现方式。
-
-## [13/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\下载arxiv论文翻译摘要.py
-
-该程序实现了一个名为“下载arxiv论文并翻译摘要”的函数插件,作者是“binary-husky”。该函数的功能是,在输入一篇arxiv论文的链接后,提取摘要、下载PDF文档、翻译摘要为中文,并将翻译结果保存到文件中。程序使用了一些Python库,如requests、pdfminer和beautifulsoup4等。程序入口是名为“下载arxiv论文并翻译摘要”的函数,其中使用了自定义的辅助函数download_arxiv_和get_name。程序中还使用了其他非函数的辅助函数和变量,如update_ui、CatchException、report_exception和get_conf等。
-
-## [14/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\代码重写为全英文_多线程.py
-
-该文件是一个多线程Python脚本,包含多个函数和利用第三方库进行的API请求。主要功能是将给定文件夹内的Python代码文件中所有中文转化为英文,然后输出转化后的英文代码。重要的功能和步骤包括:
-
-1. 清空历史,以免输入溢出
-2. 尝试导入依赖,如果缺少依赖,则给出安装建议
-3. 集合文件
-4. 显示随意内容以防卡顿的感觉
-5. Token限制下的截断与处理
-6. 多线程操作请求转换中文变为英文的代码
-7. 所有线程同时开始执行任务函数
-8. 循环轮询各个线程是否执行完毕
-9. 把结果写入文件
-10. 备份一个文件
-
-## [15/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\总结word文档.py
-
-这是一个名为"总结word文档.py"的程序文件,使用python编写。该文件导入了"toolbox"和"crazy_utils"模块,实现了解析docx格式和doc格式的文件的功能。该文件包含了一个名为"解析docx"的函数,通过对文件内容应用自然语言处理技术,生成文章片段的中英文概述。具体实现过程中,该函数使用了"docx"模块和"win32com.client"模块来实现对docx和doc格式文件的解析,同时使用了"request_gpt_model_in_new_thread_with_ui_alive"函数来向GPT模型发起请求。最后,该文件还实现了一个名为"总结word文档"的函数来批量总结Word文档。
-
-## [16/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量Markdown翻译.py
-
-这个程序文件实现了一个批量Markdown翻译功能,可以将一个源代码项目中的Markdown文本翻译成指定语言(目前支持中<-英和英<-中)。程序主要分为三个函数,`PaperFileGroup`类用于处理长文本的拆分,`多文件翻译`是主要函数调用了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`函数进行多线程翻译并输出结果,`Markdown英译中`和`Markdown中译外`分别是英译中和中译英的入口函数,用于解析项目路径和调用翻译函数。程序依赖于tiktoken等库实现。
-
-## [17/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档.py
-
-这是一个名为“批量总结PDF文档”的Python脚本,包含了多个函数。其中有一个函数名为“clean_text”,可以对PDF提取出的原始文本进行清洗和格式化处理,将连字转换为其基本形式,并根据heuristic规则判断换行符是否是段落分隔,并相应地进行替换。另一个函数名为“解析PDF”,可以接收一个PDF文件清单,并对清单中的每一个PDF进行解析,提取出文本并调用“clean_text”函数进行清洗和格式化处理,然后向用户发送一个包含文章简介信息的问题并等待用户回答。最后,该脚本也包含一个名为“批量总结PDF文档”的主函数,其中调用了“解析PDF”函数来完成对PDF文件的批量处理。
-
-## [18/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量总结PDF文档pdfminer.py
-
-这个文件是一个Python模块,文件名为pdfminer.py,它定义了一个函数批量总结PDF文档。该函数接受一些参数,然后尝试导入pdfminer和beautifulsoup4库。该函数将读取pdf文件或tex文件中的内容,对其进行分析,并使用GPT模型进行自然语言摘要。文件中还有一个辅助函数readPdf,用于读取pdf文件中的内容。
-
-## [19/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\批量翻译PDF文档_多线程.py
-
-这是一个Python脚本,文件名是crazy_functions\批量翻译PDF文档_多线程.py。该脚本提供了一个名为“批量翻译PDF文档”的函数,可以批量翻译PDF文件并生成报告文件。该函数使用了多个模块和函数(如toolbox、crazy_utils、update_ui等),使用了Python的异常处理和多线程功能,还使用了一些文本处理函数和第三方库(如fitz和tiktoken)。在函数执行过程中,它会进行一些参数检查、读取和清理PDF文本、递归地切割PDF文件、获取文章meta信息、多线程翻译、整理报告格式等操作,并更新UI界面和生成报告文件。
-
-## [20/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\理解PDF文档内容.py
-
-这是一个解析PDF文件内容的Python程序,程序文件名为"理解PDF文档内容.py",程序主要由5个步骤组成:第0步是切割PDF文件;第1步是从摘要中提取高价值信息,放到history中;第2步是迭代地历遍整个文章,提取精炼信息;第3步是整理history;第4步是设置一个token上限,防止回答时Token溢出。程序主要用到了Python中的各种模块和函数库,如:toolbox, tiktoken, pymupdf等。
-
-## [21/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\生成函数注释.py
-
-这是一个名为"生成函数注释"的函数,带有一个装饰器"@CatchException",可以捕获异常。该函数接受文件路径、参数和聊天机器人等参数,用于对多个Python或C++文件进行函数注释,使用了"toolbox"和"crazy_utils"模块中的函数。该函数会逐个读取指定文件中的内容,并使用聊天机器人进行交互,向用户请求注释信息,然后将生成的注释与原文件内容一起输出到一个markdown表格中。最后,该函数返回一个字符串,指示任务是否已完成。另外还包含一个名为"批量生成函数注释"的函数,它与"生成函数注释"函数一起用于批量处理多个文件。
-
-## [22/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\解析项目源代码.py
-
-这个程序文件实现了对一个源代码项目进行分析的功能。其中,函数`解析项目本身`、`解析一个Python项目`、`解析一个C项目的头文件`、`解析一个C项目`、`解析一个Java项目`和`解析前端项目`分别用于解析不同类型的项目。函数`解析源代码新`实现了对每一个源代码文件的分析,并将分析结果汇总,同时还实现了分组和迭代处理,提高了效率。最后,函数`write_results_to_file`将所有分析结果写入文件。中间,还用到了`request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency`和`request_gpt_model_in_new_thread_with_ui_alive`来完成请求和响应,并用`update_ui`实时更新界面。
-
-## [23/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\询问多个大语言模型.py
-
-这是一个Python程序,文件名为"crazy_functions\询问多个大语言模型.py"。该程序实现了一个同时向多个大语言模型询问的功能,接收用户输入文本以及模型参数,向ChatGPT和ChatGLM模型发出请求,并将对话记录显示在聊天框中,同时刷新界面。
-
-## [24/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\读文章写摘要.py
-
-该程序文件是一个Python模块,文件名为"读文章写摘要.py",主要包含两个函数:"解析Paper"和"读文章写摘要"。其中,"解析Paper"函数接受文件路径、参数等参数,逐个打印文件内容并使用GPT模型生成对该文件的摘要;"读文章写摘要"函数则接受一段文本内容和参数,将该文本内容及其所有.tex文件逐个传递给"解析Paper"函数进行处理,并使用GPT模型生成文章的中英文摘要。文件还导入了一些工具函数,如异常处理、信息上报和文件写入等。
-
-## [25/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\谷歌检索小助手.py
-
-该文件代码包含了一个名为`get_meta_information`的函数和一个名为`谷歌检索小助手`的装饰器函数,用于从谷歌学术中抓取文章元信息,并从用户提供的搜索页面中分析所有文章的相关信息。该文件使用了许多第三方库,如requests、arxiv、BeautifulSoup等。其中`get_meta_information`函数中还定义了一个名为`string_similar`的辅助函数,用于比较字符串相似度。
-
-## [26/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\crazy_functions\高级功能函数模板.py
-
-该程序文件是一个 Python 模块,包含一个名为“高阶功能模板函数”的函数。该函数接受多个参数,其中包括输入文本、GPT 模型参数、插件模型参数、聊天显示框、聊天历史等。 该函数的主要功能是根据输入文本,使用 GPT 模型生成一些问题,并等待用户回答这些问题(使用 Markdown 格式),然后将用户回答加入到聊天历史中,并更新聊天显示框。该函数还包含了一些异常处理和多线程的相关操作。该程序文件还引用了另一个 Python 模块中的两个函数,分别为“CatchException”和“update_ui”,并且还引用了一个名为“request_gpt_model_in_new_thread_with_ui_alive”的自定义函数。
-
-## [27/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_all.py
-
-这个文件是用来处理与LLM的交互的。包含两个函数,一个是 predict_no_ui_long_connection 用来处理长文本的输出,可以多线程调用;另一个是 predict 用来处理基础的对话功能。这个文件会导入其他文件中定义的方法进行调用,具体调用哪个方法取决于传入的参数。函数中还有一些装饰器和管理多线程的逻辑。
-
-## [28/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatglm.py
-
-这个程序文件实现了一个使用ChatGLM模型进行聊天的功能。具体实现过程是:首先进行初始化,然后使用GetGLMHandle类进行ChatGLM模型的加载和运行。predict_no_ui_long_connection函数用于多线程聊天,而predict函数用于单线程聊天,它们的不同之处在于前者不会更新UI界面,后者会。这个文件还导入了其他模块和库,例如transformers、time、importlib等,并使用了多进程Pipe。
-
-## [29/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_chatgpt.py
-
-这个程序文件是用于对话生成的,主要包含三个函数:predict、predict_no_ui、predict_no_ui_long_connection。其中,predict是用于普通对话的函数,具备完备的交互功能,但不具备多线程能力;predict_no_ui是高级实验性功能模块调用的函数,参数简单,可以多线程并行,方便实现复杂的功能逻辑;predict_no_ui_long_connection解决了predict_no_ui在处理长文档时容易断开连接的问题,同样支持多线程。程序中还包含一些常量和工具函数,用于整合信息,选择LLM模型,生成http请求,发送请求,接收响应等。它需要配置一个config文件,包含代理网址、API等敏感信息。
-
-## [30/31] 请对下面的程序文件做一个概述: H:\chatgpt_academic_resolve\request_llm\bridge_tgui.py
-
-该程序文件实现了一个基于Websockets的文本生成服务和对话功能。其中,有三个函数:`run()`、`predict()`和`predict_no_ui_long_connection()`。`run()`函数用于连接到Websocket服务并生成文本结果;`predict()`函数用于将用户输入作为文本生成的输入,同时在UI上显示对话历史记录,并在不断更新UI的过程中不断更新生成的文本输出;`predict_no_ui_long_connection()`函数与`predict()`函数类似,但没有UI,并在一段时间内返回单个生成的文本。整个程序还引入了多个Python模块来完成相关功能,例如`asyncio`、`websockets`、`json`等等。
-
-## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py)。
-
-程序功能概括:该程序是一个聊天机器人,可以通过 Web 界面与用户进行交互。它包含了丰富的功能,如文本润色、翻译、代码重写、在线查找等,并且支持多线程处理。用户可以通过 Gradio 框架提供的 Web 界面进行交互,程序还提供了一些调试工具,如toolbox 模块,方便程序开发和调试。
-
-下表概述了每个文件的功能:
-
-| 文件名 | 功能 |
-| ----------------------------------------------------------- | ------------------------------------------------------------ |
-| check_proxy.py | 检查代理是否可用 |
-| colorful.py | 用于打印文本的字体颜色输出模块 |
-| config.py | 用于程序中的各种设置,如并行线程数量和重试次数的限制等 |
-| config_private.py | 配置API_KEY和代理信息的文件 |
-| core_functional.py | 包含具体的文本处理功能的模块 |
-| crazy_functional.py | 包括各种插件函数的模块,提供了多种文本处理功能 |
-| main.py | 包含 Chatbot 机器人主程序的模块 |
-| theme.py | 用于调节全局样式的模块 |
-| toolbox.py | 包含工具函数和装饰器,用于聊天Bot的开发和调试 |
-| crazy_functions\crazy_utils.py | 包含一些辅助函数,如文本裁剪和消息捕捉等 |
-| crazy_functions\Latex全文润色.py | 对 Latex 项目进行润色处理的功能模块 |
-| crazy_functions\Latex全文翻译.py | 对 Latex 项目进行翻译的功能模块 |
-| crazy_functions\__init__.py | 定义一些奇特的数学函数等 |
-| crazy_functions\下载arxiv论文翻译摘要.py | 下载 Arxiv 论文并翻译摘要的功能模块 |
-| crazy_functions\代码重写为全英文_多线程.py | 将Python程序中所有中文转化为英文的功能模块 |
-| crazy_functions\总结word文档.py | 解析 docx 和 doc 格式的文件,生成文章片段的中英文概述的功能模块 |
-
-## 根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, theme.py, toolbox.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py, crazy_functions\代码重写为全英文_多线程.py, crazy_functions\总结word文档.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py, crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_tgui.py)。
-
-根据以上分析,整个程序是一个集成了多个有用工具和功能的文本处理和生成工具,提供了多种在不同场景下使用的功能,包括但不限于对话生成、文本摘要、PDF文件批量处理、代码翻译和实用工具等。主要的Python模块包括"toolbox.py"、"config.py"、"core_functional.py"和"crazy_functional.py"等,并且还使用了许多第三方库和模块实现相关功能。以下是每个程序文件的功能:
-
-| 文件名 | 文件功能 |
-| --- | --- |
-| check_proxy.py | 用于检查代理的正确性和可用性 |
-| colorful.py | 包含不同预设置颜色的常量,并用于多种UI元素 |
-| config.py | 用于全局配置的类 |
-| config_private.py | 与config.py文件一起使用的另一个配置文件,用于更改私密信息 |
-| core_functional.py | 包含一些TextFunctional类和基础功能函数 |
-| crazy_functional.py | 包含大量高级功能函数和实验性的功能函数 |
-| main.py | 程序的主入口,包含GUI主窗口和主要的UI管理功能 |
-| theme.py | 包含一些预设置主题的颜色 |
-| toolbox.py | 提供了一些有用的工具函数 |
-| crazy_functions\crazy_utils.py | 包含一些用于实现高级功能的辅助函数 |
-| crazy_functions\Latex全文润色.py | 实现了对LaTeX文件中全文的润色和格式化功能 |
-| crazy_functions\Latex全文翻译.py | 实现了对LaTeX文件中的内容进行翻译的功能 |
-| crazy_functions\_\_init\_\_.py | 用于导入crazy_functional.py中的功能函数 |
-| crazy_functions\下载arxiv论文翻译摘要.py | 从Arxiv上下载论文并提取重要信息 |
-| crazy_functions\代码重写为全英文_多线程.py | 针对中文Python文件,将其翻译为全英文 |
-| crazy_functions\总结word文档.py | 提取Word文件的重要内容来生成摘要 |
-| crazy_functions\批量Markdown翻译.py | 批量翻译Markdown文件 |
-| crazy_functions\批量总结PDF文档.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量总结PDF文档pdfminer.py | 批量从PDF文件中提取摘要 |
-| crazy_functions\批量翻译PDF文档_多线程.py | 批量翻译PDF文件 |
-| crazy_functions\理解PDF文档内容.py | 批量分析PDF文件并提取摘要 |
-| crazy_functions\生成函数注释.py | 自动生成Python文件中函数的注释 |
-| crazy_functions\解析项目源代码.py | 解析并分析给定项目的源代码 |
-| crazy_functions\询问多个大语言模型.py | 向多个大语言模型询问输入文本并进行处理 |
-| crazy_functions\读文献写摘要.py | 根据用户输入读取文献内容并生成摘要 |
-| crazy_functions\谷歌检索小助手.py | 利用谷歌学术检索用户提供的论文信息并提取相关信息 |
-| crazy_functions\高级功能函数模板.py | 实现高级功能的模板函数 |
-| request_llm\bridge_all.py | 处理与LLM的交互 |
-| request_llm\bridge_chatglm.py | 使用ChatGLM模型进行聊天 |
-| request_llm\bridge_chatgpt.py | 实现对话生成的各项功能 |
-| request_llm\bridge_tgui.py | 在Websockets中与用户进行交互并生成文本输出 |
-
diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/data/realesrgan_dataset.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/data/realesrgan_dataset.py
deleted file mode 100644
index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000
--- a/spaces/Cropinky/hana_hanak_houses/realesrgan/data/realesrgan_dataset.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import os.path as osp
-import random
-import time
-import torch
-from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
-from basicsr.data.transforms import augment
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-from torch.utils import data as data
-
-
-@DATASET_REGISTRY.register()
-class RealESRGANDataset(data.Dataset):
- """Dataset used for Real-ESRGAN model:
- Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It loads gt (Ground-Truth) images, and augments them.
- It also generates blur kernels and sinc kernels for generating low-quality images.
-    Note that the low-quality images are processed as tensors on GPUs for faster processing.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- meta_info (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
- Please see more options in the codes.
- """
-
- def __init__(self, opt):
- super(RealESRGANDataset, self).__init__()
- self.opt = opt
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.gt_folder = opt['dataroot_gt']
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = [self.gt_folder]
- self.io_backend_opt['client_keys'] = ['gt']
- if not self.gt_folder.endswith('.lmdb'):
- raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
- with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
- self.paths = [line.split('.')[0] for line in fin]
- else:
- # disk backend with meta_info
- # Each line in the meta_info describes the relative path to an image
- with open(self.opt['meta_info']) as fin:
- paths = [line.strip().split(' ')[0] for line in fin]
- self.paths = [os.path.join(self.gt_folder, v) for v in paths]
-
- # blur settings for the first degradation
- self.blur_kernel_size = opt['blur_kernel_size']
- self.kernel_list = opt['kernel_list']
- self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability
- self.blur_sigma = opt['blur_sigma']
- self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels
- self.betap_range = opt['betap_range'] # betap used in plateau blur kernels
- self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters
-
- # blur settings for the second degradation
- self.blur_kernel_size2 = opt['blur_kernel_size2']
- self.kernel_list2 = opt['kernel_list2']
- self.kernel_prob2 = opt['kernel_prob2']
- self.blur_sigma2 = opt['blur_sigma2']
- self.betag_range2 = opt['betag_range2']
- self.betap_range2 = opt['betap_range2']
- self.sinc_prob2 = opt['sinc_prob2']
-
- # a final sinc filter
- self.final_sinc_prob = opt['final_sinc_prob']
-
- self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21
-        # TODO: kernel range is now hard-coded, should be in the config file
-        self.pulse_tensor = torch.zeros(21, 21).float()  # convolving with the pulse tensor applies no blur
- self.pulse_tensor[10, 10] = 1
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # -------------------------------- Load gt images -------------------------------- #
- # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
- gt_path = self.paths[index]
- # avoid errors caused by high latency in reading files
- retry = 3
- while retry > 0:
- try:
- img_bytes = self.file_client.get(gt_path, 'gt')
- except (IOError, OSError) as e:
- logger = get_root_logger()
-                logger.warning(f'File client error: {e}, remaining retry times: {retry - 1}')
-                # switch to another file to read
-                index = random.randint(0, self.__len__() - 1)
- gt_path = self.paths[index]
- time.sleep(1) # sleep 1s for occasional server congestion
- else:
- break
- finally:
- retry -= 1
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # -------------------- Do augmentation for training: flip, rotation -------------------- #
- img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
-
- # crop or pad to 400
- # TODO: 400 is hard-coded. You may change it accordingly
- h, w = img_gt.shape[0:2]
- crop_pad_size = 400
- # pad
- if h < crop_pad_size or w < crop_pad_size:
- pad_h = max(0, crop_pad_size - h)
- pad_w = max(0, crop_pad_size - w)
- img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
- # crop
- if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
- h, w = img_gt.shape[0:2]
- # randomly choose top and left coordinates
- top = random.randint(0, h - crop_pad_size)
- left = random.randint(0, w - crop_pad_size)
- img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
-
- # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob']:
- # this sinc filter setting is for kernels ranging from [7, 21]
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel = random_mixed_kernels(
- self.kernel_list,
- self.kernel_prob,
- kernel_size,
- self.blur_sigma,
- self.blur_sigma, [-math.pi, math.pi],
- self.betag_range,
- self.betap_range,
- noise_range=None)
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob2']:
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel2 = random_mixed_kernels(
- self.kernel_list2,
- self.kernel_prob2,
- kernel_size,
- self.blur_sigma2,
- self.blur_sigma2, [-math.pi, math.pi],
- self.betag_range2,
- self.betap_range2,
- noise_range=None)
-
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------------------- the final sinc kernel ------------------------------------- #
- if np.random.uniform() < self.opt['final_sinc_prob']:
- kernel_size = random.choice(self.kernel_range)
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
- sinc_kernel = torch.FloatTensor(sinc_kernel)
- else:
- sinc_kernel = self.pulse_tensor
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
- kernel = torch.FloatTensor(kernel)
- kernel2 = torch.FloatTensor(kernel2)
-
- return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path}
- return return_d
-
- def __len__(self):
- return len(self.paths)
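
As a sketch, the keys this dataset reads in ``__init__`` and ``__getitem__`` can be collected into an options dict like the one below; the paths and numeric ranges are placeholders, and the real values live in the Real-ESRGAN training configs, which are not part of this file:

    opt = dict(
        dataroot_gt='datasets/DF2K',                # placeholder path
        meta_info='datasets/DF2K/meta_info.txt',    # one relative image path per line
        io_backend=dict(type='disk'),
        use_hflip=True,
        use_rot=False,
        # first degradation
        blur_kernel_size=21,
        kernel_list=['iso', 'aniso'],
        kernel_prob=[0.5, 0.5],
        blur_sigma=[0.2, 3],
        betag_range=[0.5, 4],
        betap_range=[1, 2],
        sinc_prob=0.1,
        # second degradation
        blur_kernel_size2=21,
        kernel_list2=['iso', 'aniso'],
        kernel_prob2=[0.5, 0.5],
        blur_sigma2=[0.2, 1.5],
        betag_range2=[0.5, 4],
        betap_range2=[1, 2],
        sinc_prob2=0.1,
        final_sinc_prob=0.8,
    )
    dataset = RealESRGANDataset(opt)
    sample = dataset[0]  # dict with 'gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path'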
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PdfImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PdfImagePlugin.py
deleted file mode 100644
index c41f8aee0044799050dbcd2d7a01a7726511fae4..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PdfImagePlugin.py
+++ /dev/null
@@ -1,284 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# PDF (Acrobat) file handling
-#
-# History:
-# 1996-07-16 fl Created
-# 1997-01-18 fl Fixed header
-# 2004-02-21 fl Fixes for 1/L/CMYK images, etc.
-# 2004-02-24 fl Fixes for 1 and P images.
-#
-# Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved.
-# Copyright (c) 1996-1997 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-##
-# Image plugin for PDF images (output only).
-##
-
-import io
-import math
-import os
-import time
-
-from . import Image, ImageFile, ImageSequence, PdfParser, __version__, features
-
-#
-# --------------------------------------------------------------------
-
-# object ids:
-# 1. catalogue
-# 2. pages
-# 3. image
-# 4. page
-# 5. page contents
-
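# Usage sketch (not from the original module): this plugin is reached through
# the normal Pillow save path, roughly as follows. The save_all, append_images
# and resolution keywords correspond to the values _save() reads from
# im.encoderinfo below.
#
#   from PIL import Image
#   Image.open("page1.png").save("out.pdf", resolution=100.0)
#   Image.open("page1.png").save("out.pdf", save_all=True,
#                                append_images=[Image.open("page2.png")])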
-
-def _save_all(im, fp, filename):
- _save(im, fp, filename, save_all=True)
-
-
-##
-# (Internal) Image save plugin for the PDF format.
-
-
-def _save(im, fp, filename, save_all=False):
- is_appending = im.encoderinfo.get("append", False)
- if is_appending:
- existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="r+b")
- else:
- existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="w+b")
-
- dpi = im.encoderinfo.get("dpi")
- if dpi:
- x_resolution = dpi[0]
- y_resolution = dpi[1]
- else:
- x_resolution = y_resolution = im.encoderinfo.get("resolution", 72.0)
-
- info = {
- "title": None
- if is_appending
- else os.path.splitext(os.path.basename(filename))[0],
- "author": None,
- "subject": None,
- "keywords": None,
- "creator": None,
- "producer": None,
- "creationDate": None if is_appending else time.gmtime(),
- "modDate": None if is_appending else time.gmtime(),
- }
- for k, default in info.items():
- v = im.encoderinfo.get(k) if k in im.encoderinfo else default
- if v:
- existing_pdf.info[k[0].upper() + k[1:]] = v
-
- #
- # make sure image data is available
- im.load()
-
- existing_pdf.start_writing()
- existing_pdf.write_header()
- existing_pdf.write_comment(f"created by Pillow {__version__} PDF driver")
-
- #
- # pages
- ims = [im]
- if save_all:
- append_images = im.encoderinfo.get("append_images", [])
- for append_im in append_images:
- append_im.encoderinfo = im.encoderinfo.copy()
- ims.append(append_im)
- number_of_pages = 0
- image_refs = []
- page_refs = []
- contents_refs = []
- for im in ims:
- im_number_of_pages = 1
- if save_all:
- try:
- im_number_of_pages = im.n_frames
- except AttributeError:
- # Image format does not have n_frames.
- # It is a single frame image
- pass
- number_of_pages += im_number_of_pages
- for i in range(im_number_of_pages):
- image_refs.append(existing_pdf.next_object_id(0))
- page_refs.append(existing_pdf.next_object_id(0))
- contents_refs.append(existing_pdf.next_object_id(0))
- existing_pdf.pages.append(page_refs[-1])
-
- #
- # catalog and list of pages
- existing_pdf.write_catalog()
-
- page_number = 0
- for im_sequence in ims:
- im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence]
- for im in im_pages:
- # FIXME: Should replace ASCIIHexDecode with RunLengthDecode
- # (packbits) or LZWDecode (tiff/lzw compression). Note that
- # PDF 1.2 also supports Flatedecode (zip compression).
-
- bits = 8
- params = None
- decode = None
-
- #
- # Get image characteristics
-
- width, height = im.size
-
- if im.mode == "1":
- if features.check("libtiff"):
- filter = "CCITTFaxDecode"
- bits = 1
- params = PdfParser.PdfArray(
- [
- PdfParser.PdfDict(
- {
- "K": -1,
- "BlackIs1": True,
- "Columns": width,
- "Rows": height,
- }
- )
- ]
- )
- else:
- filter = "DCTDecode"
- colorspace = PdfParser.PdfName("DeviceGray")
- procset = "ImageB" # grayscale
- elif im.mode == "L":
- filter = "DCTDecode"
- # params = f"<< /Predictor 15 /Columns {width-2} >>"
- colorspace = PdfParser.PdfName("DeviceGray")
- procset = "ImageB" # grayscale
- elif im.mode == "P":
- filter = "ASCIIHexDecode"
- palette = im.getpalette()
- colorspace = [
- PdfParser.PdfName("Indexed"),
- PdfParser.PdfName("DeviceRGB"),
- 255,
- PdfParser.PdfBinary(palette),
- ]
- procset = "ImageI" # indexed color
- elif im.mode == "RGB":
- filter = "DCTDecode"
- colorspace = PdfParser.PdfName("DeviceRGB")
- procset = "ImageC" # color images
- elif im.mode == "RGBA":
- filter = "JPXDecode"
- colorspace = PdfParser.PdfName("DeviceRGB")
- procset = "ImageC" # color images
- elif im.mode == "CMYK":
- filter = "DCTDecode"
- colorspace = PdfParser.PdfName("DeviceCMYK")
- procset = "ImageC" # color images
- decode = [1, 0, 1, 0, 1, 0, 1, 0]
- else:
- msg = f"cannot save mode {im.mode}"
- raise ValueError(msg)
-
- #
- # image
-
- op = io.BytesIO()
-
- if filter == "ASCIIHexDecode":
- ImageFile._save(im, op, [("hex", (0, 0) + im.size, 0, im.mode)])
- elif filter == "CCITTFaxDecode":
- im.save(
- op,
- "TIFF",
- compression="group4",
- # use a single strip
- strip_size=math.ceil(im.width / 8) * im.height,
- )
- elif filter == "DCTDecode":
- Image.SAVE["JPEG"](im, op, filename)
- elif filter == "JPXDecode":
- Image.SAVE["JPEG2000"](im, op, filename)
- elif filter == "FlateDecode":
- ImageFile._save(im, op, [("zip", (0, 0) + im.size, 0, im.mode)])
- elif filter == "RunLengthDecode":
- ImageFile._save(im, op, [("packbits", (0, 0) + im.size, 0, im.mode)])
- else:
- msg = f"unsupported PDF filter ({filter})"
- raise ValueError(msg)
-
- stream = op.getvalue()
- if filter == "CCITTFaxDecode":
- stream = stream[8:]
- filter = PdfParser.PdfArray([PdfParser.PdfName(filter)])
- else:
- filter = PdfParser.PdfName(filter)
-
- existing_pdf.write_obj(
- image_refs[page_number],
- stream=stream,
- Type=PdfParser.PdfName("XObject"),
- Subtype=PdfParser.PdfName("Image"),
- Width=width, # * 72.0 / x_resolution,
- Height=height, # * 72.0 / y_resolution,
- Filter=filter,
- BitsPerComponent=bits,
- Decode=decode,
- DecodeParms=params,
- ColorSpace=colorspace,
- )
-
- #
- # page
-
- existing_pdf.write_page(
- page_refs[page_number],
- Resources=PdfParser.PdfDict(
- ProcSet=[PdfParser.PdfName("PDF"), PdfParser.PdfName(procset)],
- XObject=PdfParser.PdfDict(image=image_refs[page_number]),
- ),
- MediaBox=[
- 0,
- 0,
- width * 72.0 / x_resolution,
- height * 72.0 / y_resolution,
- ],
- Contents=contents_refs[page_number],
- )
-
- #
- # page contents
-
- page_contents = b"q %f 0 0 %f 0 0 cm /image Do Q\n" % (
- width * 72.0 / x_resolution,
- height * 72.0 / y_resolution,
- )
-
- existing_pdf.write_obj(contents_refs[page_number], stream=page_contents)
-
- page_number += 1
-
- #
- # trailer
- existing_pdf.write_xref_and_trailer()
- if hasattr(fp, "flush"):
- fp.flush()
- existing_pdf.close()
-
-
-#
-# --------------------------------------------------------------------
-
-
-Image.register_save("PDF", _save)
-Image.register_save_all("PDF", _save_all)
-
-Image.register_extension("PDF", ".pdf")
-
-Image.register_mime("PDF", "application/pdf")
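-
-# Minimal usage sketch (illustrative only; the filenames are placeholders):
-# thanks to the registrations above, the plugin is reached through the normal
-# Image.save() call rather than being invoked directly, and the keyword
-# arguments are picked up from im.encoderinfo inside _save():
-#
-#     from PIL import Image
-#
-#     pages = [Image.open("page1.png"), Image.open("page2.png")]
-#     pages[0].save(
-#         "out.pdf",
-#         save_all=True,             # dispatch to _save_all() for multi-page output
-#         append_images=pages[1:],   # remaining pages, read from encoderinfo
-#         resolution=100.0,          # used to scale the MediaBox
-#         title="Example document",  # copied into the PDF info dictionary
-#     )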
diff --git a/spaces/Datasculptor/MusicGen/CONTRIBUTING.md b/spaces/Datasculptor/MusicGen/CONTRIBUTING.md
deleted file mode 100644
index 55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/CONTRIBUTING.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Contributing to Audiocraft
-
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-
-Audiocraft is the implementation of a research paper.
-Therefore, we do not plan on accepting many pull requests for new features.
-We certainly welcome them for bug fixes.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Meta's open source projects.
-
-Complete your CLA here: <https://code.facebook.com/cla>
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to Audiocraft, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/Detomo/ai-comic-generation/src/lib/getInitialRenderedScene.ts b/spaces/Detomo/ai-comic-generation/src/lib/getInitialRenderedScene.ts
deleted file mode 100644
index 7c0739bf8bebdaf16aa4acf610eb6bdad9c15fd2..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/lib/getInitialRenderedScene.ts
+++ /dev/null
@@ -1,11 +0,0 @@
-import { RenderedScene } from "@/types"
-
-export const getInitialRenderedScene = (): RenderedScene => ({
- renderId: "",
- status: "pending",
- assetUrl: "",
- alt: "",
- error: "",
- maskUrl: "",
- segments: []
-})
\ No newline at end of file
diff --git a/spaces/Detomo/ai-comic-generation/src/lib/replaceWhiteWithTransparent.ts b/spaces/Detomo/ai-comic-generation/src/lib/replaceWhiteWithTransparent.ts
deleted file mode 100644
index cee490fc1a0b19b2192ce86d6c8f9867a3a6a6d9..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/lib/replaceWhiteWithTransparent.ts
+++ /dev/null
@@ -1,37 +0,0 @@
-export function replaceWhiteWithTransparent(imageBase64: string): Promise<string> {
- return new Promise((resolve, reject) => {
- const img = new Image();
- img.onload = () => {
- const canvas = document.createElement('canvas');
- canvas.width = img.width;
- canvas.height = img.height;
-
- const ctx = canvas.getContext('2d');
- if (!ctx) {
- reject('Unable to get canvas 2D context');
- return;
- }
-
- ctx.drawImage(img, 0, 0);
-
- const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
- const data = imageData.data;
-
- for (let i = 0; i < data.length; i += 4) {
- if (data[i] === 255 && data[i + 1] === 255 && data[i + 2] === 255) {
- data[i + 3] = 0;
- }
- }
-
- ctx.putImageData(imageData, 0, 0);
-
- resolve(canvas.toDataURL());
- };
-
- img.onerror = (err) => {
- reject(err);
- };
-
- img.src = imageBase64;
- });
-}
\ No newline at end of file
diff --git a/spaces/Deva123d/WaveFormBot/README.md b/spaces/Deva123d/WaveFormBot/README.md
deleted file mode 100644
index 0687f20c106f7d1473f100054be9770266f92c43..0000000000000000000000000000000000000000
--- a/spaces/Deva123d/WaveFormBot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: WaveFormBot
-emoji: 👁
-colorFrom: yellow
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.24.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DinoPiteko/youtube-whisper-04/app.py b/spaces/DinoPiteko/youtube-whisper-04/app.py
deleted file mode 100644
index 4a61dc561a016c53ad93a3c556b0ef7bafa964eb..0000000000000000000000000000000000000000
--- a/spaces/DinoPiteko/youtube-whisper-04/app.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import gradio as gr
-import whisper
-from pytube import YouTube
-
-def get_audio(url):
- yt = YouTube(url)
- return yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4")
-
-def get_transcript(url, model_size, lang, format):
-
- model = whisper.load_model(model_size)
-
- if lang == "None":
- lang = None
-
- result = model.transcribe(get_audio(url), fp16=False, language=lang)
-
- if format == "None":
- return result["text"]
- elif format == ".srt":
- return format_to_srt(result["segments"])
-
-def format_to_srt(segments):
- output = ""
- for i, segment in enumerate(segments):
- output += f"{i + 1}\n"
- output += f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n"
- output += f"{segment['text']}\n\n"
- return output
-
-def format_timestamp(t):
-    # Convert a time in seconds into an SRT timestamp of the form HH:MM:SS,mmm.
-    hh = t//3600
-    mm = (t - hh*3600)//60
-    ss = t - hh*3600 - mm*60
-    mi = (t - int(t))*1000
-    return f"{int(hh):02d}:{int(mm):02d}:{int(ss):02d},{int(mi):03d}"
-
-
-langs = ["None"] + sorted(list(whisper.tokenizer.LANGUAGES.values()))
-model_size = list(whisper._MODELS.keys())
-
-with gr.Blocks() as demo:
-
- with gr.Row():
-
- with gr.Column():
-
- with gr.Row():
- url = gr.Textbox(placeholder='Youtube video URL', label='URL')
-
- with gr.Row():
-
- model_size = gr.Dropdown(choices=model_size, value='tiny', label="Model")
- lang = gr.Dropdown(choices=langs, value="None", label="Language (Optional)")
- format = gr.Dropdown(choices=["None", ".srt"], value="None", label="Timestamps? (Optional)")
-
- with gr.Row():
- gr.Markdown("Larger models are more accurate, but slower. For 1min video, it'll take ~30s (tiny), ~1min (base), ~3min (small), ~5min (medium), etc.")
- transcribe_btn = gr.Button('Transcribe')
-
- with gr.Column():
- outputs = gr.Textbox(placeholder='Transcription of the video', label='Transcription')
-
- transcribe_btn.click(get_transcript, inputs=[url, model_size, lang, format], outputs=outputs)
-
-demo.launch(debug=True)
diff --git a/spaces/Dogge/bigscience-bloomz-7b1/README.md b/spaces/Dogge/bigscience-bloomz-7b1/README.md
deleted file mode 100644
index 39845e739f40bb253d328974b8440fccea7ccb0e..0000000000000000000000000000000000000000
--- a/spaces/Dogge/bigscience-bloomz-7b1/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Bigscience Bloomz 7b1
-emoji: 🏢
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.13.0
-app_file: app.py
-pinned: false
-license: bigscience-bloom-rail-1.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/dnnlib/util.py b/spaces/DragGan/DragGan-Inversion/PTI/dnnlib/util.py
deleted file mode 100644
index 76725336d01e75e1c68daa88be47f4fde0bbc63b..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/dnnlib/util.py
+++ /dev/null
@@ -1,477 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Miscellaneous utility classes and functions."""
-
-import ctypes
-import fnmatch
-import importlib
-import inspect
-import numpy as np
-import os
-import shutil
-import sys
-import types
-import io
-import pickle
-import re
-import requests
-import html
-import hashlib
-import glob
-import tempfile
-import urllib
-import urllib.request
-import uuid
-
-from distutils.util import strtobool
-from typing import Any, List, Tuple, Union
-
-
-# Util classes
-# ------------------------------------------------------------------------------------------
-
-
-class EasyDict(dict):
- """Convenience class that behaves like a dict but allows access with the attribute syntax."""
-
- def __getattr__(self, name: str) -> Any:
- try:
- return self[name]
- except KeyError:
- raise AttributeError(name)
-
- def __setattr__(self, name: str, value: Any) -> None:
- self[name] = value
-
- def __delattr__(self, name: str) -> None:
- del self[name]
-
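-# Example (illustrative): cfg = EasyDict(lr=1e-3); cfg.batch = 32; then cfg['batch'] == 32 and cfg.lr == 1e-3.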
-
-class Logger(object):
- """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file."""
-
- def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True):
- self.file = None
-
- if file_name is not None:
- self.file = open(file_name, file_mode)
-
- self.should_flush = should_flush
- self.stdout = sys.stdout
- self.stderr = sys.stderr
-
- sys.stdout = self
- sys.stderr = self
-
- def __enter__(self) -> "Logger":
- return self
-
- def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
- self.close()
-
- def write(self, text: Union[str, bytes]) -> None:
- """Write text to stdout (and a file) and optionally flush."""
- if isinstance(text, bytes):
- text = text.decode()
- if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash
- return
-
- if self.file is not None:
- self.file.write(text)
-
- self.stdout.write(text)
-
- if self.should_flush:
- self.flush()
-
- def flush(self) -> None:
- """Flush written text to both stdout and a file, if open."""
- if self.file is not None:
- self.file.flush()
-
- self.stdout.flush()
-
- def close(self) -> None:
- """Flush, close possible files, and remove stdout/stderr mirroring."""
- self.flush()
-
- # if using multiple loggers, prevent closing in wrong order
- if sys.stdout is self:
- sys.stdout = self.stdout
- if sys.stderr is self:
- sys.stderr = self.stderr
-
- if self.file is not None:
- self.file.close()
- self.file = None
-
-
-# Cache directories
-# ------------------------------------------------------------------------------------------
-
-_dnnlib_cache_dir = None
-
-def set_cache_dir(path: str) -> None:
- global _dnnlib_cache_dir
- _dnnlib_cache_dir = path
-
-def make_cache_dir_path(*paths: str) -> str:
- if _dnnlib_cache_dir is not None:
- return os.path.join(_dnnlib_cache_dir, *paths)
- if 'DNNLIB_CACHE_DIR' in os.environ:
- return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths)
- if 'HOME' in os.environ:
- return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths)
- if 'USERPROFILE' in os.environ:
- return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths)
- return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths)
-
-# Small util functions
-# ------------------------------------------------------------------------------------------
-
-
-def format_time(seconds: Union[int, float]) -> str:
- """Convert the seconds to human readable string with days, hours, minutes and seconds."""
- s = int(np.rint(seconds))
-
- if s < 60:
- return "{0}s".format(s)
- elif s < 60 * 60:
- return "{0}m {1:02}s".format(s // 60, s % 60)
- elif s < 24 * 60 * 60:
- return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60)
- else:
- return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60)
-
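-# Example (illustrative): format_time(45) == "45s"; format_time(3661) == "1h 01m 01s".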
-
-def ask_yes_no(question: str) -> bool:
- """Ask the user the question until the user inputs a valid answer."""
- while True:
- try:
- print("{0} [y/n]".format(question))
- return strtobool(input().lower())
- except ValueError:
- pass
-
-
-def tuple_product(t: Tuple) -> Any:
- """Calculate the product of the tuple elements."""
- result = 1
-
- for v in t:
- result *= v
-
- return result
-
-
-_str_to_ctype = {
- "uint8": ctypes.c_ubyte,
- "uint16": ctypes.c_uint16,
- "uint32": ctypes.c_uint32,
- "uint64": ctypes.c_uint64,
- "int8": ctypes.c_byte,
- "int16": ctypes.c_int16,
- "int32": ctypes.c_int32,
- "int64": ctypes.c_int64,
- "float32": ctypes.c_float,
- "float64": ctypes.c_double
-}
-
-
-def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]:
- """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes."""
- type_str = None
-
- if isinstance(type_obj, str):
- type_str = type_obj
- elif hasattr(type_obj, "__name__"):
- type_str = type_obj.__name__
- elif hasattr(type_obj, "name"):
- type_str = type_obj.name
- else:
- raise RuntimeError("Cannot infer type name from input")
-
- assert type_str in _str_to_ctype.keys()
-
- my_dtype = np.dtype(type_str)
- my_ctype = _str_to_ctype[type_str]
-
- assert my_dtype.itemsize == ctypes.sizeof(my_ctype)
-
- return my_dtype, my_ctype
-
-
-def is_pickleable(obj: Any) -> bool:
- try:
- with io.BytesIO() as stream:
- pickle.dump(obj, stream)
- return True
- except:
- return False
-
-
-# Functionality to import modules/objects by name, and call functions by name
-# ------------------------------------------------------------------------------------------
-
-def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]:
- """Searches for the underlying module behind the name to some python object.
- Returns the module and the object name (original name with module part removed)."""
-
- # allow convenience shorthands, substitute them by full names
- obj_name = re.sub("^np.", "numpy.", obj_name)
- obj_name = re.sub("^tf.", "tensorflow.", obj_name)
-
- # list alternatives for (module_name, local_obj_name)
- parts = obj_name.split(".")
- name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)]
-
- # try each alternative in turn
- for module_name, local_obj_name in name_pairs:
- try:
- module = importlib.import_module(module_name) # may raise ImportError
- get_obj_from_module(module, local_obj_name) # may raise AttributeError
- return module, local_obj_name
- except:
- pass
-
- # maybe some of the modules themselves contain errors?
- for module_name, _local_obj_name in name_pairs:
- try:
- importlib.import_module(module_name) # may raise ImportError
- except ImportError:
- if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"):
- raise
-
- # maybe the requested attribute is missing?
- for module_name, local_obj_name in name_pairs:
- try:
- module = importlib.import_module(module_name) # may raise ImportError
- get_obj_from_module(module, local_obj_name) # may raise AttributeError
- except ImportError:
- pass
-
- # we are out of luck, but we have no idea why
- raise ImportError(obj_name)
-
-
-def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any:
- """Traverses the object name and returns the last (rightmost) python object."""
- if obj_name == '':
- return module
- obj = module
- for part in obj_name.split("."):
- obj = getattr(obj, part)
- return obj
-
-
-def get_obj_by_name(name: str) -> Any:
- """Finds the python object with the given name."""
- module, obj_name = get_module_from_obj_name(name)
- return get_obj_from_module(module, obj_name)
-
-
-def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any:
- """Finds the python object with the given name and calls it as a function."""
- assert func_name is not None
- func_obj = get_obj_by_name(func_name)
- assert callable(func_obj)
- return func_obj(*args, **kwargs)
-
-
-def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any:
- """Finds the python class with the given name and constructs it with the given arguments."""
- return call_func_by_name(*args, func_name=class_name, **kwargs)
-
-
-def get_module_dir_by_obj_name(obj_name: str) -> str:
- """Get the directory path of the module containing the given object name."""
- module, _ = get_module_from_obj_name(obj_name)
- return os.path.dirname(inspect.getfile(module))
-
-
-def is_top_level_function(obj: Any) -> bool:
- """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'."""
- return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__
-
-
-def get_top_level_function_name(obj: Any) -> str:
- """Return the fully-qualified name of a top-level function."""
- assert is_top_level_function(obj)
- module = obj.__module__
- if module == '__main__':
- module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0]
- return module + "." + obj.__name__
-
-
-# File system helpers
-# ------------------------------------------------------------------------------------------
-
-def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]:
- """List all files recursively in a given directory while ignoring given file and directory names.
- Returns list of tuples containing both absolute and relative paths."""
- assert os.path.isdir(dir_path)
- base_name = os.path.basename(os.path.normpath(dir_path))
-
- if ignores is None:
- ignores = []
-
- result = []
-
- for root, dirs, files in os.walk(dir_path, topdown=True):
- for ignore_ in ignores:
- dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)]
-
- # dirs need to be edited in-place
- for d in dirs_to_remove:
- dirs.remove(d)
-
- files = [f for f in files if not fnmatch.fnmatch(f, ignore_)]
-
- absolute_paths = [os.path.join(root, f) for f in files]
- relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths]
-
- if add_base_to_relative:
- relative_paths = [os.path.join(base_name, p) for p in relative_paths]
-
- assert len(absolute_paths) == len(relative_paths)
- result += zip(absolute_paths, relative_paths)
-
- return result
-
-
-def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None:
- """Takes in a list of tuples of (src, dst) paths and copies files.
- Will create all necessary directories."""
- for file in files:
- target_dir_name = os.path.dirname(file[1])
-
- # will create all intermediate-level directories
- if not os.path.exists(target_dir_name):
- os.makedirs(target_dir_name)
-
- shutil.copyfile(file[0], file[1])
-
-
-# URL helpers
-# ------------------------------------------------------------------------------------------
-
-def is_url(obj: Any, allow_file_urls: bool = False) -> bool:
- """Determine whether the given object is a valid URL string."""
- if not isinstance(obj, str) or not "://" in obj:
- return False
- if allow_file_urls and obj.startswith('file://'):
- return True
- try:
- res = requests.compat.urlparse(obj)
- if not res.scheme or not res.netloc or not "." in res.netloc:
- return False
- res = requests.compat.urlparse(requests.compat.urljoin(obj, "/"))
- if not res.scheme or not res.netloc or not "." in res.netloc:
- return False
- except:
- return False
- return True
-
-
-def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any:
- """Download the given URL and return a binary-mode file object to access the data."""
- assert num_attempts >= 1
- assert not (return_filename and (not cache))
-
- # Doesn't look like an URL scheme so interpret it as a local filename.
- if not re.match('^[a-z]+://', url):
- return url if return_filename else open(url, "rb")
-
- # Handle file URLs. This code handles unusual file:// patterns that
- # arise on Windows:
- #
- # file:///c:/foo.txt
- #
- # which would translate to a local '/c:/foo.txt' filename that's
- # invalid. Drop the forward slash for such pathnames.
- #
- # If you touch this code path, you should test it on both Linux and
- # Windows.
- #
- # Some internet resources suggest using urllib.request.url2pathname() but
- # but that converts forward slashes to backslashes and this causes
- # its own set of problems.
- if url.startswith('file://'):
- filename = urllib.parse.urlparse(url).path
- if re.match(r'^/[a-zA-Z]:', filename):
- filename = filename[1:]
- return filename if return_filename else open(filename, "rb")
-
- assert is_url(url)
-
- # Lookup from cache.
- if cache_dir is None:
- cache_dir = make_cache_dir_path('downloads')
-
- url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest()
- if cache:
- cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*"))
- if len(cache_files) == 1:
- filename = cache_files[0]
- return filename if return_filename else open(filename, "rb")
-
- # Download.
- url_name = None
- url_data = None
- with requests.Session() as session:
- if verbose:
- print("Downloading %s ..." % url, end="", flush=True)
- for attempts_left in reversed(range(num_attempts)):
- try:
- with session.get(url) as res:
- res.raise_for_status()
- if len(res.content) == 0:
- raise IOError("No data received")
-
- if len(res.content) < 8192:
- content_str = res.content.decode("utf-8")
- if "download_warning" in res.headers.get("Set-Cookie", ""):
- links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link]
- if len(links) == 1:
- url = requests.compat.urljoin(url, links[0])
- raise IOError("Google Drive virus checker nag")
- if "Google Drive - Quota exceeded" in content_str:
- raise IOError("Google Drive download quota exceeded -- please try again later")
-
- match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", ""))
- url_name = match[1] if match else url
- url_data = res.content
- if verbose:
- print(" done")
- break
- except KeyboardInterrupt:
- raise
- except:
- if not attempts_left:
- if verbose:
- print(" failed")
- raise
- if verbose:
- print(".", end="", flush=True)
-
- # Save to cache.
- if cache:
- safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name)
- cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name)
- temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name)
- os.makedirs(cache_dir, exist_ok=True)
- with open(temp_file, "wb") as f:
- f.write(url_data)
- os.replace(temp_file, cache_file) # atomic
- if return_filename:
- return cache_file
-
- # Return data as file object.
- assert not return_filename
- return io.BytesIO(url_data)
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/docs/Dataset.md b/spaces/DragGan/DragGan-Inversion/stylegan_human/docs/Dataset.md
deleted file mode 100644
index ef6c56cedab89f3ab09306826240b075af244899..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/docs/Dataset.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# SHHQ Dataset
-
-
-## Overview
-SHHQ is a dataset of high-quality full-body human images at a resolution of 1024 × 512.
-Because the data must pass a rigorous legal review at our institute, we cannot release all of it at once.
-
-For now, SHHQ-1.0, with 40K images, is released! More data will follow in later versions.
-
-
-## Data Sources
-Images are collected in two main ways:
-1) From the Internet.
-We developed a crawler tool that uses official APIs, mainly downloading images from Flickr, Pixabay and Pexels. You therefore need to comply with all of the following licenses when using the dataset: CC0, the [Pixabay License](https://pixabay.com/service/license/), and the [Pexels License](https://www.pexels.com/license/).
-2) From data providers.
-We purchased images from the databases of individual photographers, modeling agencies and other suppliers.
-Images were reviewed by our legal team prior to purchase to ensure permission for use in research.
-
-### Note:
-SHHQ-1.0 is composed of:
-
-1) Images obtained from the above sources.
-2) 9,991 processed DeepFashion [[1]](#1) images (only full-body images are retained).
-3) 1,940 images from the African-fashion InFashAI [[2]](#2) dataset, added to increase data diversity.
-
-## Data License
-We are aware of privacy concerns and take license and privacy issues seriously. All released data is provided under the CC0 license and is free for research use. In addition, persons in the dataset are anonymised, and no additional private or sensitive metadata is included.
-
-## Agreement
-The SHHQ is available for non-commercial research purposes only.
-
-You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit any portion of the images and any portion of the derived data for commercial purposes.
-
-You agree NOT to further copy, publish or distribute any portion of SHHQ to any third party for any purpose, except that copies may be made for internal use at a single site within the same organization.
-
-Shanghai AI Lab reserves the right to terminate your access to the SHHQ at any time.
-
-## Dataset Preview
-For those interested in our dataset, we provide a preview version with 100 images randomly sampled from SHHQ-1.0: [SHHQ-1.0_samples](https://drive.google.com/file/d/1tnNFfmFtzRbYL3qEnNXQ_ShaN9YV5tI5/view?usp=sharing).
-
-In SHHQ-1.0, we provide aligned raw images along with machine-generated segmentation masks. We also plan to release a manually annotated human-parsing version of these 40,000 images later. Please stay tuned.
-
-> We also provide the script [bg_white.py](../bg_white.py) to whiten the background of a raw image using its segmentation mask; a rough sketch of the idea is shown below.
-
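-The snippet below is only an illustration, not the released `bg_white.py`; it assumes the mask is a single-channel image in which non-zero pixels mark the person, and the function and file names are placeholders:
-
-```python
-import numpy as np
-from PIL import Image
-
-def whiten_background(image_path: str, mask_path: str, out_path: str) -> None:
-    img = np.array(Image.open(image_path).convert("RGB"))
-    mask = np.array(Image.open(mask_path).convert("L"))
-    img[mask == 0] = 255          # paint every background pixel white
-    Image.fromarray(img).save(out_path)
-
-whiten_background("raw_0001.png", "mask_0001.png", "white_0001.png")
-```
-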
-If you want to access the full SHHQ-1.0, please read the following instructions.
-
-## Models trained using SHHQ-1.0
-
-| Structure | 1024x512 checkpoint | Metric | Score | 512x256 checkpoint | Metric | Score |
-| --------- |:----------:| :----------:| :----------:| :-----: | :-----: | :-----: |
-| StyleGAN1 | to be released | - | - | to be released | - | - |
-| StyleGAN2 | [SHHQ-1.0_sg2_1024.pkl](https://drive.google.com/file/d/1PuvE72xpc69Zq4y58dohuKbG9dFnnjEX/view?usp=sharing) | fid50k_full | 3.56 | [SHHQ-1.0_sg2_512.pkl](https://drive.google.com/file/d/170t2FRWxR8_TG3_y0nVtDBogLPOClnyf/view?usp=sharing) | fid50k_full | 3.68 |
-| StyleGAN3 | to be released | - | - |to be released | - | - |
-
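-As a rough sketch of how a released checkpoint can be loaded (assuming the `.pkl` files follow the standard StyleGAN2-ADA packaging and that the repository's `dnnlib` and `legacy` helpers are importable; the file name below simply mirrors the 1024x512 StyleGAN2 entry in the table):
-
-```python
-import torch
-import dnnlib
-import legacy
-
-device = torch.device('cuda')
-with dnnlib.util.open_url('SHHQ-1.0_sg2_1024.pkl') as f:
-    G = legacy.load_network_pkl(f)['G_ema'].to(device)        # EMA generator weights
-
-z = torch.randn([1, G.z_dim], device=device)                  # random latent code
-img = G(z, None, truncation_psi=0.7, noise_mode='const')      # NCHW tensor in [-1, 1]
-```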
-
-## Download Instructions
-Please download the SHHQ Dataset Release Agreement from this [link](./SHHQ_Dataset_Release_Agreement.pdf).
-Read it carefully, then complete and sign it appropriately.
-
-Please send the completed form to Jianglin Fu (arlenefu@outlook.com) and Shikai Li (lishikai@pjlab.org.cn), and cc Wayne Wu (wuwenyan0503@gmail.com), using an institutional email address. The email subject should be "SHHQ Dataset Release Agreement". We will verify your request and contact you with the dataset link and the password needed to unzip the image data.
-
-Note:
-
-1. We are currently receiving a large number of applications and need to verify every applicant carefully. Please be patient; we will reply to you as soon as possible.
-
-2. The signature in the agreement should be hand-written.
-
-## References
-[1]
-Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. CVPR (2016)
-
-[2]
-Hacheme, Gilles and Sayouti, Noureini. Neural fashion image captioning: Accounting for data diversity. arXiv preprint arXiv:2106.12154 (2021)
-
diff --git a/spaces/DragGan/DragGan/gui_utils/imgui_window.py b/spaces/DragGan/DragGan/gui_utils/imgui_window.py
deleted file mode 100644
index 30d539a1382def526050c83978d1118348ac77ad..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/gui_utils/imgui_window.py
+++ /dev/null
@@ -1,103 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import imgui
-import imgui.integrations.glfw
-
-from . import glfw_window
-from . import imgui_utils
-from . import text_utils
-
-#----------------------------------------------------------------------------
-
-class ImguiWindow(glfw_window.GlfwWindow):
- def __init__(self, *, title='ImguiWindow', font=None, font_sizes=range(14,24), **glfw_kwargs):
- if font is None:
- font = text_utils.get_default_font()
- font_sizes = {int(size) for size in font_sizes}
- super().__init__(title=title, **glfw_kwargs)
-
- # Init fields.
- self._imgui_context = None
- self._imgui_renderer = None
- self._imgui_fonts = None
- self._cur_font_size = max(font_sizes)
-
- # Delete leftover imgui.ini to avoid unexpected behavior.
- if os.path.isfile('imgui.ini'):
- os.remove('imgui.ini')
-
- # Init ImGui.
- self._imgui_context = imgui.create_context()
- self._imgui_renderer = _GlfwRenderer(self._glfw_window)
- self._attach_glfw_callbacks()
- imgui.get_io().ini_saving_rate = 0 # Disable creating imgui.ini at runtime.
- imgui.get_io().mouse_drag_threshold = 0 # Improve behavior with imgui_utils.drag_custom().
- self._imgui_fonts = {size: imgui.get_io().fonts.add_font_from_file_ttf(font, size) for size in font_sizes}
- self._imgui_renderer.refresh_font_texture()
-
- def close(self):
- self.make_context_current()
- self._imgui_fonts = None
- if self._imgui_renderer is not None:
- self._imgui_renderer.shutdown()
- self._imgui_renderer = None
- if self._imgui_context is not None:
- #imgui.destroy_context(self._imgui_context) # Commented out to avoid creating imgui.ini at the end.
- self._imgui_context = None
- super().close()
-
- def _glfw_key_callback(self, *args):
- super()._glfw_key_callback(*args)
- self._imgui_renderer.keyboard_callback(*args)
-
- @property
- def font_size(self):
- return self._cur_font_size
-
- @property
- def spacing(self):
- return round(self._cur_font_size * 0.4)
-
- def set_font_size(self, target): # Applied on next frame.
- self._cur_font_size = min((abs(key - target), key) for key in self._imgui_fonts.keys())[1]
-
- def begin_frame(self):
- # Begin glfw frame.
- super().begin_frame()
-
- # Process imgui events.
- self._imgui_renderer.mouse_wheel_multiplier = self._cur_font_size / 10
- if self.content_width > 0 and self.content_height > 0:
- self._imgui_renderer.process_inputs()
-
- # Begin imgui frame.
- imgui.new_frame()
- imgui.push_font(self._imgui_fonts[self._cur_font_size])
- imgui_utils.set_default_style(spacing=self.spacing, indent=self.font_size, scrollbar=self.font_size+4)
-
- def end_frame(self):
- imgui.pop_font()
- imgui.render()
- imgui.end_frame()
- self._imgui_renderer.render(imgui.get_draw_data())
- super().end_frame()
-
-#----------------------------------------------------------------------------
-# Wrapper class for GlfwRenderer to fix a mouse wheel bug on Linux.
-
-class _GlfwRenderer(imgui.integrations.glfw.GlfwRenderer):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.mouse_wheel_multiplier = 1
-
- def scroll_callback(self, window, x_offset, y_offset):
- self.io.mouse_wheel += y_offset * self.mouse_wheel_multiplier
-
-#----------------------------------------------------------------------------
diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/encoders/psp_encoders.py b/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/encoders/psp_encoders.py
deleted file mode 100644
index 85da8a41461e20170cc3f3afaff3f25be9f6b2d1..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/pti/pti_models/e4e/encoders/psp_encoders.py
+++ /dev/null
@@ -1,200 +0,0 @@
-from enum import Enum
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module
-
-from pti.pti_models.e4e.encoders.helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add
-from pti.pti_models.e4e.stylegan2.model import EqualLinear
-
-
-class ProgressiveStage(Enum):
- WTraining = 0
- Delta1Training = 1
- Delta2Training = 2
- Delta3Training = 3
- Delta4Training = 4
- Delta5Training = 5
- Delta6Training = 6
- Delta7Training = 7
- Delta8Training = 8
- Delta9Training = 9
- Delta10Training = 10
- Delta11Training = 11
- Delta12Training = 12
- Delta13Training = 13
- Delta14Training = 14
- Delta15Training = 15
- Delta16Training = 16
- Delta17Training = 17
- Inference = 18
-
-
-class GradualStyleBlock(Module):
- def __init__(self, in_c, out_c, spatial):
- super(GradualStyleBlock, self).__init__()
- self.out_c = out_c
- self.spatial = spatial
- num_pools = int(np.log2(spatial))
- modules = []
- modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()]
- for i in range(num_pools - 1):
- modules += [
- Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1),
- nn.LeakyReLU()
- ]
- self.convs = nn.Sequential(*modules)
- self.linear = EqualLinear(out_c, out_c, lr_mul=1)
-
- def forward(self, x):
- x = self.convs(x)
- x = x.view(-1, self.out_c)
- x = self.linear(x)
- return x
-
-
-class GradualStyleEncoder(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(GradualStyleEncoder, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
- self.coarse_ind = 3
- self.middle_ind = 7
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- def forward(self, x):
- x = self.input_layer(x)
-
- latents = []
- modulelist = list(self.body._modules.values())
- for i, l in enumerate(modulelist):
- x = l(x)
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- for j in range(self.coarse_ind):
- latents.append(self.styles[j](c3))
-
- p2 = _upsample_add(c3, self.latlayer1(c2))
- for j in range(self.coarse_ind, self.middle_ind):
- latents.append(self.styles[j](p2))
-
- p1 = _upsample_add(p2, self.latlayer2(c1))
- for j in range(self.middle_ind, self.style_count):
- latents.append(self.styles[j](p1))
-
- out = torch.stack(latents, dim=1)
- return out
-
-
-class Encoder4Editing(Module):
- def __init__(self, num_layers, mode='ir', opts=None):
- super(Encoder4Editing, self).__init__()
- assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
- assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- self.styles = nn.ModuleList()
- log_size = int(math.log(opts.stylegan_size, 2))
- self.style_count = 2 * log_size - 2
- self.coarse_ind = 3
- self.middle_ind = 7
-
- for i in range(self.style_count):
- if i < self.coarse_ind:
- style = GradualStyleBlock(512, 512, 16)
- elif i < self.middle_ind:
- style = GradualStyleBlock(512, 512, 32)
- else:
- style = GradualStyleBlock(512, 512, 64)
- self.styles.append(style)
-
- self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
- self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
- self.progressive_stage = ProgressiveStage.Inference
-
- def get_deltas_starting_dimensions(self):
- ''' Get a list of the initial dimension of every delta from which it is applied '''
- return list(range(self.style_count)) # Each dimension has a delta applied to it
-
- def set_progressive_stage(self, new_stage: ProgressiveStage):
- self.progressive_stage = new_stage
- print('Changed progressive stage to: ', new_stage)
-
- def forward(self, x):
- x = self.input_layer(x)
-
- modulelist = list(self.body._modules.values())
- for i, l in enumerate(modulelist):
- x = l(x)
- if i == 6:
- c1 = x
- elif i == 20:
- c2 = x
- elif i == 23:
- c3 = x
-
- # Infer main W and duplicate it
- w0 = self.styles[0](c3)
- w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2)
- stage = self.progressive_stage.value
- features = c3
- for i in range(1, min(stage + 1, self.style_count)): # Infer additional deltas
- if i == self.coarse_ind:
- p2 = _upsample_add(c3, self.latlayer1(c2)) # FPN's middle features
- features = p2
- elif i == self.middle_ind:
- p1 = _upsample_add(p2, self.latlayer2(c1)) # FPN's fine features
- features = p1
- delta_i = self.styles[i](features)
- w[:, i] += delta_i
- return w
diff --git a/spaces/DragGan/DragGan/training/networks_stylegan3.py b/spaces/DragGan/DragGan/training/networks_stylegan3.py
deleted file mode 100644
index e34bf87ee23a4e5612094062dd67d0a7f6de5e39..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/training/networks_stylegan3.py
+++ /dev/null
@@ -1,548 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Generator architecture from the paper
-"Alias-Free Generative Adversarial Networks"."""
-
-import numpy as np
-import scipy.signal
-import scipy.optimize
-import torch
-import torch.nn.functional as F
-from torch_utils import misc
-from torch_utils import persistence
-from torch_utils.ops import conv2d_gradfix
-from torch_utils.ops import filtered_lrelu
-from torch_utils.ops import bias_act
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def modulated_conv2d(
- x, # Input tensor: [batch_size, in_channels, in_height, in_width]
- w, # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width]
- s, # Style tensor: [batch_size, in_channels]
- demodulate = True, # Apply weight demodulation?
- padding = 0, # Padding: int or [padH, padW]
- input_gain = None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels]
-):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- batch_size = int(x.shape[0])
- out_channels, in_channels, kh, kw = w.shape
- misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk]
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
- misc.assert_shape(s, [batch_size, in_channels]) # [NI]
-
- # Pre-normalize inputs.
- if demodulate:
- w = w * w.square().mean([1,2,3], keepdim=True).rsqrt()
- s = s * s.square().mean().rsqrt()
-
- # Modulate weights.
- w = w.unsqueeze(0) # [NOIkk]
- w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Demodulate weights.
- if demodulate:
- dcoefs = (w.square().sum(dim=[2,3,4]) + 1e-8).rsqrt() # [NO]
- w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Apply input scaling.
- if input_gain is not None:
- input_gain = input_gain.expand(batch_size, in_channels) # [NI]
- w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
-
- # Execute as one fused op using grouped convolution.
- x = x.reshape(1, -1, *x.shape[2:])
- w = w.reshape(-1, in_channels, kh, kw)
- x = conv2d_gradfix.conv2d(input=x, weight=w.to(x.dtype), padding=padding, groups=batch_size)
- x = x.reshape(batch_size, -1, *x.shape[2:])
- return x
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class FullyConnectedLayer(torch.nn.Module):
- def __init__(self,
- in_features, # Number of input features.
- out_features, # Number of output features.
- activation = 'linear', # Activation function: 'relu', 'lrelu', etc.
- bias = True, # Apply additive bias before the activation function?
- lr_multiplier = 1, # Learning rate multiplier.
- weight_init = 1, # Initial standard deviation of the weight tensor.
- bias_init = 0, # Initial value of the additive bias.
- ):
- super().__init__()
- self.in_features = in_features
- self.out_features = out_features
- self.activation = activation
- self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) * (weight_init / lr_multiplier))
- bias_init = np.broadcast_to(np.asarray(bias_init, dtype=np.float32), [out_features])
- self.bias = torch.nn.Parameter(torch.from_numpy(bias_init / lr_multiplier)) if bias else None
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
- self.bias_gain = lr_multiplier
-
- def forward(self, x):
- w = self.weight.to(x.dtype) * self.weight_gain
- b = self.bias
- if b is not None:
- b = b.to(x.dtype)
- if self.bias_gain != 1:
- b = b * self.bias_gain
- if self.activation == 'linear' and b is not None:
- x = torch.addmm(b.unsqueeze(0), x, w.t())
- else:
- x = x.matmul(w.t())
- x = bias_act.bias_act(x, b, act=self.activation)
- return x
-
- def extra_repr(self):
- return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}'
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class MappingNetwork(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- c_dim, # Conditioning label (C) dimensionality, 0 = no labels.
- w_dim, # Intermediate latent (W) dimensionality.
- num_ws, # Number of intermediate latents to output.
- num_layers = 2, # Number of mapping layers.
- lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers.
- w_avg_beta = 0.998, # Decay for tracking the moving average of W during training.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.num_ws = num_ws
- self.num_layers = num_layers
- self.w_avg_beta = w_avg_beta
-
- # Construct layers.
- self.embed = FullyConnectedLayer(self.c_dim, self.w_dim) if self.c_dim > 0 else None
- features = [self.z_dim + (self.w_dim if self.c_dim > 0 else 0)] + [self.w_dim] * self.num_layers
- for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]):
- layer = FullyConnectedLayer(in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier)
- setattr(self, f'fc{idx}', layer)
- self.register_buffer('w_avg', torch.zeros([w_dim]))
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
- misc.assert_shape(z, [None, self.z_dim])
- if truncation_cutoff is None:
- truncation_cutoff = self.num_ws
-
- # Embed, normalize, and concatenate inputs.
- x = z.to(torch.float32)
- x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt()
- if self.c_dim > 0:
- misc.assert_shape(c, [None, self.c_dim])
- y = self.embed(c.to(torch.float32))
- y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt()
- x = torch.cat([x, y], dim=1) if x is not None else y
-
- # Execute layers.
- for idx in range(self.num_layers):
- x = getattr(self, f'fc{idx}')(x)
-
- # Update moving average of W.
- if update_emas:
- self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta))
-
- # Broadcast and apply truncation.
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
- if truncation_psi != 1:
- x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi)
- return x
-
- def extra_repr(self):
- return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}'
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisInput(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- channels, # Number of output channels.
- size, # Output spatial size: int or [width, height].
- sampling_rate, # Output sampling rate.
- bandwidth, # Output bandwidth.
- ):
- super().__init__()
- self.w_dim = w_dim
- self.channels = channels
- self.size = np.broadcast_to(np.asarray(size), [2])
- self.sampling_rate = sampling_rate
- self.bandwidth = bandwidth
-
- # Draw random frequencies from uniform 2D disc.
- freqs = torch.randn([self.channels, 2])
- radii = freqs.square().sum(dim=1, keepdim=True).sqrt()
- freqs /= radii * radii.square().exp().pow(0.25)
- freqs *= bandwidth
- phases = torch.rand([self.channels]) - 0.5
-
- # Setup parameters and buffers.
- self.weight = torch.nn.Parameter(torch.randn([self.channels, self.channels]))
- self.affine = FullyConnectedLayer(w_dim, 4, weight_init=0, bias_init=[1,0,0,0])
- self.register_buffer('transform', torch.eye(3, 3)) # User-specified inverse transform wrt. resulting image.
- self.register_buffer('freqs', freqs)
- self.register_buffer('phases', phases)
-
- def forward(self, w):
- # Introduce batch dimension.
- transforms = self.transform.unsqueeze(0) # [batch, row, col]
- freqs = self.freqs.unsqueeze(0) # [batch, channel, xy]
- phases = self.phases.unsqueeze(0) # [batch, channel]
-
- # Apply learned transformation.
- t = self.affine(w) # t = (r_c, r_s, t_x, t_y)
- t = t / t[:, :2].norm(dim=1, keepdim=True) # t' = (r'_c, r'_s, t'_x, t'_y)
- m_r = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse rotation wrt. resulting image.
- m_r[:, 0, 0] = t[:, 0] # r'_c
- m_r[:, 0, 1] = -t[:, 1] # r'_s
- m_r[:, 1, 0] = t[:, 1] # r'_s
- m_r[:, 1, 1] = t[:, 0] # r'_c
- m_t = torch.eye(3, device=w.device).unsqueeze(0).repeat([w.shape[0], 1, 1]) # Inverse translation wrt. resulting image.
- m_t[:, 0, 2] = -t[:, 2] # t'_x
- m_t[:, 1, 2] = -t[:, 3] # t'_y
- transforms = m_r @ m_t @ transforms # First rotate resulting image, then translate, and finally apply user-specified transform.
-
- # Transform frequencies.
- phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2)
- freqs = freqs @ transforms[:, :2, :2]
-
- # Dampen out-of-band frequencies that may occur due to the user-specified transform.
- amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) / (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1)
-
- # Construct sampling grid.
- theta = torch.eye(2, 3, device=w.device)
- theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate
- theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate
- grids = torch.nn.functional.affine_grid(theta.unsqueeze(0), [1, 1, self.size[1], self.size[0]], align_corners=False)
-
- # Compute Fourier features.
- x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2)).squeeze(3) # [batch, height, width, channel]
- x = x + phases.unsqueeze(1).unsqueeze(2)
- x = torch.sin(x * (np.pi * 2))
- x = x * amplitudes.unsqueeze(1).unsqueeze(2)
-
- # Apply trainable mapping.
- weight = self.weight / np.sqrt(self.channels)
- x = x @ weight.t()
-
- # Ensure correct shape.
- x = x.permute(0, 3, 1, 2) # [batch, channel, height, width]
- misc.assert_shape(x, [w.shape[0], self.channels, int(self.size[1]), int(self.size[0])])
- return x
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},',
- f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}'])
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisLayer(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- is_torgb, # Is this the final ToRGB layer?
- is_critically_sampled, # Does this layer use critical sampling?
- use_fp16, # Does this layer use FP16?
-
- # Input & output specifications.
- in_channels, # Number of input channels.
- out_channels, # Number of output channels.
- in_size, # Input spatial size: int or [width, height].
- out_size, # Output spatial size: int or [width, height].
- in_sampling_rate, # Input sampling rate (s).
- out_sampling_rate, # Output sampling rate (s).
- in_cutoff, # Input cutoff frequency (f_c).
- out_cutoff, # Output cutoff frequency (f_c).
- in_half_width, # Input transition band half-width (f_h).
- out_half_width, # Output Transition band half-width (f_h).
-
- # Hyperparameters.
- conv_kernel = 3, # Convolution kernel size. Ignored for final the ToRGB layer.
- filter_size = 6, # Low-pass filter size relative to the lower resolution when up/downsampling.
- lrelu_upsampling = 2, # Relative sampling rate for leaky ReLU. Ignored for final the ToRGB layer.
- use_radial_filters = False, # Use radially symmetric downsampling filter? Ignored for critically sampled layers.
- conv_clamp = 256, # Clamp the output to [-X, +X], None = disable clamping.
- magnitude_ema_beta = 0.999, # Decay rate for the moving average of input magnitudes.
- ):
- super().__init__()
- self.w_dim = w_dim
- self.is_torgb = is_torgb
- self.is_critically_sampled = is_critically_sampled
- self.use_fp16 = use_fp16
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.in_size = np.broadcast_to(np.asarray(in_size), [2])
- self.out_size = np.broadcast_to(np.asarray(out_size), [2])
- self.in_sampling_rate = in_sampling_rate
- self.out_sampling_rate = out_sampling_rate
- self.tmp_sampling_rate = max(in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling)
- self.in_cutoff = in_cutoff
- self.out_cutoff = out_cutoff
- self.in_half_width = in_half_width
- self.out_half_width = out_half_width
- self.conv_kernel = 1 if is_torgb else conv_kernel
- self.conv_clamp = conv_clamp
- self.magnitude_ema_beta = magnitude_ema_beta
-
- # Setup parameters and buffers.
- self.affine = FullyConnectedLayer(self.w_dim, self.in_channels, bias_init=1)
- self.weight = torch.nn.Parameter(torch.randn([self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel]))
- self.bias = torch.nn.Parameter(torch.zeros([self.out_channels]))
- self.register_buffer('magnitude_ema', torch.ones([]))
-
- # Design upsampling filter.
- self.up_factor = int(np.rint(self.tmp_sampling_rate / self.in_sampling_rate))
- assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate
- self.up_taps = filter_size * self.up_factor if self.up_factor > 1 and not self.is_torgb else 1
- self.register_buffer('up_filter', self.design_lowpass_filter(
- numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate))
-
- # Design downsampling filter.
- self.down_factor = int(np.rint(self.tmp_sampling_rate / self.out_sampling_rate))
- assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate
- self.down_taps = filter_size * self.down_factor if self.down_factor > 1 and not self.is_torgb else 1
- self.down_radial = use_radial_filters and not self.is_critically_sampled
- self.register_buffer('down_filter', self.design_lowpass_filter(
- numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial))
-
- # Compute padding.
- pad_total = (self.out_size - 1) * self.down_factor + 1 # Desired output size before downsampling.
- pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor # Input size after upsampling.
- pad_total += self.up_taps + self.down_taps - 2 # Size reduction caused by the filters.
- pad_lo = (pad_total + self.up_factor) // 2 # Shift sample locations according to the symmetric interpretation (Appendix C.3).
- pad_hi = pad_total - pad_lo
- self.padding = [int(pad_lo[0]), int(pad_hi[0]), int(pad_lo[1]), int(pad_hi[1])]
-
- def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False):
- assert noise_mode in ['random', 'const', 'none'] # unused
- misc.assert_shape(x, [None, self.in_channels, int(self.in_size[1]), int(self.in_size[0])])
- misc.assert_shape(w, [x.shape[0], self.w_dim])
-
- # Track input magnitude.
- if update_emas:
- with torch.autograd.profiler.record_function('update_magnitude_ema'):
- magnitude_cur = x.detach().to(torch.float32).square().mean()
- self.magnitude_ema.copy_(magnitude_cur.lerp(self.magnitude_ema, self.magnitude_ema_beta))
- input_gain = self.magnitude_ema.rsqrt()
-
- # Execute affine layer.
- styles = self.affine(w)
- if self.is_torgb:
- weight_gain = 1 / np.sqrt(self.in_channels * (self.conv_kernel ** 2))
- styles = styles * weight_gain
-
- # Execute modulated conv2d.
- dtype = torch.float16 if (self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32
- x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles,
- padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain)
-
- # Execute bias, filtered leaky ReLU, and clamping.
- gain = 1 if self.is_torgb else np.sqrt(2)
- slope = 1 if self.is_torgb else 0.2
- x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype),
- up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp)
-
- # Ensure correct shape and dtype.
- misc.assert_shape(x, [None, self.out_channels, int(self.out_size[1]), int(self.out_size[0])])
- assert x.dtype == dtype
- return x
-
- @staticmethod
- def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False):
- assert numtaps >= 1
-
- # Identity filter.
- if numtaps == 1:
- return None
-
- # Separable Kaiser low-pass filter.
- if not radial:
- f = scipy.signal.firwin(numtaps=numtaps, cutoff=cutoff, width=width, fs=fs)
- return torch.as_tensor(f, dtype=torch.float32)
-
- # Radially symmetric jinc-based filter.
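- # The jinc (J1-based) kernel is the radially symmetric analogue of sinc; it is windowed with a 2D Kaiser window and normalized below.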
- x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs
- r = np.hypot(*np.meshgrid(x, x))
- f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r)
- beta = scipy.signal.kaiser_beta(scipy.signal.kaiser_atten(numtaps, width / (fs / 2)))
- w = np.kaiser(numtaps, beta)
- f *= np.outer(w, w)
- f /= np.sum(f)
- return torch.as_tensor(f, dtype=torch.float32)
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},',
- f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},',
- f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},',
- f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},',
- f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},',
- f'in_size={list(self.in_size)}, out_size={list(self.out_size)},',
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}'])
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class SynthesisNetwork(torch.nn.Module):
- def __init__(self,
- w_dim, # Intermediate latent (W) dimensionality.
- img_resolution, # Output image resolution.
- img_channels, # Number of color channels.
- channel_base = 32768, # Overall multiplier for the number of channels.
- channel_max = 512, # Maximum number of channels in any layer.
- num_layers = 14, # Total number of layers, excluding Fourier features and ToRGB.
- num_critical = 2, # Number of critically sampled layers at the end.
- first_cutoff = 2, # Cutoff frequency of the first layer (f_{c,0}).
- first_stopband = 2**2.1, # Minimum stopband of the first layer (f_{t,0}).
- last_stopband_rel = 2**0.3, # Minimum stopband of the last layer, expressed relative to the cutoff.
- margin_size = 10, # Number of additional pixels outside the image.
- output_scale = 0.25, # Scale factor for the output image.
- num_fp16_res = 4, # Use FP16 for the N highest resolutions.
- **layer_kwargs, # Arguments for SynthesisLayer.
- ):
- super().__init__()
- self.w_dim = w_dim
- self.num_ws = num_layers + 2
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.num_layers = num_layers
- self.num_critical = num_critical
- self.margin_size = margin_size
- self.output_scale = output_scale
- self.num_fp16_res = num_fp16_res
-
- # Geometric progression of layer cutoffs and min. stopbands.
- last_cutoff = self.img_resolution / 2 # f_{c,N}
- last_stopband = last_cutoff * last_stopband_rel # f_{t,N}
- exponents = np.minimum(np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1)
- cutoffs = first_cutoff * (last_cutoff / first_cutoff) ** exponents # f_c[i]
- stopbands = first_stopband * (last_stopband / first_stopband) ** exponents # f_t[i]
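- # The exponent saturates at 1, so the last num_critical + 1 layers (including ToRGB) share the final cutoff and stopband.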
-
- # Compute remaining layer parameters.
- sampling_rates = np.exp2(np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i]
- half_widths = np.maximum(stopbands, sampling_rates / 2) - cutoffs # f_h[i]
- sizes = sampling_rates + self.margin_size * 2
- sizes[-2:] = self.img_resolution
- channels = np.rint(np.minimum((channel_base / 2) / cutoffs, channel_max))
- channels[-1] = self.img_channels
-
- # Construct layers.
- self.input = SynthesisInput(
- w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]),
- sampling_rate=sampling_rates[0], bandwidth=cutoffs[0])
- self.layer_names = []
- for idx in range(self.num_layers + 1):
- prev = max(idx - 1, 0)
- is_torgb = (idx == self.num_layers)
- is_critically_sampled = (idx >= self.num_layers - self.num_critical)
- use_fp16 = (sampling_rates[idx] * (2 ** self.num_fp16_res) > self.img_resolution)
- layer = SynthesisLayer(
- w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16,
- in_channels=int(channels[prev]), out_channels= int(channels[idx]),
- in_size=int(sizes[prev]), out_size=int(sizes[idx]),
- in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]),
- in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx],
- in_half_width=half_widths[prev], out_half_width=half_widths[idx],
- **layer_kwargs)
- name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}'
- setattr(self, name, layer)
- self.layer_names.append(name)
-
- def forward(self, ws, return_feature=False, **layer_kwargs):
- features = []
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
- ws = ws.to(torch.float32).unbind(dim=1)
-
- # Execute layers.
- x = self.input(ws[0])
- for name, w in zip(self.layer_names, ws[1:]):
- x = getattr(self, name)(x, w, **layer_kwargs)
- features.append(x)
- if self.output_scale != 1:
- x = x * self.output_scale
-
- # Ensure correct shape and dtype.
- misc.assert_shape(x, [None, self.img_channels, self.img_resolution, self.img_resolution])
- x = x.to(torch.float32)
- if return_feature:
- return x, features
- else:
- return x
-
- def extra_repr(self):
- return '\n'.join([
- f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},',
- f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},',
- f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},',
- f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}'])
-
-#----------------------------------------------------------------------------
-
-@persistence.persistent_class
-class Generator(torch.nn.Module):
- def __init__(self,
- z_dim, # Input latent (Z) dimensionality.
- c_dim, # Conditioning label (C) dimensionality.
- w_dim, # Intermediate latent (W) dimensionality.
- img_resolution, # Output resolution.
- img_channels, # Number of output color channels.
- mapping_kwargs = {}, # Arguments for MappingNetwork.
- resize=None,
- **synthesis_kwargs, # Arguments for SynthesisNetwork.
- ):
- super().__init__()
- self.z_dim = z_dim
- self.c_dim = c_dim
- self.w_dim = w_dim
- self.img_resolution = img_resolution
- self.img_channels = img_channels
- self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
- self.num_ws = self.synthesis.num_ws
- self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
- self.resize = resize
-
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, return_feature=False, **synthesis_kwargs):
- if input_is_w:
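- # z already holds W-space latents; a single w vector is broadcast to all num_ws layers below if needed.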
- ws = z
- if ws.dim() == 2:
- ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1])
- else:
- ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas)
- img = self.synthesis(ws, update_emas=update_emas, return_feature=return_feature, **synthesis_kwargs)
- if return_feature:
- img, feature = img
- if self.resize is not None:
- img = imresize(img, [self.resize, self.resize])
- if return_feature:
- return img, feature
- else:
- return img
-
-#----------------------------------------------------------------------------
-
-def imresize(image, size):
- dim = image.dim()
- if dim == 3:
- image = image.unsqueeze(1)
- b, _, h, w = image.shape
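- # Upsample with bilinear interpolation; downsample with area (average) interpolation. Equal sizes leave the image unchanged.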
- if size[0] > h:
- image = F.interpolate(image, size, mode='bilinear')
- elif size[0] < h:
- image = F.interpolate(image, size, mode='area')
- if dim == 3:
- image = image.squeeze(1)
- return image
diff --git a/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/matching.py b/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/matching.py
deleted file mode 100644
index cc7abab60f86e5e84994071fc0ec0dd2f89c0377..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tutorials/motr/mot_online/matching.py
+++ /dev/null
@@ -1,196 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import lap
-import numpy as np
-import scipy
-from cython_bbox import bbox_overlaps as bbox_ious
-from scipy.spatial.distance import cdist
-
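-# 0.95 quantile of the chi-square distribution with N degrees of freedom (N = 1..9),
-# used as Mahalanobis gating thresholds in gate_cost_matrix() and fuse_motion().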
-chi2inv95 = {
- 1: 3.8415,
- 2: 5.9915,
- 3: 7.8147,
- 4: 9.4877,
- 5: 11.070,
- 6: 12.592,
- 7: 14.067,
- 8: 15.507,
- 9: 16.919}
-
-def merge_matches(m1, m2, shape):
- O,P,Q = shape
- m1 = np.asarray(m1)
- m2 = np.asarray(m2)
-
- M1 = scipy.sparse.coo_matrix((np.ones(len(m1)), (m1[:, 0], m1[:, 1])), shape=(O, P))
- M2 = scipy.sparse.coo_matrix((np.ones(len(m2)), (m2[:, 0], m2[:, 1])), shape=(P, Q))
-
- mask = M1*M2
- match = mask.nonzero()
- match = list(zip(match[0], match[1]))
- unmatched_O = tuple(set(range(O)) - set([i for i, j in match]))
- unmatched_Q = tuple(set(range(Q)) - set([j for i, j in match]))
-
- return match, unmatched_O, unmatched_Q
-
-
-def _indices_to_matches(cost_matrix, indices, thresh):
- matched_cost = cost_matrix[tuple(zip(*indices))]
- matched_mask = (matched_cost <= thresh)
-
- matches = indices[matched_mask]
- unmatched_a = tuple(set(range(cost_matrix.shape[0])) - set(matches[:, 0]))
- unmatched_b = tuple(set(range(cost_matrix.shape[1])) - set(matches[:, 1]))
-
- return matches, unmatched_a, unmatched_b
-
-
-def linear_assignment(cost_matrix, thresh):
- if cost_matrix.size == 0:
- return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1]))
- matches, unmatched_a, unmatched_b = [], [], []
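- # lap.lapjv returns (cost, x, y): x[i] is the column assigned to row i and y[j] the row assigned to column j, with -1 for entries left unmatched under cost_limit.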
- cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh)
- for ix, mx in enumerate(x):
- if mx >= 0:
- matches.append([ix, mx])
- unmatched_a = np.where(x < 0)[0]
- unmatched_b = np.where(y < 0)[0]
- matches = np.asarray(matches)
- return matches, unmatched_a, unmatched_b
-
-
-def ious(atlbrs, btlbrs):
- """
- Compute cost based on IoU
- :type atlbrs: list[tlbr] | np.ndarray
- :type btlbrs: list[tlbr] | np.ndarray
- :rtype ious np.ndarray
- """
- ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=float)
- if ious.size == 0:
- return ious
-
- ious = bbox_ious(
- np.ascontiguousarray(atlbrs, dtype=float),
- np.ascontiguousarray(btlbrs, dtype=float)
- )
-
- return ious
-
-
-def iou_distance(atracks, btracks):
- """
- Compute cost based on IoU
- :type atracks: list[STrack]
- :type btracks: list[STrack]
- :rtype cost_matrix np.ndarray
- """
-
- if (len(atracks)>0 and isinstance(atracks[0], np.ndarray)) or (len(btracks) > 0 and isinstance(btracks[0], np.ndarray)):
- atlbrs = atracks
- btlbrs = btracks
- else:
- atlbrs = [track.tlbr for track in atracks]
- btlbrs = [track.tlbr for track in btracks]
- _ious = ious(atlbrs, btlbrs)
- cost_matrix = 1 - _ious
-
- return cost_matrix
-
-def embedding_distance(tracks, detections, metric='cosine'):
- """
- :param tracks: list[STrack]
- :param detections: list[BaseTrack]
- :param metric:
- :return: cost_matrix np.ndarray
- """
-
- cost_matrix = np.zeros((len(tracks), len(detections)), dtype=float)
- if cost_matrix.size == 0:
- return cost_matrix
- det_features = np.asarray([track.curr_feat for track in detections], dtype=float)
- #for i, track in enumerate(tracks):
- #cost_matrix[i, :] = np.maximum(0.0, cdist(track.smooth_feat.reshape(1,-1), det_features, metric))
- track_features = np.asarray([track.smooth_feat for track in tracks], dtype=float)
- cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Normalized features
- return cost_matrix
-
-def embedding_distance2(tracks, detections, metric='cosine'):
- """
- :param tracks: list[STrack]
- :param detections: list[BaseTrack]
- :param metric:
- :return: cost_matrix np.ndarray
- """
-
- cost_matrix = np.zeros((len(tracks), len(detections)), dtype=float)
- if cost_matrix.size == 0:
- return cost_matrix
- det_features = np.asarray([track.curr_feat for track in detections], dtype=float)
- #for i, track in enumerate(tracks):
- #cost_matrix[i, :] = np.maximum(0.0, cdist(track.smooth_feat.reshape(1,-1), det_features, metric))
- track_features = np.asarray([track.smooth_feat for track in tracks], dtype=float)
- cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Normalized features
- track_features = np.asarray([track.features[0] for track in tracks], dtype=float)
- cost_matrix2 = np.maximum(0.0, cdist(track_features, det_features, metric)) # Normalized features
- track_features = np.asarray([track.features[len(track.features)-1] for track in tracks], dtype=float)
- cost_matrix3 = np.maximum(0.0, cdist(track_features, det_features, metric)) # Normalized features
- for row in range(len(cost_matrix)):
- cost_matrix[row] = (cost_matrix[row]+cost_matrix2[row]+cost_matrix3[row])/3
- return cost_matrix
-
-
-def vis_id_feature_A_distance(tracks, detections, metric='cosine'):
- track_features = []
- det_features = []
- leg1 = len(tracks)
- leg2 = len(detections)
- cost_matrix = np.zeros((leg1, leg2), dtype=float)
- cost_matrix_det = np.zeros((leg1, leg2), dtype=float)
- cost_matrix_track = np.zeros((leg1, leg2), dtype=float)
- det_features = np.asarray([track.curr_feat for track in detections], dtype=float)
- track_features = np.asarray([track.smooth_feat for track in tracks], dtype=float)
- if leg2 != 0:
- cost_matrix_det = np.maximum(0.0, cdist(det_features, det_features, metric))
- if leg1 != 0:
- cost_matrix_track = np.maximum(0.0, cdist(track_features, track_features, metric))
- if cost_matrix.size == 0:
- return track_features, det_features, cost_matrix, cost_matrix_det, cost_matrix_track
- cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric))
- if leg1 > 10:
- leg1 = 10
- tracks = tracks[:10]
- if leg2 > 10:
- leg2 = 10
- detections = detections[:10]
- det_features = np.asarray([track.curr_feat for track in detections], dtype=float)
- track_features = np.asarray([track.smooth_feat for track in tracks], dtype=float)
- return track_features, det_features, cost_matrix, cost_matrix_det, cost_matrix_track
-
-def gate_cost_matrix(kf, cost_matrix, tracks, detections, only_position=False):
- if cost_matrix.size == 0:
- return cost_matrix
- gating_dim = 2 if only_position else 4
- gating_threshold = chi2inv95[gating_dim]
- measurements = np.asarray([det.to_xyah() for det in detections])
- for row, track in enumerate(tracks):
- gating_distance = kf.gating_distance(
- track.mean, track.covariance, measurements, only_position)
- cost_matrix[row, gating_distance > gating_threshold] = np.inf
- return cost_matrix
-
-
-def fuse_motion(kf, cost_matrix, tracks, detections, only_position=False, lambda_=0.98):
- if cost_matrix.size == 0:
- return cost_matrix
- gating_dim = 2 if only_position else 4
- gating_threshold = chi2inv95[gating_dim]
- measurements = np.asarray([det.to_xyah() for det in detections])
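- # Gate infeasible associations to infinity, then blend appearance and motion costs: lambda_ * appearance + (1 - lambda_) * Mahalanobis distance.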
- for row, track in enumerate(tracks):
- gating_distance = kf.gating_distance(
- track.mean, track.covariance, measurements, only_position, metric='maha')
- cost_matrix[row, gating_distance > gating_threshold] = np.inf
- cost_matrix[row] = lambda_ * cost_matrix[row] + (1 - lambda_) * gating_distance
- return cost_matrix
diff --git a/spaces/EDGAhab/Aatrox-Talking/attentions.py b/spaces/EDGAhab/Aatrox-Talking/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/EDGAhab/Aatrox-Talking/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
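- # Masked positions receive a large negative score (still representable in float16) so they contribute ~0 after the softmax.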
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so as to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add zeros at the beginning that will skew the elements after the reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Edisonymy/buy-or-rent/src/utils/__init__.py b/spaces/Edisonymy/buy-or-rent/src/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/parser.py b/spaces/EronSamez/RVC_HFmeu/demucs/parser.py
deleted file mode 100644
index 4e8a19cf976e3c6dfe411da64b8dce3e9a4548e0..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/demucs/parser.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import os
-from pathlib import Path
-
-
-def get_parser():
- parser = argparse.ArgumentParser("demucs", description="Train and evaluate Demucs.")
- default_raw = None
- default_musdb = None
- if 'DEMUCS_RAW' in os.environ:
- default_raw = Path(os.environ['DEMUCS_RAW'])
- if 'DEMUCS_MUSDB' in os.environ:
- default_musdb = Path(os.environ['DEMUCS_MUSDB'])
- parser.add_argument(
- "--raw",
- type=Path,
- default=default_raw,
- help="Path to raw audio, can be faster, see python3 -m demucs.raw to extract.")
- parser.add_argument("--no_raw", action="store_const", const=None, dest="raw")
- parser.add_argument("-m",
- "--musdb",
- type=Path,
- default=default_musdb,
- help="Path to musdb root")
- parser.add_argument("--is_wav", action="store_true",
- help="Indicate that the MusDB dataset is in wav format (i.e. MusDB-HQ).")
- parser.add_argument("--metadata", type=Path, default=Path("metadata/"),
- help="Folder where metadata information is stored.")
- parser.add_argument("--wav", type=Path,
- help="Path to a wav dataset. This should contain a 'train' and a 'valid' "
- "subfolder.")
- parser.add_argument("--samplerate", type=int, default=44100)
- parser.add_argument("--audio_channels", type=int, default=2)
- parser.add_argument("--samples",
- default=44100 * 10,
- type=int,
- help="number of samples to feed in")
- parser.add_argument("--data_stride",
- default=44100,
- type=int,
- help="Stride for chunks, shorter = longer epochs")
- parser.add_argument("-w", "--workers", default=10, type=int, help="Loader workers")
- parser.add_argument("--eval_workers", default=2, type=int, help="Final evaluation workers")
- parser.add_argument("-d",
- "--device",
- help="Device to train on, default is cuda if available else cpu")
- parser.add_argument("--eval_cpu", action="store_true", help="Eval on test will be run on cpu.")
- parser.add_argument("--dummy", help="Dummy parameter, useful to create a new checkpoint file")
- parser.add_argument("--test", help="Just run the test pipeline + one validation. "
- "This should be a filename relative to the models/ folder.")
- parser.add_argument("--test_pretrained", help="Just run the test pipeline + one validation, "
- "on a pretrained model. ")
-
- parser.add_argument("--rank", default=0, type=int)
- parser.add_argument("--world_size", default=1, type=int)
- parser.add_argument("--master")
-
- parser.add_argument("--checkpoints",
- type=Path,
- default=Path("checkpoints"),
- help="Folder where to store checkpoints etc")
- parser.add_argument("--evals",
- type=Path,
- default=Path("evals"),
- help="Folder where to store evals and waveforms")
- parser.add_argument("--save",
- action="store_true",
- help="Save estimated waveforms for the test set")
- parser.add_argument("--logs",
- type=Path,
- default=Path("logs"),
- help="Folder where to store logs")
- parser.add_argument("--models",
- type=Path,
- default=Path("models"),
- help="Folder where to store trained models")
- parser.add_argument("-R",
- "--restart",
- action='store_true',
- help='Restart training, ignoring previous run')
-
- parser.add_argument("--seed", type=int, default=42)
- parser.add_argument("-e", "--epochs", type=int, default=180, help="Number of epochs")
- parser.add_argument("-r",
- "--repeat",
- type=int,
- default=2,
- help="Repeat the train set, longer epochs")
- parser.add_argument("-b", "--batch_size", type=int, default=64)
- parser.add_argument("--lr", type=float, default=3e-4)
- parser.add_argument("--mse", action="store_true", help="Use MSE instead of L1")
- parser.add_argument("--init", help="Initialize from a pre-trained model.")
-
- # Augmentation options
- parser.add_argument("--no_augment",
- action="store_false",
- dest="augment",
- default=True,
- help="Disable basic data augmentation.")
- parser.add_argument("--repitch", type=float, default=0.2,
- help="Probability to do tempo/pitch change")
- parser.add_argument("--max_tempo", type=float, default=12,
- help="Maximum relative tempo change in %% when using repitch.")
-
- parser.add_argument("--remix_group_size",
- type=int,
- default=4,
- help="Shuffle sources using groups of this size. Useful to somewhat "
- "replicate multi-gpu training "
- "on fewer GPUs.")
- parser.add_argument("--shifts",
- type=int,
- default=10,
- help="Number of random shifts used for the shift trick.")
- parser.add_argument("--overlap",
- type=float,
- default=0.25,
- help="Overlap when --split_valid is passed.")
-
- # See model.py for doc
- parser.add_argument("--growth",
- type=float,
- default=2.,
- help="Number of channels between two layers will increase by this factor")
- parser.add_argument("--depth",
- type=int,
- default=6,
- help="Number of layers for the encoder and decoder")
- parser.add_argument("--lstm_layers", type=int, default=2, help="Number of layers for the LSTM")
- parser.add_argument("--channels",
- type=int,
- default=64,
- help="Number of channels for the first encoder layer")
- parser.add_argument("--kernel_size",
- type=int,
- default=8,
- help="Kernel size for the (transposed) convolutions")
- parser.add_argument("--conv_stride",
- type=int,
- default=4,
- help="Stride for the (transposed) convolutions")
- parser.add_argument("--context",
- type=int,
- default=3,
- help="Context size for the decoder convolutions "
- "before the transposed convolutions")
- parser.add_argument("--rescale",
- type=float,
- default=0.1,
- help="Initial weight rescale reference")
- parser.add_argument("--no_resample", action="store_false",
- default=True, dest="resample",
- help="Disable x2 resampling of the input/output")
- parser.add_argument("--no_glu",
- action="store_false",
- default=True,
- dest="glu",
- help="Replace all GLUs by ReLUs")
- parser.add_argument("--no_rewrite",
- action="store_false",
- default=True,
- dest="rewrite",
- help="No 1x1 rewrite convolutions")
- parser.add_argument("--normalize", action="store_true")
- parser.add_argument("--no_norm_wav", action="store_false", dest='norm_wav', default=True)
-
- # Tasnet options
- parser.add_argument("--tasnet", action="store_true")
- parser.add_argument("--split_valid",
- action="store_true",
- help="Predict chunk by chunk for valid and test. Required for tasnet")
- parser.add_argument("--X", type=int, default=8)
-
- # Other options
- parser.add_argument("--show",
- action="store_true",
- help="Show model architecture, size and exit")
- parser.add_argument("--save_model", action="store_true",
- help="Skip training, just save the final model "
- "for the current checkpoint value.")
- parser.add_argument("--save_state",
- help="Skip training, just save state "
- "for the current checkpoint value. You should "
- "provide a model name as argument.")
-
- # Quantization options
- parser.add_argument("--q-min-size", type=float, default=1,
- help="Only quantize layers over this size (in MB)")
- parser.add_argument(
- "--qat", type=int, help="If provided, use QAT training with that many bits.")
-
- parser.add_argument("--diffq", type=float, default=0)
- parser.add_argument(
- "--ms-target", type=float, default=162,
- help="Model size target in MB, when using DiffQ. Best model will be kept "
- "only if it is smaller than this target.")
-
- return parser
-
-
-def get_name(parser, args):
- """
- Return the name of an experiment given the args. Some parameters are ignored,
- for instance --workers, as they do not impact the final result.
- """
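- # e.g. overriding only --batch_size 32 --lr 1e-4 yields the name 'batch_size=32 lr=0.0001'.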
- ignore_args = set([
- "checkpoints",
- "deterministic",
- "eval",
- "evals",
- "eval_cpu",
- "eval_workers",
- "logs",
- "master",
- "rank",
- "restart",
- "save",
- "save_model",
- "save_state",
- "show",
- "workers",
- "world_size",
- ])
- parts = []
- name_args = dict(args.__dict__)
- for name, value in name_args.items():
- if name in ignore_args:
- continue
- if value != parser.get_default(name):
- if isinstance(value, Path):
- parts.append(f"{name}={value.name}")
- else:
- parts.append(f"{name}={value}")
- if parts:
- name = " ".join(parts)
- else:
- name = "default"
- return name
diff --git a/spaces/EronSamez/RVC_HFmeu/utils/i18n.py b/spaces/EronSamez/RVC_HFmeu/utils/i18n.py
deleted file mode 100644
index 8e75d2bc26ff86ab1716b8d7f239ad9f5cc1e32d..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/utils/i18n.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import locale
-import json
-import os
-
-
-def load_language_list(language):
- with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
- language_list = json.load(f)
- return language_list
-
-
-class I18nAuto:
- def __init__(self, language=None):
- if language in ["Auto", None]:
- language = "es_ES"
- if not os.path.exists(f"./i18n/{language}.json"):
- language = "es_ES"
- self.language = language
- # print("Use Language:", language)
- self.language_map = load_language_list(language)
-
- def __call__(self, key):
- return self.language_map.get(key, key)
-
- def print(self):
- # print("Use Language:", self.language)
- print("")
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/drrg_r50_fpn_unet.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/drrg_r50_fpn_unet.py
deleted file mode 100644
index 78156cca6030bcf7ac12b75287342915882eb0b3..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/drrg_r50_fpn_unet.py
+++ /dev/null
@@ -1,21 +0,0 @@
-model = dict(
- type='DRRG',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='BN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPN_UNet', in_channels=[256, 512, 1024, 2048], out_channels=32),
- bbox_head=dict(
- type='DRRGHead',
- in_channels=32,
- text_region_thr=0.3,
- center_region_thr=0.4,
- loss=dict(type='DRRGLoss'),
- postprocessor=dict(type='DRRGPostprocessor', link_thr=0.80)))
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_models/abinet.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_models/abinet.py
deleted file mode 100644
index 19c6b66731f0b205741037ece8d6b49f91d0110b..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_models/abinet.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# num_chars depends on the configuration of label_convertor. The actual
-# dictionary size is 36 + 1 (<BOS>).
-# TODO: Automatically update num_chars based on the configuration of
-# label_convertor
-num_chars = 37
-max_seq_len = 26
-
-label_convertor = dict(
- type='ABIConvertor',
- dict_type='DICT36',
- with_unknown=False,
- with_padding=False,
- lower=True,
-)
-
-model = dict(
- type='ABINet',
- backbone=dict(type='ResNetABI'),
- encoder=dict(
- type='ABIVisionModel',
- encoder=dict(
- type='TransformerEncoder',
- n_layers=3,
- n_head=8,
- d_model=512,
- d_inner=2048,
- dropout=0.1,
- max_len=8 * 32,
- ),
- decoder=dict(
- type='ABIVisionDecoder',
- in_channels=512,
- num_channels=64,
- attn_height=8,
- attn_width=32,
- attn_mode='nearest',
- use_result='feature',
- num_chars=num_chars,
- max_seq_len=max_seq_len,
- init_cfg=dict(type='Xavier', layer='Conv2d')),
- ),
- decoder=dict(
- type='ABILanguageDecoder',
- d_model=512,
- n_head=8,
- d_inner=2048,
- n_layers=4,
- dropout=0.1,
- detach_tokens=True,
- use_self_attn=False,
- pad_idx=num_chars - 1,
- num_chars=num_chars,
- max_seq_len=max_seq_len,
- init_cfg=None),
- fuser=dict(
- type='ABIFuser',
- d_model=512,
- num_chars=num_chars,
- init_cfg=None,
- max_seq_len=max_seq_len,
- ),
- loss=dict(
- type='ABILoss',
- enc_weight=1.0,
- dec_weight=1.0,
- fusion_weight=1.0,
- num_classes=num_chars),
- label_convertor=label_convertor,
- max_seq_len=max_seq_len,
- iter_size=3)
diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/css/68f98a9e0e1cc1b3.css b/spaces/FL33TW00D/whisper-turbo/_next/static/css/68f98a9e0e1cc1b3.css
deleted file mode 100644
index 53c2f7b23d52f1224c213bfe1478365fda093436..0000000000000000000000000000000000000000
--- a/spaces/FL33TW00D/whisper-turbo/_next/static/css/68f98a9e0e1cc1b3.css
+++ /dev/null
@@ -1 +0,0 @@
-@font-face{font-family:__VT323_2a9463;font-style:normal;font-weight:400;font-display:swap;src:url(/_next/static/media/61cd2e7f311e7836.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01a0-01a1,U+01af-01b0,U+0300-0301,U+0303-0304,U+0308-0309,U+0323,U+0329,U+1ea0-1ef9,U+20ab}@font-face{font-family:__VT323_2a9463;font-style:normal;font-weight:400;font-display:swap;src:url(/_next/static/media/fd428b69af9ef976.woff2) format("woff2");unicode-range:U+0100-02af,U+0304,U+0308,U+0329,U+1e00-1e9f,U+1ef2-1eff,U+2020,U+20a0-20ab,U+20ad-20cf,U+2113,U+2c60-2c7f,U+a720-a7ff}@font-face{font-family:__VT323_2a9463;font-style:normal;font-weight:400;font-display:swap;src:url(/_next/static/media/f36ad5a94261c3ca.woff2) format("woff2");unicode-range:U+00??,U+0131,U+0152-0153,U+02bb-02bc,U+02c6,U+02da,U+02dc,U+0304,U+0308,U+0329,U+2000-206f,U+2074,U+20ac,U+2122,U+2191,U+2193,U+2212,U+2215,U+feff,U+fffd}@font-face{font-family:__VT323_Fallback_2a9463;src:local("Arial");ascent-override:91.26%;descent-override:22.82%;line-gap-override:0.00%;size-adjust:87.66%}.__className_2a9463{font-family:__VT323_2a9463,__VT323_Fallback_2a9463;font-weight:400;font-style:normal}
\ No newline at end of file
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/__init__.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/__init__.py
deleted file mode 100644
index 2988b28937a22c3d039dde6590bcc1ac8dd3b89a..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# https://github.com/xinntao/BasicSR
-# flake8: noqa
-from .archs import *
-from .data import *
-from .losses import *
-from .metrics import *
-from .models import *
-from .ops import *
-from .train import *
-from .utils import *
-# from version import __gitsha__, __version__
diff --git a/spaces/FridaZuley/RVC_HFKawaii/go-tensorboard.bat b/spaces/FridaZuley/RVC_HFKawaii/go-tensorboard.bat
deleted file mode 100644
index cb81c17d3865513adec8eb0b832b7888cd1e4078..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/go-tensorboard.bat
+++ /dev/null
@@ -1,2 +0,0 @@
-python fixes/tensor-launch.py
-pause
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/train/data_utils.py b/spaces/FridaZuley/RVC_HFKawaii/train/data_utils.py
deleted file mode 100644
index 71c0eff1815469a52399dc90a093a2f8a29223eb..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/train/data_utils.py
+++ /dev/null
@@ -1,512 +0,0 @@
-import os, traceback
-import numpy as np
-import torch
-import torch.utils.data
-
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-
-
-class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- pitch = audiopath_and_text[2]
- pitchf = audiopath_and_text[3]
- dv = audiopath_and_text[4]
-
- phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- # print(123,phone.shape,pitch.shape,spec.shape)
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- # amor
- len_wav = len_min * self.hop_length
-
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
-
- phone = phone[:len_min, :]
- pitch = pitch[:len_min]
- pitchf = pitchf[:len_min]
-
- return (spec, wav, phone, pitch, pitchf, dv)
-
- def get_labels(self, phone, pitch, pitchf):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- pitch = np.load(pitch)
- pitchf = np.load(pitchf)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- # print(234,phone.shape,pitch.shape)
- phone = phone[:n_num, :]
- pitch = pitch[:n_num]
- pitchf = pitchf[:n_num]
- phone = torch.FloatTensor(phone)
- pitch = torch.LongTensor(pitch)
- pitchf = torch.FloatTensor(pitchf)
- return phone, pitch, pitchf
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
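- # Cache the spectrogram next to the wav as '<name>.spec.pt'; recompute and overwrite it if loading the cache fails.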
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
- except Exception:
- print(spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollateMultiNSFsid:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collates a training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- ) # (spec, wav, phone, pitch)
- pitch_padded = torch.LongTensor(len(batch), max_phone_len)
- pitchf_padded = torch.FloatTensor(len(batch), max_phone_len)
- phone_padded.zero_()
- pitch_padded.zero_()
- pitchf_padded.zero_()
- # dv = torch.FloatTensor(len(batch), 256)#gin=256
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- pitch = row[3]
- pitch_padded[i, : pitch.size(0)] = pitch
- pitchf = row[4]
- pitchf_padded[i, : pitchf.size(0)] = pitchf
-
- # dv[i] = row[5]
- sid[i] = row[5]
-
- return (
- phone_padded,
- phone_lengths,
- pitch_padded,
- pitchf_padded,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- # dv
- sid,
- )
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 5000)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text, dv in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text, dv])
- lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- file = audiopath_and_text[0]
- phone = audiopath_and_text[1]
- dv = audiopath_and_text[2]
-
- phone = self.get_labels(phone)
- spec, wav = self.get_audio(file)
- dv = self.get_sid(dv)
-
- len_phone = phone.size()[0]
- len_spec = spec.size()[-1]
- if len_phone != len_spec:
- len_min = min(len_phone, len_spec)
- len_wav = len_min * self.hop_length
- spec = spec[:, :len_min]
- wav = wav[:, :len_wav]
- phone = phone[:len_min, :]
- return (spec, wav, phone, dv)
-
- def get_labels(self, phone):
- phone = np.load(phone)
- phone = np.repeat(phone, 2, axis=0)
- n_num = min(phone.shape[0], 900) # DistributedBucketSampler
- phone = phone[:n_num, :]
- phone = torch.FloatTensor(phone)
- return phone
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
- raise ValueError(
- "{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate
- )
- )
- audio_norm = audio
- # audio_norm = audio / self.max_wav_value
- # audio_norm = audio / np.abs(audio).max()
-
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- try:
- spec = torch.load(spec_filename)
- except Exception:
- print(spec_filename, traceback.format_exc())
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- else:
- spec = spectrogram_torch(
- audio_norm,
- self.filter_length,
- self.sampling_rate,
- self.hop_length,
- self.win_length,
- center=False,
- )
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
- return spec, audio_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate:
- """Zero-pads model inputs and targets"""
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
- """Collates a training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
- )
-
- max_spec_len = max([x[0].size(1) for x in batch])
- max_wave_len = max([x[1].size(1) for x in batch])
- spec_lengths = torch.LongTensor(len(batch))
- wave_lengths = torch.LongTensor(len(batch))
- spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
- wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
- spec_padded.zero_()
- wave_padded.zero_()
-
- max_phone_len = max([x[2].size(0) for x in batch])
- phone_lengths = torch.LongTensor(len(batch))
- phone_padded = torch.FloatTensor(
- len(batch), max_phone_len, batch[0][2].shape[1]
- )
- phone_padded.zero_()
- sid = torch.LongTensor(len(batch))
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- spec = row[0]
- spec_padded[i, :, : spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wave = row[1]
- wave_padded[i, :, : wave.size(1)] = wave
- wave_lengths[i] = wave.size(1)
-
- phone = row[2]
- phone_padded[i, : phone.size(0), :] = phone
- phone_lengths[i] = phone.size(0)
-
- sid[i] = row[3]
-
- return (
- phone_padded,
- phone_lengths,
- spec_padded,
- spec_lengths,
- wave_padded,
- wave_lengths,
- sid,
- )
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> any batch is included in either {x | b1 < length(x) <= b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
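- # Each bucket is padded (by repeating indices) to a multiple of num_replicas * batch_size, so every replica draws the same number of batches per epoch.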
-
- def __init__(
- self,
- dataset,
- batch_size,
- boundaries,
- num_replicas=None,
- rank=None,
- shuffle=True,
- ):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, -1, -1): #
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (
- total_batch_size - (len_bucket % total_batch_size)
- ) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = (
- ids_bucket
- + ids_bucket * (rem // len_bucket)
- + ids_bucket[: (rem % len_bucket)]
- )
-
- # subsample
- ids_bucket = ids_bucket[self.rank :: self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [
- bucket[idx]
- for idx in ids_bucket[
- j * self.batch_size : (j + 1) * self.batch_size
- ]
- ]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/gen5_build_car.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/gen5_build_car.sh
deleted file mode 100644
index 0770a5f946691a4930793d89e694740b807c0ce7..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/gen5_build_car.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-
-STEPS=${1-'15000'}
-
-sh scripts/traintest_scripts/train_test_multi_task_goal.sh data \
- "[align-rope,sweeping-piles,align-box-corner,towers-of-hanoi-seq-seen-colors,assembling-kits-seq-seen-colors]" "[build-car]" \
- 5taskgen_unrelated $STEPS
-
-sh scripts/traintest_scripts/train_test_multi_task_goal.sh data \
-"[build-two-circles,build-wheel,build-bridge,towers-of-hanoi-seq-seen-colors,stack-block-pyramid-seq-seen-colors]" "[build-car]" \
- 5taskgen_related $STEPS
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_video_model.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_video_model.md
deleted file mode 100644
index 0ad5c85804c1f8636c3720a652b40bbd9df0fe2e..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/anime_video_model.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# Anime Video Models
-
-:white_check_mark: We add small models that are optimized for anime videos :-)
-More comparisons can be found in [anime_comparisons.md](anime_comparisons.md)
-
-- [How to Use](#how-to-use)
-- [PyTorch Inference](#pytorch-inference)
-- [ncnn Executable File](#ncnn-executable-file)
- - [Step 1: Use ffmpeg to extract frames from video](#step-1-use-ffmpeg-to-extract-frames-from-video)
- - [Step 2: Inference with Real-ESRGAN executable file](#step-2-inference-with-real-esrgan-executable-file)
- - [Step 3: Merge the enhanced frames back into a video](#step-3-merge-the-enhanced-frames-back-into-a-video)
-- [More Demos](#more-demos)
-
-| Models | Scale | Description |
-| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
-| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4 <sup>1</sup> | Anime video model with XS size |
-
-Note:
-<sup>1</sup> This model can also be used for X1, X2, X3.
-
----
-
-The following are some demos (best viewed in full-screen mode).
-
-
-
-
-
-
-
-## How to Use
-
-### PyTorch Inference
-
-```bash
-# download model
-wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P weights
-# single gpu and single process inference
-CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2
-# single gpu and multi process inference (you can use multi-processing to improve GPU utilization)
-CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2
-# multi gpu and multi process inference
-CUDA_VISIBLE_DEVICES=0,1,2,3 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2
-```
-
-```console
-Usage:
---num_process_per_gpu The total number of processes is num_gpu * num_process_per_gpu. The bottleneck of
-                       the program usually lies in the I/O, so the GPUs are often not fully utilized. To
-                       alleviate this issue, you can use multi-processing by setting this parameter, as
-                       long as it does not exceed the CUDA memory.
---extract_frame_first  If you encounter an ffmpeg error when using multi-processing, you can turn this option on.
-```
-
-### NCNN Executable File
-
-#### Step 1: Use ffmpeg to extract frames from video
-
-```bash
-ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png
-```
-
-- Remember to create the folder `tmp_frames` ahead of time
-
-#### Step 2: Inference with Real-ESRGAN executable file
-
-1. Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU**
-
-1. Taking Windows as an example, run:
-
- ```bash
- ./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg
- ```
-
-    - Remember to create the folder `out_frames` ahead of time
-
-#### Step 3: Merge the enhanced frames back into a video
-
-1. First, obtain the fps of the input video (a scriptable `ffprobe` alternative is shown after this list):
-
- ```bash
- ffmpeg -i onepiece_demo.mp4
- ```
-
- ```console
- Usage:
- -i input video path
- ```
-
-    You will get output similar to the following screenshot.
-
-
-
-
-
-2. Merge frames
-
- ```bash
- ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4
- ```
-
- ```console
- Usage:
- -i input video path
- -c:v video encoder (usually we use libx264)
- -r fps, remember to modify it to meet your needs
- -pix_fmt pixel format in video
- ```
-
- If you also want to copy audio from the input videos, run:
-
- ```bash
- ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4
- ```
-
- ```console
- Usage:
- -i input video path, here we use two input streams
- -c:v video encoder (usually we use libx264)
- -r fps, remember to modify it to meet your needs
- -pix_fmt pixel format in video
- ```
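-
-If your ffmpeg build also ships `ffprobe`, you can read the fps directly instead of parsing the `ffmpeg -i` output (shown here on the same demo clip used above):
-
-```bash
-ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 onepiece_demo.mp4
-```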
-
-## More Demos
-
-- Input video for One Piece:
-
-
-
-- Output video for One Piece:
-
-
-
-**More comparisons**
-
-
diff --git a/spaces/GotAudio/Understanding-Women/app.py b/spaces/GotAudio/Understanding-Women/app.py
deleted file mode 100644
index 4ce2640225d5fbd80a096cc5997410a28632c3b5..0000000000000000000000000000000000000000
--- a/spaces/GotAudio/Understanding-Women/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gradio as gr
-from fastai import *
-from fastai.vision.all import *
-
-import pathlib
-import platform
-
-# learners exported on Windows pickle WindowsPath objects; remap them when running on Linux
-plt = platform.system()
-if plt == 'Linux': pathlib.WindowsPath = pathlib.PosixPath
-
-learn_inf = load_learner("export.pkl")
-
-def predict_mood(img):
-
- pred, pred_idx, probs = learn_inf.predict(img)
- return f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}'
-
-webpage = gr.Interface(fn=predict_mood, inputs=gr.inputs.Image(tool=False, optional=False), outputs="text", title="Women's Mood Detector", live=True, theme="dark-peach", description="It detects whether the woman is Angry, Happy, or Sad.", examples=[["example1.jpg"], ["example2.jpg"], ["example3.jpg"]])
-webpage.launch()
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/anime-colorization/scripts/image_nll.py b/spaces/Gradio-Blocks/anime-colorization/scripts/image_nll.py
deleted file mode 100644
index 2b72bfd3810d63270a873f7889dddfd2512387b3..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/anime-colorization/scripts/image_nll.py
+++ /dev/null
@@ -1,96 +0,0 @@
-"""
-Approximate the bits/dimension for an image model.
-"""
-
-import argparse
-import os
-
-import numpy as np
-import torch.distributed as dist
-
-from pixel_guide_diffusion import dist_util, logger
-from pixel_guide_diffusion.image_datasets import load_data
-from pixel_guide_diffusion.script_util import (
- model_and_diffusion_defaults,
- create_model_and_diffusion,
- add_dict_to_argparser,
- args_to_dict,
-)
-
-
-def main():
- args = create_argparser().parse_args()
-
- dist_util.setup_dist()
- logger.configure()
-
- logger.log("creating model and diffusion...")
- model, diffusion = create_model_and_diffusion(
- **args_to_dict(args, model_and_diffusion_defaults().keys())
- )
- model.load_state_dict(
- dist_util.load_state_dict(args.model_path, map_location="cpu")
- )
- model.to(dist_util.dev())
- model.eval()
-
- logger.log("creating data loader...")
- data = load_data(
- data_dir=args.data_dir,
- batch_size=args.batch_size,
- image_size=args.image_size,
- class_cond=args.class_cond,
- deterministic=True,
- )
-
- logger.log("evaluating...")
- run_bpd_evaluation(model, diffusion, data, args.num_samples, args.clip_denoised)
-
-
-def run_bpd_evaluation(model, diffusion, data, num_samples, clip_denoised):
- all_bpd = []
- all_metrics = {"vb": [], "mse": [], "xstart_mse": []}
- num_complete = 0
- while num_complete < num_samples:
- batch, model_kwargs = next(data)
- batch = batch.to(dist_util.dev())
- model_kwargs = {k: v.to(dist_util.dev()) for k, v in model_kwargs.items()}
- minibatch_metrics = diffusion.calc_bpd_loop(
- model, batch, clip_denoised=clip_denoised, model_kwargs=model_kwargs
- )
-
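-        # dividing by the world size before all_reduce (which sums across ranks) makes each
-        # appended entry the mean of that metric over all distributed workers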
- for key, term_list in all_metrics.items():
- terms = minibatch_metrics[key].mean(dim=0) / dist.get_world_size()
- dist.all_reduce(terms)
- term_list.append(terms.detach().cpu().numpy())
-
- total_bpd = minibatch_metrics["total_bpd"]
- total_bpd = total_bpd.mean() / dist.get_world_size()
- dist.all_reduce(total_bpd)
- all_bpd.append(total_bpd.item())
- num_complete += dist.get_world_size() * batch.shape[0]
-
- logger.log(f"done {num_complete} samples: bpd={np.mean(all_bpd)}")
-
- if dist.get_rank() == 0:
- for name, terms in all_metrics.items():
- out_path = os.path.join(logger.get_dir(), f"{name}_terms.npz")
- logger.log(f"saving {name} terms to {out_path}")
- np.savez(out_path, np.mean(np.stack(terms), axis=0))
-
- dist.barrier()
- logger.log("evaluation complete")
-
-
-def create_argparser():
- defaults = dict(
- data_dir="", clip_denoised=True, num_samples=1000, batch_size=1, model_path=""
- )
- defaults.update(model_and_diffusion_defaults())
- parser = argparse.ArgumentParser()
- add_dict_to_argparser(parser, defaults)
- return parser
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/HaMerL/ChaosinChat/assets/custom.js b/spaces/HaMerL/ChaosinChat/assets/custom.js
deleted file mode 100644
index b8071034f3618c541e3f4169c7fc6d6593d56f44..0000000000000000000000000000000000000000
--- a/spaces/HaMerL/ChaosinChat/assets/custom.js
+++ /dev/null
@@ -1,224 +0,0 @@
-
-// custom javascript here
-
-const MAX_HISTORY_LENGTH = 32;
-
-var key_down_history = [];
-var currentIndex = -1;
-var user_input_ta;
-
-var gradioContainer = null;
-var user_input_ta = null;
-var user_input_tb = null;
-var userInfoDiv = null;
-var appTitleDiv = null;
-var chatbot = null;
-var apSwitch = null;
-
-var ga = document.getElementsByTagName("gradio-app");
-var targetNode = ga[0];
-var isInIframe = (window.self !== window.top);
-
-// has the gradio page finished loading, and are its elements ready to be touched?
-function gradioLoaded(mutations) {
- for (var i = 0; i < mutations.length; i++) {
- if (mutations[i].addedNodes.length) {
- gradioContainer = document.querySelector(".gradio-container");
- user_input_tb = document.getElementById('user_input_tb');
- userInfoDiv = document.getElementById("user_info");
- appTitleDiv = document.getElementById("app_title");
- chatbot = document.querySelector('#chuanhu_chatbot');
- apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
-
-            if (gradioContainer && apSwitch) { // has gradioContainer loaded?
- adjustDarkMode();
- }
-            if (user_input_tb) { // has user_input_tb loaded?
- selectHistory();
- }
-            if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded?
-                setTimeout(showOrHideUserInfo, 2000);
- }
-            if (chatbot) { // has the chatbot loaded?
- setChatbotHeight()
- }
- }
- }
-}
-
-function selectHistory() {
- user_input_ta = user_input_tb.querySelector("textarea");
- if (user_input_ta) {
-        observer.disconnect(); // stop observing
-        // listen for keydown events on the textarea
- user_input_ta.addEventListener("keydown", function (event) {
- var value = user_input_ta.value.trim();
-            // check whether an arrow key was pressed
- if (event.code === 'ArrowUp' || event.code === 'ArrowDown') {
-                // if an arrow key was pressed while the input box holds text that is not in the history, do nothing
- if (value && key_down_history.indexOf(value) === -1)
- return;
-                // for the actions we do handle, prevent the default behavior
- event.preventDefault();
- var length = key_down_history.length;
- if (length === 0) {
-                    currentIndex = -1; // if the history is empty, just reset the current selection
- return;
- }
- if (currentIndex === -1) {
- currentIndex = length;
- }
- if (event.code === 'ArrowUp' && currentIndex > 0) {
- currentIndex--;
- user_input_ta.value = key_down_history[currentIndex];
- } else if (event.code === 'ArrowDown' && currentIndex < length - 1) {
- currentIndex++;
- user_input_ta.value = key_down_history[currentIndex];
- }
- user_input_ta.selectionStart = user_input_ta.value.length;
- user_input_ta.selectionEnd = user_input_ta.value.length;
- const input_event = new InputEvent("input", { bubbles: true, cancelable: true });
- user_input_ta.dispatchEvent(input_event);
- } else if (event.code === "Enter") {
- if (value) {
- currentIndex = -1;
- if (key_down_history.indexOf(value) === -1) {
- key_down_history.push(value);
- if (key_down_history.length > MAX_HISTORY_LENGTH) {
- key_down_history.shift();
- }
- }
- }
- }
- });
- }
-}
-
-function toggleUserInfoVisibility(shouldHide) {
- if (userInfoDiv) {
- if (shouldHide) {
- userInfoDiv.classList.add("hideK");
- } else {
- userInfoDiv.classList.remove("hideK");
- }
- }
-}
-function showOrHideUserInfo() {
- var sendBtn = document.getElementById("submit_btn");
-
- // Bind mouse/touch events to show/hide user info
- appTitleDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- userInfoDiv.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
- sendBtn.addEventListener("mouseenter", function () {
- toggleUserInfoVisibility(false);
- });
-
- appTitleDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- userInfoDiv.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
- sendBtn.addEventListener("mouseleave", function () {
- toggleUserInfoVisibility(true);
- });
-
- appTitleDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- userInfoDiv.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
- sendBtn.ontouchstart = function () {
- toggleUserInfoVisibility(false);
- };
-
- appTitleDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- userInfoDiv.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 3000);
- };
- sendBtn.ontouchend = function () {
- setTimeout(function () {
- toggleUserInfoVisibility(true);
-        }, 3000); // delay 3 seconds before hiding the user info
- };
-
-    // Hide user info after 2 seconds
- setTimeout(function () {
- toggleUserInfoVisibility(true);
- }, 2000);
-}
-
-function toggleDarkMode(isEnabled) {
- if (isEnabled) {
- gradioContainer.classList.add("dark");
- document.body.style.setProperty("background-color", "var(--neutral-950)", "important");
- } else {
- gradioContainer.classList.remove("dark");
- document.body.style.backgroundColor = "";
- }
-}
-function adjustDarkMode() {
- const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)");
-
-    // set the initial state based on the current system color scheme
- apSwitch.checked = darkModeQuery.matches;
- toggleDarkMode(darkModeQuery.matches);
-    // listen for changes to the system color scheme
- darkModeQuery.addEventListener("change", (e) => {
- apSwitch.checked = e.matches;
- toggleDarkMode(e.matches);
- });
- // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
- apSwitch.addEventListener("change", (e) => {
- toggleDarkMode(e.target.checked);
- });
-}
-
-function setChatbotHeight() {
- const screenWidth = window.innerWidth;
- const statusDisplay = document.querySelector('#status_display');
- const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0;
- const wrap = chatbot.querySelector('.wrap');
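-    // 1% of the real viewport height; exposing it as the --vh custom property works around
-    // mobile browsers where 100vh also includes the area behind the address bar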
- const vh = window.innerHeight * 0.01;
- document.documentElement.style.setProperty('--vh', `${vh}px`);
- if (isInIframe) {
- chatbot.style.height = `700px`;
- wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`
- } else {
- if (screenWidth <= 320) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else if (screenWidth <= 499) {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- } else {
- chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`;
- wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
- }
- }
-}
-
-// observe DOM mutations inside the page
-var observer = new MutationObserver(function (mutations) {
- gradioLoaded(mutations);
-});
-observer.observe(targetNode, { childList: true, subtree: true });
-
-// watch for page changes
-window.addEventListener("DOMContentLoaded", function () {
- isInIframe = (window.self !== window.top);
-});
-window.addEventListener('resize', setChatbotHeight);
-window.addEventListener('scroll', setChatbotHeight);
-window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode);
\ No newline at end of file
diff --git a/spaces/Hallucinate/demo/k_diffusion/models/image_v1.py b/spaces/Hallucinate/demo/k_diffusion/models/image_v1.py
deleted file mode 100644
index 9ffd5f2c4d6c9d086107d5fac67452419696c723..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/k_diffusion/models/image_v1.py
+++ /dev/null
@@ -1,156 +0,0 @@
-import math
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from .. import layers, utils
-
-
-def orthogonal_(module):
- nn.init.orthogonal_(module.weight)
- return module
-
-
-class ResConvBlock(layers.ConditionedResidualBlock):
- def __init__(self, feats_in, c_in, c_mid, c_out, group_size=32, dropout_rate=0.):
- skip = None if c_in == c_out else orthogonal_(nn.Conv2d(c_in, c_out, 1, bias=False))
- super().__init__(
- layers.AdaGN(feats_in, c_in, max(1, c_in // group_size)),
- nn.GELU(),
- nn.Conv2d(c_in, c_mid, 3, padding=1),
- nn.Dropout2d(dropout_rate, inplace=True),
- layers.AdaGN(feats_in, c_mid, max(1, c_mid // group_size)),
- nn.GELU(),
- nn.Conv2d(c_mid, c_out, 3, padding=1),
- nn.Dropout2d(dropout_rate, inplace=True),
- skip=skip)
-
-
-class DBlock(layers.ConditionedSequential):
- def __init__(self, n_layers, feats_in, c_in, c_mid, c_out, group_size=32, head_size=64, dropout_rate=0., downsample=False, self_attn=False, cross_attn=False, c_enc=0):
- modules = [nn.Identity()]
- for i in range(n_layers):
- my_c_in = c_in if i == 0 else c_mid
- my_c_out = c_mid if i < n_layers - 1 else c_out
- modules.append(ResConvBlock(feats_in, my_c_in, c_mid, my_c_out, group_size, dropout_rate))
- if self_attn:
- norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size))
- modules.append(layers.SelfAttention2d(my_c_out, max(1, my_c_out // head_size), norm, dropout_rate))
- if cross_attn:
- norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size))
- modules.append(layers.CrossAttention2d(my_c_out, c_enc, max(1, my_c_out // head_size), norm, dropout_rate))
- super().__init__(*modules)
- self.set_downsample(downsample)
-
- def set_downsample(self, downsample):
- self[0] = layers.Downsample2d() if downsample else nn.Identity()
- return self
-
-
-class UBlock(layers.ConditionedSequential):
- def __init__(self, n_layers, feats_in, c_in, c_mid, c_out, group_size=32, head_size=64, dropout_rate=0., upsample=False, self_attn=False, cross_attn=False, c_enc=0):
- modules = []
- for i in range(n_layers):
- my_c_in = c_in if i == 0 else c_mid
- my_c_out = c_mid if i < n_layers - 1 else c_out
- modules.append(ResConvBlock(feats_in, my_c_in, c_mid, my_c_out, group_size, dropout_rate))
- if self_attn:
- norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size))
- modules.append(layers.SelfAttention2d(my_c_out, max(1, my_c_out // head_size), norm, dropout_rate))
- if cross_attn:
- norm = lambda c_in: layers.AdaGN(feats_in, c_in, max(1, my_c_out // group_size))
- modules.append(layers.CrossAttention2d(my_c_out, c_enc, max(1, my_c_out // head_size), norm, dropout_rate))
- modules.append(nn.Identity())
- super().__init__(*modules)
- self.set_upsample(upsample)
-
- def forward(self, input, cond, skip=None):
- if skip is not None:
- input = torch.cat([input, skip], dim=1)
- return super().forward(input, cond)
-
- def set_upsample(self, upsample):
- self[-1] = layers.Upsample2d() if upsample else nn.Identity()
- return self
-
-
-class MappingNet(nn.Sequential):
- def __init__(self, feats_in, feats_out, n_layers=2):
- layers = []
- for i in range(n_layers):
- layers.append(orthogonal_(nn.Linear(feats_in if i == 0 else feats_out, feats_out)))
- layers.append(nn.GELU())
- super().__init__(*layers)
-
-
-class ImageDenoiserModelV1(nn.Module):
- def __init__(self, c_in, feats_in, depths, channels, self_attn_depths, cross_attn_depths=None, mapping_cond_dim=0, unet_cond_dim=0, cross_cond_dim=0, dropout_rate=0., patch_size=1, skip_stages=0, has_variance=False):
- super().__init__()
- self.c_in = c_in
- self.channels = channels
- self.unet_cond_dim = unet_cond_dim
- self.patch_size = patch_size
- self.has_variance = has_variance
- self.timestep_embed = layers.FourierFeatures(1, feats_in)
- if mapping_cond_dim > 0:
- self.mapping_cond = nn.Linear(mapping_cond_dim, feats_in, bias=False)
- self.mapping = MappingNet(feats_in, feats_in)
- self.proj_in = nn.Conv2d((c_in + unet_cond_dim) * self.patch_size ** 2, channels[max(0, skip_stages - 1)], 1)
- self.proj_out = nn.Conv2d(channels[max(0, skip_stages - 1)], c_in * self.patch_size ** 2 + (1 if self.has_variance else 0), 1)
- nn.init.zeros_(self.proj_out.weight)
- nn.init.zeros_(self.proj_out.bias)
- if cross_cond_dim == 0:
- cross_attn_depths = [False] * len(self_attn_depths)
- d_blocks, u_blocks = [], []
- for i in range(len(depths)):
- my_c_in = channels[max(0, i - 1)]
- d_blocks.append(DBlock(depths[i], feats_in, my_c_in, channels[i], channels[i], downsample=i > skip_stages, self_attn=self_attn_depths[i], cross_attn=cross_attn_depths[i], c_enc=cross_cond_dim, dropout_rate=dropout_rate))
- for i in range(len(depths)):
- my_c_in = channels[i] * 2 if i < len(depths) - 1 else channels[i]
- my_c_out = channels[max(0, i - 1)]
- u_blocks.append(UBlock(depths[i], feats_in, my_c_in, channels[i], my_c_out, upsample=i > skip_stages, self_attn=self_attn_depths[i], cross_attn=cross_attn_depths[i], c_enc=cross_cond_dim, dropout_rate=dropout_rate))
- self.u_net = layers.UNet(d_blocks, reversed(u_blocks), skip_stages=skip_stages)
-
- def forward(self, input, sigma, mapping_cond=None, unet_cond=None, cross_cond=None, cross_cond_padding=None, return_variance=False):
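-        # condition the mapping network on the log noise level; the 1/4 scaling matches the
-        # c_noise = ln(sigma) / 4 convention also used in EDM-style preconditioning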
- c_noise = sigma.log() / 4
- timestep_embed = self.timestep_embed(utils.append_dims(c_noise, 2))
- mapping_cond_embed = torch.zeros_like(timestep_embed) if mapping_cond is None else self.mapping_cond(mapping_cond)
- mapping_out = self.mapping(timestep_embed + mapping_cond_embed)
- cond = {'cond': mapping_out}
- if unet_cond is not None:
- input = torch.cat([input, unet_cond], dim=1)
- if cross_cond is not None:
- cond['cross'] = cross_cond
- cond['cross_padding'] = cross_cond_padding
- if self.patch_size > 1:
- input = F.pixel_unshuffle(input, self.patch_size)
- input = self.proj_in(input)
- input = self.u_net(input, cond)
- input = self.proj_out(input)
- if self.has_variance:
- input, logvar = input[:, :-1], input[:, -1].flatten(1).mean(1)
- if self.patch_size > 1:
- input = F.pixel_shuffle(input, self.patch_size)
- if self.has_variance and return_variance:
- return input, logvar
- return input
-
- def set_skip_stages(self, skip_stages):
- self.proj_in = nn.Conv2d(self.proj_in.in_channels, self.channels[max(0, skip_stages - 1)], 1)
- self.proj_out = nn.Conv2d(self.channels[max(0, skip_stages - 1)], self.proj_out.out_channels, 1)
- nn.init.zeros_(self.proj_out.weight)
- nn.init.zeros_(self.proj_out.bias)
- self.u_net.skip_stages = skip_stages
- for i, block in enumerate(self.u_net.d_blocks):
- block.set_downsample(i > skip_stages)
- for i, block in enumerate(reversed(self.u_net.u_blocks)):
- block.set_upsample(i > skip_stages)
- return self
-
- def set_patch_size(self, patch_size):
- self.patch_size = patch_size
- self.proj_in = nn.Conv2d((self.c_in + self.unet_cond_dim) * self.patch_size ** 2, self.channels[max(0, self.u_net.skip_stages - 1)], 1)
- self.proj_out = nn.Conv2d(self.channels[max(0, self.u_net.skip_stages - 1)], self.c_in * self.patch_size ** 2 + (1 if self.has_variance else 0), 1)
- nn.init.zeros_(self.proj_out.weight)
- nn.init.zeros_(self.proj_out.bias)
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/demo_classification_afqmc_roberta.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/demo_classification_afqmc_roberta.sh
deleted file mode 100644
index bad55f2de72f66f02b583d9b191802c55cfe0a4b..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/classification/demo_classification_afqmc_roberta.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-MODEL_NAME="IDEA-CCNL/Erlangshen-Roberta-110M-NLI"
-
-TEXTA_NAME=sentence1
-TEXTB_NAME=sentence2
-LABEL_NAME=label
-ID_NAME=id
-
-BATCH_SIZE=1
-VAL_BATCH_SIZE=1
-
-DATA_ARGS="\
- --dataset_name IDEA-CCNL/AFQMC \
- --train_batchsize $BATCH_SIZE \
- --valid_batchsize $VAL_BATCH_SIZE \
- --max_length 128 \
- --texta_name $TEXTA_NAME \
- --textb_name $TEXTB_NAME \
- --label_name $LABEL_NAME \
- --id_name $ID_NAME \
- "
-
-MODEL_ARGS="\
- --learning_rate 1e-5 \
- --weight_decay 1e-2 \
- --warmup_ratio 0.01 \
- --num_labels 2 \
- --model_type huggingface-auto \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_acc \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 0 \
- --save_weights_only True \
- --dirpath . \
- --filename model-{epoch:02d}-{val_acc:.4f} \
- "
-
-
-TRAINER_ARGS="\
- --max_epochs 67 \
- --gpus 1 \
- --num_nodes 1 \
- --strategy ddp \
- --gradient_clip_val 1.0 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 1.0 \
- --precision 16 \
- --default_root_dir . \
- "
-
-options=" \
- --pretrained_model_path $MODEL_NAME \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
- "
-
-python3 finetune_classification.py $options
-
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh
deleted file mode 100644
index 7fab2998437ef8c12dcd93466371d0324eec4c79..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_large_weibo.sh
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=zen2_large_weibo # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id)
-
-
-# export CUDA_VISIBLE_DEVICES='2'
-export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions
-
-MODEL_NAME=zen2_large
-
-TASK=weibo
-
-ZERO_STAGE=1
-STRATEGY=deepspeed_stage_${ZERO_STAGE}
-
-ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir -p ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/weibo/
-PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0
-
-CHECKPOINT_PATH=${ROOT_DIR}/ckpt/
-OUTPUT_PATH=${ROOT_DIR}/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.all.bmes \
- --valid_data test.all.bmes \
- --test_data test.all.bmes \
- --train_batchsize 16 \
- --valid_batchsize 16 \
- --max_seq_length 256 \
- --task_name weibo \
- "
-
-MODEL_ARGS="\
- --learning_rate 3e-5 \
- --weight_decay 0.1 \
- --warmup_ratio 0.01 \
- --markup bioes \
- --middle_prefix M- \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_f1 \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 100 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_f1:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 30 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 20 \
- --default_root_dir $ROOT_DIR \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \
- --do_lower_case \
- --output_save_path $OUTPUT_PATH \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
-"
-SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py
-/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
-# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif
-# python3 $SCRIPT_PATH $options
-# source activate base
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/replabels.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/replabels.py
deleted file mode 100644
index 441f1bd432b95865fc981c6c695cee299b07ed62..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/replabels.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Replabel transforms for use with flashlight's ASG criterion.
-"""
-
-
-def replabel_symbol(i):
- """
- Replabel symbols used in flashlight, currently just "1", "2", ...
- This prevents training with numeral tokens, so this might change in the future
- """
- return str(i)
-
-
-def pack_replabels(tokens, dictionary, max_reps):
- """
- Pack a token sequence so that repeated symbols are replaced by replabels
- """
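-    # For illustration (hypothetical tokens): with max_reps = 2, the sequence [a, a, a, b, b]
-    # packs to [a, <2>, b, <1>], where <n> is the dictionary index of replabel_symbol(n);
-    # each run of repeats becomes the token followed by a repeat-count replabel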
- if len(tokens) == 0 or max_reps <= 0:
- return tokens
-
- replabel_value_to_idx = [0] * (max_reps + 1)
- for i in range(1, max_reps + 1):
- replabel_value_to_idx[i] = dictionary.index(replabel_symbol(i))
-
- result = []
- prev_token = -1
- num_reps = 0
- for token in tokens:
- if token == prev_token and num_reps < max_reps:
- num_reps += 1
- else:
- if num_reps > 0:
- result.append(replabel_value_to_idx[num_reps])
- num_reps = 0
- result.append(token)
- prev_token = token
- if num_reps > 0:
- result.append(replabel_value_to_idx[num_reps])
- return result
-
-
-def unpack_replabels(tokens, dictionary, max_reps):
- """
- Unpack a token sequence so that replabels are replaced by repeated symbols
- """
- if len(tokens) == 0 or max_reps <= 0:
- return tokens
-
- replabel_idx_to_value = {}
- for i in range(1, max_reps + 1):
- replabel_idx_to_value[dictionary.index(replabel_symbol(i))] = i
-
- result = []
- prev_token = -1
- for token in tokens:
- try:
- for _ in range(replabel_idx_to_value[token]):
- result.append(prev_token)
- prev_token = -1
- except KeyError:
- result.append(token)
- prev_token = token
- return result
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py
deleted file mode 100644
index e7465bc889fd1ba6ca2c60905a2eb6ff5cc62b9d..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py
+++ /dev/null
@@ -1,488 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from typing import Tuple, List
-
-import torch
-import torch.nn.functional as F
-from fairseq.models import FairseqEncoder
-from fairseq.models.speech_to_text import (
- ConvTransformerEncoder,
-)
-from fairseq.models.speech_to_text.utils import attention_suppression
-from fairseq.models.speech_to_text.utils import (
- lengths_to_encoder_padding_mask,
- segments_to_sequence,
- sequence_to_segments,
-)
-from fairseq.modules import MultiheadAttention, TransformerEncoderLayer
-from torch import nn, Tensor
-
-# ------------------------------------------------------------------------------
-# AugmentedMemoryConvTransformerEncoder
-# ------------------------------------------------------------------------------
-
-
-class AugmentedMemoryConvTransformerEncoder(ConvTransformerEncoder):
- def __init__(self, args):
- super().__init__(args)
-
- args.encoder_stride = self.stride()
-
- self.left_context = args.left_context // args.encoder_stride
-
- self.right_context = args.right_context // args.encoder_stride
-
- self.left_context_after_stride = args.left_context // args.encoder_stride
- self.right_context_after_stride = args.right_context // args.encoder_stride
-
- self.transformer_layers = nn.ModuleList([])
- self.transformer_layers.extend(
- [
- AugmentedMemoryTransformerEncoderLayer(args)
- for i in range(args.encoder_layers)
- ]
- )
-
- def stride(self):
- # Hard coded here. Should infer from convs in future
- stride = 4
- return stride
-
- def forward(self, src_tokens, src_lengths, states=None):
- """Encode input sequence.
- :param torch.Tensor xs: input tensor
- :param torch.Tensor masks: input mask
- :return: position embedded tensor and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]:
- """
- bsz, max_seq_len, _ = src_tokens.size()
- x = (
- src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
- .transpose(1, 2)
- .contiguous()
- )
- x = self.conv(x)
- bsz, _, output_seq_len, _ = x.size()
- x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
- x = self.out(x)
- x = self.embed_scale * x
-
- subsampling_factor = 1.0 * max_seq_len / output_seq_len
- input_lengths = torch.max(
- (src_lengths.float() / subsampling_factor).ceil().long(),
- x.size(0) * src_lengths.new_ones([src_lengths.size(0)]).long(),
- )
-
- encoder_padding_mask, _ = lengths_to_encoder_padding_mask(
- input_lengths, batch_first=True
- )
-
- # TODO: fix positional embedding
- positions = self.embed_positions(encoder_padding_mask).transpose(0, 1)
-
- x += positions
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- # State to store memory banks etc.
- if states is None:
- states = [
- {"memory_banks": None, "encoder_states": None}
- for i in range(len(self.transformer_layers))
- ]
-
- for i, layer in enumerate(self.transformer_layers):
- # x size:
- # (self.left_size + self.segment_size + self.right_size)
- # / self.stride, num_heads, dim
- # TODO: Consider mask here
- x = layer(x, states[i])
- states[i]["encoder_states"] = x[
- self.left_context_after_stride : -self.right_context_after_stride
- ]
-
- lengths = (
- (
- ~encoder_padding_mask[
- :, self.left_context_after_stride : -self.right_context_after_stride
- ]
- )
- .sum(dim=1, keepdim=True)
- .long()
- )
-
- return states[-1]["encoder_states"], lengths, states
-
-
-# ------------------------------------------------------------------------------
-# AugmentedMemoryTransformerEncoderLayer
-# ------------------------------------------------------------------------------
-class AugmentedMemoryTransformerEncoderLayer(TransformerEncoderLayer):
- def __init__(self, args):
- super().__init__(args)
-
- self.left_context = args.left_context // args.encoder_stride
- self.right_context = args.right_context // args.encoder_stride
-
- def forward(self, x, state):
-
- length, batch_size, x_dim = x.size()
-
- residual = x
-
- if self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- # init_state
- if state.get("memory_banks", None) is None:
- state["memory_banks"] = []
-
-        # TODO: research new sum_query method
- seg_start = self.left_context
- seg_end = length - self.right_context
- if seg_start < seg_end:
- summarization_query = torch.mean(x[seg_start:seg_end], keepdim=True, dim=0)
- else:
- summarization_query = x.new_zeros(1, batch_size, x_dim)
-
- x = torch.cat([x, summarization_query], dim=0)
-
- x = self.self_attn(input_and_summary=x, state=state)
-
- x = self.dropout_module(x)
- x = residual + x
-
- if not self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- residual = x
- if self.normalize_before:
- x = self.final_layer_norm(x)
-
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = residual + x
- if not self.normalize_before:
- x = self.final_layer_norm(x)
-
- return x
-
- def build_self_attention(self, embed_dim, args):
- return AugmentedMemoryMultiheadAttention(
- embed_dim=embed_dim,
- num_heads=args.encoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=True,
- q_noise=self.quant_noise,
- qn_block_size=self.quant_noise_block_size,
- tanh_on_mem=True,
- max_memory_size=args.max_memory_size,
- )
-
-
-# ------------------------------------------------------------------------------
-# AugmentedMemoryMultiheadAttention
-# ------------------------------------------------------------------------------
-class AugmentedMemoryMultiheadAttention(MultiheadAttention):
- """
- Augmented Memory Attention from
- Streaming Transformer-based Acoustic Models
- Using Self-attention with Augmented Memory
- https://arxiv.org/abs/2005.08042
- """
-
- def __init__(
- self,
- embed_dim,
- num_heads,
- kdim=None,
- vdim=None,
- dropout=0.0,
- bias=True,
- add_bias_kv=False,
- add_zero_attn=False,
- self_attention=False,
- encoder_decoder_attention=False,
- q_noise=0.0,
- qn_block_size=8,
- tanh_on_mem=False,
- memory_dim=None,
- std_scale=0.5, # 0.5 based on https://arxiv.org/abs/2005.09137
- max_memory_size=-1,
- disable_mem_on_mem_attn=True,
- ):
- super().__init__(
- embed_dim,
- num_heads,
- kdim,
- vdim,
- dropout,
- bias,
- add_bias_kv,
- add_zero_attn,
- self_attention,
- encoder_decoder_attention,
- q_noise,
- qn_block_size,
- )
-
- self.memory_dim = memory_dim if memory_dim is not None else embed_dim
- self.std_scale = std_scale
- self.disable_mem_on_mem_attn = disable_mem_on_mem_attn
-
- # This Operator was used for factorization in PySpeech
- self.v2e = lambda x: x
-
- if tanh_on_mem:
- self.squash_mem = torch.tanh
- self.nonlinear_squash_mem = True
- else:
- self.squash_mem = lambda x: x
- self.nonlinear_squash_mem = False
-
- self.max_memory_size = max_memory_size
-
- def forward(self, input_and_summary, state):
- """
- input: Encoder states of current segment with left or right context,
- plus one summarization query
-
- """
-
- length, batch_size, _ = input_and_summary.shape
-        length = length - 1  # exclude the summarization query, which sits at the last index
-
- memory = state["memory_banks"]
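-        # memory_banks holds one summarization vector per previously processed segment;
-        # the keys and values below attend over these banks plus the current segment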
- # TODO: positional embedding on memory
-
- if self.max_memory_size > -1 and len(memory) > self.max_memory_size:
- # TODO: need to fix here
- if self.max_memory_size == 0:
- memory = memory.new_zeros(1, memory.size(1), self.memory_dim)
- else:
- memory = memory[-self.max_memory_size :]
-
- memory_and_input = torch.cat(memory + [input_and_summary[:-1]], dim=0)
- input_and_sum_query = input_and_summary
-
- q = self.q_proj(self.v2e(input_and_sum_query))
- k = self.k_proj(self.v2e(memory_and_input))
- v = self.v_proj(self.v2e(memory_and_input))
-
- q = (
- q.contiguous()
- .view(-1, batch_size * self.num_heads, self.head_dim)
- .transpose(0, 1)
- * self.scaling
- )
- k = (
- k.contiguous()
- .view(-1, batch_size * self.num_heads, self.head_dim)
- .transpose(0, 1)
- )
-
- v = (
- v.contiguous()
- .view(-1, batch_size * self.num_heads, self.head_dim)
- .transpose(0, 1)
- )
-
- attention_weights = torch.bmm(q, k.transpose(1, 2))
-
- if self.disable_mem_on_mem_attn:
- attention_weights = self.suppress_mem_on_mem_attention(
- batch_size, self.num_heads, len(memory), attention_weights
- )
-
- if self.std_scale is not None:
- attention_weights = attention_suppression(attention_weights, self.std_scale)
-
- assert list(attention_weights.shape) == [
- batch_size * self.num_heads,
- length + 1,
- length + len(memory),
- ]
-
- attention_weights = torch.nn.functional.softmax(
- attention_weights.float(), dim=-1
- ).type_as(attention_weights)
-
- attention_probs = self.dropout_module(attention_weights)
-
-        # (B*num_heads, T+1, T+mem) x (B*num_heads, T+mem, head_dim) -> (B*num_heads, T+1, head_dim)
- attention = torch.bmm(attention_probs, v)
-
- assert list(attention.shape) == [
- batch_size * self.num_heads,
- length + 1,
- self.head_dim,
- ]
-
- attention = (
- attention.transpose(0, 1)
- .contiguous()
- .view(length + 1, batch_size, self.embed_dim)
- )
-
- output_and_memory = self.out_proj(attention)
-
- next_m = output_and_memory[-1:]
- next_m = self.squash_mem(next_m)
- output = output_and_memory[:-1]
-
- state["memory_banks"].append(next_m)
-
- return output
-
- def suppress_mem_on_mem_attention(
- self, B: int, num_heads: int, mem_size: int, attention_weight: Tensor
- ):
- """
- Arguments:
- - B: batch size
- - num_heads: number of attention heads
- - mem_size: size of memory bank
- - attention_weight: a [B*num_heads, T + 1, T + mem_size] vector
-
- Return:
- modified attention_weight with [B*num_heads, -1, :mem_size] = -inf
- """
- attention_weight[:, -1, :mem_size] = float("-inf")
- return attention_weight
-
-
-# ------------------------------------------------------------------------------
-# SequenceEncoder
-# ------------------------------------------------------------------------------
-class SequenceEncoder(FairseqEncoder):
- """
- SequenceEncoder encodes sequences.
-
- More specifically, `src_tokens` and `src_lengths` in `forward()` should
- describe a batch of "complete" sequences rather than segments.
-
- Segment-by-segment inference can be triggered by `segment_size`:
- 1) `segment_size` is None:
- SequenceEncoder treats the input sequence as one single segment.
- 2) `segment_size` is not None (some int instead):
- SequenceEncoder does the following:
-            1. breaks the input sequence into several segments
-            2. runs inference on each segment and collects the outputs
-            3. concatenates the segment outputs into the output sequence.
- Note that `segment_size` here shouldn't include additional left/right
- contexts needed, for example if we wish to infer with LC-BLSTM where the
- middle chunk size is 100 and right context is 20, `segment_size` should be
- 100.
- """
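-    # For illustration (hypothetical sizes): with segment_size=100, left_context=20 and
-    # right_context=20, each 100-frame chunk is fed to the wrapped module together with up
-    # to 20 frames of context on either side, and only the middle 100 frames of each
-    # segment's output are kept and concatenated into the full output sequence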
-
- def __init__(self, args, module):
- super().__init__(None)
-
- self.module = module
- self.input_time_axis = 1
- self.output_time_axis = 0
- self.segment_size = args.segment_size
- self.left_context = args.left_context
- self.right_context = args.right_context
-
- def forward(
- self,
- src_tokens: Tensor,
- src_lengths: Tensor,
- states=None,
- ):
-
- seg_src_tokens_lengths = sequence_to_segments(
- sequence=src_tokens,
- time_axis=self.input_time_axis,
- lengths=src_lengths,
- segment_size=self.segment_size,
- extra_left_context=self.left_context,
- extra_right_context=self.right_context,
- )
-
- seg_encoder_states_lengths: List[Tuple[Tensor, Tensor]] = []
-
- for seg_src_tokens, seg_src_lengths in seg_src_tokens_lengths:
- (seg_encoder_states, seg_enc_lengths, states) = self.module(
- seg_src_tokens,
- seg_src_lengths,
- states=states,
- )
-
- seg_encoder_states_lengths.append((seg_encoder_states, seg_enc_lengths))
-
- encoder_out, enc_lengths = segments_to_sequence(
- segments=seg_encoder_states_lengths, time_axis=self.output_time_axis
- )
-
- encoder_padding_mask, _ = lengths_to_encoder_padding_mask(
- enc_lengths, batch_first=True
- )
-
- if not encoder_padding_mask.any():
- encoder_padding_mask = None
-
- return {
- "encoder_out": [encoder_out],
- "encoder_padding_mask": [encoder_padding_mask],
- "encoder_embedding": [],
- "encoder_states": [states],
- "src_tokens": [],
- "src_lengths": [],
- }
-
- def incremental_encode(
- self,
- seg_src_tokens: Tensor,
- seg_src_lengths: Tensor,
- states=None,
- ):
- """
- Different from forward function, this function takes segmented speech
- as input, and append encoder states to previous states
- """
- (seg_encoder_states, seg_enc_lengths, states) = self.module(
- seg_src_tokens,
- seg_src_lengths,
- states=states,
- )
- return seg_encoder_states, seg_enc_lengths, states
-
-
-# ------------------------------------------------------------------------------
-# Augmented memory model decorator
-# ------------------------------------------------------------------------------
-def augmented_memory(klass):
- class StreamSeq2SeqModel(klass):
- @staticmethod
- def add_args(parser):
- super(StreamSeq2SeqModel, StreamSeq2SeqModel).add_args(parser)
- parser.add_argument(
- "--segment-size", type=int, required=True, help="Length of the segment."
- )
- parser.add_argument(
- "--left-context",
- type=int,
- default=0,
- help="Left context for the segment.",
- )
- parser.add_argument(
- "--right-context",
- type=int,
- default=0,
- help="Right context for the segment.",
- )
- parser.add_argument(
- "--max-memory-size",
- type=int,
- default=-1,
-                help="Maximum number of memory banks to keep (-1 means unlimited).",
- )
-
- StreamSeq2SeqModel.__name__ = klass.__name__
- return StreamSeq2SeqModel
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/README.md b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/README.md
deleted file mode 100644
index ea8958397bb5b5fdcb96cd966bd040050ece6fd6..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Vakyansh Hindi TTS
-emoji: 🐨
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 2.8.13
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/langinfo.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/langinfo.py
deleted file mode 100644
index efb7e372feeb67d7106eb5c443de2e14053fd204..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/langinfo.py
+++ /dev/null
@@ -1,488 +0,0 @@
-#
-# Copyright (c) 2013-present, Anoop Kunchukuttan
-# All rights reserved.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-#
-
-## language codes
-LC_TA='ta'
-
-SCRIPT_RANGES={
- 'pa':[0x0a00,0x0a7f] ,
- 'gu':[0x0a80,0x0aff] ,
- 'or':[0x0b00,0x0b7f] ,
- 'ta':[0x0b80,0x0bff] ,
- 'te':[0x0c00,0x0c7f] ,
- 'kn':[0x0c80,0x0cff] ,
- 'ml':[0x0d00,0x0d7f] ,
- 'si':[0x0d80,0x0dff] ,
- 'hi':[0x0900,0x097f] ,
- 'mr':[0x0900,0x097f] ,
- 'kK':[0x0900,0x097f] ,
- 'sa':[0x0900,0x097f] ,
- 'ne':[0x0900,0x097f] ,
- 'sd':[0x0900,0x097f] ,
- 'bn':[0x0980,0x09ff] ,
- 'as':[0x0980,0x09ff] ,
- }
-
-DRAVIDIAN_LANGUAGES=['ta', 'te', 'kn', 'ml',]
-IE_LANGUAGES=['hi', 'mr', 'kK', 'sa', 'ne', 'sd', 'bn', 'as', 'pa', 'gu', 'or', 'si', ]
-DANDA_DELIM_LANGUAGES=['as','bn','hi','ne','or','pa','sa','sd']
-
-URDU_RANGES=[
- [0x0600,0x06ff],
- [0x0750,0x077f],
- [0xfb50,0xfdff],
- [0xfe70,0xfeff],
- ]
-
-COORDINATED_RANGE_START_INCLUSIVE=0
-COORDINATED_RANGE_END_INCLUSIVE=0x6f
-
-NUMERIC_OFFSET_START=0x66
-NUMERIC_OFFSET_END=0x6f
-
-HALANTA_OFFSET=0x4d
-AUM_OFFSET=0x50
-NUKTA_OFFSET=0x3c
-
-RUPEE_SIGN=0x20b9
-
-DANDA=0x0964
-DOUBLE_DANDA=0x0965
-
-#TODO: add missing fricatives and approximants
-VELAR_RANGE=[0x15,0x19]
-PALATAL_RANGE=[0x1a,0x1e]
-RETROFLEX_RANGE=[0x1f,0x23]
-DENTAL_RANGE=[0x24,0x29]
-LABIAL_RANGE=[0x2a,0x2e]
-
-# verify
-VOICED_LIST=[0x17,0x18,0x1c,0x1d,0x21,0x22,0x26,0x27,0x2c,0x2d]
-UNVOICED_LIST=[0x15,0x16,0x1a,0x1b,0x1f,0x20,0x24,0x25,0x2a,0x2b] #TODO: add sibilants/sonorants
-ASPIRATED_LIST=[0x16,0x18,0x1b,0x1d,0x20,0x22,0x25,0x27,0x2b,0x2d]
-UNASPIRATED_LIST=[0x15,0x17,0x1a,0x1c,0x1f,0x21,0x24,0x26,0x2a,0x2c]
-NASAL_LIST=[0x19,0x1e,0x23,0x28,0x29,0x2d]
-FRICATIVE_LIST=[0x36,0x37,0x38]
-APPROXIMANT_LIST=[0x2f,0x30,0x31,0x32,0x33,0x34,0x35]
-
-#TODO: ha has to be properly categorized
-
-def is_danda_delim(lang):
- """
- Returns True if danda/double danda is a possible delimiter for the language
- """
- return lang in DANDA_DELIM_LANGUAGES
-
-def get_offset(c,lang):
- """
- Applicable to Brahmi derived Indic scripts
- """
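-    # e.g. for Devanagari 'क' (U+0915) with lang='hi': 0x0915 - 0x0900 = 0x15, which is the
-    # start of the consonant (and velar) range used by the predicates below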
- return ord(c)-SCRIPT_RANGES[lang][0]
-
-def offset_to_char(c,lang):
- """
- Applicable to Brahmi derived Indic scripts
- """
- return chr(c+SCRIPT_RANGES[lang][0])
-
-def in_coordinated_range(c_offset):
- """
- Applicable to Brahmi derived Indic scripts
- """
- return (c_offset>=COORDINATED_RANGE_START_INCLUSIVE and c_offset<=COORDINATED_RANGE_END_INCLUSIVE)
-
-def is_indiclang_char(c,lang):
- """
- Applicable to Brahmi derived Indic scripts
- """
- o=get_offset(c,lang)
- return (o>=0 and o<=0x7f) or ord(c)==DANDA or ord(c)==DOUBLE_DANDA
-
-
-def is_vowel(c,lang):
- """
- Is the character a vowel
- """
- o=get_offset(c,lang)
- return (o>=0x04 and o<=0x14)
-
-def is_vowel_sign(c,lang):
- """
- Is the character a vowel sign (maatraa)
- """
- o=get_offset(c,lang)
- return (o>=0x3e and o<=0x4c)
-
-def is_halanta(c,lang):
- """
- Is the character the halanta character
- """
- o=get_offset(c,lang)
- return (o==HALANTA_OFFSET)
-
-def is_nukta(c,lang):
- """
-    Is the character the nukta character
- """
- o=get_offset(c,lang)
- return (o==NUKTA_OFFSET)
-
-def is_aum(c,lang):
- """
-    Is the character the aum (om) character
- """
- o=get_offset(c,lang)
- return (o==AUM_OFFSET)
-
-def is_consonant(c,lang):
- """
- Is the character a consonant
- """
- o=get_offset(c,lang)
- return (o>=0x15 and o<=0x39)
-
-def is_velar(c,lang):
- """
- Is the character a velar
- """
- o=get_offset(c,lang)
- return (o>=VELAR_RANGE[0] and o<=VELAR_RANGE[1])
-
-def is_palatal(c,lang):
- """
- Is the character a palatal
- """
- o=get_offset(c,lang)
- return (o>=PALATAL_RANGE[0] and o<=PALATAL_RANGE[1])
-
-def is_retroflex(c,lang):
- """
- Is the character a retroflex
- """
- o=get_offset(c,lang)
- return (o>=RETROFLEX_RANGE[0] and o<=RETROFLEX_RANGE[1])
-
-def is_dental(c,lang):
- """
- Is the character a dental
- """
- o=get_offset(c,lang)
- return (o>=DENTAL_RANGE[0] and o<=DENTAL_RANGE[1])
-
-def is_labial(c,lang):
- """
- Is the character a labial
- """
- o=get_offset(c,lang)
- return (o>=LABIAL_RANGE[0] and o<=LABIAL_RANGE[1])
-
-def is_voiced(c,lang):
- """
- Is the character a voiced consonant
- """
- o=get_offset(c,lang)
- return o in VOICED_LIST
-
-def is_unvoiced(c,lang):
- """
-    Is the character an unvoiced consonant
- """
- o=get_offset(c,lang)
- return o in UNVOICED_LIST
-
-def is_aspirated(c,lang):
- """
-    Is the character an aspirated consonant
- """
- o=get_offset(c,lang)
- return o in ASPIRATED_LIST
-
-def is_unaspirated(c,lang):
- """
-    Is the character an unaspirated consonant
- """
- o=get_offset(c,lang)
- return o in UNASPIRATED_LIST
-
-def is_nasal(c,lang):
- """
- Is the character a nasal consonant
- """
- o=get_offset(c,lang)
- return o in NASAL_LIST
-
-def is_fricative(c,lang):
- """
- Is the character a fricative consonant
- """
- o=get_offset(c,lang)
- return o in FRICATIVE_LIST
-
-def is_approximant(c,lang):
- """
- Is the character an approximant consonant
- """
- o=get_offset(c,lang)
- return o in APPROXIMANT_LIST
-
-def is_number(c,lang):
- """
- Is the character a number
- """
- o=get_offset(c,lang)
- return (o>=0x66 and o<=0x6f)
-
-
-##################################################
-
-def is_vowel_offset(c_offset):
- """
- Is the offset a vowel
- """
- return (c_offset>=0x04 and c_offset<=0x14)
-
-def is_vowel_sign_offset(c_offset):
- """
- Is the offset a vowel sign (maatraa)
- """
- return (c_offset>=0x3e and c_offset<=0x4c)
-
-def is_halanta_offset(c_offset):
- """
- Is the offset the halanta offset
- """
- return (c_offset==HALANTA_OFFSET)
-
-def is_nukta_offset(c_offset):
- """
-    Is the offset the nukta offset
- """
- return (c_offset==NUKTA_OFFSET)
-
-def is_aum_offset(c_offset):
- """
-    Is the offset the aum offset
- """
- return (c_offset==AUM_OFFSET)
-
-def is_consonant_offset(c_offset):
- """
- Is the offset a consonant
- """
- return (c_offset>=0x15 and c_offset<=0x39)
-
-def is_velar_offset(c_offset):
- """
- Is the offset a velar
- """
- return (c_offset>=VELAR_RANGE[0] and c_offset<=VELAR_RANGE[1])
-
-def is_palatal_offset(c_offset):
- """
- Is the offset a palatal
- """
- return (c_offset>=PALATAL_RANGE[0] and c_offset<=PALATAL_RANGE[1])
-
-def is_retroflex_offset(c_offset):
- """
- Is the offset a retroflex
- """
- return (c_offset>=RETROFLEX_RANGE[0] and c_offset<=RETROFLEX_RANGE[1])
-
-def is_dental_offset(c_offset):
- """
- Is the offset a dental
- """
- return (c_offset>=DENTAL_RANGE[0] and c_offset<=DENTAL_RANGE[1])
-
-def is_labial_offset(c_offset):
- """
- Is the offset a labial
- """
- return (c_offset>=LABIAL_RANGE[0] and c_offset<=LABIAL_RANGE[1])
-
-def is_voiced_offset(c_offset):
- """
- Is the offset a voiced consonant
- """
- return c_offset in VOICED_LIST
-
-def is_unvoiced_offset(c_offset):
- """
- Is the offset an unvoiced consonant
- """
- return c_offset in UNVOICED_LIST
-
-def is_aspirated_offset(c_offset):
- """
- Is the offset an aspirated consonant
- """
- return c_offset in ASPIRATED_LIST
-
-def is_unaspirated_offset(c_offset):
- """
- Is the offset an unaspirated consonant
- """
- return c_offset in UNASPIRATED_LIST
-
-def is_nasal_offset(c_offset):
- """
- Is the offset a nasal consonant
- """
- return c_offset in NASAL_LIST
-
-def is_fricative_offset(c_offset):
- """
- Is the offset a fricative consonant
- """
- return c_offset in FRICATIVE_LIST
-
-def is_approximant_offset(c_offset):
- """
- Is the offset an approximant consonant
- """
- return c_offset in APPROXIMANT_LIST
-
-def is_number_offset(c_offset):
- """
- Is the offset a number
- """
- return (c_offset>=0x66 and c_offset<=0x6f)
diff --git a/spaces/Harveenchadha/en_to_indic_translation/scripts/remove_train_devtest_overlaps.py b/spaces/Harveenchadha/en_to_indic_translation/scripts/remove_train_devtest_overlaps.py
deleted file mode 100644
index 6107bb6b3e430457d55e65e19c95d4ef241035e1..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/en_to_indic_translation/scripts/remove_train_devtest_overlaps.py
+++ /dev/null
@@ -1,265 +0,0 @@
-import os
-import string
-import shutil
-from itertools import permutations, chain
-from collections import defaultdict
-from tqdm import tqdm
-import sys
-
-INDIC_LANGS = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"]
-# we will be testing the overlaps of training data with all these benchmarks
-# benchmarks = ['wat2021-devtest', 'wat2020-devtest', 'wat-2018', 'wmt-news', 'ufal-ta', 'pmi']
-
-
-def read_lines(path):
- # if the path doesn't exist, return an empty list
- if not os.path.exists(path):
- return []
- with open(path, "r") as f:
- lines = f.readlines()
- return lines
-
-
-def create_txt(outFile, lines):
- add_newline = "\n" not in lines[0]
- outfile = open("{0}".format(outFile), "w")
- for line in lines:
- if add_newline:
- outfile.write(line + "\n")
- else:
- outfile.write(line)
-
- outfile.close()
-
-
-def pair_dedup_files(src_file, tgt_file):
- src_lines = read_lines(src_file)
- tgt_lines = read_lines(tgt_file)
- len_before = len(src_lines)
-
- src_dedupped, tgt_dedupped = pair_dedup_lists(src_lines, tgt_lines)
-
- len_after = len(src_dedupped)
- num_duplicates = len_before - len_after
-
- print(f"Dropped duplicate pairs in {src_file} Num duplicates -> {num_duplicates}")
- create_txt(src_file, src_dedupped)
- create_txt(tgt_file, tgt_dedupped)
-
-
-def pair_dedup_lists(src_list, tgt_list):
- src_tgt = list(set(zip(src_list, tgt_list)))
- src_deduped, tgt_deduped = zip(*src_tgt)
- return src_deduped, tgt_deduped
-
-
-def strip_and_normalize(line):
- # lowercase line, remove spaces and strip punctuation
-
- # one of the fastest ways to define an exclusion list and remove that
- # list of characters from a string
- # https://towardsdatascience.com/how-to-efficiently-remove-punctuations-from-a-string-899ad4a059fb
- exclist = string.punctuation + "\u0964"
- table_ = str.maketrans("", "", exclist)
-
- line = line.replace(" ", "").lower()
- # don't use this method, it is painfully slow
- # line = "".join([i for i in line if i not in string.punctuation])
- line = line.translate(table_)
- return line
-
-
-def expand_tupled_list(list_of_tuples):
- # convert list of tuples into two lists
- # https://stackoverflow.com/questions/8081545/how-to-convert-list-of-tuples-to-multiple-lists
- # [(en, as), (as, bn), (bn, gu)] - > [en, as, bn], [as, bn, gu]
- list_a, list_b = map(list, zip(*list_of_tuples))
- return list_a, list_b
-
-
-def get_src_tgt_lang_lists(many2many=False):
- if many2many is False:
- SRC_LANGS = ["en"]
- TGT_LANGS = INDIC_LANGS
- else:
- all_languages = INDIC_LANGS + ["en"]
- # lang_pairs = list(permutations(all_languages, 2))
-
- SRC_LANGS, TGT_LANGS = all_languages, all_languages
-
- return SRC_LANGS, TGT_LANGS
-
-
-def normalize_and_gather_all_benchmarks(devtest_dir, many2many=False):
-
- # This is a dict of dict of lists
- # the first keys are for lang-pair, the second keys are for src/tgt
- # the values are the devtest lines.
- # so devtest_pairs_normalized[en-as][src] will store src(en lines)
- # so devtest_pairs_normalized[en-as][tgt] will store tgt(as lines)
- devtest_pairs_normalized = defaultdict(lambda: defaultdict(list))
- SRC_LANGS, TGT_LANGS = get_src_tgt_lang_lists(many2many)
- benchmarks = os.listdir(devtest_dir)
- for dataset in benchmarks:
- for src_lang in SRC_LANGS:
- for tgt_lang in TGT_LANGS:
- if src_lang == tgt_lang:
- continue
- if dataset == "wat2021-devtest":
- # wat2021 dev and test sets have a different folder structure
- src_dev = read_lines(f"{devtest_dir}/{dataset}/dev.{src_lang}")
- tgt_dev = read_lines(f"{devtest_dir}/{dataset}/dev.{tgt_lang}")
- src_test = read_lines(f"{devtest_dir}/{dataset}/test.{src_lang}")
- tgt_test = read_lines(f"{devtest_dir}/{dataset}/test.{tgt_lang}")
- else:
- src_dev = read_lines(
- f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/dev.{src_lang}"
- )
- tgt_dev = read_lines(
- f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/dev.{tgt_lang}"
- )
- src_test = read_lines(
- f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/test.{src_lang}"
- )
- tgt_test = read_lines(
- f"{devtest_dir}/{dataset}/{src_lang}-{tgt_lang}/test.{tgt_lang}"
- )
-
- # if the tgt_pair data doesn't exist for a particular test set,
- # it will be an empty list
- if tgt_test == [] or tgt_dev == []:
- # print(f'{dataset} does not have {src_lang}-{tgt_lang} data')
- continue
-
- # combine both dev and test sets into one
- src_devtest = src_dev + src_test
- tgt_devtest = tgt_dev + tgt_test
-
- src_devtest = [strip_and_normalize(line) for line in src_devtest]
- tgt_devtest = [strip_and_normalize(line) for line in tgt_devtest]
-
- devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"].extend(
- src_devtest
- )
- devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"].extend(
- tgt_devtest
- )
-
- # dedup merged benchmark datasets
- for src_lang in SRC_LANGS:
- for tgt_lang in TGT_LANGS:
- if src_lang == tgt_lang:
- continue
- src_devtest, tgt_devtest = (
- devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"],
- devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"],
- )
- # if the devtest data doesn't exist for the src-tgt pair then continue
- if src_devtest == [] or tgt_devtest == []:
- continue
- src_devtest, tgt_devtest = pair_dedup_lists(src_devtest, tgt_devtest)
- (
- devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["src"],
- devtest_pairs_normalized[f"{src_lang}-{tgt_lang}"]["tgt"],
- ) = (
- src_devtest,
- tgt_devtest,
- )
-
- return devtest_pairs_normalized
-
-
-def remove_train_devtest_overlaps(train_dir, devtest_dir, many2many=False):
-
- devtest_pairs_normalized = normalize_and_gather_all_benchmarks(
- devtest_dir, many2many
- )
-
- SRC_LANGS, TGT_LANGS = get_src_tgt_lang_lists(many2many)
-
- if not many2many:
- all_src_sentences_normalized = []
- for key in devtest_pairs_normalized:
- all_src_sentences_normalized.extend(devtest_pairs_normalized[key]["src"])
- # remove all duplicates. Now this contains all the normalized
- # English sentences in all test benchmarks across all language pairs
- all_src_sentences_normalized = list(set(all_src_sentences_normalized))
- else:
- all_src_sentences_normalized = None
-
- src_overlaps = []
- tgt_overlaps = []
- for src_lang in SRC_LANGS:
- for tgt_lang in TGT_LANGS:
- if src_lang == tgt_lang:
- continue
- new_src_train = []
- new_tgt_train = []
-
- pair = f"{src_lang}-{tgt_lang}"
- src_train = read_lines(f"{train_dir}/{pair}/train.{src_lang}")
- tgt_train = read_lines(f"{train_dir}/{pair}/train.{tgt_lang}")
-
- len_before = len(src_train)
- if len_before == 0:
- continue
-
- src_train_normalized = [strip_and_normalize(line) for line in src_train]
- tgt_train_normalized = [strip_and_normalize(line) for line in tgt_train]
-
- if all_src_sentences_normalized:
- src_devtest_normalized = all_src_sentences_normalized
- else:
- src_devtest_normalized = devtest_pairs_normalized[pair]["src"]
-
- tgt_devtest_normalized = devtest_pairs_normalized[pair]["tgt"]
-
- # compute all src and tgt super strict overlaps for a lang pair
- overlaps = set(src_train_normalized) & set(src_devtest_normalized)
- src_overlaps.extend(list(overlaps))
-
- overlaps = set(tgt_train_normalized) & set(tgt_devtest_normalized)
- tgt_overlaps.extend(list(overlaps))
- # dictionaries offer O(1) lookup
- src_overlaps_dict = {}
- tgt_overlaps_dict = {}
- for line in src_overlaps:
- src_overlaps_dict[line] = 1
- for line in tgt_overlaps:
- tgt_overlaps_dict[line] = 1
-
- # loop to remove the overlapped data
- idx = -1
- for src_line_norm, tgt_line_norm in tqdm(
- zip(src_train_normalized, tgt_train_normalized), total=len_before
- ):
- idx += 1
- if src_overlaps_dict.get(src_line_norm, None):
- continue
- if tgt_overlaps_dict.get(tgt_line_norm, None):
- continue
- new_src_train.append(src_train[idx])
- new_tgt_train.append(tgt_train[idx])
-
- len_after = len(new_src_train)
- print(
- f"Detected overlaps between train and devetest for {pair} is {len_before - len_after}"
- )
- print(f"saving new files at {train_dir}/{pair}/")
- create_txt(f"{train_dir}/{pair}/train.{src_lang}", new_src_train)
- create_txt(f"{train_dir}/{pair}/train.{tgt_lang}", new_tgt_train)
-
-
-if __name__ == "__main__":
- train_data_dir = sys.argv[1]
- # the benchmarks directory should contain all the test sets
- devtest_data_dir = sys.argv[2]
- if len(sys.argv) == 3:
- many2many = False
- elif len(sys.argv) == 4:
- many2many = sys.argv[3]  # the optional many2many flag is the third CLI argument
- if many2many.lower() == "true":
- many2many = True
- else:
- many2many = False
- remove_train_devtest_overlaps(train_data_dir, devtest_data_dir, many2many)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_f0.py b/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_f0.py
deleted file mode 100644
index df721d683113b44957149cfc3cddaba36520a22c..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_synthesis/evaluation/eval_f0.py
+++ /dev/null
@@ -1,266 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Signal processing-based evaluation using waveforms
-"""
-import numpy as np
-import os.path as op
-
-import torchaudio
-import tqdm
-from tabulate import tabulate
-
-from examples.speech_synthesis.utils import (
- gross_pitch_error, voicing_decision_error, f0_frame_error
-)
-from examples.speech_synthesis.evaluation.eval_sp import load_eval_spec
-
-
-def difference_function(x, n, tau_max):
- """
- Compute difference function of data x. This solution is implemented directly
- with Numpy fft.
-
-
- :param x: audio data
- :param n: length of data
- :param tau_max: integration window size
- :return: difference function
- :rtype: list
- """
-
- x = np.array(x, np.float64)
- w = x.size
- tau_max = min(tau_max, w)
- x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum()))
- size = w + tau_max
- p2 = (size // 32).bit_length()
- nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32)
- size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size)
- fc = np.fft.rfft(x, size_pad)
- conv = np.fft.irfft(fc * fc.conjugate())[:tau_max]
- return x_cumsum[w:w - tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - \
- 2 * conv
-
-
-def cumulative_mean_normalized_difference_function(df, n):
- """
- Compute cumulative mean normalized difference function (CMND).
-
- :param df: Difference function
- :param n: length of data
- :return: cumulative mean normalized difference function
- :rtype: list
- """
-
- # scipy method
- cmn_df = df[1:] * range(1, n) / np.cumsum(df[1:]).astype(float)
- return np.insert(cmn_df, 0, 1)
-
-
-def get_pitch(cmdf, tau_min, tau_max, harmo_th=0.1):
- """
- Return fundamental period of a frame based on CMND function.
-
- :param cmdf: Cumulative Mean Normalized Difference function
- :param tau_min: minimum period for speech
- :param tau_max: maximum period for speech
- :param harmo_th: harmonicity threshold to determine if it is necessary to
- compute pitch frequency
- :return: fundamental period if there is values under threshold, 0 otherwise
- :rtype: float
- """
- tau = tau_min
- while tau < tau_max:
- if cmdf[tau] < harmo_th:
- while tau + 1 < tau_max and cmdf[tau + 1] < cmdf[tau]:
- tau += 1
- return tau
- tau += 1
-
- return 0 # if unvoiced
-
-
-def compute_yin(sig, sr, w_len=512, w_step=256, f0_min=100, f0_max=500,
- harmo_thresh=0.1):
- """
-
- Compute the Yin Algorithm. Return fundamental frequency and harmonic rate.
-
- https://github.com/NVIDIA/mellotron adaption of
- https://github.com/patriceguyot/Yin
-
- :param sig: Audio signal (list of float)
- :param sr: sampling rate (int)
- :param w_len: size of the analysis window (samples)
- :param w_step: size of the lag between two consecutive windows (samples)
- :param f0_min: Minimum fundamental frequency that can be detected (hertz)
- :param f0_max: Maximum fundamental frequency that can be detected (hertz)
- :param harmo_thresh: Threshold of detection. The algorithm returns the
- first minimum of the CMND function below this threshold.
-
- :returns:
-
- * pitches: list of fundamental frequencies,
- * harmonic_rates: list of harmonic rate values for each fundamental
- frequency value (= confidence value)
- * argmins: minimums of the Cumulative Mean Normalized Difference Function
- * times: list of time of each estimation
- :rtype: tuple
- """
-
- tau_min = int(sr / f0_max)
- tau_max = int(sr / f0_min)
-
- # time values for each analysis window
- time_scale = range(0, len(sig) - w_len, w_step)
- times = [t/float(sr) for t in time_scale]
- frames = [sig[t:t + w_len] for t in time_scale]
-
- pitches = [0.0] * len(time_scale)
- harmonic_rates = [0.0] * len(time_scale)
- argmins = [0.0] * len(time_scale)
-
- for i, frame in enumerate(frames):
- # Compute YIN
- df = difference_function(frame, w_len, tau_max)
- cm_df = cumulative_mean_normalized_difference_function(df, tau_max)
- p = get_pitch(cm_df, tau_min, tau_max, harmo_thresh)
-
- # Get results
- if np.argmin(cm_df) > tau_min:
- argmins[i] = float(sr / np.argmin(cm_df))
- if p != 0: # A pitch was found
- pitches[i] = float(sr / p)
- harmonic_rates[i] = cm_df[p]
- else: # No pitch, but we compute a value of the harmonic rate
- harmonic_rates[i] = min(cm_df)
-
- return pitches, harmonic_rates, argmins, times
-
-
-def extract_f0(samples):
- f0_samples = []
- for sample in tqdm.tqdm(samples):
- if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]):
- f0_samples.append(None)
- continue
-
- # assume single channel
- yref, sr = torchaudio.load(sample["ref"])
- ysyn, _sr = torchaudio.load(sample["syn"])
- yref, ysyn = yref[0], ysyn[0]
- assert sr == _sr, f"{sr} != {_sr}"
-
- yref_f0 = compute_yin(yref, sr)
- ysyn_f0 = compute_yin(ysyn, sr)
-
- f0_samples += [
- {
- "ref": yref_f0,
- "syn": ysyn_f0
- }
- ]
-
- return f0_samples
-
-
-def eval_f0_error(samples, distortion_fn):
- results = []
- for sample in tqdm.tqdm(samples):
- if sample is None:
- results.append(None)
- continue
- # assume single channel
- yref_f, _, _, yref_t = sample["ref"]
- ysyn_f, _, _, ysyn_t = sample["syn"]
-
- yref_f = np.array(yref_f)
- yref_t = np.array(yref_t)
- ysyn_f = np.array(ysyn_f)
- ysyn_t = np.array(ysyn_t)
-
- distortion = distortion_fn(yref_t, yref_f, ysyn_t, ysyn_f)
- results.append((distortion.item(),
- len(yref_f),
- len(ysyn_f)
- ))
- return results
-
-
-def eval_gross_pitch_error(samples):
- return eval_f0_error(samples, gross_pitch_error)
-
-
-def eval_voicing_decision_error(samples):
- return eval_f0_error(samples, voicing_decision_error)
-
-
-def eval_f0_frame_error(samples):
- return eval_f0_error(samples, f0_frame_error)
-
-
-def print_results(results, show_bin):
- results = np.array(list(filter(lambda x: x is not None, results)))
-
- np.set_printoptions(precision=3)
-
- def _print_result(results):
- res = {
- "nutt": len(results),
- "error": results[:, 0].mean(),
- "std": results[:, 0].std(),
- "dur_ref": int(results[:, 1].sum()),
- "dur_syn": int(results[:, 2].sum()),
- }
- print(tabulate([res.values()], res.keys(), floatfmt=".4f"))
-
- print(">>>> ALL")
- _print_result(results)
-
- if show_bin:
- edges = [0, 200, 400, 600, 800, 1000, 2000, 4000]
- for i in range(1, len(edges)):
- mask = np.logical_and(results[:, 1] >= edges[i-1],
- results[:, 1] < edges[i])
- if not mask.any():
- continue
- bin_results = results[mask]
- print(f">>>> ({edges[i-1]}, {edges[i]})")
- _print_result(bin_results)
-
-
-def main(eval_f0, gpe, vde, ffe, show_bin):
- samples = load_eval_spec(eval_f0)
- if gpe or vde or ffe:
- f0_samples = extract_f0(samples)
-
- if gpe:
- print("===== Evaluate Gross Pitch Error =====")
- results = eval_gross_pitch_error(f0_samples)
- print_results(results, show_bin)
- if vde:
- print("===== Evaluate Voicing Decision Error =====")
- results = eval_voicing_decision_error(f0_samples)
- print_results(results, show_bin)
- if ffe:
- print("===== Evaluate F0 Frame Error =====")
- results = eval_f0_frame_error(f0_samples)
- print_results(results, show_bin)
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("eval_f0")
- parser.add_argument("--gpe", action="store_true")
- parser.add_argument("--vde", action="store_true")
- parser.add_argument("--ffe", action="store_true")
- parser.add_argument("--show-bin", action="store_true")
- args = parser.parse_args()
-
- main(args.eval_f0, args.gpe, args.vde, args.ffe, args.show_bin)
diff --git a/spaces/Illumotion/Koboldcpp/common/console.h b/spaces/Illumotion/Koboldcpp/common/console.h
deleted file mode 100644
index ec175269b9d8af48803d0b6e618d008a9ab99b4d..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/common/console.h
+++ /dev/null
@@ -1,19 +0,0 @@
-// Console functions
-
-#pragma once
-
-#include <string>
-
-namespace console {
- enum display_t {
- reset = 0,
- prompt,
- user_input,
- error
- };
-
- void init(bool use_simple_io, bool use_advanced_display);
- void cleanup();
- void set_display(display_t display);
- bool readline(std::string & line, bool multiline_input);
-}
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/dataloader.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/dataloader.py
deleted file mode 100644
index 039b9ec3645b2a4626ff47c221e372f32a6ad339..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/data/dataloader.py
+++ /dev/null
@@ -1,425 +0,0 @@
-import torch
-import torch.multiprocessing as multiprocessing
-from torch._C import _set_worker_signal_handlers, \
- _remove_worker_pids, _error_if_any_worker_fails
-try:
- from torch._C import _set_worker_pids
-except ImportError:
- from torch._C import _update_worker_pids as _set_worker_pids
-from .sampler import SequentialSampler, RandomSampler, BatchSampler
-import signal
-import collections
-import re
-import sys
-import threading
-import traceback
-from torch._six import string_classes, int_classes
-import numpy as np
-
-if sys.version_info[0] == 2:
- import Queue as queue
-else:
- import queue
-
-
-class ExceptionWrapper(object):
- r"Wraps an exception plus traceback to communicate across threads"
-
- def __init__(self, exc_info):
- self.exc_type = exc_info[0]
- self.exc_msg = "".join(traceback.format_exception(*exc_info))
-
-
-_use_shared_memory = False
-"""Whether to use shared memory in default_collate"""
-
-
-def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id):
- global _use_shared_memory
- _use_shared_memory = True
-
- # Initialize C side signal handlers for SIGBUS and SIGSEGV. Python signal
- # module's handlers are executed after Python returns from C low-level
- # handlers, likely when the same fatal signal happened again already.
- # https://docs.python.org/3/library/signal.html Sec. 18.8.1.1
- _set_worker_signal_handlers()
-
- torch.set_num_threads(1)
- torch.manual_seed(seed)
- np.random.seed(seed)
-
- if init_fn is not None:
- init_fn(worker_id)
-
- while True:
- r = index_queue.get()
- if r is None:
- break
- idx, batch_indices = r
- try:
- samples = collate_fn([dataset[i] for i in batch_indices])
- except Exception:
- data_queue.put((idx, ExceptionWrapper(sys.exc_info())))
- else:
- data_queue.put((idx, samples))
-
-
-def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id):
- if pin_memory:
- torch.cuda.set_device(device_id)
-
- while True:
- try:
- r = in_queue.get()
- except Exception:
- if done_event.is_set():
- return
- raise
- if r is None:
- break
- if isinstance(r[1], ExceptionWrapper):
- out_queue.put(r)
- continue
- idx, batch = r
- try:
- if pin_memory:
- batch = pin_memory_batch(batch)
- except Exception:
- out_queue.put((idx, ExceptionWrapper(sys.exc_info())))
- else:
- out_queue.put((idx, batch))
-
-numpy_type_map = {
- 'float64': torch.DoubleTensor,
- 'float32': torch.FloatTensor,
- 'float16': torch.HalfTensor,
- 'int64': torch.LongTensor,
- 'int32': torch.IntTensor,
- 'int16': torch.ShortTensor,
- 'int8': torch.CharTensor,
- 'uint8': torch.ByteTensor,
-}
-
-
-def default_collate(batch):
- "Puts each data field into a tensor with outer dimension batch size"
-
- error_msg = "batch must contain tensors, numbers, dicts or lists; found {}"
- elem_type = type(batch[0])
- if torch.is_tensor(batch[0]):
- out = None
- if _use_shared_memory:
- # If we're in a background process, concatenate directly into a
- # shared memory tensor to avoid an extra copy
- numel = sum([x.numel() for x in batch])
- storage = batch[0].storage()._new_shared(numel)
- out = batch[0].new(storage)
- return torch.stack(batch, 0, out=out)
- elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
- and elem_type.__name__ != 'string_':
- elem = batch[0]
- if elem_type.__name__ == 'ndarray':
- # array of string classes and object
- if re.search('[SaUO]', elem.dtype.str) is not None:
- raise TypeError(error_msg.format(elem.dtype))
-
- return torch.stack([torch.from_numpy(b) for b in batch], 0)
- if elem.shape == (): # scalars
- py_type = float if elem.dtype.name.startswith('float') else int
- return numpy_type_map[elem.dtype.name](list(map(py_type, batch)))
- elif isinstance(batch[0], int_classes):
- return torch.LongTensor(batch)
- elif isinstance(batch[0], float):
- return torch.DoubleTensor(batch)
- elif isinstance(batch[0], string_classes):
- return batch
- elif isinstance(batch[0], collections.Mapping):
- return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
- elif isinstance(batch[0], collections.Sequence):
- transposed = zip(*batch)
- return [default_collate(samples) for samples in transposed]
-
- raise TypeError((error_msg.format(type(batch[0]))))
-
-
-def pin_memory_batch(batch):
- if torch.is_tensor(batch):
- return batch.pin_memory()
- elif isinstance(batch, string_classes):
- return batch
- elif isinstance(batch, collections.Mapping):
- return {k: pin_memory_batch(sample) for k, sample in batch.items()}
- elif isinstance(batch, collections.Sequence):
- return [pin_memory_batch(sample) for sample in batch]
- else:
- return batch
-
-
-_SIGCHLD_handler_set = False
-"""Whether SIGCHLD handler is set for DataLoader worker failures. Only one
-handler needs to be set for all DataLoaders in a process."""
-
-
-def _set_SIGCHLD_handler():
- # Windows doesn't support SIGCHLD handler
- if sys.platform == 'win32':
- return
- # can't set signal in child threads
- if not isinstance(threading.current_thread(), threading._MainThread):
- return
- global _SIGCHLD_handler_set
- if _SIGCHLD_handler_set:
- return
- previous_handler = signal.getsignal(signal.SIGCHLD)
- if not callable(previous_handler):
- previous_handler = None
-
- def handler(signum, frame):
- # The following call uses `waitid` with WNOHANG from C side. Therefore,
- # Python can still get and update the process status successfully.
- _error_if_any_worker_fails()
- if previous_handler is not None:
- previous_handler(signum, frame)
-
- signal.signal(signal.SIGCHLD, handler)
- _SIGCHLD_handler_set = True
-
-
-class DataLoaderIter(object):
- "Iterates once over the DataLoader's dataset, as specified by the sampler"
-
- def __init__(self, loader):
- self.dataset = loader.dataset
- self.collate_fn = loader.collate_fn
- self.batch_sampler = loader.batch_sampler
- self.num_workers = loader.num_workers
- self.pin_memory = loader.pin_memory and torch.cuda.is_available()
- self.timeout = loader.timeout
- self.done_event = threading.Event()
-
- self.sample_iter = iter(self.batch_sampler)
-
- if self.num_workers > 0:
- self.worker_init_fn = loader.worker_init_fn
- self.index_queue = multiprocessing.SimpleQueue()
- self.worker_result_queue = multiprocessing.SimpleQueue()
- self.batches_outstanding = 0
- self.worker_pids_set = False
- self.shutdown = False
- self.send_idx = 0
- self.rcvd_idx = 0
- self.reorder_dict = {}
-
- base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0]
- self.workers = [
- multiprocessing.Process(
- target=_worker_loop,
- args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn,
- base_seed + i, self.worker_init_fn, i))
- for i in range(self.num_workers)]
-
- if self.pin_memory or self.timeout > 0:
- self.data_queue = queue.Queue()
- if self.pin_memory:
- maybe_device_id = torch.cuda.current_device()
- else:
- # do not initialize cuda context if not necessary
- maybe_device_id = None
- self.worker_manager_thread = threading.Thread(
- target=_worker_manager_loop,
- args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory,
- maybe_device_id))
- self.worker_manager_thread.daemon = True
- self.worker_manager_thread.start()
- else:
- self.data_queue = self.worker_result_queue
-
- for w in self.workers:
- w.daemon = True # ensure that the worker exits on process exit
- w.start()
-
- _set_worker_pids(id(self), tuple(w.pid for w in self.workers))
- _set_SIGCHLD_handler()
- self.worker_pids_set = True
-
- # prime the prefetch loop
- for _ in range(2 * self.num_workers):
- self._put_indices()
-
- def __len__(self):
- return len(self.batch_sampler)
-
- def _get_batch(self):
- if self.timeout > 0:
- try:
- return self.data_queue.get(timeout=self.timeout)
- except queue.Empty:
- raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout))
- else:
- return self.data_queue.get()
-
- def __next__(self):
- if self.num_workers == 0: # same-process loading
- indices = next(self.sample_iter) # may raise StopIteration
- batch = self.collate_fn([self.dataset[i] for i in indices])
- if self.pin_memory:
- batch = pin_memory_batch(batch)
- return batch
-
- # check if the next sample has already been generated
- if self.rcvd_idx in self.reorder_dict:
- batch = self.reorder_dict.pop(self.rcvd_idx)
- return self._process_next_batch(batch)
-
- if self.batches_outstanding == 0:
- self._shutdown_workers()
- raise StopIteration
-
- while True:
- assert (not self.shutdown and self.batches_outstanding > 0)
- idx, batch = self._get_batch()
- self.batches_outstanding -= 1
- if idx != self.rcvd_idx:
- # store out-of-order samples
- self.reorder_dict[idx] = batch
- continue
- return self._process_next_batch(batch)
-
- next = __next__ # Python 2 compatibility
-
- def __iter__(self):
- return self
-
- def _put_indices(self):
- assert self.batches_outstanding < 2 * self.num_workers
- indices = next(self.sample_iter, None)
- if indices is None:
- return
- self.index_queue.put((self.send_idx, indices))
- self.batches_outstanding += 1
- self.send_idx += 1
-
- def _process_next_batch(self, batch):
- self.rcvd_idx += 1
- self._put_indices()
- if isinstance(batch, ExceptionWrapper):
- raise batch.exc_type(batch.exc_msg)
- return batch
-
- def __getstate__(self):
- # TODO: add limited pickling support for sharing an iterator
- # across multiple threads for HOGWILD.
- # Probably the best way to do this is by moving the sample pushing
- # to a separate thread and then just sharing the data queue
- # but signalling the end is tricky without a non-blocking API
- raise NotImplementedError("DataLoaderIterator cannot be pickled")
-
- def _shutdown_workers(self):
- try:
- if not self.shutdown:
- self.shutdown = True
- self.done_event.set()
- # if worker_manager_thread is waiting to put
- while not self.data_queue.empty():
- self.data_queue.get()
- for _ in self.workers:
- self.index_queue.put(None)
- # done_event should be sufficient to exit worker_manager_thread,
- # but be safe here and put another None
- self.worker_result_queue.put(None)
- finally:
- # removes pids no matter what
- if self.worker_pids_set:
- _remove_worker_pids(id(self))
- self.worker_pids_set = False
-
- def __del__(self):
- if self.num_workers > 0:
- self._shutdown_workers()
-
-
-class DataLoader(object):
- """
- Data loader. Combines a dataset and a sampler, and provides
- single- or multi-process iterators over the dataset.
-
- Arguments:
- dataset (Dataset): dataset from which to load the data.
- batch_size (int, optional): how many samples per batch to load
- (default: 1).
- shuffle (bool, optional): set to ``True`` to have the data reshuffled
- at every epoch (default: False).
- sampler (Sampler, optional): defines the strategy to draw samples from
- the dataset. If specified, ``shuffle`` must be False.
- batch_sampler (Sampler, optional): like sampler, but returns a batch of
- indices at a time. Mutually exclusive with batch_size, shuffle,
- sampler, and drop_last.
- num_workers (int, optional): how many subprocesses to use for data
- loading. 0 means that the data will be loaded in the main process.
- (default: 0)
- collate_fn (callable, optional): merges a list of samples to form a mini-batch.
- pin_memory (bool, optional): If ``True``, the data loader will copy tensors
- into CUDA pinned memory before returning them.
- drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
- if the dataset size is not divisible by the batch size. If ``False`` and
- the size of dataset is not divisible by the batch size, then the last batch
- will be smaller. (default: False)
- timeout (numeric, optional): if positive, the timeout value for collecting a batch
- from workers. Should always be non-negative. (default: 0)
- worker_init_fn (callable, optional): If not None, this will be called on each
- worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
- input, after seeding and before data loading. (default: None)
-
- .. note:: By default, each worker will have its PyTorch seed set to
- ``base_seed + worker_id``, where ``base_seed`` is a long generated
- by main process using its RNG. You may use ``torch.initial_seed()`` to access
- this value in :attr:`worker_init_fn`, which can be used to set other seeds
- (e.g. NumPy) before data loading.
-
- .. warning:: If the ``spawn`` start method is used, :attr:`worker_init_fn` cannot be an
- unpicklable object, e.g., a lambda function.
- """
-
- def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None,
- num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False,
- timeout=0, worker_init_fn=None):
- self.dataset = dataset
- self.batch_size = batch_size
- self.num_workers = num_workers
- self.collate_fn = collate_fn
- self.pin_memory = pin_memory
- self.drop_last = drop_last
- self.timeout = timeout
- self.worker_init_fn = worker_init_fn
-
- if timeout < 0:
- raise ValueError('timeout option should be non-negative')
-
- if batch_sampler is not None:
- if batch_size > 1 or shuffle or sampler is not None or drop_last:
- raise ValueError('batch_sampler is mutually exclusive with '
- 'batch_size, shuffle, sampler, and drop_last')
-
- if sampler is not None and shuffle:
- raise ValueError('sampler is mutually exclusive with shuffle')
-
- if self.num_workers < 0:
- raise ValueError('num_workers cannot be negative; '
- 'use num_workers=0 to disable multiprocessing.')
-
- if batch_sampler is None:
- if sampler is None:
- if shuffle:
- sampler = RandomSampler(dataset)
- else:
- sampler = SequentialSampler(dataset)
- batch_sampler = BatchSampler(sampler, batch_size, drop_last)
-
- self.sampler = sampler
- self.batch_sampler = batch_sampler
-
- def __iter__(self):
- return DataLoaderIter(self)
-
- def __len__(self):
- return len(self.batch_sampler)
diff --git a/spaces/IvaElen/find_my_pic/app.py b/spaces/IvaElen/find_my_pic/app.py
deleted file mode 100644
index 5f7c29ab977a8c54ac4e97248ec6d58c4a233d63..0000000000000000000000000000000000000000
--- a/spaces/IvaElen/find_my_pic/app.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import zipfile
-import random
-from PIL import Image
-
-import pandas as pd
-import numpy as np
-import streamlit as st
-
-import clip
-import torch
-import torchvision.transforms as transforms
-
-from get_similiarty import get_similiarity
-
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-# load model - ResNet50
-model_resnet, preprocess = clip.load("RN50", device=device)
-#load model - ViT-B/32
-model_vit, preprocess = clip.load('ViT-B/32', device)
-
-# Unpack the ZIP file with the photos
-zip_file_path = "sample.zip"
-target_folder = "sample/"
-with zipfile.ZipFile(zip_file_path, 'r') as zip_ref:
- zip_ref.extractall(target_folder)
-
-df = pd.read_csv('results.csv',
- sep = '|',
- names = ['image_name', 'comment_number', 'comment'],
- header=0)
-
-def find_image_disc(prompt, df, top_k):
- img_descs = []
- img_descs_vit = []
- list_images_names, list_images_names_vit = get_similiarity(prompt, model_resnet, model_vit, top_k)
- for img in list_images_names:
- img_descs.append(random.choice(df[df['image_name'] == img.split('/')[-1]]['comment'].values).replace('.', ''))
- #vit
- for img in list_images_names_vit:
- img_descs_vit.append(random.choice(df[df['image_name'] == img.split('/')[-1]]['comment'].values).replace('.', ''))
-
- return list_images_names, img_descs, list_images_names_vit, img_descs_vit
-
-st.image('image.png')
-# st.title('Find my pic!')
-col3, col4 = st.columns(2)
-with col3:
- st.image('3bd0e1e6-6b8a-4aa6-828a-c1756c6d38b2.jpeg')
-with col4:
- txt = st.text_area("Describe the picture you'd like to see")
-
-top_k = st.slider('Number of images', 1, 5, 3)
-
-if txt is not None:
- if st.button('Find!'):
- list_images, img_desc, list_images_vit, img_descs_vit = find_image_disc(txt, df, top_k)
- col1, col2 = st.columns(2)
- col1.header('ResNet50')
- col2.header('ViT 32')
- for ind, pic in enumerate(zip(list_images, list_images_vit)):
- with col1:
- st.image(pic[0])
- st.write(img_desc[ind])
- with col2:
- st.image(pic[1])
- st.write(img_descs_vit[ind])
\ No newline at end of file
diff --git a/spaces/JanBabela/Riffusion-Melodiff-v1/index.html b/spaces/JanBabela/Riffusion-Melodiff-v1/index.html
deleted file mode 100644
index 8d0f38d02e80f7629999d9802221fa76dc118ba9..0000000000000000000000000000000000000000
--- a/spaces/JanBabela/Riffusion-Melodiff-v1/index.html
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
-
-
-
- Riffusion-Melodiff-v1
-
-
-
-
-
Riffusion-Melodiff-v1
-
- Riffusion-Melodiff is a simple but interesting idea (one I have not seen anywhere else) for creating cover versions of songs.
-
- Riffusion-Melodiff is built on top of
- Riffusion
- model, which is a Stable Diffusion model fine-tuned to generate Mel spectrograms. (A spectrogram is a kind of
- visual representation of music that divides the waveform into frequencies.) Riffusion-Melodiff does not contain a new model; there was no new training and no fine-tuning.
- It uses the same model as Riffusion, only in a different way.
-
- Riffusion-Melodiff uses the Img2Img pipeline from the Diffusers library to modify images of Mel spectrograms and produce new versions of music. Just upload your audio
- in wav format (if you have audio in a different format, convert it to wav first with an online converter). Then you can run the Img2Img pipeline from the Diffusers library
- with your prompt, seed and strength. The strength parameter determines how much the modified audio relates to the initial audio and how much it relates to the prompt.
- When strength is too low, the spectrogram stays too similar to the original one and we do not get a new modification. When strength is too high, the spectrogram is too
- close to the new prompt, which may cause loss of the melody and/or tempo of the base image. Good values of strength are usually about 0.4-0.5 (see the sketch below).
- Good modifications are possible with proper prompt, seed and strength values. Those modifications keep the tempo and melody of the initial audio, but
- change, for example, the instrument playing that melody. With this pipeline, modifications longer than 5s are also possible. If you cut your audio into 5s pieces
- and use the same prompt, seed and strength for each modification, the generated samples will be somewhat consistent, so if you concatenate them together you will have
- a longer modified audio (a chunking sketch follows below).
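And a rough sketch of the chunk-and-concatenate idea described above, using torchaudio; modify_chunk is a hypothetical placeholder for whatever spectrogram modification step you use:

```python
# Sketch only: split a wav into 5 s pieces, modify each one the same way, and stitch them back.
import torch
import torchaudio

def modify_chunk(chunk: torch.Tensor, sr: int) -> torch.Tensor:
    # Placeholder: render the chunk to a spectrogram, run Img2Img with a fixed
    # prompt, seed and strength, and convert the result back to a waveform.
    return chunk

waveform, sr = torchaudio.load("song.wav")        # shape: (channels, samples)
chunk_len = 5 * sr                                # 5-second pieces
chunks = torch.split(waveform, chunk_len, dim=1)

modified = [modify_chunk(c, sr) for c in chunks]  # same prompt/seed/strength for every piece
torchaudio.save("song_modified.wav", torch.cat(modified, dim=1), sr)
```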
-
- The quality of the generated music is not amazing (mediocre, I would say) and it needs a bit of prompt and seed engineering. But it shows one way cover
- versions of music could be made in the future.
-
- A Colab notebook is included, where you can find, step by step, how to do it:
- Melodiff_v1.
-
-
Examples of music generated by modifying the underlying song:
-
- Amazing Grace, originally played by flute, modified to be played by violin
-
-
-
- Bella Cao, originally played by violin, modified to be played by saxophone
-
-
-
- Iko iko, originally played by accordion, modified to be played by saxophone
-
-
-
- When the Saints, originally played by violin, modified to be sung by vocals
-
-
-
Examples of longer music samples:
-
- Iko iko, originally played by accordion, modified to be played by saxophone
-
-
-
- Iko iko, originally played by accordion, modified to be played by violin
-
-
-
- When the Saints, originally played by piano, modified to be played by flute
-
-
-
- I am using the standard (free) Google Colab GPU configuration for inference, with the default number of inference steps (23) from the underlying
- pipelines. With this setup it takes about 8s to produce a 5s long modified sample. For a start it is OK, I would say.